Developed QUBO Annealing with PyTorch! Introducing Torch Tytan!

Yuichiro Minato

2024/02/09 02:25

#Featured

Hello. The future of quantum computing seems quite uncertain, doesn't it? Amid that uncertainty, I've developed a tool to improve the outlook: Torch Tytan! In Japan, annealing for combinatorial optimization is still popular.

Within quantum computing there is a technique called quantum annealing, which solves combinatorial optimization problems by expressing them as formulas called QUBOs (quadratic unconstrained binary optimization). Tools for this are currently available as Python SDKs, but they are known for being discontinued or becoming difficult to access. To keep quantum computing tools sustainable, I've started a new initiative called Torch Tytan.
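To make "QUBO" concrete, here is my own minimal illustration (not code from any particular SDK): a QUBO assigns an energy E(x) = xᵀQx to every binary vector x, and solving the problem means finding the assignment with the lowest energy.

```python
import torch

# A 2-variable QUBO: E(x) = x^T Q x for binary x.
# Diagonal entries act as linear terms (since x_i^2 = x_i for binary x_i);
# off-diagonal entries are pairwise interactions.
Q = torch.tensor([[-1.0, 2.0],
                  [0.0, -1.0]])

def energy(x):
    return (x @ Q @ x).item()

# Brute-force all 2^2 binary assignments to find the minimum.
states = [torch.tensor(b, dtype=torch.float32)
          for b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
best = min(states, key=energy)
```

Here the minimum lies at x = (0, 1) (or equivalently (1, 0)) with energy -1. Annealers search for this minimum stochastically rather than by brute force, which is what makes thousands of variables tractable.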

Torch Tytan is built upon a deep learning framework known as PyTorch.

PyTorch

https://pytorch.org/

This framework is used worldwide and, thanks to the rise of deep learning, has a large user base. On top of PyTorch, we've built an SDK for quantum annealing, QUBO annealing, and Ising machines. This approach has many benefits.

First, a key feature of modern deep learning frameworks is their use of GPUs. Deep learning has evolved alongside GPU hardware, and Torch Tytan benefits greatly from this. Previous solvers have gone to great lengths to adapt their code to GPUs or to parallelize across CPUs. With Torch Tytan, optimization problems are solved using PyTorch functions, making it easy to introduce parallelization and other features internally. This should make solver development more enjoyable.
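To show what "solving with PyTorch functions" can look like, here is my own sketch of batched single-bit-flip simulated annealing over a QUBO; it is an illustration of the idea, not the actual Torch Tytan implementation. Because every chain is a row of one tensor, the same code runs on CPU or GPU just by changing the device.

```python
import torch

def anneal_qubo(Q, n_chains=64, n_steps=500, t_start=5.0, t_end=0.05, device="cpu"):
    """Batched single-bit-flip simulated annealing for a QUBO.
    Illustrative sketch only (not Torch Tytan itself); Q is assumed
    symmetric, and states are bit vectors in {0, 1}^n."""
    n = Q.shape[0]
    Q = Q.to(device=device, dtype=torch.float32)
    # n_chains independent candidate solutions, updated in parallel on the device.
    x = torch.randint(0, 2, (n_chains, n), device=device, dtype=torch.float32)
    rows = torch.arange(n_chains, device=device)
    for t in torch.linspace(t_start, t_end, n_steps):
        idx = torch.randint(0, n, (n_chains,), device=device)  # one proposed flip per chain
        xi = x[rows, idx]
        qii = Q[idx, idx]
        s = (Q[idx] * x).sum(dim=1)  # sum_j Q[i, j] * x[j], including j == i
        # Energy change of flipping bit i: (1 - 2*x_i) * (Q_ii + 2 * sum_{j != i} Q_ij x_j)
        delta = (1 - 2 * xi) * (qii + 2 * (s - qii * xi))
        # Metropolis rule: downhill moves always pass, uphill ones pass with prob exp(-delta/t).
        accept = torch.rand(n_chains, device=device) < torch.exp(-delta / t)
        x[rows, idx] = torch.where(accept, 1 - xi, xi)
    energies = torch.einsum("bi,ij,bj->b", x, Q, x)
    best = energies.argmin()
    return x[best], energies[best].item()
```

On a GPU, all chains update in a handful of tensor operations per step, which is exactly the kind of parallelism PyTorch gives you for free.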

Various techniques are required for the formulation side; this part will be developed jointly with the Tytan SDK, using PyTorch as the backend to broadly support QUBO-based optimization.
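As one example of what "formulation" means here, below is my own illustrative helper (not the Tytan SDK's actual API) that encodes a common constraint, "exactly one of n bits is 1," as a QUBO penalty term.

```python
import torch

def one_hot_qubo(n, weight=1.0):
    """QUBO matrix for the penalty weight * (x_1 + ... + x_n - 1)^2.
    Expanding, and using x_i^2 = x_i for binary variables:
    (sum_i x_i - 1)^2 = -sum_i x_i + 2 * sum_{i<j} x_i x_j + 1,
    so the diagonal gets -weight and each unordered pair (i, j)
    contributes 2 * weight, split across Q[i, j] and Q[j, i]."""
    Q = torch.full((n, n), weight)  # pairwise interaction terms
    Q.fill_diagonal_(-weight)       # linear terms on the diagonal
    return Q                        # the constant +weight is dropped

Q = one_hot_qubo(3)

def energy(bits):
    x = torch.tensor(bits, dtype=torch.float32)
    return (x @ Q @ x).item()
```

One-hot states such as [1, 0, 0] score -1 (penalty 0 minus the dropped constant), while [0, 0, 0] and [1, 1, 0] both score 0, so the minimum correctly selects exactly one bit.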

Thanks to PyTorch's vast user base, its users can easily move into quantum combinatorial optimization, which lowers the learning cost and eases concerns about market contraction.

Torch Tytan will be integrated into the Tytan SDK and released in the near future, so if you're interested, definitely check it out!

Tytan SDK

https://github.com/tytansdk

I'll conduct a benchmark test.

Quantum Bits: 1000

QUBO Matrix: Random

CPU: 2x Intel Xeon Gold 6132

GPU: 1x NVIDIA V100 16 GB

I'll try with this setup. Honestly, the CPU is quite fast as well.

The initial values for annealing and the number of iterations are arbitrarily set by me.
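The exact harness behind the numbers below isn't shown in the post, so here is my own rough sketch (with an arbitrary batch size, and a single batched energy evaluation standing in for a full annealing run) of how such a CPU-vs-GPU timing comparison is typically set up:

```python
import time
import torch

def bench(n, device="cpu", n_chains=256):
    """Time one batched QUBO energy evaluation on a device.
    Illustrative harness with arbitrary settings, not the exact
    code behind the results reported below."""
    Q = torch.rand(n, n, device=device)
    Q = (Q + Q.T) / 2 - 0.5  # symmetric, zero-centered random QUBO
    x = torch.randint(0, 2, (n_chains, n), device=device, dtype=torch.float32)
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels are asynchronous; sync before timing
    t0 = time.perf_counter()
    energies = torch.einsum("bi,ij,bj->b", x, Q, x)
    if device == "cuda":
        torch.cuda.synchronize()  # and sync again so the timer sees the full kernel
    return time.perf_counter() - t0, energies.min().item()
```

Note the `torch.cuda.synchronize()` calls: without them, GPU timings only measure kernel launch, not execution.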

Here are the results: qubit count, mode, wall-clock time in seconds, and final value of the cost function.

| Qubits | Mode | Time (s) | Objective function value |
|--------|------|----------|--------------------------|
| 1,000  | CPU  | 15.26    | -5375.50  |
| 1,000  | GPU  | 22.13    | -5120.63  |
| 2,000  | CPU  | 20.21    | -15316.19 |
| 2,000  | GPU  | 19.87    | -15804.34 |
| 3,000  | CPU  | 23.66    | -28199.32 |
| 3,000  | GPU  | 29.16    | -27384.82 |
| 5,000  | CPU  | 113.20   | -58014.52 |
| 5,000  | GPU  | 38.09    | -59421.48 |
| 10,000 | CPU  | did not finish within the time limit | n/a |
| 10,000 | GPU  | 157.47   | -171032.09 |

Overall, the CPU holds its own at smaller sizes, but the GPU pulls clearly ahead from around 5,000 qubits. Indeed, PyTorch is impressive: even with just 16 GB of VRAM, handling 10,000 qubits was no problem at all, and it looks like the number of qubits can easily be scaled up further.
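A quick back-of-envelope check (my own arithmetic, not from the post) supports that: even at 10,000 qubits, the dense QUBO matrix itself needs only a small fraction of 16 GB.

```python
# Memory for a dense float32 QUBO matrix with n = 10,000 variables:
n = 10_000
qubo_bytes = n * n * 4      # n^2 entries, 4 bytes per float32 entry
print(qubo_bytes / 1e9)     # 0.4 (GB), a small fraction of 16 GB of VRAM
```

Even with a few hundred parallel states and intermediate buffers on top, the footprint stays well under the card's capacity, which matches the observation that 10,000 qubits fit comfortably.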

© 2025, blueqat Inc. All rights reserved