Hello. In education and development work, we often run into problems for which no existing solver or solution is available, so let's build new tools to address them.
This time, the topic is Lattice Folding Simulation of Peptide by Quantum Computation (2023):
https://blueqat.com/bqresearch/7fc45c52-5b97-4386-9cde-f5d61a0b9b17
This is a student thesis on the protein folding problem. Solving it with existing QUBO solvers appears to require converting the HOBO (Higher Order Binary Optimization) formulation into QUBO (Quadratic Unconstrained Binary Optimization) form, and that conversion introduces significant complexity and unnecessary work.
In quantum algorithms such as QAOA, the cost function enters through the time-evolution operator, where higher-order terms can be handled directly, so no decomposition is needed. Nevertheless, many people still assume they must convert to QUBO because of constraints that applied to earlier tools and hardware. I proposed solving the problem directly as HOBO.
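To see why no degree reduction is needed there, recall that the QAOA cost unitary is diagonal in the computational basis: every basis state simply acquires the phase e^{-iγC(x)}, whatever the order of C. Here is a minimal dense-matrix sketch of that fact (illustration only, not an efficient circuit construction; the cubic cost function is made up):

import itertools
import numpy as np

def cost_unitary(cost_fn, n, gamma):
    # One phase per computational basis state; the order of the cost terms is irrelevant
    phases = [np.exp(-1j * gamma * cost_fn(bits))
              for bits in itertools.product([0, 1], repeat=n)]
    return np.diag(phases)

# Example with a purely cubic term: no quadratization required
U = cost_unitary(lambda b: -1.0 * b[0] * b[1] * b[2], n=3, gamma=0.5)
print(U.shape)  # (8, 8)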
For QUBO, the cost is computed from a QUBO matrix Q and a bit vector x as follows:
C = x^T Q x
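In code, this quadratic form is a one-line contraction. A minimal PyTorch sketch (the matrix values are made up for illustration):

import torch

# Hypothetical upper-triangular QUBO matrix
Q = torch.tensor([[-10., 1., -4.],
                  [  0., 7.,  8.],
                  [  0., 0.,  0.]])
x = torch.tensor([1., 0., 1.])

# C = x^T Q x, written as an einsum to match the HOBO examples below
cost = torch.einsum('i,ij,j->', x, Q, x)
print(cost.item())  # -14.0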
For higher orders, however, this matrix form is difficult to extend and visualize; rather than clinging to matrix notation, it is better to generalize.
Tensor network notation lets us carry the machinery previously used for QUBO over to higher orders. For example, with terms up to 5th order, the cost function is:
C = \sum_{ijklm} x_i x_j x_k x_l x_m H_{ijklm}
Since HOBO is not strictly a matrix but a tensor, we will refer to it as the HOBO tensor.
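To make the 5th-order formula concrete, here is a minimal sketch; the single tensor entry and its value are hypothetical, chosen only for illustration:

import torch

n = 3
# Rank-5 HOBO tensor with one hypothetical entry, the term -3 * x0 * x1 * x2
H = torch.zeros(n, n, n, n, n)
H[0, 0, 0, 1, 2] = -3.0

x = torch.tensor([1., 1., 1.])
# Contract x into every index of H, exactly as in the formula above
cost = torch.einsum('i,j,k,l,m,ijklm->', x, x, x, x, x, H)
print(cost.item())  # -3.0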
Treating HOBO coefficients as tensors opens up many possible developments. Let's look at a simple implementation method first; I have described how to construct HOBO tensors in a preprint:
Tensor Network Based HOBO Solver, July 2024
https://blueqat.com/bqresearch/2e6a2fb2-3607-4bd7-8696-13f8658aec35
Example for the 3rd order in PyTorch:
import torch

# Define the 3rd-order HOBO tensor; only "upper-triangular" entries are filled
T = torch.tensor([
[[-10., 1, -4], [0, 0, -1], [0, 0, 0]],
[[0, 0, 0], [0, 7, 8], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0]]
])
# Candidate bit vector
x = torch.tensor([1., 0, 1], dtype=torch.float32)
# Contract x into every axis of T
cost = torch.einsum('i,j,k,ijk->', x, x, x, T)
print(cost.item())  # -14.0
You prepare the tensor, prepare the vector, and contract them.
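Once the cost is a single contraction, a tiny brute-force "solver" follows naturally. A hedged sketch, reusing T from the example above, that enumerates all bit vectors and keeps the cheapest (fine at this size, but exponential in general):

import itertools

best_x, best_cost = None, float('inf')
# Enumerate all 2^3 binary assignments
for bits in itertools.product([0., 1.], repeat=3):
    x = torch.tensor(bits)
    c = torch.einsum('i,j,k,ijk->', x, x, x, T).item()
    if c < best_cost:
        best_x, best_cost = bits, c
print(best_x, best_cost)  # (1.0, 0.0, 1.0) -14.0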
Example for the 3rd order in Sympy:
import sympy as sp
# Define the symbolic variables
x0, x1, x2 = sp.symbols('x0 x1 x2')
# Define the tensor T as a symbolic 3D array
T = sp.Array([
[[-10, 1, -4], [0, 0, -1], [0, 0, 0]],
[[0, 0, 0], [0, 7, 8], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0]]
])
# Define the symbolic vector x
x = sp.Array([x0, x1, x2])
# Calculate the cost using Einstein summation
cost = sp.tensorcontraction(sp.tensorproduct(T, x, x, x), (0, 3), (1, 4), (2, 5))
cost_simplified = sp.simplify(cost)
# Print the result
print(cost_simplified)
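One caveat with the symbolic form: for binary variables x_i ∈ {0, 1}, any power x_i^k equals x_i, but Sympy does not apply this on its own. A minimal sketch of that reduction for the expression above:

# For binary variables x**k == x, so replace every power by its base
cost_binary = sp.expand(cost).replace(
    lambda e: e.is_Pow and e.exp.is_Integer and e.exp > 0,
    lambda e: e.base,
)
print(cost_binary)  # same polynomial with every exponent reduced to 1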
Up to 3rd order is easy to picture, and beyond that it gets harder, but the same procedure works at any order.
Example for the 4th order in PyTorch:
import torch
# Define a 4th-order tensor (the 3rd-order example embedded in the l = 0 slice)
T = torch.tensor([
[[[-10., 0., 0.],
[1., 0., 0.],
[-4., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[-1., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]],
[[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[7., 0., 0.],
[8., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]],
[[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]]]])
# Define vector x
x = torch.tensor([1., 0, 1], dtype=torch.float32)
# Perform the einsum contraction
cost = torch.einsum('i,j,k,l,ijkl->', x, x, x, x, T)
# Print the cost
print(cost.item())
While for QUBO it was simple to manage an upper-triangular matrix and its diagonal, HOBO tensors are more complex to construct. I describe a formulation in the preprint, but efficient methods for building and implementing these tensors will need to be proposed by the community.
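As one possible direction, here is a hedged sketch of a hypothetical helper, build_hobo_tensor, that fills a tensor from a dictionary of terms, storing each coefficient once at the sorted index tuple, the higher-order analogue of keeping a QUBO matrix upper-triangular. The name and interface are my own invention, not an established API:

import torch

def build_hobo_tensor(terms, n, order):
    # terms: {index tuple: coefficient}, with non-empty index tuples
    T = torch.zeros(*([n] * order))
    for idx, coeff in terms.items():
        idx = tuple(sorted(idx))
        # Repeat the last index to pad lower-order terms (valid since x**k == x)
        idx = idx + (idx[-1],) * (order - len(idx))
        T[idx] += coeff
    return T

# Rebuild the 3rd-order example: -10*x0 + x0*x1 - 4*x0*x2 - x0*x1*x2 + 7*x1 + 8*x1*x2
terms = {(0,): -10., (0, 1): 1., (0, 2): -4., (0, 1, 2): -1., (1,): 7., (1, 2): 8.}
T = build_hobo_tensor(terms, n=3, order=3)
x = torch.tensor([1., 0., 1.])
print(torch.einsum('i,j,k,ijk->', x, x, x, T).item())  # -14.0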
HOBO tensors contain a lot of redundancy. Researching whether efficient calculations can be achieved using various tensor network technologies seems promising.
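As a first step in that direction, even without a full tensor network, the bit vector can be contracted into one axis at a time, so nothing larger than the tensor itself is ever materialized, and the same code handles any order without hand-writing einsum subscripts. A minimal sketch:

import torch

def hobo_cost(T, x):
    # Contract x into each remaining axis of T, one tensordot at a time
    out = T
    while out.dim() > 0:
        out = torch.tensordot(x, out, dims=([0], [0]))
    return out.item()

# Works unchanged for the 3rd- and 4th-order tensors above
x = torch.tensor([1., 0., 1.])
print(hobo_cost(T, x))  # -14.0 for either tensor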
I am considering implementing this in Torch Tytan.