This is the fifth technical guide for interns in 2022. So far we have covered classical machine learning, optimization, quantum combinatorial optimization, and quantum machine learning. This final part covers machine learning with quantum optimization.
The two models we will implement are clustering and linear regression. Both can be formulated with the Ising model. For optimization problems, a problem can either be mapped onto the model directly, or a model can be built that solves the problem indirectly.
Clustering
Clustering is the problem of dividing nodes into groups according to some criterion. In this case, we consider dividing the nodes into a specified number of groups according to the distance between them. We use the Ising model, which requires (number of nodes) × (number of clusters) qubits.
QUBO
The formulation is as follows. We prepare one binary variable for each node and each cluster, set to 1 if the node belongs to that cluster and 0 otherwise. The Hamiltonian has two terms. The first is the cost function, which minimizes the sum of the distances within each cluster.
$$H_A = \sum d_{ij} q_i q_j \qquad (q_i \in \{0, 1\})$$
The second is the constraint that each node must belong to exactly one cluster.
$$H_B = \sum \left( \sum_i^n q_i - 1 \right)^2$$
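For intuition, with two clusters the penalty for a single node i (with variables $q_i$ for the first cluster and $q_{i+n}$ for the second) expands, using $q^2 = q$ for binary variables, to
$$(q_i + q_{i+n} - 1)^2 = 2 q_i q_{i+n} - q_i - q_{i+n} + 1$$
which is 0 exactly when the node belongs to one and only one cluster, and positive otherwise.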
Here, quantum entanglement can be used to eliminate the second constraint: instead of adding it as a penalty term, the following Hamiltonian is used as the mixer. It swaps the values of |01> and |10>, so a state that satisfies the constraint stays within the constrained subspace.
$$H_{XY} = \frac{X_0 X_1 + Y_0 Y_1}{2} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
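As a quick check, the 4×4 matrix above can be reproduced with NumPy from the Pauli matrices (the matrix is the same under either qubit ordering, so the ordering convention does not matter here):
import numpy as np

# Pauli matrices
X_mat = np.array([[0, 1], [1, 0]])
Y_mat = np.array([[0, -1j], [1j, 0]])

# (X0 X1 + Y0 Y1) / 2 acting on two qubits
H_XY = (np.kron(X_mat, X_mat) + np.kron(Y_mat, Y_mat)) / 2
print(H_XY.real)
# [[0. 0. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 0.]]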
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from blueqat import Circuit
from blueqat.utils import qaoa
from blueqat.pauli import X,Y,Z
from blueqat.pauli import qubo_bit as q
import networkx as nx
import matplotlib.pyplot as plt
n = 10
m = 30
seed = 15
options = {'node_size': 100}
G = nx.gnm_random_graph(n, m, seed = seed)
nx.draw(G, **options)
# Cost Hamiltonian: sum of distances (here, edges) inside each of the two clusters
hamiltonian = sum(q(e[0])*q(e[1]) + q(e[0]+n)*q(e[1]+n) for e in G.edges)
step = 1

# XY mixer and one-hot initial state (qubit i: first cluster, qubit i+n: second cluster)
mixer = 0.0
init = Circuit()
for i in range(n):
    mixer += 0.5*X[i]*X[i+n] + 0.5*Y[i]*Y[i+n]
    init.h[i].cx[i, i+n].x[i]  # prepares (|01> + |10>)/sqrt(2) on qubits i and i+n

result = qaoa(hamiltonian, step, init, mixer)
b = result.circuit.run(shots=10)
sample = b.most_common(1)[0][0]
print("sample: " + str(sample))

# Color nodes by the first-cluster register
nx.draw(G, **options, node_color=[int(s) for s in list(sample[:n])])
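Because the XY mixer only swaps |01> and |10>, the one-hot structure of the initial state is preserved in an ideal simulation, so each node should end up in exactly one cluster. A small check of the sampled bit string (assuming, as above, that the first n bits are the first cluster and the last n bits are the second):
# Split the bit string into the two cluster registers
cluster0 = [int(s) for s in sample[:n]]
cluster1 = [int(s) for s in sample[n:]]

# Each node should belong to exactly one of the two clusters (holds in an ideal simulation)
for i in range(n):
    assert cluster0[i] + cluster1[i] == 1, f"node {i} violates the one-hot constraint"
print("first cluster:", [i for i in range(n) if cluster0[i] == 1])
print("second cluster:", [i for i in range(n) if cluster1[i] == 1])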
Linear regression
Next is the linear regression problem. A QUBO construction for this has been proposed in the research literature.
First, consider a vector Y of target variables, a matrix X of explanatory variables, and a vector of weights w. We consider the model that minimizes the error between Y and the prediction Xw obtained from the weights w.
$$\min_w E(w) = \| Xw - Y \|^2$$
Expanding this, we get
$$\min_w E(w) = w^T X^T X w - 2 w^T X^T Y + Y^T Y$$
Here $Y^T Y$ is a constant, so it can be dropped from the optimization:
$$\min_w E(w) = w^T X^T X w - 2 w^T X^T Y$$
Now we solve this. However, QUBO uses binary {0, 1} variables, so we encode the continuous weights as $w = P\hat{w}$, where $\hat{w}$ is a binary vector and P is a fixed precision matrix.
$$\min_{\hat{w}} E(\hat{w}) = \hat{w}^T P^T X^T X P \hat{w} - 2 \hat{w}^T P^T X^T Y$$
Finally, with $A = P^T X^T X P$ and $b = -2 P^T X^T Y$, we get
$$\min_{\hat{w}} E(\hat{w}) = \hat{w}^T A \hat{w} + \hat{w}^T b$$
We just have to minimize this expression.
First, to construct w we choose K, the number of bits used to represent each weight.
import numpy as np
K = 2 # bit number for weight
d = 3 # Number of features
p = [2 ** (-i-1) for i in range(K)]  # bit values 1/2, 1/4, ... for each weight
I = np.eye(d)
P = np.kron(I, p)  # d x (d*K) precision matrix mapping binary vectors to weights
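With K = 2 bits per weight, each weight is encoded as a combination of 1/2 and 1/4, so the representable values are {0, 0.25, 0.5, 0.75}. A quick illustration of P and of one arbitrarily chosen encoding:
print(P)
# [[0.5  0.25 0.   0.   0.   0.  ]
#  [0.   0.   0.5  0.25 0.   0.  ]
#  [0.   0.   0.   0.   0.5  0.25]]

# Example: the binary vector (0,1, 1,1, 1,0) encodes the weights (0.25, 0.75, 0.5)
w_hat = np.array([0, 1, 1, 1, 1, 0])
print(P @ w_hat)  # [0.25 0.75 0.5 ]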
w = np.array([0.25, 0.75, 0.5])  # true weights (all representable with K = 2 bits)
X = np.random.rand(100, 3)
y = X @ w + np.random.normal(scale = 0.05, size = X.shape[0])  # targets with small Gaussian noise
A = np.dot(np.dot(P.T, X.T), np.dot(X, P))  # A = P^T X^T X P
b = -2 * np.dot(np.dot(P.T, X.T), y)        # b = -2 P^T X^T y
QUBO = A + np.diag(b)                        # linear term goes on the diagonal
QUBO = np.triu(QUBO + QUBO.T) - np.eye(QUBO.shape[0]) * QUBO  # upper-triangular QUBO form
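As a quick consistency check, for any binary vector the upper-triangular QUBO matrix should reproduce $\hat{w}^T A \hat{w} + \hat{w}^T b$, since $q_i^2 = q_i$ for binary variables; a small sketch:
# Compare the two forms of the objective on a random binary vector
w_hat = np.random.randint(0, 2, size=QUBO.shape[0])
val_qubo = w_hat @ QUBO @ w_hat
val_obj = w_hat @ A @ w_hat + w_hat @ b
print(np.isclose(val_qubo, val_obj))  # expected: True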
from blueqat.utils import qaoa
from blueqat.pauli import from_qubo
step = 2
hamiltonian = from_qubo(QUBO)
result = qaoa(hamiltonian, step)
b = result.circuit.run(shots=100)
sample = b.most_common(1)[0][0]
print(sample)
011111
w_qa = P @ [int(i) for i in list(sample)]
print("Predicted weight:", w_qa)
print("True weight:", w)