Internship lecture 5: Machine Learning using the Ising Model

Yuichiro Minato

2022/09/13 01:41

This is the fifth technical lecture for interns in 2022. So far we have looked at classical machine learning, optimization, quantum combinatorial optimization, and quantum machine learning. This final part covers machine learning with quantum optimization.

The two models we will implement are clustering and linear regression. Both can be formulated with the Ising model: an optimization problem can either be mapped to the model directly, or a model can be constructed that solves the problem indirectly.

Clustering

Clustering is the problem of dividing nodes into a specified number of groups according to some condition; here we divide them into a specified number of groups according to the distance between nodes. We use the Ising model, which requires (number of nodes) × (number of clusters) qubits, as sketched below.
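As a concrete sketch of this encoding (the two-cluster case, matching the code later in this post; the labels are hypothetical and not from the lecture), qubit i is set to 1 when node i belongs to cluster 0, and qubit i+n is set to 1 when node i belongs to cluster 1:

```python
# Illustrative two-cluster encoding (hypothetical example):
# qubit i     -> node i in cluster 0
# qubit i + n -> node i in cluster 1
n = 3
assignment = [0, 1, 0]  # cluster label of each node
bits = [1 if c == 0 else 0 for c in assignment] + \
       [1 if c == 1 else 0 for c in assignment]
print(bits)  # [1, 0, 1, 0, 1, 0]
```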

QUBO

The formulation is as follows. Each node is copied once per cluster, with the variable set to 1 if the node belongs to that cluster and 0 otherwise. There are two terms. The first is the cost function, which minimizes the sum of the distances within each cluster.

$$H_A = \sum d_{ij} q_i q_j \quad (q_i \in \{0, 1\})$$

Also, the constraint condition is that each node must belong to exactly one cluster.

$$H_B = \sum \left( \sum_{i}^{n} q_i - 1 \right)^2$$
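To see why this enforces the constraint, take a single node copied into two clusters: the penalty $(q_0 + q_1 - 1)^2$ is zero only when exactly one of the two bits is set. A quick check (illustrative, not from the original lecture):

```python
# (q0 + q1 - 1)^2 vanishes only for the one-hot assignments 01 and 10
for q0 in (0, 1):
    for q1 in (0, 1):
        print(q0, q1, (q0 + q1 - 1) ** 2)
```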

Here, quantum entanglement can be used to eliminate the second constraint term. In that case, the following Hamiltonian is used as the mixer. It is an operator that exchanges the values of |01> and |10>.

$$H_{XY} = (X_0 X_1 + Y_0 Y_1)/2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
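We can verify this matrix numerically with plain numpy (a minimal check of my own, assuming only the standard Pauli matrices):

```python
import numpy as np

# Pauli X and Y
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])

# (X0 X1 + Y0 Y1) / 2 swaps |01> and |10> and annihilates |00>, |11>
h_xy = (np.kron(sx, sx) + np.kron(sy, sy)) / 2
print(h_xy.real)
```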
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from blueqat import Circuit
from blueqat.utils import qaoa
from blueqat.pauli import X, Y, Z
from blueqat.pauli import qubo_bit as q
```
```python
import networkx as nx
import matplotlib.pyplot as plt

n = 10       # number of nodes
m = 30       # number of edges
seed = 15
options = {'node_size': 100}

G = nx.gnm_random_graph(n, m, seed=seed)
nx.draw(G, **options)
```
[Figure: the generated random graph with 10 nodes and 30 edges]
```python
# Cost Hamiltonian: penalize edges whose endpoints share a cluster
# (qubits 0..n-1 encode cluster 0, qubits n..2n-1 encode cluster 1)
hamiltonian = sum(q(e[0])*q(e[1]) + q(e[0]+n)*q(e[1]+n) for e in G.edges)
step = 1

# XY mixer and entangled initial state: each node starts in (|01> + |10>)/sqrt(2)
# across its two cluster qubits, so the one-hot constraint is preserved
mixer = 0.0
init = Circuit()
for i in range(n):
    mixer += 0.5*X[i]*X[i+n] + 0.5*Y[i]*Y[i+n]
    init.h[i].cx[i, i+n].x[i]

result = qaoa(hamiltonian, step, init, mixer)
b = result.circuit.run(shots=10)
sample = b.most_common(1)[0][0]
print("sample:" + str(sample))
nx.draw(G, **options, node_color=[int(s) for s in list(sample[:n])])
```
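As a follow-up sanity check (my addition, using the variables from the cell above), the XY mixer together with the entangled initial state should keep every node in exactly one cluster, i.e. bits i and i+n of the sampled bitstring always differ:

```python
# Each node must sit in exactly one of the two clusters
print(all(sample[i] != sample[i + n] for i in range(n)))  # expected: True
```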

Linear regression calculation

Next is the linear regression problem. A construction for this is given in the following paper.

Reference: https://arxiv.org/abs/2008.02355

First, consider a vector Y of objective variables, a matrix X of explanatory variables, and a vector of weights w. We minimize the error between Y and the prediction Xw obtained from the weights w.

$$\min_w E(w) = \| Xw - Y \|^2$$

Expanding the norm, this becomes

$$\min_w E(w) = w^T X^T X w - 2 w^T X^T Y + Y^T Y$$

Here $Y^T Y$ is a constant, so it can be removed from the optimization:

$$\min_w E(w) = w^T X^T X w - 2 w^T X^T Y$$

Now we solve this. The problem is that a QUBO uses binary (0/1) variables, so we encode the continuous weights as $w = P \hat w$, where $\hat w$ is a binary vector and $P$ is a fixed precision matrix:

$$\min_w E(w) = \hat w^T P^T X^T X P \hat w - 2 \hat w^T P^T X^T Y$$
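Concretely, with $K$ fractional bits per weight (matching the code below), each weight is expanded in binary fractions:

$$w_j = \sum_{k=1}^{K} 2^{-k} \hat w_{jk}, \qquad P = I_d \otimes \left( \tfrac{1}{2}, \tfrac{1}{4}, \dots, 2^{-K} \right)$$

so with $K = 2$ each weight can only take the values $\{0, 0.25, 0.5, 0.75\}$.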

Finally, with $A = P^T X^T X P$ and $b = -2 P^T X^T Y$, we get

$$\min_w E(w) = \hat w^T A \hat w + \hat w^T b$$

We just have to minimize this expression. First, to build the precision matrix $P$, we set the number of bits $K$ used per weight.

```python
import numpy as np

K = 2  # number of bits per weight
d = 3  # number of features
p = [2 ** (-i-1) for i in range(K)]  # binary fractions 1/2, 1/4, ...
I = np.eye(d)
P = np.kron(I, p)  # precision matrix: one block of p per weight
```
```python
# Synthetic data: true weights and noisy observations
w = np.array([0.25, 0.75, 0.5])
X = np.random.rand(100, 3)
y = X @ w + np.random.normal(scale=0.05, size=X.shape[0])
```
```python
# Build the QUBO: A gives the couplings, b goes on the diagonal
A = np.dot(np.dot(P.T, X.T), np.dot(X, P))
b = -2 * np.dot(np.dot(P.T, X.T), y)
QUBO = A + np.diag(b)
# Fold into upper-triangular form (Q_ij + Q_ji above the diagonal)
QUBO = np.triu(QUBO + QUBO.T) - np.eye(QUBO.shape[0]) * QUBO
```
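Since this instance has only $Kd = 6$ binary variables, we can brute-force the exact QUBO minimum and use it to judge the QAOA result (a sanity check I am adding; it is not part of the original lecture):

```python
import itertools

# Enumerate all 2**6 bitstrings and evaluate x^T Q x on the upper-triangular QUBO
best_val, best_bits = min(
    (np.array(bits) @ QUBO @ np.array(bits), bits)
    for bits in itertools.product([0, 1], repeat=QUBO.shape[0])
)
print("exact minimum:", best_val, best_bits)
```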
```python
from blueqat.utils import qaoa
from blueqat.pauli import from_qubo

step = 2
hamiltonian = from_qubo(QUBO)
result = qaoa(hamiltonian, step)
b = result.circuit.run(shots=100)
sample = b.most_common(1)[0][0]
print(sample)
```
011111
```python
# Decode the binary solution back to continuous weights
w_qa = P @ [int(i) for i in list(sample)]
print("Predicted weight:", w_qa)
print("True weight:", w)
```
Predicted weight: [0.25 0.75 0.75]

True weight: [0.25 0.75 0.5 ]

Since QAOA is a heuristic sampler, the recovered weights do not always match the true values exactly; here the third weight is off by one bit of precision, and occasionally we get the exact result.
