Generative model with QCBM

Yuichiro Minato

2022/06/04 16:24

Sampling is one of the areas in which quantum computers excel. In this article we introduce a generative model for quantum computers called the QCBM (Quantum Circuit Born Machine), which learns the distribution of data and returns samples that follow that distribution.
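The name comes from the Born rule: a parameterized circuit prepares a state |ψ(θ)>, a measurement returns the outcome x with probability p(x) = |<x|ψ(θ)>|^2, and training adjusts the parameters θ so that this output distribution approaches the data distribution.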

First, load the tools.

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Next, let's create the distribution we will learn.

#num of qubits
N = 4

#num of bins
n_bins = 2**N
print(n_bins)

samples = np.random.normal(0, 1, 1000)
hist = plt.hist(samples, bins = n_bins)
data_prob = hist[0]
print(data_prob)
16
[  3.   8.  14.  27.  65.  94. 125. 141. 160. 134.  96.  58.  39.  28.
   6.   2.]
[Figure: histogram of the 1,000 samples drawn from a standard normal distribution, split into 16 bins]
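
Each of the 16 bins will be identified with a 4-bit measurement outcome of the circuit. As a small illustration (this cell is not in the original notebook), the mapping used later via format(i, '04b') looks like this:

#illustrative: each bin index maps to a 4-bit measurement string
for i in range(3):
    print(i, '->', format(i, '04b'))
# 0 -> 0000
# 1 -> 0001
# 2 -> 0010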

This time we will use a generic layered variational circuit, similar in structure to a neural network. Fix the random seed and choose how many times the circuit block is repeated; the number of parameters is then determined by the number of repetitions and the number of qubits.

from blueqat import Circuit
import time

np.random.seed(30)

#num of circuit repeat
n_repeat = 3

#num of params
n_params = N*3*n_repeat
print(n_params)

#initial parameters
param_init = [np.random.rand()*np.pi*2 for i in range(n_params)]
36
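
Each repetition applies three rotations (RX, RY, RZ) to each qubit, so with N = 4 qubits and n_repeat = 3 repetitions there are 4 × 3 × 3 = 36 parameters, matching the printed value.
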
#arbitrary rotation layer: RX, RY, RZ on each qubit
def arbi(para):
    circ1 = Circuit()
    for i in range(N):
        circ1.rx(para[0+i*3])[i].ry(para[1+i*3])[i].rz(para[2+i*3])[i]
    return circ1

#entangling layer: ring of CX gates between neighboring qubits
def loop():
    circ2 = Circuit()
    for i in range(N):
        circ2.cx[i, (i+1)%N]
    return circ2

#QCBM circuit: alternate rotation and entangling layers n_repeat times
def qcbm(a):
    u = Circuit()
    for i in range(n_repeat):
        s_param = i*3*N
        e_param = (i+1)*3*N
        u += arbi(a[s_param:e_param])
        u += loop()
    return u
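
As a quick sanity check (this cell is not in the original article), we can assemble the circuit from the initial parameters and sample it; blueqat returns a Counter of measured bit strings:

#sanity check (illustrative): sample the untrained circuit
c0 = qcbm(param_init)
print(c0.m[:].run(shots = 100))
#e.g. Counter({'0101': 13, '1110': 9, ...})  (values will vary)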

#negative log-likelihood loss from sampled counts
def nll_loss(data, sampled, shots):
    D = np.sum(data)
    nll_cost = 0
    eps = 1/shots    #floor to avoid log(0) when a bin is never sampled
    for i in range(n_bins):
        key = format(i, '04b')
        prob = sampled[key] / shots
        cost = - np.log(max(eps, prob)) / D
        nll_cost += cost * data[i]
    return nll_cost
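
In formula form the loss is L(θ) = -(1/D) Σ_i n_i log max(ε, p_i(θ)), where n_i are the data counts, D their sum, and p_i the sampled model probabilities. As a quick check (this cell is not in the original notebook), a uniform model over the 16 bins should give -log(1/16) = log 16 ≈ 2.77 regardless of the data:

#illustrative check (not from the article): a uniform model over all 16 bins
#gives a loss of log(16), about 2.77, for any data counts
uniform = {format(i, '04b'): 256 for i in range(n_bins)}   #256*16 = 4096 shots
print(nll_loss(data_prob, uniform, 4096))   #about 2.7726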

#initial parameters
param = param_init.copy()

#result list
loss_hist = []

#finite-difference step and learning rate
h = 0.01
e = 0.01

#iterations
nsteps = 100

start = time.time()
shots = 4096

Let's run the calculation right away. Each step evaluates the loss, then approximates the gradient by a one-sided finite difference (re-evaluating the loss with each parameter shifted by h) and takes a gradient-descent step of size e.

for i in range(nsteps):
    c = qcbm(param)
    res = c.m[:].run(shots = shots)
    loss = nll_loss(data_prob, res, shots)

    if i%10 == 0:
        print(loss)
    loss_hist.append(loss)

    #one-sided finite-difference gradient descent over all parameters
    new_param = [0 for i in range(len(param))]
    for j in range(len(param)):
        _param = param.copy()
        _param[j] += h
        c = qcbm(_param)
        res = c.m[:].run(shots = shots)
        _loss = nll_loss(data_prob, res, shots)
        new_param[j] = param[j] - e*(_loss - loss)/h

    param = new_param

plt.plot(loss_hist)
plt.show()

print(time.time() - start)
3.080950736331575
2.997602031143021
2.931538621183997
2.796660192315474
2.78776062008557
2.762643180497116
2.696617208035424
2.6903078473919115
2.6783984056098062
2.658556654441279
[Figure: training loss over the 100 optimization steps]

1189.0689253807068
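
The last number is the elapsed time in seconds, roughly 20 minutes: each of the 100 steps runs 37 circuit evaluations (one baseline loss plus one per parameter) at 4096 shots each. Note that the loss has also dropped below the uniform-model baseline of log 16 ≈ 2.77 computed earlier.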

Now that we have the results, let's compare the distribution produced by the initial random parameters with the distribution after this short training run.

c = qcbm(param_init)
res = c.m[:].run(shots = shots)

before_learning = []
for i in range(n_bins):
    key = format(i, '04b')
    before_learning.append(res[key])
plt.bar([i for i in range(n_bins)], before_learning, 1)
plt.title("Output distribution from parameters before learning")
Text(0.5, 1.0, 'Output distribution from parameters before learning')
[Figure: output distribution from parameters before learning]

c = qcbm(param)
res = c.m[:].run(shots = shots)

after_learning = []
for i in range(n_bins):
    key = format(i, '04b')
    after_learning.append(res[key])

plt.bar([i for i in range(n_bins)], after_learning, 1)
plt.title("Output distribution from parameters after learning")
Text(0.5, 1.0, 'Output distribution from parameters after learning')
[Figure: output distribution from parameters after learning]
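
For a more direct comparison (a sketch, not part of the original notebook), the target histogram can be rescaled to the same number of shots and overlaid on the learned output:

#illustrative overlay (not from the article): rescale the target counts to
#the 4096-shot scale and plot them over the learned distribution
target = data_prob / np.sum(data_prob) * shots
plt.bar([i for i in range(n_bins)], after_learning, 1, label="QCBM after learning")
plt.plot([i for i in range(n_bins)], target, "k--", label="target (rescaled)")
plt.legend()
plt.show()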

Visually, we can confirm that some learning progress has been made. That is all.
