The main focus of our current work is the use of tensor networks in quantum circuits and machine learning. This area can be challenging to get into because good introductory texts are scarce, so I am documenting it here on our blog.
This time we use Google's TensorNetwork library (and later quimb for the quantum circuit part).
Basic Knowledge
Tensor
Tensor networks consist of nodes and edges, with each node/edge carrying a specific meaning.
A single node represents a scalar quantity.
*
A single node with one leg represents a vector.
*-
A single node with two legs represents a matrix.
-*-
A node with a larger number of legs represents an order-k tensor (here, k = 3):
-*-
|
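In code (a minimal sketch with the TensorNetwork library; the number of legs of a node is simply the number of axes of the underlying array):

import numpy as np
import tensornetwork as tn

scalar = tn.Node(np.array(1.0))        # 0 legs -> scalar
vector = tn.Node(np.ones(5))           # 1 leg  -> vector
matrix = tn.Node(np.ones((5, 5)))      # 2 legs -> matrix
tensor3 = tn.Node(np.ones((5, 5, 5)))  # 3 legs -> order-3 tensor

# each axis becomes one dangling edge (leg)
print([len(n.edges) for n in (scalar, vector, matrix, tensor3)])  # [0, 1, 2, 3]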
Contraction
You can perform computations by connecting different tensors and performing contractions.
ex) When you contract a vector with a matrix, the result is a vector.
-*- -*
-*--*
-*
ex) When you contract a matrix with a matrix, the result is a matrix.
-*- -*-
-*--*-
-*-
Decomposition
You can also decompose tensors using various algorithms, with one of the most representative ones being SVD (Singular Value Decomposition).
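As a minimal sketch (using tn.split_node from the TensorNetwork library, which performs an SVD internally; the keyword names below follow its API as I understand it), a matrix node can be split into two connected nodes with a truncated bond:

import numpy as np
import tensornetwork as tn

m = tn.Node(np.random.rand(8, 8))
# SVD-split the matrix node into two nodes joined by a new bond edge,
# keeping at most 4 singular values; the third return value holds the
# discarded singular values
u, vh, truncated_values = tn.split_node(
    m, left_edges=[m[0]], right_edges=[m[1]], max_singular_values=4)
print(u.tensor.shape, vh.tensor.shape)  # (8, 4) (4, 8)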
The contraction examples sketched above look like this in code.
vector and matrix -> vector
import numpy as np
import tensornetwork as tn

a = tn.Node(np.ones(5))
b = tn.Node(np.ones((5, 5)))
edge = a[0] ^ b[0]
final_node = tn.contract(edge)
print(final_node.tensor)
[5. 5. 5. 5. 5.]
matrix and matrix -> matrix
a = tn.Node(np.ones((5, 3)))
b = tn.Node(np.ones((5, 2)))
edge = a[0] ^ b[0]
final_node = tn.contract(edge)
print(final_node.tensor)
[[5. 5.]
[5. 5.]
[5. 5.]]
TensorNetwork as quantum circuit backend
You can simulate quantum circuits with tensor networks by mapping qubits to vectors and quantum gates to matrices or higher-order tensors.
Here we use quimb to simulate quantum circuits.
!pip install quimb
%config InlineBackend.figure_formats = ['svg']
import quimb as qu
import quimb.tensor as qtn
Prepare 80 qubits and create a GHZ state.
from collections import Counter

# number of qubits
N = 80
# initialize the circuit
circ = qtn.Circuit(N)
# apply an H gate to the first qubit
circ.h(0)
# build the GHZ state with a chain of CX gates
for i in range(N-1):
    circ.cx(i, i+1)
# draw one sample from the state; a GHZ state returns either all zeros
# or all ones, e.g. Counter({'00...0': 1})
Counter(circ.sample(1))
Next, a deeper 10-qubit circuit built from alternating layers of entangling gates and rotations:

circ = qtn.Circuit(10)

# initial layer of H gates
for i in range(10):
    circ.apply_gate('H', i, gate_round=0)

for r in range(1, 9):
    # even pairs
    for i in range(0, 10, 2):
        circ.apply_gate('CNOT', i, i + 1, gate_round=r)
    # Z-rotations
    for i in range(10):
        circ.apply_gate('RZ', 1.234, i, gate_round=r)
    # odd pairs
    for i in range(1, 9, 2):
        circ.apply_gate('CZ', i, i + 1, gate_round=r)
    # X-rotations
    for i in range(10):
        circ.apply_gate('RX', 1.234, i, gate_round=r)

# final layer of H gates
for i in range(10):
    circ.apply_gate('H', i, gate_round=r + 1)

circ
In the drawing above you might expect each gate to appear as a single unitary matrix, but quimb splits each two-qubit unitary into a pair of order-3 tensors. In a tensor network the individual tensors do not have to be unitary. Likewise, the initial state is not stored as one big state vector: each qubit starts as its own independent vector node.
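You can inspect this structure directly; quimb exposes the final state as a tensor network via circ.psi (a sketch; drawing options vary between quimb versions):

# the state as a tensor network: each gate is its own tensor, with
# two-qubit gates already split into pairs of order-3 tensors
psi_tn = circ.psi
print(psi_tn)   # lists the tensors and their index structure
psi_tn.draw()   # visualize the nodes and bonds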
State Vector, Amplitude, Expectation Value and Sampling
When performing quantum circuit calculations with tensor networks, you need to decide the objective beforehand: the full state vector, a probability amplitude, an expectation value, or samples. Each objective leads to a different contraction.
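For example, quimb's Circuit interface has a method for each objective (a small sketch on 4 qubits; for large N the dense state vector becomes intractable, while amplitudes, local expectation values, and samples can still be contracted efficiently):

import quimb as qu
import quimb.tensor as qtn

circ = qtn.Circuit(4)
circ.h(0)
for i in range(3):
    circ.cx(i, i + 1)

psi = circ.to_dense()                           # full state vector (small N only)
amp = circ.amplitude('0000')                    # one amplitude, <0000|psi>
ez0 = circ.local_expectation(qu.pauli('Z'), 0)  # expectation of Z on qubit 0
samples = list(circ.sample(10))                 # 10 samples from the output distribution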
TensorNetwork as machine learning layer
Tensor networks can also replace dense layers inside a neural network. Here we switch back to the TensorNetwork library with the TensorFlow backend.

import numpy as np
import tensorflow as tf
import tensornetwork as tn
import matplotlib.pyplot as plt
tn.set_default_backend("tensorflow")
Next, we create a TNLayer that decomposes the dense weight matrix into two smaller nodes named 'a' and 'b'.
class TNLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(TNLayer, self).__init__()
        # Create the variables for the layer.
        self.a_var = tf.Variable(
            tf.random.normal(shape=(32, 32, 2), stddev=1.0 / 32.0),
            name="a", trainable=True)
        self.b_var = tf.Variable(
            tf.random.normal(shape=(32, 32, 2), stddev=1.0 / 32.0),
            name="b", trainable=True)
        self.bias = tf.Variable(
            tf.zeros(shape=(32, 32)), name="bias", trainable=True)

    def call(self, inputs):
        # Define the contraction.
        # We break it out so we can parallelize a batch using
        # tf.vectorized_map (see below).
        def f(input_vec, a_var, b_var, bias_var):
            # Reshape the input to a matrix instead of a vector.
            input_vec = tf.reshape(input_vec, (32, 32))
            # Now we create the network.
            a = tn.Node(a_var, backend="tensorflow")
            b = tn.Node(b_var, backend="tensorflow")
            x_node = tn.Node(input_vec, backend="tensorflow")
            a[1] ^ x_node[0]
            b[1] ^ x_node[1]
            a[2] ^ b[2]
            # The TN should now look like this:
            #   |     |
            #   a --- b
            #    \   /
            #      x
            # Now we begin the contraction.
            c = a @ x_node
            result = (c @ b).tensor
            # To make the code shorter, we could also have used ncon;
            # the contraction above is equivalent to:
            # result = tn.ncon([input_vec, a_var, b_var],
            #                  [[1, 2], [-1, 1, 3], [-2, 2, 3]])
            # Finally, add the bias.
            return result + bias_var

        # To deal with a batch of items, we can use tf.vectorized_map:
        # https://www.tensorflow.org/api_docs/python/tf/vectorized_map
        result = tf.vectorized_map(
            lambda vec: f(vec, self.a_var, self.b_var, self.bias), inputs)
        return tf.nn.relu(tf.reshape(result, (-1, 1024)))
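The point of this decomposition is parameter count: a dense 1024 → 1024 layer carries 1024 × 1024 + 1024 = 1,049,600 parameters, while the TN layer above has only 2 × (32 × 32 × 2) + 32 × 32 = 5,120, because the large weight matrix is replaced by two small order-3 tensors sharing one bond index.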
Now let's stack two 1024-unit layers, using the TN layer in place of the second dense layer.
tn_model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(1024, activation=tf.nn.relu),
        # Use a TN layer instead of a second dense layer.
        TNLayer(),
        tf.keras.layers.Dense(1, activation=None)
    ]
)
tn_model.summary()
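To train it, a minimal setup could look like the following (X and Y stand for the 2-D training points and binary labels from the original tutorial and are not defined in this post; since the last layer has no activation, we use a from-logits loss):

tn_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=["accuracy"])
tn_model.fit(X, Y, epochs=300, verbose=0)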
result plot for FC model

h = 1.0
x_min, x_max = X[:, 0].min() - 5, X[:, 0].max() + 5
y_min, y_max = X[:, 1].min() - 5, X[:, 1].max() + 5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
# "fc_model" is a fully-connected baseline model; its definition, like the
# training data X and Y, comes from the original tutorial and is not shown here
Z = fc_model.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z)
plt.axis('off')
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
[plot: decision boundary of the FC model with the training points overlaid]
result plot for TN model
h = 1.0
x_min, x_max = X[:, 0].min() - 5, X[:, 0].max() + 5
y_min, y_max = X[:, 1].min() - 5, X[:, 1].max() + 5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
# get predictions from the TN model over the same grid
Z = tn_model.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z)
plt.axis('off')
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
Finally, tensor contractions are differentiable, so circuit parameters can be optimized directly. Here is a small variational example in PyTorch: we rotate a single qubit and let Adam drive the Z expectation value down to its minimum.

import matplotlib.pyplot as plt
import torch
import torch.optim as optim
import numpy as np
%matplotlib inline
# qubit initialized to |0>
x = torch.tensor([1., 0.])
# variational parameter
a = torch.tensor([0.2], requires_grad=True)
# list for results
arr = []
# the first argument is the list of parameters to optimize
op = optim.Adam([a], lr=0.05)

for _ in range(100):
    # rotation matrix RY(a) built from the parameter
    y = [[torch.cos(a/2), -torch.sin(a/2)], [torch.sin(a/2), torch.cos(a/2)]]
    # apply the rotation to the state: z = y @ x
    z = [x[0]*y[0][0] + x[1]*y[0][1], x[0]*y[1][0] + x[1]*y[1][1]]
    # expectation value of Z: |z0|^2 - |z1|^2 = cos(a)
    expt = torch.abs(z[0])**2 - torch.abs(z[1])**2
    arr.append(expt.item())  # record the current expectation value
    op.zero_grad()
    expt.backward()
    op.step()

plt.plot(arr)
plt.show()
[plot: the expectation value decreasing from about 0.98 (cos 0.2) toward -1 over the 100 optimization steps]
A solid grasp of basic tensor manipulation pays off in both quantum computing and machine learning.