Internship: Basic Operations on Tensors

Yuichiro Minato

2023/12/03 13:38

#Internship

Our current main focus at work is the use of tensor networks in quantum circuits and machine learning. This area can be quite challenging due to the lack of available texts, so I intend to document it here on our blog.

This time, we use Google's TensorNetwork library.

Basic Knowledge

Tensor

Tensor networks consist of nodes and edges, with each node/edge carrying a specific meaning.

A single node represents a scalar quantity.

 *

A single node with one leg represents a vector.

 *-

A single node with two legs represents a matrix.

-*-

A node with a larger number of legs represents a k-order tensor.

-*-
 |
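As a minimal NumPy sketch of the same idea (plain NumPy, not yet the library), the number of legs of a node is simply the number of axes of the underlying array:

import numpy as np

# The number of "legs" of a node equals the number of array axes (ndim).
scalar  = np.array(3.0)        # 0 legs: scalar
vector  = np.ones(4)           # 1 leg : vector
matrix  = np.ones((4, 4))      # 2 legs: matrix
tensor3 = np.ones((4, 4, 4))   # 3 legs: order-3 tensor

print(scalar.ndim, vector.ndim, matrix.ndim, tensor3.ndim)  # 0 1 2 3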

Contraction

You can perform computations by connecting different tensors and performing contractions.

Example: when you contract a vector with a matrix, the result is a vector.

-*- -*
-*--*
-*

Example: when you contract a matrix with a matrix, the result is a matrix.

-*- -*-
-*--*-
-*-
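Before reaching for a library, here is a minimal NumPy sketch of the same two contractions written with einsum; the same shapes and results appear again in the library examples below.

import numpy as np

v = np.ones(5)           # vector: one leg
M = np.ones((5, 5))      # matrix: two legs

# vector x matrix: contract the shared leg, one free leg remains -> vector
print(np.einsum('i,ij->j', v, M))      # [5. 5. 5. 5. 5.]

A = np.ones((5, 3))
B = np.ones((5, 2))

# matrix x matrix: contract the shared leg, two free legs remain -> matrix
print(np.einsum('ij,ik->jk', A, B))    # 3x2 matrix of 5s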

Decomposition

You can also decompose tensors using various algorithms, with one of the most representative ones being SVD (Singular Value Decomposition).

-*-
-*- -*-
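A minimal NumPy sketch of an SVD split, absorbing the singular values symmetrically into the two new nodes (one common convention; plain NumPy rather than any particular library helper):

import numpy as np

# A matrix (a node with two legs) ...
m = np.random.randn(6, 4)

# ... split by SVD into two factors (two nodes joined by a new internal leg)
u, s, vh = np.linalg.svd(m, full_matrices=False)
left = u * np.sqrt(s)              # shape (6, 4)
right = np.sqrt(s)[:, None] * vh   # shape (4, 4)

# Contracting the two new nodes over the internal leg gives back the original matrix
print(np.allclose(left @ right, m))  # True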

Google TensorNetwork Library

Let's run the same operations using the library.

https://github.com/google/TensorNetwork

!pip install tensornetwork
Requirement already satisfied: tensornetwork in /opt/conda/lib/python3.10/site-packages (0.4.6)

Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.10/site-packages (from tensornetwork) (1.23.5)

Requirement already satisfied: graphviz>=0.11.1 in /opt/conda/lib/python3.10/site-packages (from tensornetwork) (0.20.1)

Requirement already satisfied: opt-einsum>=2.3.0 in /opt/conda/lib/python3.10/site-packages (from tensornetwork) (3.3.0)

Requirement already satisfied: h5py>=2.9.0 in /opt/conda/lib/python3.10/site-packages (from tensornetwork) (3.8.0)

Requirement already satisfied: scipy>=1.1 in /opt/conda/lib/python3.10/site-packages (from tensornetwork) (1.10.1)

First, load the tool and prepare two vectors.

import numpy as np
import tensornetwork as tn

a = tn.Node(np.ones((10,)))
b = tn.Node(np.ones((10,)))

Next, connect the two nodes with an edge between them.

edge = a[0] ^ b[0]

Finally, contract the specified edge to complete the computation.

final_node = tn.contract(edge)
print(final_node.tensor)
10.0

vector and matrix -> vector

a = tn.Node(np.ones((5)))
b = tn.Node(np.ones((5,5)))
edge = a[0] ^ b[0]
final_node = tn.contract(edge)
print(final_node.tensor)
[5. 5. 5. 5. 5.]

matrix and matrix -> matrix

a = tn.Node(np.ones((5,3)))
b = tn.Node(np.ones((5,2)))
edge = a[0] ^ b[0]
final_node = tn.contract(edge)
print(final_node.tensor)
[[5. 5.]

 [5. 5.]

 [5. 5.]]

TensorNetwork as quantum circuit backend

You can simulate quantum circuits by representing qubits as tensor-network vectors and quantum gates as matrices or higher-order tensors.
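As a minimal sketch using the TensorNetwork library from the previous section (the quimb examples below do this at circuit scale), applying a Hadamard gate to |0> is just a contraction of a one-leg node with a two-leg node:

import numpy as np
import tensornetwork as tn

q0 = tn.Node(np.array([1.0, 0.0]))                              # qubit |0>: node with one leg
H = tn.Node(np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2))   # Hadamard gate: node with two legs

edge = q0[0] ^ H[1]       # connect the qubit leg to the gate's input leg
out = tn.contract(edge)
print(out.tensor)         # [0.70710678 0.70710678], i.e. |+>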

This time, we use quimb to simulate quantum circuits.

!pip install quimb
Requirement already satisfied: quimb in /opt/conda/lib/python3.10/site-packages (1.4.0)

Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.10/site-packages (from quimb) (1.23.5)

Requirement already satisfied: scipy>=1.0.0 in /opt/conda/lib/python3.10/site-packages (from quimb) (1.10.1)

Requirement already satisfied: numba>=0.39 in /opt/conda/lib/python3.10/site-packages (from quimb) (0.57.0)

Requirement already satisfied: psutil>=4.3.1 in /opt/conda/lib/python3.10/site-packages (from quimb) (5.9.5)

Requirement already satisfied: cytoolz>=0.8.0 in /opt/conda/lib/python3.10/site-packages (from quimb) (0.12.1)

Requirement already satisfied: tqdm>=4 in /opt/conda/lib/python3.10/site-packages (from quimb) (4.65.0)

Requirement already satisfied: toolz>=0.8.0 in /opt/conda/lib/python3.10/site-packages (from cytoolz>=0.8.0->quimb) (0.12.0)

Requirement already satisfied: llvmlite<0.41,>=0.40.0dev0 in /opt/conda/lib/python3.10/site-packages (from numba>=0.39->quimb) (0.40.0)
%config InlineBackend.figure_formats = ['svg']
import quimb as qu
import quimb.tensor as qtn

Prepare 80 qubits and create a GHZ state.

from collections import Counter  # needed for counting the sampled bitstrings

# number of qubits
N = 80

# initialization of the circuit
circ = qtn.Circuit(N)

# apply an H gate to the first qubit
circ.h(0)

# make a GHZ state using a CX chain
for i in range(N-1):
    circ.cx(i, i+1)

# get a sample from the quantum state
Counter(circ.sample(1))
Counter({'11111111111111111111111111111111111111111111111111111111111111111111111111111111': 1})

Take samples using the constructed quantum circuit.

%%time
from collections import Counter

# get 100 samples
Counter(circ.sample(100))
CPU times: user 802 ms, sys: 13.7 ms, total: 816 ms

Wall time: 953 ms
Counter({'00000000000000000000000000000000000000000000000000000000000000000000000000000000': 58,

         '11111111111111111111111111111111111111111111111111111111111111111111111111111111': 42})

Let's make a larger circuit.

circ = qtn.Circuit(10)

for i in range(10):
    circ.apply_gate('H', i, gate_round=0)

for r in range(1, 9):
    # even pairs
    for i in range(0, 10, 2):
        circ.apply_gate('CNOT', i, i + 1, gate_round=r)
    # Z-rotations
    for i in range(10):
        circ.apply_gate('RZ', 1.234, i, gate_round=r)
    # odd pairs
    for i in range(1, 9, 2):
        circ.apply_gate('CZ', i, i + 1, gate_round=r)
    # X-rotations
    for i in range(10):
        circ.apply_gate('RX', 1.234, i, gate_round=r)

# final layer of H gates
for i in range(10):
    circ.apply_gate('H', i, gate_round=r + 1)

circ
<Circuit(n=10, num_gates=252, gate_opts={'contract': 'auto-split-gate', 'propagate_tags': 'register'})>

Next, let's draw the quantum circuit.

circ.psi.draw(color=['PSI0', 'H', 'CNOT', 'RZ', 'RX', 'CZ'])
<Figure size 600x600 with 1 Axes>

In this drawing, you would normally expect each quantum gate to appear as a matrix, but the two-qubit unitaries have been decomposed into pairs of order-3 tensors (the circuit was built with contract='auto-split-gate'), so gates are not always kept as full unitary matrices in a tensor network. Additionally, the initial state is not stored as one big state vector: each qubit appears as its own independent vector node.
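A minimal NumPy sketch of this gate splitting (an illustration, not quimb's internal code): reshape a CNOT into an order-4 tensor, group the control legs and the target legs, and SVD it into two order-3 tensors joined by a bond of dimension 2.

import numpy as np

# CNOT as a 4x4 unitary, viewed as an order-4 tensor with index order
# (control_out, target_out, control_in, target_in)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float).reshape(2, 2, 2, 2)

# Group the two control legs and the two target legs, flatten to a matrix, and SVD it
m = cnot.transpose(0, 2, 1, 3).reshape(4, 4)
u, s, vh = np.linalg.svd(m)
rank = int(np.sum(s > 1e-12))
print(rank)  # 2: CNOT splits into two order-3 tensors with bond dimension 2

# Absorb the singular values symmetrically into both halves
left = (u[:, :rank] * np.sqrt(s[:rank])).reshape(2, 2, rank)          # lives on the control qubit
right = (np.sqrt(s[:rank])[:, None] * vh[:rank]).reshape(rank, 2, 2)  # lives on the target qubit

# Contracting the two halves over the bond index reproduces the original gate
rebuilt = np.einsum('acr,rbd->abcd', left, right)
print(np.allclose(rebuilt, cnot))  # True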

State Vector, Amplitude, Expectation Value and Sampling

When performing quantum circuit calculations with tensor networks, you need to decide beforehand what you want to compute: the state vector, a probability amplitude, an expectation value, or samples.

State Vector

Contracting all nodes results in a single vector.

circ.to_dense()
[[ 0.022278+0.044826j]

 [ 0.047567+0.001852j]

 [-0.028239+0.01407j ]

 ...

 [ 0.016   -0.008447j]

 [-0.025437-0.015225j]

 [-0.033285-0.030653j]]

Amplitude

By specifying a bit string and connecting tensors, you can obtain the corresponding probability amplitude.

circ.amplitude('0000011111')
(0.004559038599179494+0.02661946964089579j)

Expectation Value

By contracting the same quantum circuit with a single term of the corresponding Hamiltonian inserted, you can obtain that term's expectation value.

circ.local_expectation(qu.pauli('X') & qu.pauli('Z'), (4, 5))
(-0.07785735654723336+3.903127820947816e-17j)

Sampling

You can perform sampling.

for item in circ.sample(1):
    print(item)
1110101110

Speeding up neural networks using TensorNetwork in Keras

From the following Google article, we will learn a technique that uses tensor networks to decompose weight matrices and speed up neural networks.

https://blog.tensorflow.org/2020/02/speeding-up-neural-networks-using-tensornetwork-in-keras.html

A normal fully connected neural network

!pip install tensorflow
Collecting tensorflow

  Downloading tensorflow-2.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (475.2 MB)

     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 475.2/475.2 MB 821.1 kB/s eta 0:00:00

Collecting absl-py>=1.0.0 (from tensorflow)

  Downloading absl_py-2.0.0-py3-none-any.whl (130 kB)

     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 130.2/130.2 kB 2.3 MB/s eta 0:00:00

Collecting astunparse>=1.6.0 (from tensorflow)

  Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)

Collecting flatbuffers>=23.5.26 (from tensorflow)

  Downloading flatbuffers-23.5.26-py2.py3-none-any.whl (26 kB)
import tensorflow as tf
import tensornetwork as tn
import matplotlib.pyplot as plt

tn.set_default_backend("tensorflow")

Next, we create a TNLayer that replaces a dense weight matrix with two smaller connected nodes named 'a' and 'b'.

class TNLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(TNLayer, self).__init__()
        # Create the variables for the layer.
        self.a_var = tf.Variable(tf.random.normal(shape=(32, 32, 2), stddev=1.0/32.0),
                                 name="a", trainable=True)
        self.b_var = tf.Variable(tf.random.normal(shape=(32, 32, 2), stddev=1.0/32.0),
                                 name="b", trainable=True)
        self.bias = tf.Variable(tf.zeros(shape=(32, 32)), name="bias", trainable=True)

    def call(self, inputs):
        # Define the contraction.
        # We break it out so we can parallelize a batch using
        # tf.vectorized_map (see below).
        def f(input_vec, a_var, b_var, bias_var):
            # Reshape to a matrix instead of a vector.
            input_vec = tf.reshape(input_vec, (32, 32))

            # Now we create the network.
            a = tn.Node(a_var, backend="tensorflow")
            b = tn.Node(b_var, backend="tensorflow")
            x_node = tn.Node(input_vec, backend="tensorflow")
            a[1] ^ x_node[0]
            b[1] ^ x_node[1]
            a[2] ^ b[2]

            # The TN should now look like this
            #   |     |
            #   a --- b
            #    \   /
            #      x

            # Now we begin the contraction.
            c = a @ x_node
            result = (c @ b).tensor

            # To make the code shorter, we also could've used Ncon.
            # The above few lines of code is the same as this:
            # result = tn.ncon([x, a_var, b_var], [[1, 2], [-1, 1, 3], [-2, 2, 3]])

            # Finally, add bias.
            return result + bias_var

        # To deal with a batch of items, we can use the tf.vectorized_map
        # function.
        # https://www.tensorflow.org/api_docs/python/tf/vectorized_map
        result = tf.vectorized_map(
            lambda vec: f(vec, self.a_var, self.b_var, self.bias), inputs)
        return tf.nn.relu(tf.reshape(result, (-1, 1024)))

First, let's examine a fully connected model with two hidden layers of 1024 units each.

Dense = tf.keras.layers.Dense

fc_model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(2,)),
        Dense(1024, activation=tf.nn.swish),
        Dense(1024, activation=tf.nn.swish),
        Dense(1, activation=None)
    ])
fc_model.summary()
Model: "sequential_5"

_________________________________________________________________

 Layer (type)                Output Shape              Param #   

=================================================================

 dense_13 (Dense)            (None, 1024)              3072      

                                                                 

 dense_14 (Dense)            (None, 1024)              1049600   

                                                                 

 dense_15 (Dense)            (None, 1)                 1025      

                                                                 

Now we replace the second dense layer with the TN layer.

tn_model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(2,)),
        Dense(1024, activation=tf.nn.relu),
        # Here use a TN layer instead of the dense layer.
        TNLayer(),
        Dense(1, activation=None)
    ]
)
tn_model.summary()
Model: "sequential_6"

_________________________________________________________________

 Layer (type)                Output Shape              Param #   

=================================================================

 dense_16 (Dense)            (None, 1024)              3072      

                                                                 

 tn_layer_2 (TNLayer)        (None, 1024)              5120      

                                                                 

 dense_17 (Dense)            (None, 1)                 1025      

                                                                 

You can verify that the number of parameters has decreased; see the quick check below.
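A quick sanity check based on the two model summaries above shows where the reduction comes from:

# Dense middle layer: a 1024x1024 weight matrix plus a bias of 1024
dense_params = 1024 * 1024 + 1024
print(dense_params)   # 1049600, matching dense_14 above

# TN layer: two (32, 32, 2) tensors plus a (32, 32) bias
tn_params = 2 * (32 * 32 * 2) + 32 * 32
print(tn_params)      # 5120, matching tn_layer_2 above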

Next, we will proceed with training.

X = np.concatenate([np.random.randn(20, 2) + np.array([3, 3]),
                    np.random.randn(20, 2) + np.array([-3, -3]),
                    np.random.randn(20, 2) + np.array([-3, 3]),
                    np.random.randn(20, 2) + np.array([3, -3])])
Y = np.concatenate([np.ones((40)), -np.ones((40))])

First, we fit the FC model.

fc_model.compile(optimizer="adam", loss="mean_squared_error")
fc_model.fit(X, Y, epochs=300, verbose=1)
Epoch 1/300

3/3 [==============================] - 1s 18ms/step - loss: 0.2224

Epoch 2/300

3/3 [==============================] - 0s 27ms/step - loss: 0.1684

Epoch 3/300

3/3 [==============================] - 0s 31ms/step - loss: 0.1041

Epoch 4/300

3/3 [==============================] - 0s 25ms/step - loss: 0.0508

Epoch 5/300

3/3 [==============================] - 0s 19ms/step - loss: 0.0572
<keras.src.callbacks.History at 0x7f61271938e0>

Next, we fit the TN model.

tn_model.compile(optimizer="adam", loss="mean_squared_error")
tn_model.fit(X, Y, epochs=300, verbose=1)
Epoch 1/300

3/3 [==============================] - 1s 6ms/step - loss: 0.0076

Epoch 2/300

3/3 [==============================] - 0s 5ms/step - loss: 0.0063

Epoch 3/300

3/3 [==============================] - 0s 6ms/step - loss: 0.0047

Epoch 4/300

3/3 [==============================] - 0s 5ms/step - loss: 0.0043

Epoch 5/300

3/3 [==============================] - 0s 6ms/step - loss: 0.0035
<keras.src.callbacks.History at 0x7f6126a91750>

Let's check the prediction results.

Result plot for the FC model

h = 1.0
x_min, x_max = X[:, 0].min() - 5, X[:, 0].max() + 5
y_min, y_max = X[:, 1].min() - 5, X[:, 1].max() + 5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

# here "model" is your model's prediction (classification) function
Z = fc_model.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z)
plt.axis('off')

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
13/13 [==============================] - 0s 6ms/step
<matplotlib.collections.PathCollection at 0x7f6126d13730>
<Figure size 640x480 with 1 Axes>

Result plot for the TN model

h = 1.0
x_min, x_max = X[:, 0].min() - 5, X[:, 0].max() + 5
y_min, y_max = X[:, 1].min() - 5, X[:, 1].max() + 5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

# here "model" is your model's prediction (classification) function
Z = tn_model.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z)
plt.axis('off')

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
13/13 [==============================] - 0s 2ms/step
<matplotlib.collections.PathCollection at 0x7f6126ebbe80>
<Figure size 640x480 with 1 Axes>
Finally, as a bonus, let's use PyTorch's automatic differentiation to optimize a single variational rotation parameter so that the measured expectation value is minimized.

!pip install torch
Collecting torch

  Downloading torch-2.1.1-cp310-cp310-manylinux1_x86_64.whl (670.2 MB)

     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 670.2/670.2 MB 1.1 MB/s eta 0:00:00

Collecting filelock (from torch)

  Downloading filelock-3.13.1-py3-none-any.whl (11 kB)

Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.10/site-packages (from torch) (4.5.0)

Requirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from torch) (1.12)

Requirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch) (2.8.8)

Requirement already satisfied: jinja2 in /opt/conda/lib/python3.10/site-packages (from torch) (3.1.2)

Requirement already satisfied: fsspec in /opt/conda/lib/python3.10/site-packages (from torch) (2023.5.0)
import matplotlib.pyplot as plt
import torch.optim as optim
import torch
import numpy as np
%matplotlib inline

# qubit |0>
x = torch.tensor([1., 0.])

# variational parameter
a = torch.tensor([0.2], requires_grad=True)

# list for results
arr = []

# the first argument is the list of parameters to optimize
op = optim.Adam([a], lr=0.05)

for _ in range(100):
    # rotation matrix built from the parameter
    y = [[torch.cos(a/2), -torch.sin(a/2)], [torch.sin(a/2), torch.cos(a/2)]]
    # apply the rotation to the qubit
    z = [x[0]*y[0][0] + x[1]*y[0][1], x[0]*y[1][0] + x[1]*y[1][1]]
    # expectation value of Z: |amp0|^2 - |amp1|^2
    expt = torch.abs(z[0])**2 - torch.abs(z[1])**2
    arr.append(expt.item())  # record the current expectation value
    op.zero_grad()
    expt.backward()
    op.step()

plt.plot(arr)
plt.show()
<Figure size 640x480 with 1 Axes>

Understanding the fundamentals of tensor manipulation advances the comprehension of both quantum computing and machine learning.

© 2024, blueqat Inc. All rights reserved