
Internship: Entanglement and MPS

Yuichiro Minato

2023/12/20 12:58

Quantum Entanglement

When dealing with quantum entanglement in quantum computing problems, it is not always necessary to represent the full quantum state as a state vector. Depending on the problem, quantum circuits and their simulation can be streamlined by discarding unnecessary entanglement in the quantum state.

MPS (Matrix Product States)

Tensor networks provide a way to control how much quantum entanglement a representation keeps. Here we look at this using Matrix Product States (MPS).

```python
from quimb.tensor import *

p = MPS_rand_state(L=10, bond_dim=50)
p.show()
```

```
 50 50 50 50 50 50 50 50 50
●──●──●──●──●──●──●──●──●──●
│  │  │  │  │  │  │  │  │  │
```

Typically, quantum states are represented as large vectors, as the name "state vector" suggests; in tensor network notation, such a state is drawn as a single node with one arm.

However, quantum states can be represented in various ways, and the diagram above uses a one-dimensional model known as Matrix Product States (MPS). In an MPS, adjacent qubits are connected in a single row. Starting from a state vector, the qubits can be separated out one at a time by performing successive Singular Value Decompositions (SVD).
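As a minimal numpy sketch of this splitting procedure (the state and variable names here are illustrative, not from the article's code): a 3-qubit state vector is reshaped into matrices and split by repeated SVDs into three MPS cores.

```python
import numpy as np

# A random, normalized 3-qubit state vector (8 amplitudes).
rng = np.random.default_rng(0)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)

# Split qubit 0 from qubits 1-2 with an SVD.
m = psi.reshape(2, 4)            # (physical leg of qubit 0) x (rest)
u, s, vh = np.linalg.svd(m, full_matrices=False)
core0 = u                        # shape (2, 2): first MPS core
rest = np.diag(s) @ vh           # shape (2, 4): remaining state

# Split qubit 1 from qubit 2.
m = rest.reshape(4, 2)           # (bond * physical leg 1) x (physical leg 2)
u, s, vh = np.linalg.svd(m, full_matrices=False)
core1 = u.reshape(2, 2, 2)       # (left bond, physical, right bond)
core2 = np.diag(s) @ vh          # shape (2, 2): last MPS core

# Contract the cores back together and compare with the original state.
recon = np.einsum('ia,ajb,bk->ijk', core0, core1, core2).reshape(8)
print(np.allclose(recon, psi))   # True: the SVDs are exact
```

Since no singular values were discarded, the chain of cores reproduces the original state exactly; truncation only enters when the bond is deliberately cut down.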

Between adjacent qubits there is a value known as the "bond dimension," which quantifies the degree of entanglement between them. A higher bond dimension brings the representation closer to the full state vector, while for weakly entangled states the bond dimension can be reduced with little loss of accuracy, saving memory and computation.
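A small sketch of this trade-off (a toy two-qubit example of my own, not from the article): truncating the bond to dimension 1 is exact for a product state but loses weight for an entangled state.

```python
import numpy as np

def truncation_error(psi, keep):
    """Error introduced when the bond across the middle cut of a
    two-qubit state is truncated to `keep` singular values."""
    m = psi.reshape(2, 2)                  # cut between the two qubits
    u, s, vh = np.linalg.svd(m)
    s_trunc = np.zeros_like(s)
    s_trunc[:keep] = s[:keep]              # discard the smallest values
    approx = (u * s_trunc) @ vh            # truncated reconstruction
    return np.linalg.norm(approx.ravel() - psi)

# Product state |00>: bond dimension 1 is already exact.
product = np.array([1, 0, 0, 0], dtype=float)
print(truncation_error(product, keep=1))   # 0.0

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, so
# truncating to bond dimension 1 loses half the weight.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(truncation_error(bell, keep=1))      # ~0.707
```

The singular values across a cut are exactly the Schmidt coefficients, which is why the bond dimension directly measures entanglement across that cut.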

MPO

The above applies to quantum states, but quantum gates themselves can also be represented similarly as Matrix Product Operators (MPO).

```python
A = MPO_rand_herm(10, bond_dim=50)
A.show()
```

```
│50│50│50│50│50│50│50│50│50│
●──●──●──●──●──●──●──●──●──●
│  │  │  │  │  │  │  │  │  │
```

In this form, each node has arms extending both upward and downward, so an MPO can act as an operator on a one-dimensional chain of qubits, for example by being applied to an MPS.

Creating MPS

MPS is also known as the Tensor Train (TT) decomposition in machine learning. Here we create one using the 'tensorly' library.

```shell
pip install -U tensorly
```

```
Requirement already satisfied: tensorly in /opt/conda/lib/python3.10/site-packages (0.8.1)
Requirement already satisfied: numpy in /opt/conda/lib/python3.10/site-packages (from tensorly) (1.23.5)
Requirement already satisfied: scipy in /opt/conda/lib/python3.10/site-packages (from tensorly) (1.10.1)
Note: you may need to restart the kernel to use updated packages.
```

Let's start by importing the library.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import matrix_product_state
```

Create a tensor from numpy. This is a rank-3 tensor.

```python
tensor = tl.tensor(np.arange(24).reshape((3, 4, 2)), dtype=tl.float64)
print(tensor)
```

```
[[[ 0.  1.]
  [ 2.  3.]
  [ 4.  5.]
  [ 6.  7.]]

 [[ 8.  9.]
  [10. 11.]
  [12. 13.]
  [14. 15.]]

 [[16. 17.]
  [18. 19.]
  [20. 21.]
  [22. 23.]]]
```
To decompose this, we use matrix_product_state with the specified bond dimensions.

```python
ttcore = matrix_product_state(tensor, rank=[1, 2, 2, 1])
print([c.shape for c in ttcore])
```

```
[(1, 3, 2), (2, 4, 2), (2, 2, 1)]
```
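As a quick sanity check on storage (a side calculation of mine, not from the article): for a tensor this small, the TT cores actually hold slightly more numbers than the dense tensor; the savings from low TT ranks only kick in for larger tensors.

```python
from math import prod

# Core shapes reported by the decomposition above.
core_shapes = [(1, 3, 2), (2, 4, 2), (2, 2, 1)]
tt_params = sum(prod(s) for s in core_shapes)   # 6 + 16 + 4
dense_params = 3 * 4 * 2

print(tt_params, dense_params)  # 26 24: no savings at this tiny size
```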

The number of arms has changed, reflecting the decomposition into three cores. You can also inspect their contents.

```python
np.array(ttcore, dtype=object)
```

```
array([array([[[ 0.16419877,  0.89798224],
               [ 0.50559916,  0.27875225],
               [ 0.84699956, -0.34047773]]]),
       array([[[ 0.39515015,  0.1924406 ],
               [ 0.46037119,  0.0836525 ],
               [ 0.52559222, -0.0251356 ],
               [ 0.59081325, -0.1339237 ]],

              [[-0.05990062,  0.56939785],
               [-0.02394228,  0.50941965],
               [ 0.01201607,  0.44944145],
               [ 0.04797442,  0.38946325]]]),
       array([[[44.97949845],
               [47.95056535]],

              [[-0.91908612],
               [ 0.86213859]]])], dtype=object)
```

You can also revert it back to a tensor.

```python
reconstructed_tensor = tl.tt_to_tensor(ttcore)
print(reconstructed_tensor)
```

```
[[[3.41242664e-15 1.00000000e+00]
  [2.00000000e+00 3.00000000e+00]
  [4.00000000e+00 5.00000000e+00]
  [6.00000000e+00 7.00000000e+00]]

 [[8.00000000e+00 9.00000000e+00]
  [1.00000000e+01 1.10000000e+01]
  [1.20000000e+01 1.30000000e+01]
  [1.40000000e+01 1.50000000e+01]]
```
Apart from tiny floating-point errors, the original tensor is recovered. MPS and tensor networks have come up in several recent articles, so I encourage you to search for them and give them a try.

MPS in quantum circuits

MPS can be created not only in quantum computer simulators but also on actual quantum circuits. By stacking two-qubit gates in a staircase pattern, entanglement is generated between adjacent qubits, producing an MPS. Furthermore, increasing the number of qubits used allows the bond dimension to be increased.

Figure: staircase arrangement of two-qubit gates generating an MPS (./img/231220mps.png, image source: https://iopscience.iop.org/article/10.1088/2058-9565/ab4eb5/pdf)
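An illustrative numpy sketch of the staircase idea (my own toy simulation, not the construction from the referenced paper): applying one staircase layer of random two-qubit gates to |0…0⟩ yields a state whose Schmidt rank across every cut is at most 2, i.e. an MPS of bond dimension 2.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                                 # start in |00000>

def rand_gate():
    """Haar-random two-qubit gate via QR of a complex Gaussian matrix."""
    m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_gate(psi, gate, i, n):
    """Apply a two-qubit gate to adjacent qubits (i, i+1)."""
    t = psi.reshape(2**i, 4, 2**(n - i - 2))
    return np.einsum('ab,xby->xay', gate, t).reshape(-1)

# One staircase layer: gates on (0,1), (1,2), ..., (n-2, n-1).
for i in range(n - 1):
    psi = apply_gate(psi, rand_gate(), i, n)

# Schmidt rank (= needed bond dimension) across each cut.
ranks = [np.linalg.matrix_rank(psi.reshape(2**(k + 1), -1))
         for k in range(n - 1)]
print(ranks)   # every rank is at most 2: one layer gives bond dimension 2
```

Only one gate crosses each cut, which is why a single layer cannot raise the Schmidt rank above 2; stacking further staircase layers multiplies the achievable bond dimension.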

© 2024, blueqat Inc. All rights reserved