I want to quantize quantum computation with PyTorch!

Yuichiro Minato

2024/01/08 15:08

!pip --no-cache-dir install -U torch
Requirement already satisfied: torch in /opt/conda/lib/python3.10/site-packages (2.1.2)
Requirement already satisfied: filelock in /opt/conda/lib/python3.10/site-packages (from torch) (3.13.1)
Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.10/site-packages (from torch) (4.5.0)
Requirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from torch) (1.12)
Requirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch) (3.2.1)
Requirement already satisfied: jinja2 in /opt/conda/lib/python3.10/site-packages (from torch) (3.1.2)
Requirement already satisfied: fsspec in /opt/conda/lib/python3.10/site-packages (from torch) (2023.5.0)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.105)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.105)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.105)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /opt/conda/lib/python3.10/site-packages (from torch) (8.9.2.26)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.3.1)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /opt/conda/lib/python3.10/site-packages (from torch) (11.0.2.54)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /opt/conda/lib/python3.10/site-packages (from torch) (10.3.2.106)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /opt/conda/lib/python3.10/site-packages (from torch) (11.4.5.107)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.0.106)
Requirement already satisfied: nvidia-nccl-cu12==2.18.1 in /opt/conda/lib/python3.10/site-packages (from torch) (2.18.1)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.105)
Requirement already satisfied: triton==2.1.0 in /opt/conda/lib/python3.10/site-packages (from torch) (2.1.0)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /opt/conda/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch) (12.3.101)
Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/lib/python3.10/site-packages (from jinja2->torch) (2.1.2)
Requirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.10/site-packages (from sympy->torch) (1.3.0)

Now that quantum computation is possible in PyTorch, I wondered whether it could easily be quantized via tensor networks.
The plan: apply an H gate to the quantum state q=[1,0] given as input.
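
For reference, here is a minimal fp32 sketch (no quantization yet, my own addition) of the computation we are after: contracting H with |0> via einsum should give [1/√2, 1/√2] ≈ [0.7071, 0.7071].

import torch
import numpy as np

# Hadamard gate
H = torch.tensor([[1., 1.], [1., -1.]]) / np.sqrt(2)

# quantum state |0> = [1, 0]
q = torch.tensor([1., 0.])

# out_b = sum_a q_a * H_ab  (matrix-vector product as a tensor contraction)
out = torch.einsum('a,ab->b', q, H)
print(out)  # tensor([0.7071, 0.7071])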

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class M(nn.Module):
    def __init__(self):
        super().__init__()

        # quantization stub
        self.quant = torch.ao.quantization.QuantStub()

        # Hadamard gate
        self.H = torch.tensor([[1., 1.], [1., -1.]]) / np.sqrt(2)

        # dequant stub (probably won't be used, but included just in case)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)

        # quantum computation via einsum... hopefully
        x = torch.einsum('a,ab->b', x, self.quant(self.H))
        return x

Now let's run static quantization on this model.

# create a model instance
model_fp32 = M()

# model must be set to eval mode for static quantization logic to work
model_fp32.eval()

model_fp32.qconfig = torch.ao.quantization.get_default_qconfig('fbgemm')

model_fp32_prepared = torch.ao.quantization.prepare(model_fp32)

# calibrate with a representative input
input_fp32 = torch.tensor([1., 0])
res1 = model_fp32_prepared(input_fp32)

# convert the calibrated model to int8
model_int8 = torch.ao.quantization.convert(model_fp32_prepared)

res2 = model_int8(input_fp32)

I got an error. It seems the QuantizedCPU backend does not support einsum.

https://discuss.pytorch.org/t/fwfm-quantization/94144

Looking at this discussion:

"currently we don’t have a quantized kernel for einsum, we would be happy to review a PR if someone is interested in implementing. In the meanwhile, a workaround could be to dequantize → floating point einsum → quantize."
So the quantized kernels apparently don't support einsum yet, and it sounds like they would welcome a PR from anyone out there with time to implement one. This post is mostly a memo to myself. In the meantime, I tried writing the matrix-vector product out by hand, element by element, instead of using einsum:

class M2(nn.Module):
    def __init__(self):
        super().__init__()

        # quantization stub
        self.quant = torch.ao.quantization.QuantStub()

        # Hadamard gate
        self.H = torch.tensor([[1., 1.], [1., -1.]]) / np.sqrt(2)

    def forward(self, x):
        x = self.quant(x)
        H = self.quant(self.H)

        # manual matrix-vector product instead of einsum
        # (only the first output amplitude; enough for this test)
        x = torch.tensor([H[0][0]*x[0] + H[0][1]*x[1]])
        return x

# create a model instance
model_fp32 = M2()

# model must be set to eval mode for static quantization logic to work
model_fp32.eval()

model_fp32.qconfig = torch.ao.quantization.get_default_qconfig('fbgemm')

model_fp32_prepared = torch.ao.quantization.prepare(model_fp32)

# calibrate with a representative input
input_fp32 = torch.tensor([1., 0])
res1 = model_fp32_prepared(input_fp32)

# convert the calibrated model to int8
model_int8 = torch.ao.quantization.convert(model_fp32_prepared)

res2 = model_int8(input_fp32)
res2
tensor(0.9995, size=(), dtype=torch.quint8,
       quantization_scheme=torch.per_tensor_affine, scale=0.007870171219110489,
       zero_point=0)
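
To inspect a quantized result like res2, quantized tensors provide int_repr() and dequantize(), which expose the raw integer storage and the recovered fp32 value; a quick check (my own addition):

# raw uint8 value stored behind the quantized result
res2.int_repr()

# recovered fp32 value: (int_repr - zero_point) * scale
res2.dequantize()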
