
I want to quantize quantum computation with PyTorch!

Yuichiro Minato

2024/01/08 15:08

!pip --no-cache-dir install -U torch
Requirement already satisfied: torch in /opt/conda/lib/python3.10/site-packages (2.1.2)
Requirement already satisfied: filelock in /opt/conda/lib/python3.10/site-packages (from torch) (3.13.1)
Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.10/site-packages (from torch) (4.5.0)
Requirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from torch) (1.12)
Requirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch) (3.2.1)
Requirement already satisfied: jinja2 in /opt/conda/lib/python3.10/site-packages (from torch) (3.1.2)
Requirement already satisfied: fsspec in /opt/conda/lib/python3.10/site-packages (from torch) (2023.5.0)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.105)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.105)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /opt/conda/lib/python3.10/site-packages (from torch) (12.1.105)

Now that quantum computation can be done in PyTorch via tensor networks, I figured it might be easy to quantize it as well. The plan is to apply an H gate and feed in the quantum state q = [1, 0] as the input.

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class M(nn.Module):
    def __init__(self):
        super().__init__()
        # quantization stub
        self.quant = torch.ao.quantization.QuantStub()
        # Hadamard gate
        self.H = torch.tensor([[1, 1], [1, -1]]) / np.sqrt(2)
        # dequant stub (not planning to use it, but defined just in case)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        # quantum computation via einsum (hopefully...)
        x = torch.einsum('a,ab->b', (x, self.quant(self.H)))
        return x
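Before bringing quantization into this, a quick plain-float sanity check (my own sketch, not part of the original run) shows what the einsum is expected to return: applying H to q = [1, 0] should give [1/√2, 1/√2] ≈ [0.7071, 0.7071].

import torch
import numpy as np

# Plain float32 check, no quantization: apply H to |0> = [1, 0]
H = torch.tensor([[1., 1.], [1., -1.]]) / np.sqrt(2)
q = torch.tensor([1., 0.])
print(torch.einsum('a,ab->b', q, H))  # tensor([0.7071, 0.7071])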

Now run static quantization on this model.

# create a model instance
model_fp32 = M()

# model must be set to eval mode for static quantization logic to work
model_fp32.eval()
model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# prepare: insert observers to record activation statistics
model_fp32_prepared = torch.ao.quantization.prepare(model_fp32)

# calibrate with the input state q = [1, 0]
input_fp32 = torch.tensor([1., 0])
res1 = model_fp32_prepared(input_fp32)

# convert to an int8 model and run it
model_int8 = torch.ao.quantization.convert(model_fp32_prepared)
res2 = model_int8(input_fp32)

This raised an error. It seems the quantized CPU backend does not support einsum.

https://discuss.pytorch.org/t/fwfm-quantization/94144

Looking at this discussion:

"currently we don’t have a quantized kernel for einsum, we would be happy to review a PR if someone is interested in implementing. In the meanwhile, a workaround could be to dequantize → floating point einsum → quantize."

So the quantized kernels apparently do not support einsum, and it sounds like they would welcome a contribution from anyone in the world with time to implement one. This post is just a memo to myself.
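For reference, a minimal sketch of the workaround suggested in that discussion (dequantize, run the einsum in float, then quantize the result again) might look like the following; the class name MWork and the extra output stub are illustrative, not from the original code:

class MWork(nn.Module):
    def __init__(self):
        super().__init__()
        # input quantization stub
        self.quant = torch.ao.quantization.QuantStub()
        # dequantize right before the unsupported op
        self.dequant = torch.ao.quantization.DeQuantStub()
        # separate stub to re-quantize the einsum result
        self.requant = torch.ao.quantization.QuantStub()
        # Hadamard gate
        self.H = torch.tensor([[1., 1.], [1., -1.]]) / np.sqrt(2)

    def forward(self, x):
        x = self.quant(x)
        # there is no quantized einsum kernel, so drop back to float here
        x = self.dequant(x)
        x = torch.einsum('a,ab->b', x, self.H)
        # quantize again so downstream quantized layers could consume it
        x = self.requant(x)
        return x

The same prepare/convert steps as above would apply to this model unchanged. Instead of going that route, the version below simply writes out the 2x2 matrix-vector product by hand so that no einsum kernel is needed: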

class M2(nn.Module):
    def __init__(self):
        super().__init__()
        # quantization stub
        self.quant = torch.ao.quantization.QuantStub()
        # Hadamard gate
        self.H = torch.tensor([[1, 1], [1, -1]]) / np.sqrt(2)

    def forward(self, x):
        x = self.quant(x)
        H = self.quant(self.H)
        # write out the matrix-vector product by hand instead of using einsum
        x = torch.tensor([H[0][0] * x[0] + H[0][1] * x[1]])
        return x
# create a model instance
model_fp32 = M2()

# model must be set to eval mode for static quantization logic to work
model_fp32.eval()
model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_fp32_prepared = torch.ao.quantization.prepare(model_fp32)

# calibrate, convert, and run as before
input_fp32 = torch.tensor([1., 0])
res1 = model_fp32_prepared(input_fp32)
model_int8 = torch.ao.quantization.convert(model_fp32_prepared)
res2 = model_int8(input_fp32)
res2
tensor(0.9995, size=(), dtype=torch.quint8,
       quantization_scheme=torch.per_tensor_affine, scale=0.007870171219110489,
       zero_point=0)
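The printed value follows the affine quantization mapping value = (int_repr − zero_point) × scale, i.e. here roughly 127 × 0.007870 ≈ 0.9995. A quick way to inspect this (assuming res2 is the quantized tensor above):

# Look at the raw quint8 value and the dequantized float behind the printout.
print(res2.int_repr())    # underlying integer representation
print(res2.dequantize())  # (int_repr - zero_point) * scale as a float tensor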
