Building an LLM That Explains Quantum Computing Articles with RAG

Yuichiro Minato

2024/04/12 05:48

Hello. Let's put RAG to work. Today's topic is report writing. Keeping up with recent articles on quantum computing and generative AI is tough, and it's especially hard to go back to older articles and brush up on what you've read. This time, let's see whether RAG can help with writing reports.

This time we'll use Mistral + LangChain + Gradio.

pip install --quiet transformers accelerate langchain langchain-community sentence-transformers faiss-gpu pypdf gradio

from transformers import AutoTokenizer, pipeline

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline("text-generation", model=model_id, tokenizer=tokenizer, device=0, max_new_tokens=300)

query = 'what is sandbox and softbank doing on quantum business?'
pipe(query)

At this point, the model answers:

Sandbox is a subsidiary of SoftBank, and they are indeed working on quantum computing. They have a quantum computing division called Quantum Matter Inc.

which is a ridiculous answer: clearly wrong, a storm of hallucinations. Next, as practice, we'll load a few websites.

from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings
from langchain_text_splitters import CharacterTextSplitter

loader = WebBaseLoader(["https://www.digicert.com/jp/faq/cryptography/what-is-post-quantum-cryptography#:~:text=耐量子暗号方式(量子,指している用語です。", "https://www.sbbit.jp/article/cont1/85249", "https://quantumcomputingreport.com/softbank-leverages-sandboxaqs-aqtive-guard-to-identify-it-infrastructure-vulnerabilities/"])

documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = HuggingFaceEmbeddings(
    model_name="intfloat/multilingual-e5-large"
)

db = FAISS.from_documents(docs, embeddings)
retriever = db.as_retriever()
print(db.index.ntotal)

We set things up so that relevant-looking information can be retrieved from a few websites.

The index ended up containing 33 entries.
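
Before wiring the store into a chain, it can help to sanity-check what the index actually returns. Below is a minimal sketch (the sample query string is my own illustration, not from the original post) that uses the FAISS vector store's similarity_search to print the closest chunks and their source URLs.

# Sanity check (sketch): which stored chunks are closest to a sample query?
# sample_query is an illustrative string, not part of the original post
sample_query = "What is SandboxAQ's AQtive Guard?"
for i, doc in enumerate(db.similarity_search(sample_query, k=3)):
    print(i, doc.metadata.get("source"))
    print(doc.page_content[:100])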

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_community.llms import HuggingFacePipeline

llm = HuggingFacePipeline(pipeline=pipe)

template = """次のコンテキストを踏まえた上で日本語で答えて下さい:

{context}

Question: {question}
"""

prompt = ChatPromptTemplate.from_template(template)

def format_docs(docs):
    return "\n\n".join([d.page_content for d in docs])

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

query = 'What is the name of quantum computing product provided by sandbox in this article?'
response = chain.invoke(query)

# The raw generation can include the echoed prompt and extra sections,
# so keep only the part after "Answer:" and drop any trailing
# "Question:" / "Japanese Translation:" blocks.
if "Answer:" in response:
    response = response.split("Answer:")[1]
if "Question:" in response:
    response = response.split("Question:")[0]
if "Japanese Translation:" in response:
    response = response.split("Japanese Translation:")[1]

response = response.replace("\n\n", "")
response

I changed the query, and this time it returned:

'AQtive Guard is the name of the quantum computing product provided by Sandbox in this article. It is a cryptography management platform that helps identify IT infrastructure vulnerabilities and supports compliance with NIST initiatives on post-quantum cryptography.'

It gave quite a detailed explanation.
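
To double-check that the answer is grounded in the loaded pages rather than in the model's built-in knowledge, you can also inspect which chunks the retriever pulled for the same query. A small sketch, not part of the original post:

# Show the source chunks the retriever returned for this query (sketch)
for doc in retriever.get_relevant_documents(query):
    print(doc.metadata.get("source"), "->", doc.page_content[:80])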

import gradio as gr
import os

def add_text(history, text):
    history = history + [(text, None)]
    return history, gr.Textbox(value="", interactive=False)

def bot(history):
    query = history[-1][0]

    response = chain.invoke(query)
    if "Answer:" in response:
        response = response.split("Answer:")[1]
    if "Question:" in response:
        response = response.split("Question:")[0]
    if "Japanese Translation:" in response:
        response = response.split("Japanese Translation:")[1]
    response = response.replace("\n\n", "")

    history[-1][1] = ""
    for character in response:
        history[-1][1] += character
        yield history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot([])
    with gr.Row():
        txt = gr.Textbox(
            scale=4,
            show_label=False,
            container=False
        )
    clear = gr.Button("Clear")

    txt_msg = txt.submit(add_text, [chatbot, txt], [chatbot, txt], queue=False).then(bot, chatbot, chatbot)
    txt_msg.then(lambda: gr.Textbox(interactive=True), None, [txt], queue=False)
    clear.click(lambda: None, None, chatbot, queue=False)

demo.queue()
demo.launch(share=True)

After that, Gradio makes things easy.

We ended up with a nice-looking interface. Full-scale report writing seems well within reach from here. It also helps that it properly explains the related terminology. That's all for now.
