LangChain (1): Introduction

Modules that LangChain provides:

Prompt templates: templates for different types of prompts, such as "chatbot"-style templates, ELI5 question answering, etc.

LLMs: large language models such as GPT-3, BLOOM, etc.

Agents: an agent uses an LLM to decide which actions to take; tools such as web search or a calculator can be plugged in, and everything is packaged into a logical loop of operations (see the sketch after this list).

Memory: carries context into the model across calls; covers both short-term and long-term memory.
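
To get a feel for how these pieces fit together, here is a minimal agent sketch. It is illustrative only: it assumes an OpenAI API key is already configured and uses the load_tools / initialize_agent names from the early LangChain 0.0.x releases this post is based on.

# Minimal agent sketch (illustrative; assumes OPENAI_API_KEY is set in the environment)
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)   # a calculator tool backed by the LLM
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
agent.run("If I am 6 ft 4 inches, how tall am I in centimeters?")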

Creating Prompts in LangChain

!pip install langchain

from langchain import PromptTemplate

template = """Question: {question} Answer: """

prompt = PromptTemplate(
    template=template,
    input_variables=['question']
)

# user question
question = "Which NFL team won the Super Bowl in the 2010 season?"
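
To sanity-check the template, you can render it with the question and print the exact prompt string that will be sent to the model:

# Fill the template with the user question and inspect the resulting prompt
print(prompt.format(question=question))

>>>Question: Which NFL team won the Super Bowl in the 2010 season? Answer: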

Using a Hugging Face Hub LLM

You can use large language models hosted on the Hugging Face Hub.

# Configure your Hugging Face account and API key
import os
os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'YOUR_HF_API_KEY'

!pip install huggingface_hub

from langchain import HuggingFaceHub, LLMChain

# initialize the Hub LLM
hub_llm = HuggingFaceHub(
    repo_id='google/flan-t5-xl',
    model_kwargs={'temperature': 1e-10}
)

# create the prompt template -> LLM chain
llm_chain = LLMChain(
    prompt=prompt,
    llm=hub_llm
)

# ask the user question about NFL 2010
print(llm_chain.run(question))

>>>green bay packers

Sometimes you may want to ask several questions at once.
There are two ways to do this: (1) iterate over the questions with generate; (2) concatenate all of the questions into a single prompt, which only works if the LLM can handle it.

Multiple questions, method 1: using generate

qs = [
    {'question': "Which NFL team won the Super Bowl in the 2010 season?"},
    {'question': "If I am 6 ft 4 inches, how tall am I in centimeters?"},
    {'question': "Who was the 12th person on the moon?"},
    {'question': "How many eyes does a blade of grass have?"}
]
res = llm_chain.generate(qs)
print(res)

# The quality of the answers depends on the LLM
>>>LLMResult(generations=[[Generation(text='green bay packers', generation_info=None)], [Generation(text='184', generation_info=None)], [Generation(text='john glenn', generation_info=None)], [Generation(text='one', generation_info=None)]], llm_output=None)
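
res is an LLMResult; if you only want the answer strings, read the text field of each Generation (a small helper based on the structure printed above):

# Each inner list holds one Generation per input question
for gen in res.generations:
    print(gen[0].text)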

OpenAI LLMs

Using OpenAI's large language models

# Configure your OPENAI_API_KEY
import os
os.environ['OPENAI_API_KEY'] = 'YOUR_OPENAI_API_KEY'

!pip install openai

from langchain.llms import OpenAI

davinci = OpenAI(model_name='text-davinci-003')

# Same usage as before; only the llm changes
llm_chain = LLMChain(
    prompt=prompt,
    llm=davinci
)
print(llm_chain.run(question))

>>>The Green Bay Packers won the Super Bowl in the 2010 season.

Multiple questions, method 2: concatenate into a single prompt

multi_template = """Answer the following questions one at a time. Questions: {questions} Answers: """
long_prompt = PromptTemplate(template=multi_template, input_variables=["questions"])

llm_chain = LLMChain(
    prompt=long_prompt,
    llm=davinci
)

qs_str = (
    "Which NFL team won the Super Bowl in the 2010 season?\n" +
    "If I am 6 ft 4 inches, how tall am I in centimeters?\n" +
    "Who was the 12th person on the moon?\n" +
    "How many eyes does a blade of grass have?"
)

print(llm_chain.run(qs_str))

References

LangChain: Introduction and Getting Started
