In summary, LangChain has six core modules:
First, calling the OpenAI API directly:

```python
import openai

# Ask the model to answer in a specified tone
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message["content"]

customer_email = """
Arrr, I be fuming that me blender lid \
flew off and splattered me kitchen walls \
with smoothie! And to make matters worse,\
the warranty don't cover the cost of \
cleaning up me kitchen. I need yer help \
right now, matey!
"""

# American English + a calm, respectful tone
style = """American English \
in a calm and respectful tone
"""

# English prompt: ask the model to rewrite the text in the given tone
prompt = f"""Translate the text \
that is delimited by triple backticks
into a style that is {style}.
text: ```{customer_email}```
"""
print(prompt)

# Chinese prompt: the same request, phrased in Chinese
prompt = f"""把由三个反引号分隔的文本text\
翻译成一种{style}风格。
text: ```{customer_email}```
"""
print(prompt)

response = get_completion(prompt)
```
Next, the same task with LangChain: build the prompt messages with `prompt_template.format_messages` and get the response from a `ChatOpenAI` object.

```python
!pip install -q --upgrade langchain

from langchain.chat_models import ChatOpenAI

api_key = "..."
chat = ChatOpenAI(temperature=0.0, openai_api_key=api_key)
```
```python
# Build the template
template_string = """Translate the text \
that is delimited by triple backticks \
into a style that is {style}. \
text: ```{text}```
"""

# Chinese version
template_string = """把由三个反引号分隔的文本text\
翻译成一种{style}风格。\
text: ```{text}```
"""

# Requires a recent version of LangChain
from langchain.prompts import ChatPromptTemplate

prompt_template = ChatPromptTemplate.from_template(template_string)
# prompt_template.messages[0].prompt
# prompt_template.messages[0].prompt.input_variables
# ['style', 'text']
```
The LangChain prompt template `prompt_template` takes two input variables, `style` and `text`. Here they correspond to `customer_style` (the style we want for the customer email) and `customer_email` (the customer's original email text). With `customer_style` and `customer_email`, the template's `format_messages` method generates the customer messages `customer_messages`:

```python
customer_messages = prompt_template.format_messages(
    style=customer_style,
    text=customer_email)
customer_response = chat(customer_messages)
customer_response.content  # a str
```
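Under the hood, a prompt template is just a string with named slots; the substitution step can be sketched in plain Python (the `template` string and `format_messages` helper below are illustrative, not LangChain's API, and a real `ChatPromptTemplate` returns message objects rather than a string):

```python
# Toy illustration of the substitution a prompt template performs.
template = ("Translate the text that is delimited by triple backticks "
            "into a style that is {style}. text: ```{text}```")

def format_messages(**kwargs):
    # fill the named slots {style} and {text}
    return template.format(**kwargs)

msg = format_messages(style="calm and respectful", text="Arrr, matey!")
print(msg)
```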
Another example prompt template:

````python
prompt = """Your task is to determine whether the student's solution \
is correct or not.
To solve the problem, do the following:
- First, work out your own solution to the problem
- Then compare your solution to the student's solution \
and evaluate whether the student's solution is correct.
...
Use the following format:
Question:
```
question text
```
Student's solution:
```
student's solution text
```
Actual solution:
```
...
steps to work out the solution and your solution here
```
Is the student's solution the same as the actual solution \
just calculated:
```
yes or no
```
Student grade:
```
correct or incorrect
```
Question:
```
{question}
```
Student's solution:
```
{student's solution}
```
Actual solution:
"""
````
When building LLM applications, we usually want the model's output in a given format, for example using specific keywords to structure the output. Below is an example of chain-of-thought reasoning with an LLM. For the question "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?", the LangChain library is used so that the output is structured around the keywords "Thought", "Action", and "Observation" as the steps of the reasoning chain. The supplementary material contains another code example of chain-of-thought reasoning with LangChain and OpenAI.
"""
Thought: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of the area.
Action: Search[Colorado orogeny]
Observation: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.
Thought: It does not mention the eastern sector. So I need to look up eastern sector.
Action: Lookup[eastern sector]
Observation: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.
Thought: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.
Action: Search[High Plains]
Observation: High Plains refers to one of two distinct land regions
Thought: I need to instead search High Plains (United States).
Action: Search[High Plains (United States)]
Observation: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]
Thought: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.
Action: Finish[1,800 to 7,000 ft]
"""
"""
想法:我需要搜索科罗拉多造山带,找到科罗拉多造山带东段延伸到的区域,然后找到该区域的高程范围。
行动:搜索[科罗拉多造山运动]
观察:科罗拉多造山运动是科罗拉多州及周边地区造山运动(造山运动)的一次事件。
想法:它没有提到东区。 所以我需要查找东区。
行动:查找[东区]
观察:(结果1 / 1)东段延伸至高原,称为中原造山运动。
想法:科罗拉多造山运动的东段延伸至高原。 所以我需要搜索高原并找到它的海拔范围。
行动:搜索[高地平原]
观察:高原是指两个不同的陆地区域之一
想法:我需要搜索高地平原(美国)。
行动:搜索[高地平原(美国)]
观察:高地平原是大平原的一个分区。 从东到西,高原的海拔从 1,800 英尺左右上升到 7,000 英尺(550 到 2,130 米)。[3]
想法:高原的海拔从大约 1,800 英尺上升到 7,000 英尺,所以答案是 1,800 到 7,000 英尺。
动作:完成[1,800 至 7,000 英尺]
"""
```python
review_template = """\
For the following text, extract the following information:
gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.
delivery_days: How many days did it take for the product \
to arrive? If this information is not found, output -1.
price_value: Extract any sentences about the value or price,\
and output them as a comma separated Python list.
Format the output as JSON with the following keys:
gift
delivery_days
price_value
text: {text}
"""
```
Using LangChain's output parser:
```python
review_template_2 = """\
For the following text, extract the following information:
gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.
delivery_days: How many days did it take for the product \
to arrive? If this information is not found, output -1.
price_value: Extract any sentences about the value or price,\
and output them as a comma separated Python list.
text: {text}
{format_instructions}
"""
prompt = ChatPromptTemplate.from_template(template=review_template_2)

# Build the output parser
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser

gift_schema = ResponseSchema(name="gift",
                             description="Was the item purchased\
                             as a gift for someone else? \
                             Answer True if yes,\
                             False if not or unknown.")
delivery_days_schema = ResponseSchema(name="delivery_days",
                                      description="How many days\
                                      did it take for the product\
                                      to arrive? If this \
                                      information is not found,\
                                      output -1.")
price_value_schema = ResponseSchema(name="price_value",
                                    description="Extract any\
                                    sentences about the value or \
                                    price, and output them as a \
                                    comma separated Python list.")
response_schemas = [gift_schema,
                    delivery_days_schema,
                    price_value_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)

# Use the template to build the prompt messages
# (customer_review holds the product-review text to analyze)
messages = prompt.format_messages(text=customer_review,
                                  format_instructions=format_instructions)
# Call the chat model to get the result
response = chat(messages)
print(response.content)

# Parse the output with the output parser
output_dict = output_parser.parse(response.content)
output_dict
# {'gift': False, 'delivery_days': '2', 'price_value': '它比其他吹叶机稍微贵一点'}
# Now it is a dict, so get(key) works
output_dict.get('delivery_days')
```
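Conceptually, the parse step just pulls the fenced JSON block out of the model's reply and parses it; a minimal sketch (this is not LangChain's actual implementation, and `parse_json_markdown` is a hypothetical helper):

```python
import json
import re

def parse_json_markdown(text):
    # find the ```json fenced block in the reply and parse its contents
    match = re.search(r"```json\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON block found in the reply")
    return json.loads(match.group(1))

# a reply of the shape the format instructions request
reply = '```json\n{"gift": false, "delivery_days": "2"}\n```'
print(parse_json_markdown(reply))
```

This is also why the `format_instructions` are appended to the prompt: they push the model to emit exactly the fenced JSON the parser expects.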
Chinese version:
```python
# Chinese version
review_template = """\
对于以下文本,请从中提取以下信息:
礼物:该商品是作为礼物送给别人的吗? \
如果是,则回答 是的;如果否或未知,则回答 不是。
交货天数:产品需要多少天\
到达? 如果没有找到该信息,则输出-1。
价钱:提取有关价值或价格的任何句子,\
并将它们输出为逗号分隔的 Python 列表。
使用以下键将输出格式化为 JSON:
礼物
交货天数
价钱
文本: {text}
"""

# Chinese version
review_template_2 = """\
对于以下文本,请从中提取以下信息:
礼物:该商品是作为礼物送给别人的吗?
如果是,则回答 是的;如果否或未知,则回答 不是。
交货天数:产品到达需要多少天? 如果没有找到该信息,则输出-1。
价钱:提取有关价值或价格的任何句子,并将它们输出为逗号分隔的 Python 列表。
文本: {text}
{format_instructions}
"""

# Chinese version
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser

gift_schema = ResponseSchema(name="礼物",
                             description="这件物品是作为礼物送给别人的吗?\
                             如果是,则回答 是的,\
                             如果否或未知,则回答 不是。")
delivery_days_schema = ResponseSchema(name="交货天数",
                                      description="产品需要多少天才能到达?\
                                      如果没有找到该信息,则输出-1。")
price_value_schema = ResponseSchema(name="价钱",
                                    description="提取有关价值或价格的任何句子,\
                                    并将它们输出为逗号分隔的 Python 列表")
response_schemas = [gift_schema,
                    delivery_days_schema,
                    price_value_schema]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)
```
The result:

````
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":

```json
{
	"礼物": string  // 这件物品是作为礼物送给别人的吗? 如果是,则回答 是的, 如果否或未知,则回答 不是。
	"交货天数": string  // 产品需要多少天才能到达? 如果没有找到该信息,则输出-1。
	"价钱": string  // 提取有关价值或价格的任何句子, 并将它们输出为逗号分隔的 Python 列表
}
```
````
A ReAct agent example over a Wikipedia docstore:

```python
!pip install -q wikipedia

from langchain.docstore.wikipedia import Wikipedia
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool, AgentExecutor
from langchain.agents.react.base import DocstoreExplorer

docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description="Search for a term in the docstore.",
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description="Lookup a term in the docstore.",
    )
]

# Use an LLM
llm = OpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    openai_api_key=api_key
)

# Initialize the ReAct agent
react = initialize_agent(tools, llm, agent="react-docstore", verbose=True)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=react.agent,
    tools=tools,
    verbose=True,
)

question = "Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?"
agent_executor.run(question)
```
Using the dotenv module:

```python
import os
import warnings
warnings.filterwarnings('ignore')

# Read the local .env file and load the environment variables in it into
# the running environment, so the code can use them directly
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())
```
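What `load_dotenv` does can be sketched in a few lines: parse KEY=VALUE lines and put them into `os.environ`. The real library also handles quoting, comments, `export` prefixes, and locating the .env file; `toy_load_env` and `DEMO_API_KEY` are illustrative names:

```python
import os

def toy_load_env(lines):
    # parse KEY=VALUE lines into os.environ, skipping blanks and comments
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

toy_load_env(["# my .env file", "DEMO_API_KEY=sk-demo"])
print(os.environ["DEMO_API_KEY"])
```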
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

OPENAI_API_KEY = "..."
llm = ChatOpenAI(temperature=0.0, openai_api_key=OPENAI_API_KEY)

# memory.buffer stores the whole conversation;
# memory.load_memory_variables({}) works as well
memory = ConversationBufferMemory()

# Create a conversation chain (more details on chains later)
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True  # show what LangChain is actually doing; False only returns the answer
)

conversation.predict(input="你好, 我叫山顶夕景")
conversation.predict(input="What is 1+1?")
conversation.predict(input="What is my name?")  # the name is still remembered
```
Note: the `verbose` parameter of `ConversationChain` shows what LangChain is actually doing; set it to False and you only get the answer, without the green output below (the formatted prompt plus the full conversation). `memory.buffer` stores the whole conversation; `memory.load_memory_variables({})` works as well.

```python
memory = ConversationBufferMemory()   # create an empty conversation buffer memory
memory.save_context({"input": "Hi"},  # add a given input/output pair to the buffer
                    {"output": "What's up"})
memory.load_memory_variables({})      # load the memory variables again; content unchanged
```
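The mechanism of buffer memory can be sketched as a class that stores every turn verbatim and renders the history as one string (`ToyBufferMemory` is an illustration, not LangChain's implementation):

```python
class ToyBufferMemory:
    """Store every conversation turn verbatim."""
    def __init__(self):
        self.turns = []

    def save_context(self, inputs, outputs):
        # record one human/AI exchange
        self.turns.append(("Human", inputs["input"]))
        self.turns.append(("AI", outputs["output"]))

    def load_memory_variables(self, _):
        # render the full history as a single string
        history = "\n".join(f"{role}: {text}" for role, text in self.turns)
        return {"history": history}

mem = ToyBufferMemory()
mem.save_context({"input": "Hi"}, {"output": "What's up"})
print(mem.load_memory_variables({})["history"])
```

The chain injects this history string into the prompt on every turn, which is how the model "remembers" earlier messages.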
```python
from langchain.memory import ConversationBufferWindowMemory

# k=1 keeps only one exchange in memory, i.e. the previous turn
memory = ConversationBufferWindowMemory(k=1)
```
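Window memory differs only in the pruning rule: keep the last k exchanges and drop the rest (again an illustrative sketch, not the library class):

```python
class ToyWindowMemory:
    """Keep only the last k human/AI exchanges."""
    def __init__(self, k):
        self.k = k
        self.exchanges = []

    def save_context(self, inputs, outputs):
        self.exchanges.append((inputs["input"], outputs["output"]))
        # prune everything but the most recent k exchanges
        self.exchanges = self.exchanges[-self.k:]

mem = ToyWindowMemory(k=1)
mem.save_context({"input": "Hi"}, {"output": "What's up"})
mem.save_context({"input": "What is 1+1?"}, {"output": "2"})
print(mem.exchanges)  # only the last exchange survives
```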
```python
!pip install tiktoken

from langchain.memory import ConversationTokenBufferMemory

# Limit the number of tokens, reusing the llm object defined earlier
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)
memory.save_context({"input": "AI is what?!"},
                    {"output": "Amazing!"})
memory.save_context({"input": "Backpropagation is what?"},
                    {"output": "Beautiful!"})
memory.save_context({"input": "Chatbots are what?"},
                    {"output": "Charming!"})
```
ChatGPT tokenizes input text with a method based on Byte Pair Encoding (BPE). BPE is a common tokenization technique that splits the input text into smaller subword units. OpenAI has open-sourced a Python library, tiktoken, on its official GitHub, mainly for counting tokens; it is several times faster than Hugging Face's tokenizer: https://github.com/openai/tiktoken
For the details of how tokens are counted, in particular the difference between Chinese characters and English words, see https://www.zhihu.com/question/594159910
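The core BPE idea can be illustrated in a few lines: start from single characters and repeatedly merge the most frequent adjacent pair of symbols. This is only a sketch of the idea; tiktoken uses a fixed, pretrained merge table over bytes rather than learning merges on the fly, and `bpe_merges` is a hypothetical helper:

```python
from collections import Counter

def bpe_merges(word, num_merges):
    """Toy BPE: repeatedly merge the most frequent adjacent symbol pair."""
    symbols = list(word)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # nothing repeats; no merge is worth learning
        merges.append(a + b)
        # replace every occurrence of the pair with the merged symbol
        merged, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols, merges

tokens, merges = bpe_merges("banana_bandana", 3)
print(tokens)
```

Frequent substrings end up as single tokens, which is why common English words often cost one token while a Chinese character can cost several bytes' worth.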
`ConversationSummaryBufferMemory`: instantiate the memory object:

```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain

memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)
```
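The idea behind summary-buffer memory, sketched without an LLM: recent turns stay verbatim, and once the buffer exceeds the limit, the oldest turns are folded into a running summary. The real class asks the `llm` to write an actual summary and counts real tokens; here we count words and just concatenate, and `ToySummaryBufferMemory` is an illustrative name:

```python
class ToySummaryBufferMemory:
    """Keep recent turns verbatim; fold older turns into a summary."""
    def __init__(self, max_words):
        self.max_words = max_words
        self.summary = ""   # stands in for the LLM-written summary
        self.turns = []

    def save_context(self, inputs, outputs):
        self.turns.append(f"Human: {inputs['input']} AI: {outputs['output']}")
        # while the verbatim buffer is too large, move the oldest turn
        # into the summary (a real implementation would re-summarize)
        while len(self.turns) > 1 and \
                sum(len(t.split()) for t in self.turns) > self.max_words:
            self.summary = (self.summary + " " + self.turns.pop(0)).strip()

    def load_memory_variables(self, _):
        return {"history": (self.summary + " " + " ".join(self.turns)).strip()}

mem = ToySummaryBufferMemory(max_words=10)
mem.save_context({"input": "AI is what?!"}, {"output": "Amazing!"})
mem.save_context({"input": "Backpropagation is what?"}, {"output": "Beautiful!"})
print(mem.load_memory_variables({})["history"])
```

This trade keeps the prompt bounded while preserving a compressed record of the whole conversation.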