What is LangChain?
If you have used ChatGPT, you are probably already familiar with prompts. Consider two situations:
(1) Imagine you need to quickly work through a book and want to use the book itself as the prompt, so that ChatGPT can answer questions based on its contents. How would you do that? (A minimal sketch of this idea appears at the end of this introduction.)
(2) Suppose a question-answering task needs prompt A while a summarization task needs prompt B. How do you manage all of these prompts? This is exactly the kind of thing LangChain helps with.
LangChain greatly reduces the engineering complexity of building applications on top of ChatGPT.
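As a taste of use case (1), here is a minimal, hand-rolled sketch of "answer questions about a book" that simply stuffs an excerpt of the book into the prompt. The book_text variable, the prompt wording, and the question are placeholders invented for illustration; LangChain's document loaders and retrieval chains handle loading and chunking properly, while this sketch only uses the PromptTemplate and LLMChain APIs introduced later in this article.

from langchain import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

llm = OpenAI(model_name="text-davinci-003", temperature=0)

# Placeholder: in practice you would load the book, split it into chunks,
# and select the chunks that are relevant to the question.
book_text = "..."

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Answer the question using only the context below.\n"
        "Context: {context}\n"
        "Question: {question}\n"
        "Answer:"
    ),
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
print(qa_chain.predict(context=book_text, question="What is this book about?"))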
LangChain's modules and how to use each of them
Prerequisite: to run the code below you need an OPENAI_API_KEY (a key obtained from OpenAI), and all the examples share the following imports:
# Import the LLM wrapper and related modules
from langchain import OpenAI, ConversationChain
from langchain.agents import initialize_agent
from langchain.agents import load_tools
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
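If the key is not already exported in your shell, one common way to provide it is through an environment variable before any of the wrappers are created; a minimal sketch (the key value is of course a placeholder):

import os

# Replace the placeholder with your own key, or export OPENAI_API_KEY
# in the shell before running the script.
os.environ["OPENAI_API_KEY"] = "sk-..."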
LLM: get predictions from a language model. This works just like calling the OpenAI API directly: you send in a prompt and get a completion back.
# These are OpenAI parameters
llm = OpenAI(model_name="text-davinci-003", temperature=0.9)
text = "What would be a good company name for a company that makes colorful socks?"
# Print the response from the OpenAI API; the wrapper is essentially a thin layer over the API.
# See the OpenAIChat implementation in github.com/hwchase17/langchain/llms/openai.py
print(llm(text))
Running the code above prints:
Cozy Colours Socks.
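If you need completions for several prompts in one call, the LLM wrapper also exposes a generate method in the 0.0.x LangChain releases this article is based on; a small sketch (the prompts are made up for illustration):

# Ask for completions for several prompts in a single call;
# llm is the OpenAI wrapper created above.
result = llm.generate([
    "Tell me a slogan for a company that makes colorful socks",
    "Tell me a tagline for a company that makes wool hats",
])

# result.generations holds one list of generations per input prompt
for generations in result.generations:
    print(generations[0].text)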
Prompt Templates: manage the prompts sent to LLMs, just as you would manage variables or templates elsewhere in your code.
# Two arguments: the input variables and a template string.
# The implementation lives under github.com/hwchase17/langchain/prompts
# PromptTemplate is based on StringPromptTemplate and supports both string templates and file-based templates.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(prompt.format(product="colorful socks"))
Running the code above prints:
What is a good name for a company that makes colorful socks?
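A template is not limited to a single variable; the same API accepts several input variables, as in this small illustrative example:

# A template with two variables; both must be supplied to format()
multi_prompt = PromptTemplate(
    input_variables=["adjective", "product"],
    template="What is a good name for a company that makes {adjective} {product}?",
)
print(multi_prompt.format(adjective="sustainable", product="socks"))
# -> What is a good name for a company that makes sustainable socks?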
Chains: combine LLMs and prompts. With the OpenAI wrapper and the prompt template defined above, you can now run the chain and get a response.
from langchain.chains import LLMChain

# Build an LLMChain from the llm wrapper and the PromptTemplate defined above
chain = LLMChain(llm=llm, prompt=prompt)
# This effectively asks: "What is a good name for a company that makes colorful socks?"
chain.run("colorful socks")
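Chains can also be composed. For example, LangChain provides SimpleSequentialChain, which feeds the output of one chain into the next; the second prompt below is invented for illustration:

from langchain.chains import SimpleSequentialChain

# Second chain: write a slogan for the company name produced by the first chain
slogan_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a catchy slogan for the company {company_name}.",
)
slogan_chain = LLMChain(llm=llm, prompt=slogan_prompt)

# Run the two chains in order: product -> company name -> slogan
overall_chain = SimpleSequentialChain(chains=[chain, slogan_chain], verbose=True)
overall_chain.run("colorful socks")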
Agents: dynamically call chains based on the user's input. LangChain can split a question into several steps, and at each step the agent decides what to do with the tools it has been given.
# Load some tools, e.g. llm-math
# llm-math is LangChain's module for doing math calculations
tools = load_tools(["llm-math"], llm=llm)
# Initialize the agent with the tools and the model
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", verbose=True)
text = "12 raised to the 3 power and result raised to 2 power?"
print("input text: ", text)
agent.run(text)
Running the code above produces the following result (the problem is solved in two steps):
> Entering new AgentExecutor chain...
 I need to use the calculator for this
Action: Calculator
Action Input: 12^3
Observation: Answer: 1728
Thought: I need to then raise the previous result to the second power
Action: Calculator
Action Input: 1728^2
Observation: Answer: 2985984
Thought: I now know the final answer
Final Answer: 2985984
> Finished chain.
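The agent becomes more interesting once it has more than one tool to choose from. The sketch below adds LangChain's serpapi web-search tool next to llm-math; note that it is only a sketch and assumes you have a SERPAPI_API_KEY in your environment and the google-search-results package installed:

# Give the agent both a web-search tool and the calculator
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", verbose=True)

# The agent first searches the web, then hands the number to the calculator
agent.run("In what year was the Eiffel Tower completed? What is that year raised to the 0.5 power?")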
Memory: provides storage for conversation context. With LangChain's ConversationChain you can record the history of the interaction with the LLM and use that history to condition the model's next prediction.
# Using ConversationChain
llm = OpenAI(temperature=0)
# Set verbose=True so we can see the prompt
conversation = ConversationChain(llm=llm, verbose=True)
print("input text: conversation")
conversation.predict(input="Hi there!")
conversation.predict(
    input="I'm doing well! Just having a conversation with an AI.")
After a few turns, the output looks like this:
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about?
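To confirm that the history is really being used, you can ask a follow-up question that can only be answered from the earlier turns; this small addition reuses the conversation object from above:

# The answer requires the earlier turns, which the chain keeps in its memory
conversation.predict(input="What was the first thing I said to you?")

# In the LangChain versions this article targets, the default memory is a
# ConversationBufferMemory whose buffer holds the transcript so far
print(conversation.memory.buffer)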
The full code is as follows:
# Import the LLM wrapper and related modules
from langchain import OpenAI, ConversationChain
from langchain.agents import initialize_agent
from langchain.agents import load_tools
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Initialize the wrapper; the higher the temperature, the more random the result
llm = OpenAI(temperature=0.9)

# Call the model directly
text = "What would be a good company name for a company that makes colorful socks?"
print("input text: ", text)
print(llm(text))

# Prompt template
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print("input text: product")
print(prompt.format(product="colorful socks"))

# Chain: combine the LLM and the prompt template
chain = LLMChain(llm=llm, prompt=prompt)
chain.run("colorful socks")

# Load some tools, e.g. llm-math
# llm-math is LangChain's module for doing math calculations
tools = load_tools(["llm-math"], llm=llm)
# Initialize the agent with the tools and the model
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
text = "12 raised to the 3 power and result raised to 2 power?"
print("input text: ", text)
agent.run(text)

# Using ConversationChain
llm = OpenAI(temperature=0)
# Set verbose=True so we can see the prompt
conversation = ConversationChain(llm=llm, verbose=True)
print("input text: conversation")
conversation.predict(input="Hi there!")
conversation.predict(
    input="I'm doing well! Just having a conversation with an AI.")
References
note.com/npaka/n/n15…
https://www.jb51.net/article/279343.htm
github.com/hwchase17/l…