LangChain Series
You can pass a Runnable into an agent. Building an agent from a runnable usually takes a few steps:
from langchain import hub
from langchain.agents import AgentExecutor, tool
from langchain.agents.output_parsers import XMLAgentOutputParser
from langchain_community.chat_models import ChatOpenAI
from dotenv import load_dotenv  # function that loads environment variables from a .env file
load_dotenv()  # actually load the environment variables
from langchain.globals import set_debug  # function that toggles LangChain's debug mode
set_debug(True)  # enable debug mode for langchain
model = ChatOpenAI()
@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"
tool_list = [search]
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/xml-agent-convo")
# Logic for going from intermediate steps to a string to pass into model
# This is pretty tied to the prompt
def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log
# Logic for converting tools to string to go in prompt
def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
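To see exactly what these two helpers feed into the prompt, here is a minimal, self-contained sketch; `SimpleNamespace` stands in for LangChain's `AgentAction` and tool objects (an assumption for illustration only):

```python
from types import SimpleNamespace

def convert_intermediate_steps(intermediate_steps):
    # Serialize each (action, observation) pair into the XML format
    # that the xml-agent-convo prompt teaches the model to emit and read.
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log

def convert_tools(tools):
    # One "name: description" line per tool.
    return "\n".join(f"{t.name}: {t.description}" for t in tools)

# Stand-ins for a @tool-decorated function and an AgentAction.
tools = [SimpleNamespace(name="search", description="Search things about current events.")]
action = SimpleNamespace(tool="search", tool_input="weather in New York")

print(convert_tools(tools))
# search: Search things about current events.
print(convert_intermediate_steps([(action, "32 degrees")]))
# <tool>search</tool><tool_input>weather in New York</tool_input><observation>32 degrees</observation>
```

The scratchpad string produced here is appended after the question on each iteration, which is exactly what shows up in the `agent_scratchpad` field of the debug logs below.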
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: convert_intermediate_steps(
            x["intermediate_steps"]
        ),
    }
    | prompt.partial(tools=convert_tools(tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)
response = agent_executor.invoke({"input": "whats the weather in New york?"})
print('response >> ', response)
The verbose output of agent_executor:
> Entering new AgentExecutor chain...
<tool>search</tool><tool_input>weather in New York32 degrees <tool>search</tool><tool_input>weather in New York32 degrees <final_answer>The weather in New York is 32 degrees
> Finished chain.
Expected output:
{'input': 'whats the weather in New york?',
'output': 'The weather in New York is 32 degrees'}
In an actual run, however, output parsing still fails: the model's final reply comes back as plain text with no <final_answer> tags, so XMLAgentOutputParser raises a ValueError.
(.venv) zgpeace@zgpeaces-MacBook-Pro git:(develop) ✗[1] % python LCEL/agents.py ~/Workspace/LLM/langchain-llm-app
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
"input": "whats the weather in New york?"
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence] Entering Chain run with input:
{
"input": "whats the weather in New york?",
"intermediate_steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad>] Entering Chain run with input:
{
"input": "whats the weather in New york?",
"intermediate_steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad> > 4:chain:<lambda>] Entering Chain run with input:
{
"input": "whats the weather in New york?",
"intermediate_steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad> > 5:chain:<lambda>] Entering Chain run with input:
{
"input": "whats the weather in New york?",
"intermediate_steps": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad> > 4:chain:<lambda>] [9ms] Exiting Chain run with output:
{
"output": "whats the weather in New york?"
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad> > 5:chain:<lambda>] [17ms] Exiting Chain run with output:
{
"output": ""
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad>] [58ms] Exiting Chain run with output:
{
"input": "whats the weather in New york?",
"agent_scratchpad": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
"input": "whats the weather in New york?",
"agent_scratchpad": ""
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] [3ms] Exiting Prompt run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"ChatPromptValue"
],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"HumanMessage"
],
"kwargs": {
"content": "You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\nsearch: search(query: str) -> str - Search things about current events.\n\nIn order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>\nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\n<tool>search</tool><tool_input>weather in SF</tool_input>\n<observation>64 degrees</observation>\n\nWhen you are done, respond with a final answer between <final_answer></final_answer>. For example:\n\n<final_answer>The weather in SF is 64 degrees</final_answer>\n\nBegin!\n\nPrevious Conversation:\n\n\nQuestion: whats the weather in New york?\n",
"additional_kwargs": {}
}
}
]
}
}
[llm/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"Human: You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\nsearch: search(query: str) -> str - Search things about current events.\n\nIn order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>\nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\n<tool>search</tool><tool_input>weather in SF</tool_input>\n<observation>64 degrees</observation>\n\nWhen you are done, respond with a final answer between <final_answer></final_answer>. For example:\n\n<final_answer>The weather in SF is 64 degrees</final_answer>\n\nBegin!\n\nPrevious Conversation:\n\n\nQuestion: whats the weather in New york?"
]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] [2.17s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "<tool>search</tool><tool_input>weather in New York",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "<tool>search</tool><tool_input>weather in New York",
"additional_kwargs": {}
}
}
}
]
],
"llm_output": {
"token_usage": {
"completion_tokens": 14,
"prompt_tokens": 191,
"total_tokens": 205
},
"model_name": "gpt-3.5-turbo",
"system_fingerprint": null
},
"run": null
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:XMLAgentOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:XMLAgentOutputParser] [1ms] Exiting Parser run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"agent",
"AgentAction"
],
"kwargs": {
"tool": "search",
"tool_input": "weather in New York",
"log": "<tool>search</tool><tool_input>weather in New York"
}
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence] [2.25s] Exiting Chain run with output:
[outputs]
[tool/start] [1:chain:AgentExecutor > 9:tool:search] Entering Tool run with input:
"weather in New York"
[tool/end] [1:chain:AgentExecutor > 9:tool:search] [0ms] Exiting Tool run with output:
"32 degrees"
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad>] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad> > 12:chain:<lambda>] Entering Chain run with input:
[inputs]
[chain/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad> > 12:chain:<lambda>] [3ms] Exiting Chain run with output:
{
"output": "whats the weather in New york?"
}
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad> > 13:chain:<lambda>] Entering Chain run with input:
[inputs]
[chain/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad> > 13:chain:<lambda>] [8ms] Exiting Chain run with output:
{
"output": "<tool>search</tool><tool_input>weather in New York</tool_input><observation>32 degrees</observation>"
}
[chain/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad>] [18ms] Exiting Chain run with output:
{
"input": "whats the weather in New york?",
"agent_scratchpad": "<tool>search</tool><tool_input>weather in New York</tool_input><observation>32 degrees</observation>"
}
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 14:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
"input": "whats the weather in New york?",
"agent_scratchpad": "<tool>search</tool><tool_input>weather in New York</tool_input><observation>32 degrees</observation>"
}
[chain/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 14:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"ChatPromptValue"
],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"HumanMessage"
],
"kwargs": {
"content": "You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\nsearch: search(query: str) -> str - Search things about current events.\n\nIn order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>\nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\n<tool>search</tool><tool_input>weather in SF</tool_input>\n<observation>64 degrees</observation>\n\nWhen you are done, respond with a final answer between <final_answer></final_answer>. For example:\n\n<final_answer>The weather in SF is 64 degrees</final_answer>\n\nBegin!\n\nPrevious Conversation:\n\n\nQuestion: whats the weather in New york?\n<tool>search</tool><tool_input>weather in New York</tool_input><observation>32 degrees</observation>",
"additional_kwargs": {}
}
}
]
}
}
[llm/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 15:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"Human: You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\nsearch: search(query: str) -> str - Search things about current events.\n\nIn order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>\nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\n<tool>search</tool><tool_input>weather in SF</tool_input>\n<observation>64 degrees</observation>\n\nWhen you are done, respond with a final answer between <final_answer></final_answer>. For example:\n\n<final_answer>The weather in SF is 64 degrees</final_answer>\n\nBegin!\n\nPrevious Conversation:\n\n\nQuestion: whats the weather in New york?\n<tool>search</tool><tool_input>weather in New York</tool_input><observation>32 degrees</observation>"
]
}
[llm/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 15:llm:ChatOpenAI] [1.07s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "The weather in New York is 32 degrees.",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "The weather in New York is 32 degrees.",
"additional_kwargs": {}
}
}
}
]
],
"llm_output": {
"token_usage": {
"completion_tokens": 10,
"prompt_tokens": 216,
"total_tokens": 226
},
"model_name": "gpt-3.5-turbo",
"system_fingerprint": null
},
"run": null
}
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 16:parser:XMLAgentOutputParser] Entering Parser run with input:
[inputs]
[chain/error] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 16:parser:XMLAgentOutputParser] [8ms] Parser run errored with error:
"ValueError()Traceback (most recent call last):\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 975, in _call_with_config\n context.run(\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py\", line 323, in call_func_with_variable_args\n return func(input, **kwargs) # type: ignore[call-arg]\n ^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 168, in \n lambda inner_input: self.parse_result(\n ^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 219, in parse_result\n return self.parse(result[0].text)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py\", line 45, in parse\n raise ValueError\n\n\nValueError"
[chain/error] [1:chain:AgentExecutor > 10:chain:RunnableSequence] [1.10s] Chain run errored with error:
"ValueError()Traceback (most recent call last):\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 1762, in invoke\n input = step.invoke(\n ^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 167, in invoke\n return self._call_with_config(\n ^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 975, in _call_with_config\n context.run(\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py\", line 323, in call_func_with_variable_args\n return func(input, **kwargs) # type: ignore[call-arg]\n ^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 168, in \n lambda inner_input: self.parse_result(\n ^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 219, in parse_result\n return self.parse(result[0].text)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py\", line 45, in parse\n raise ValueError\n\n\nValueError"
[chain/error] [1:chain:AgentExecutor] [3.39s] Chain run errored with error:
"ValueError()Traceback (most recent call last):\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/chains/base.py\", line 310, in __call__\n self._call(inputs, run_manager=run_manager)\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1312, in _call\n next_step_output = self._take_next_step(\n ^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1038, in _take_next_step\n [\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1038, in \n [\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1066, in _iter_next_step\n output = self.agent.plan(\n ^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 385, in plan\n output = self.runnable.invoke(inputs, config={\"callbacks\": callbacks})\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 1762, in invoke\n input = step.invoke(\n ^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 167, in invoke\n return self._call_with_config(\n ^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 975, in _call_with_config\n context.run(\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py\", line 323, in call_func_with_variable_args\n return func(input, **kwargs) # type: ignore[call-arg]\n ^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 168, in \n lambda inner_input: self.parse_result(\n ^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 219, in parse_result\n return self.parse(result[0].text)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/usr/local/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py\", line 45, in parse\n raise ValueError\n\n\nValueError"
Traceback (most recent call last):
File "/Users/zgpeace/Workspace/LLM/langchain-llm-app/LCEL/agents.py", line 59, in <module>
response = agent_executor.invoke({"input": "whats the weather in New york?"})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 93, in invoke
return self(
^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 316, in __call__
raise e
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1312, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
[
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
[
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 385, in plan
output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1762, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 167, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 975, in _call_with_config
context.run(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 323, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 168, in <lambda>
lambda inner_input: self.parse_result(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 219, in parse_result
return self.parse(result[0].text)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py", line 45, in parse
raise ValueError
ValueError
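The traceback points at `xml.py`, line 45: `XMLAgentOutputParser.parse` raises a bare `ValueError` when the model's reply contains neither a `</tool>` tag nor a `<final_answer>` tag, and here the last LLM reply was plain text. One possible workaround is a more forgiving parse that treats tag-less text as the final answer. The sketch below is a standalone function for illustration only (in practice you would subclass `XMLAgentOutputParser` and return `AgentAction`/`AgentFinish` objects from `parse`); it is not part of the library's API:

```python
import re

def tolerant_parse(text: str):
    """Mimic the tool/final_answer branching of XMLAgentOutputParser.parse,
    but fall back to treating untagged text as the final answer instead of
    raising ValueError."""
    if "</tool>" in text:
        # Tool call: the closing </tool_input> may be missing because it is
        # bound as a stop token.
        tool = re.search(r"<tool>(.*?)</tool>", text, re.DOTALL).group(1)
        m = re.search(r"<tool_input>(.*?)(?:</tool_input>)?$", text, re.DOTALL)
        tool_input = m.group(1).strip() if m else ""
        return ("action", tool, tool_input)
    if "<final_answer>" in text:
        answer = text.split("<final_answer>")[-1].split("</final_answer>")[0]
        return ("finish", answer.strip())
    # Fallback: the model forgot the tags, as in the failing run above.
    return ("finish", text.strip())

print(tolerant_parse("The weather in New York is 32 degrees."))
# ('finish', 'The weather in New York is 32 degrees.')
print(tolerant_parse("<tool>search</tool><tool_input>weather in New York"))
# ('action', 'search', 'weather in New York')
```

With this fallback the run above would finish with the expected answer instead of crashing, at the cost of accepting sloppily formatted model output.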
Code: https://github.com/zgpeace/pets-name-langchain/tree/develop
Reference: https://python.langchain.com/docs/expression_language/cookbook/agent