LangChain 64: A Deep Dive into the LangChain Expression Language, Part 27: Adding Moderation (LCEL)

Articles in the LangChain series

  1. LangChain 50: A Deep Dive into the LangChain Expression Language, Part 13: Custom Pipeline Functions (LCEL)
  2. LangChain 51: A Deep Dive into the LangChain Expression Language, Part 14: Auto-Fixing with RunnableConfig (LCEL)
  3. LangChain 52: A Deep Dive into the LangChain Expression Language, Part 15: Binding Runtime Args (LCEL)
  4. LangChain 53: A Deep Dive into the LangChain Expression Language, Part 16: Dynamic Routing (LCEL)
  5. LangChain 54: A Deep Dive into the LangChain Expression Language, Part 17: Routing Between Chains (LCEL)
  6. LangChain 55: A Deep Dive into the LangChain Expression Language, Part 18: Custom Dynamic Routing with Functions (LCEL)
  7. LangChain 56: A Deep Dive into the LangChain Expression Language, Part 19: Selecting the LLM at Runtime via Config (LCEL)
  8. LangChain 57: A Deep Dive into the LangChain Expression Language, Part 20: LLM Fallbacks for Rate Limits (LCEL)
  9. LangChain 58: A Deep Dive into the LangChain Expression Language, Part 21: Memory and Message History (LCEL)
  10. LangChain 59: A Deep Dive into the LangChain Expression Language, Part 22: Multiple Interacting Chains (LCEL)
  11. LangChain 60: A Deep Dive into the LangChain Expression Language, Part 23: Passing Arguments Through Multiple Chains (LCEL)
  12. LangChain 61: A Deep Dive into the LangChain Expression Language, Part 24: Passing Arguments Through Multiple Chains (LCEL)
  13. LangChain 62: A Deep Dive into the LangChain Expression Language, Part 25: Agents (LCEL)
  14. LangChain 63: A Deep Dive into the LangChain Expression Language, Part 26: Generating and Executing Code (LCEL)


1. Adding Moderation

This shows how to add moderation (or other safeguards) around your LLM application.

Code implementation

from langchain.chains import OpenAIModerationChain
from langchain.prompts import ChatPromptTemplate
from langchain_community.llms import OpenAI

from dotenv import load_dotenv  # loads environment variables from a .env file
load_dotenv()  # actually load the environment variables

from langchain.globals import set_debug  # enables debug mode in langchain
set_debug(True)  # turn on langchain's debug output

moderate = OpenAIModerationChain()
model = OpenAI()
prompt = ChatPromptTemplate.from_messages([("system", "repeat after me: {input}")])
chain = prompt | model
normal_response = chain.invoke({"input": "you are stupid"})
print('normal_response >> ', normal_response)

# Piping the moderation chain after the model means it screens the model's output.
moderated_chain = chain | moderate
moderated_response = moderated_chain.invoke({"input": "you are stupid"})
print('moderated_response >> ', moderated_response)


Run output

You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

The older OpenAI SDK must be installed: `pip install openai==0.28`. OpenAIModerationChain relies on the pre-1.0 `openai.Moderation` interface; in `openai>=1.0` the moderation endpoint moved to `client.moderations.create`.

Output

(.venv) zgpeace@zgpeaces-MacBook-Pro git:(develop) ✗% python LCEL/moderation.py                           ~/Workspace/LLM/langchain-llm-app
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
  "input": "you are stupid"
}
[chain/start] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "input": "you are stupid"
}
[chain/end] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] [7ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "SystemMessage"
        ],
        "kwargs": {
          "content": "repeat after me: you are stupid",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:RunnableSequence > 3:llm:OpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: repeat after me: you are stupid"
  ]
}
[llm/end] [1:chain:RunnableSequence > 3:llm:OpenAI] [1.97s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "\n\nI am stupid. ",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "Generation"
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 9,
      "completion_tokens": 6,
      "total_tokens": 15
    },
    "model_name": "gpt-3.5-turbo-instruct"
  },
  "run": null
}
[chain/end] [1:chain:RunnableSequence] [1.99s] Exiting Chain run with output:
{
  "output": "\n\nI am stupid. "
}
normal_response >>  

I am stupid. 
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
  "input": "you are stupid"
}
[chain/start] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "input": "you are stupid"
}
[chain/end] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "SystemMessage"
        ],
        "kwargs": {
          "content": "repeat after me: you are stupid",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:RunnableSequence > 3:llm:OpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: repeat after me: you are stupid"
  ]
}
[llm/end] [1:chain:RunnableSequence > 3:llm:OpenAI] [1.47s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "Generation"
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 9,
      "completion_tokens": 31,
      "total_tokens": 40
    },
    "model_name": "gpt-3.5-turbo-instruct"
  },
  "run": null
}
[chain/start] [1:chain:RunnableSequence > 4:chain:OpenAIModerationChain] Entering Chain run with input:
{
  "input": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
[chain/end] [1:chain:RunnableSequence > 4:chain:OpenAIModerationChain] [1.02s] Exiting Chain run with output:
{
  "output": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
[chain/end] [1:chain:RunnableSequence] [2.50s] Exiting Chain run with output:
{
  "input": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.",
  "output": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
moderated_response >>  {'input': '\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.', 'output': '\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.'}

Code

https://github.com/zgpeace/pets-name-langchain/tree/develop

References

https://python.langchain.com/docs/expression_language/cookbook/moderation
