Testing GPT's function calling feature

 Official API reference (viewing may require a VPN)

1. My understanding of the feature

    The feature relies on GPT's contextual understanding: when run_conversation(xx) executes, the model's job is to extract, from the user's text, the values that correspond to the required properties of each function listed in functions.
    The actual function (get_current_weather(xx)) merely wraps the extracted information in JSON; it has no real functionality.

 2. Example

 1) In this example, functions contains only one function, whose "required" is ["location"], i.e. the location has to be extracted from the text.
 2)而 "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
    The "description" under "location" in properties is, as I understand it, part of the prompt fed to the model: it is where the user expresses what they want the model to do for them.
3) Why do I say it is only part of the prompt?

        Because besides stating what the AI should do, the prompt also has to tell the AI where to extract it from, i.e. supply the text to be processed: messages=[{"role": "user", "content": "What's the weather like in Boston?"}].
4) Together these two pieces give GPT the complete prompt, as the sketch below illustrates.
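To make the "two pieces of one prompt" idea concrete, here is a minimal sketch (my own helper extract_location, not part of the official example): the parameter description tells the model what to extract, and the user message tells it where to extract it from.

import openai

def extract_location(user_text, location_description):
    # Minimal sketch: the parameter description and the user message together form the prompt.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        # prompt piece 2: the text to extract from
        messages=[{"role": "user", "content": user_text}],
        functions=[{
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    # prompt piece 1: what the model should extract
                    "location": {"type": "string", "description": location_description},
                },
                "required": ["location"],
            },
        }],
        function_call="auto",
    )
    # the extracted value comes back as a JSON string in message["function_call"]["arguments"]
    return response["choices"][0]["message"]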

 3. Why are two GPT calls made?

    Purpose of the 1st GPT call: exactly what item 2 (the example) above describes, i.e. extract the values of the required parameters;
    Purpose of the 2nd GPT call: take the information extracted by the 1st call plus the JSON returned by the function, and recompose them into conversational language instead of just returning the function's rigid JSON. That way not only developers but any human can read the result.
    In this example:
    1) The JSON returned by the function is as follows:

Here location and unit are taken from message, but message itself does not store these two fields, so both come back as null. The real data lives in message["function_call"]["arguments"] (see the parsing sketch after this section).
        {
            "location": null,
            "temperature": "72",
            "unit": null,
            "forecast": ["sunny", "windy"]
        }
    2) The conversational reply reassembled by the 2nd GPT call: see choices -> message -> content
        {
          "choices": [
            {
              "finish_reason": "stop",
              "index": 0,
              "message": {
                "content": "The current weather in Boston, MA is 72\u00b0F (22\u00b0C). It is sunny and windy.",   # 这里的回答把function返回的json值重新组装成了对话语言。而且每次运行回答都会变,但意思都是一样的,也就是同一句话会换个说法,这样就更加符合人类语言的灵活性了。
                "role": "assistant"
              }
            }
          ],
          "created": 1687312724,
          "id": "chatcmpl-7ThPI7pl75aEQMV2NvcfNZK051YtH",
          "model": "gpt-3.5-turbo-0613",
          "object": "chat.completion",
          "usage": {
            "completion_tokens": 21,
            "prompt_tokens": 70,
            "total_tokens": 91
          }
        }
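To actually get real values instead of null, the arguments string from the first call has to be parsed before calling the local function. A minimal sketch, assuming message is the assistant message returned by the first call and get_current_weather is the dummy function defined in the code below:

import json

# message["function_call"]["arguments"] is a JSON string such as '{\n  "location": "Boston, MA"\n}'
args = json.loads(message["function_call"]["arguments"])

# now the dummy function receives the extracted value instead of None
function_response = get_current_weather(
    location=args.get("location"),        # "Boston, MA"
    unit=args.get("unit", "fahrenheit"),  # falls back to the default if the model omitted unit
)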

4. Code
(1) Official API test code, explained
#!/usr/bin/env python
# -*- coding: UTF-8 -*-

import openai
import json


openai.api_base = 'your proxy base URL'  # proxy / relay endpoint for the OpenAI API
openai.api_key = 'your own apikey'


# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)

# Step 1, send model the user query and what functions it has access to
def run_conversation():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
        functions=[
            {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            }
        ],
        function_call="auto",
    )

    message = response["choices"][0]["message"]  

    print("*"*60)
    print(message)
    print("*" * 60)
    # ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **
    # {
    #     "content": null,
    #     "function_call": {
    #         "arguments": "{\n  \"location\": \"Boston, MA\"\n}",
    #         "name": "get_current_weather"
    #     },
    #     "role": "assistant"
    # }
    # ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **

    # Step 2, check if the model wants to call a function
    if message.get("function_call"):
        function_name = message["function_call"]["name"]

        # Step 3, call the function
        # Note: the JSON response from the model may not be valid JSON
        function_response = get_current_weather(
            location=message.get("location"),  # 这里message里并没有保存location和unit的信息,真实信息在 message["function_call"]["arguments"]中
            unit=message.get("unit"),
        )

        # Step 4, send model the info on the function call and function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=[
                {"role": "user", "content": "What is the weather like in boston?"},
                message,
                {
                    "role": "function",
                    "name": function_name,
                    "content": function_response,
                },
            ],
        )
        return second_response

print(run_conversation())
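The official comment above already warns that the model's arguments string may not be valid JSON, and a real application usually exposes more than one function. A hedged sketch of how Step 3 could be hardened (available_functions and call_requested_function are my own names, not part of the official example):

import json

# hypothetical registry: function names the model may request -> local callables
available_functions = {
    "get_current_weather": get_current_weather,
}

def call_requested_function(message):
    # Parse the model's function_call defensively and dispatch to the matching local function.
    function_name = message["function_call"]["name"]
    func = available_functions.get(function_name)
    if func is None:
        return json.dumps({"error": f"unknown function: {function_name}"})
    try:
        args = json.loads(message["function_call"]["arguments"])
    except json.JSONDecodeError:
        # the arguments string produced by the model is not guaranteed to be valid JSON
        return json.dumps({"error": "could not parse function arguments"})
    return func(**args)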
(2) Custom function tests
  • Information extraction test
#!/usr/bin/env python
# -*- coding: UTF-8 -*-


import openai
import json


#################################################### 1. Test the function-calling feature
openai.api_base = 'your proxy base URL'  # proxy / relay endpoint for the OpenAI API
openai.api_key = 'your own apikey'



def get_tax_info(location):
    tax_info = {
        "location": location,
        "forecast": "缴税地"
    }
    return json.dumps(tax_info)

# Step 1, send model the user query and what functions it has access to
def run_conversation(content):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": content}],
        functions=[
            {
                "name": "get_tax_info",
                "description": "提取交税地点",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            # TODO The description here is part of the prompt and affects what the model extracts.
                            #  Observed with content = "我在安吉开的有共工作室,同时在乐清也经营的有小店,我在这两个地方都要交税。":
                            #  with "description": "交税地点,比如浙江省-湖州市-安吉县。如果有多个缴税地,则以数组的形式返回,如[浙江省-温州市-乐清市,浙江省-杭州市]", the model extracted {'location': '浙江省-湖州市-安吉县,浙江省-温州市-乐清市'}
                            #  with "description": "交税地点,比如浙江省-湖州市-安吉县。如果有多个缴税地,则以数组的形式返回,如[浙江省-湖州市-安吉县,浙江省-杭州市]", the model extracted {'location': '浙江省-湖州市-安吉县'}
                            #  with "description": "交税地点,比如浙江省-湖州市-安吉县。如果有多个缴税地,则以数组的形式返回,如[浙江省-杭州市]", the model extracted {'location': '浙江省-湖州市-安吉县'}
                            #  with "description": "交税地点,比如浙江省-湖州市-安吉县。如果有多个缴税地,则以数组的形式返回,如[浙江省-温州市-乐清市,浙江省-湖州市-安吉县]", the model extracted {'location': '浙江省-湖州市-安吉县,浙江省-温州市-乐清市'}
                            # NOTE: a Python dict keeps only the last duplicate key, so of the two "description" entries below the short one is what actually gets sent.
                            "description": "交税地点,比如浙江省-湖州市-安吉县。如果有多个缴税地,则以数组的形式返回,如[浙江省-温州市-乐清市,浙江省-杭州市]",
                            "description": "交税地点,比如浙江省湖州市安吉县",
                        }
                    },
                    "required": ["location"],
                },
            }
        ],
        function_call="auto",
    )

    message = response["choices"][0]["message"]

    print("*" * 60)
    print(message)
    print(type(message))
    print("*" * 60)
    # mess_dict = message.to_dict()
    # print(mess_dict)
    # print(type(mess_dict))
    # print("*" * 60)
    # json_str = json.dumps(mess_dict)
    # print(json_str)
    # print(type(json_str))
    print("*" * 60)
    arguments_ = message["function_call"]["arguments"]
    print(type(arguments_))
    print(arguments_)  # the Chinese here already displays normally, so no extra unicode decoding is needed
    # print(arguments_.encode('unicode-escape').decode('unicode-escape'))
    print("*" * 60)
    arg_json = json.loads(arguments_)
    print(type(arg_json))
    print(arg_json)

    return message
    # # Step 2, check if the model wants to call a function
    # if message.get("function_call"):
    #     function_name = message["function_call"]["name"]
    #
    #     # Step 3, call the function
    #     # Note: the JSON response from the model may not be valid JSON
    #     function_response = get_tax_info(
    #         location=message.get("location")
    #     )
    #
    #     # Step 4, send model the info on the function call and function response
    #     second_response = openai.ChatCompletion.create(
    #         model="gpt-3.5-turbo-0613",
    #         messages=[
    #             {"role": "user", "content": content},
    #             message,
    #             {
    #                 "role": "function",
    #                 "name": function_name,
    #                 "content": function_response,
    #             },
    #         ],
    #     )
    #     return second_response


# content = "我在安吉开的有共工作室,同时在乐清也经营的有小店,我在这两个地方都要交税。提取下我的交税地点,并组装成省-市-区的格式,如提取到杭州,则组装返回浙江省-杭州市。"
content = "我在安吉开的有共工作室,同时在乐清也经营的有小店,我在这两个地方都要交税。"
# content = "我在杭州上班,也在这里缴税"
result = run_conversation(content)

#################################################### 2. Test unicode decoding
# str = '''
# {
#   "choices": [
#     {
#       "finish_reason": "stop",
#       "index": 0,
#       "message": {
#         "content": "\u5f88\u62b1\u6b49\uff0c\u6211\u65e0\u6cd5\u63d0\u4f9b\u5173\u4e8e\u5728\u676d\u5dde\u7f34\u7a0e\u7684\u8be6\u7ec6\u4fe1\u606f\u3002\u8bf7\u60a8\u54a8\u8be2\u5f53\u5730\u7a0e\u52a1\u5c40\u6216\u8005\u76f8\u5173\u7a0e\u52a1\u90e8\u95e8\uff0c\u4ed6\u4eec\u5c06\u80fd\u591f\u4e3a\u60a8\u63d0\u4f9b\u6b63\u786e\u7684\u4fe1\u606f\u548c\u6307\u5bfc\u3002",
#         "role": "assistant"
#       }
#     }
#   ],
#   "created": 1686797348,
#   "id": "chatcmpl-7RXKm2DOIKxDtyCkfHF1Gz2xEW7Yi",
#   "model": "gpt-3.5-turbo-0613",
#   "object": "chat.completion",
#   "usage": {
#     "completion_tokens": 63,
#     "prompt_tokens": 60,
#     "total_tokens": 123
#   }
# }
# '''
# print('*'*50)
#
# p = "\u65e0\u6cd5\u8bc6\u522b\u5c5e\u6027\u201cphysical_network, network_type\u201d"
# p2 = "\u8d26\u6237\u8bbe\u5907"
# content= "\u5f88\u62b1\u6b49\uff0c\u6211\u65e0\u6cd5\u63d0\u4f9b\u5173\u4e8e\u5728\u676d\u5dde\u7f34\u7a0e\u7684\u8be6\u7ec6\u4fe1\u606f\u3002\u8bf7\u60a8\u54a8\u8be2\u5f53\u5730\u7a0e\u52a1\u5c40\u6216\u8005\u76f8\u5173\u7a0e\u52a1\u90e8\u95e8\uff0c\u4ed6\u4eec\u5c06\u80fd\u591f\u4e3a\u60a8\u63d0\u4f9b\u6b63\u786e\u7684\u4fe1\u606f\u548c\u6307\u5bfc\u3002"
#
# # print(p2.encode().decode("unicode_escape"))
# # print(p2.encode('utf-8').decode('unicode_escape'))
# print(str.encode('unicode-escape').decode('unicode-escape'))
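The TODO notes above show the model joining multiple locations into one comma-separated string even when the description asks for an array. Since the parameters block follows JSON Schema, an alternative worth trying (a sketch I have not run against the cases above; the locations parameter name is my own) is to declare the field as a real array type instead of describing the array format in prose:

functions = [
    {
        "name": "get_tax_info",
        "description": "提取交税地点",
        "parameters": {
            "type": "object",
            "properties": {
                "locations": {
                    # a real array type rather than asking for array-like text in the description
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "交税地点,比如浙江省-湖州市-安吉县",
                }
            },
            "required": ["locations"],
        },
    }
]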
  • Information summarization test
    import openai
    import json

    openai.api_base = 'your proxy base URL'  # proxy / relay endpoint, same as in the scripts above
    openai.api_key = 'your own apikey'

    '''
    Try having the function summarize the input, so that the summary can later be used to drive other operations.

    -- Result:
     The model does use the conversation context, but it still returns a reply to the last question rather than a summary of the content!!!
    '''
    
    
    def get_summary(summary):
        summary_info = {
            "summary": summary
        }
        return json.dumps(summary_info)
    
    # Step 1, send model the user query and what functions it has access to
    def run_conversation(messages):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            # messages=[{"role": "user", "content": content}],
            messages=messages,
            functions=[
                {
                    "name": "get_summary",
                    "description": "提炼输入内容的主要信息,总结成句子的中心思想并输出",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "summary": {
                                "type": "string",
                                "description": "对输入内容的一段总结性描述,里面包含了输入内容的重要信息,不重要或非相关的信息被丢弃",
                            }
                        },
                        "required": ["summary"],
                    },
                }
            ],
            function_call="auto",
        )
    
        message = response["choices"][0]["message"]
    
        print("*" * 60)
        print(message)
        # print(type(message))
        print("*" * 60)
        arguments_ = message["content"]
        # print(type(arguments_))
        # print(arguments_)  # the Chinese here already displays normally, so no extra unicode decoding is needed
        return arguments_
    
    
    old_q = "我想邀请员工线上签合同,要怎么在你们app上操作"
    old_a = "您好,您先在【首页】点击【人员管理】-【员工】,选择线上签署签约,对合同模板内容进行确认并输入验证码获取签章,然后填写合同相关内容信息(被邀请的人员姓名、身份证号、税前工资、发薪日),最后对合同内容确认无误后即可通过链接或二维码方式发起邀请"
    new_q = "那如果是线下呢"
    # content = "我在杭州上班,也在这里缴税"
    
    messages = [
                {"role": "user", "content": old_q},
                {"role": "assistant", "content": old_a},
                {"role": "user", "content": new_q},
               ]
    result = run_conversation(messages)
    print(result)
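    One variation worth trying here (my own suggestion, not verified in the test above): instead of function_call="auto", force the model to call get_summary by passing function_call={"name": "get_summary"}, so the summary should land in function_call.arguments rather than the model answering the last question.

    forced_response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=[
            {
                "name": "get_summary",
                "description": "提炼输入内容的主要信息,总结成句子的中心思想并输出",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "summary": {
                            "type": "string",
                            "description": "对输入内容的一段总结性描述,里面包含了输入内容的重要信息,不重要或非相关的信息被丢弃",
                        }
                    },
                    "required": ["summary"],
                },
            }
        ],
        # force the function call instead of letting the model decide ("auto")
        function_call={"name": "get_summary"},
    )
    print(forced_response["choices"][0]["message"])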
    
    
     

 
