InternLM (书生·浦语) Large Model Practical Camp — Learning Notes (2)

Fun Demos of the InternLM (书生·浦语) Models

The second session is about running three pre-built demos:
1. InternLM-Chat-7B intelligent conversation
2. Lagent tool calling to solve a simple math problem
3. 浦语·灵笔 (InternLM-XComposer) multimodal image-text creation and understanding

Original video: https://www.bilibili.com/video/BV1Ci4y1z72H/
InternStudio: https://studio.intern-ai.org.cn/
GitHub tutorial: https://github.com/InternLM/tutorial/blob/main/helloworld/hello_world.md
Registration link for the InternLM practical camp: https://www.wjx.top/vm/Yzzz2mi.aspx?udsid=876887 (invitation code: 16885)

InternLM Model Introduction

InternLM is an open-source, lightweight training framework designed to support large-model training without heavy dependencies. Two pre-trained models built with it have been open-sourced: InternLM-7B and InternLM-20B.

Lagent is a lightweight, open-source agent framework built on large language models; it lets users quickly turn an LLM into several kinds of agents. The Lagent framework helps bring out the full capability of the InternLM models.

浦语·灵笔 (InternLM-XComposer) is a vision-language large model built on the InternLM LLM. It has strong image-text understanding and creation abilities, and can easily produce an illustrated article.

Intelligent Conversation Demo

Server environment setup

Choose the A100 (1/4) configuration and the Cuda11.7-conda image (currently the only option), then enter the dev machine.
In the terminal, run the bash command to enter the conda environment.
Clone a copy of the pre-built PyTorch 2.0.1 environment from the local share directory:

conda create --name internlm-demo --clone=/root/share/conda_envs/internlm-base
conda activate internlm-demo
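
Optionally, a quick sanity check that the cloned environment is usable (assuming, as stated above, that the share environment ships PyTorch 2.0.1):

python -c "import torch; print(torch.__version__)"   # should print 2.0.1 if the clone succeeded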

Install dependencies

python -m pip install --upgrade pip
pip install modelscope==1.9.5
pip install transformers==4.35.2
pip install streamlit==1.24.0
pip install sentencepiece==0.1.99
pip install accelerate==0.24.1

Model download

Copy the InternLM model directly from the share directory; the -r flag copies the directory and its contents recursively.

mkdir -p /root/model/Shanghai_AI_Laboratory
cp -r /root/share/temp/model_repos/internlm-chat-7b /root/model/Shanghai_AI_Laboratory

Alternatively, use the snapshot_download function from modelscope to download the model; the first argument is the model name, and the cache_dir argument is the download path.

Create a model directory under /root, create a download.py file inside it with the following content and save it, then run python /root/model/download.py in the terminal to start the download.

import torch
from modelscope import snapshot_download, AutoModel, AutoTokenizer
import os
model_dir = snapshot_download('Shanghai_AI_Laboratory/internlm-chat-7b', cache_dir='/root/model', revision='v1.0.3')

Code preparation

Create a code directory under /root and clone the repository:

cd /root/code
git clone https://gitee.com/internlm/InternLM.git

Switch to a specific commit to stay consistent with the tutorial:

cd InternLM
git checkout 3028f07cb79e5b1d7342f4ad8d11efad3fd13d17

In web_demo.py, change the model path to /root/model/Shanghai_AI_Laboratory/internlm-chat-7b, as in the sketch below.
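
For reference, after the edit the model-loading lines in web_demo.py should look roughly like this (a sketch only; the exact function wrapper and decorators in the tutorial's web_demo.py may differ):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the tokenizer and model from the local copy instead of downloading from a hub.
model_path = "/root/model/Shanghai_AI_Laboratory/internlm-chat-7b"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).to(torch.bfloat16).cuda()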

Run from the terminal

Create a cli_demo.py file in the /root/code/InternLM directory with the following code:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM


model_name_or_path = "/root/model/Shanghai_AI_Laboratory/internlm-chat-7b"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map='auto')
model = model.eval()

system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.
"""

messages = [(system_prompt, '')]

print("=============Welcome to InternLM chatbot, type 'exit' to exit.=============")

while True:
    input_text = input("User  >>> ")
    input_text = input_text.replace(' ', '')
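    # Note: as in the tutorial, the line above strips every space from the input, which will mangle multi-word English prompts.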
    if input_text == "exit":
        break
    response, history = model.chat(tokenizer, input_text, history=messages)
    messages.append((input_text, response))
    print(f"robot >>> {response}")

Run cli_demo.py in the terminal:

python /root/code/InternLM/cli_demo.py

Run the web demo

To access the web app running on the server, you first need to set up the local port.

Configure the local port

Step 1: Open a terminal on your local machine and run the following command to generate an SSH key pair:

ssh-keygen -t rsa

Step 2: You will be prompted for where to save the key file; the default is the ~/.ssh/ directory. Press Enter to accept the default or type a custom path.
Step 3: The public key is stored at ~/.ssh/id_rsa.pub by default; view it with cat:

cat ~/.ssh/id_rsa.pub

The long string that is printed is your public key.
Step 4: Add the SSH key in the InternStudio console.
Step 5: On the dev machine page, click the SSH connection link to see the command for connecting to the server; the port shown there (e.g. 33090) depends on your dev machine and may differ. Copy that command into your local terminal or VS Code to connect to the server. A port-forwarding example follows.
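With the SSH key configured, set up local port forwarding so the Streamlit page is reachable from your own browser. A minimal example, assuming the SSH host is ssh.intern-ai.org.cn as shown in the console, the dev machine's SSH port is 33090, and the web demo will listen on 6006 (replace these with your own values):

ssh -CNg -L 6006:127.0.0.1:6006 root@ssh.intern-ai.org.cn -p 33090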

Run

After the local port is configured, run the web_demo.py file under /root/code/InternLM in the terminal, setting the address and port. (Make sure to run it inside the prepared conda environment.)

cd /root/code/InternLM
streamlit run web_demo.py --server.address 127.0.0.1 --server.port xxxx

The model only starts loading after you open the page in the browser (this takes a while). Once loading finishes, the chat interface is ready to use.

Lagent Agent Tool-Calling Demo

Install Lagent

First switch to /root/code, clone the lagent repository, and install Lagent from source with pip install -e .

cd /root/code
git clone https://gitee.com/internlm/lagent.git
cd /root/code/lagent
git checkout 511b03889010c4811b1701abb153e02b8e94fb5e # keep the commit consistent with the tutorial if possible
pip install -e . # install from source

Modify the code

Since quite a few changes are needed, simply replace the entire content of /root/code/lagent/examples/react_web_demo.py with the following code:

import copy
import os

import streamlit as st
from streamlit.logger import get_logger

from lagent.actions import ActionExecutor, GoogleSearch, PythonInterpreter
from lagent.agents.react import ReAct
from lagent.llms import GPTAPI
from lagent.llms.huggingface import HFTransformerCasualLM


class SessionState:

    def init_state(self):
        """Initialize session state variables."""
        st.session_state['assistant'] = []
        st.session_state['user'] = []

        #action_list = [PythonInterpreter(), GoogleSearch()]
        action_list = [PythonInterpreter()]
        st.session_state['plugin_map'] = {
            action.name: action
            for action in action_list
        }
        st.session_state['model_map'] = {}
        st.session_state['model_selected'] = None
        st.session_state['plugin_actions'] = set()

    def clear_state(self):
        """Clear the existing session state."""
        st.session_state['assistant'] = []
        st.session_state['user'] = []
        st.session_state['model_selected'] = None
        if 'chatbot' in st.session_state:
            st.session_state['chatbot']._session_history = []


class StreamlitUI:

    def __init__(self, session_state: SessionState):
        self.init_streamlit()
        self.session_state = session_state

    def init_streamlit(self):
        """Initialize Streamlit's UI settings."""
        st.set_page_config(
            layout='wide',
            page_title='lagent-web',
            page_icon='./docs/imgs/lagent_icon.png')
        # st.header(':robot_face: :blue[Lagent] Web Demo ', divider='rainbow')
        st.sidebar.title('模型控制')

    def setup_sidebar(self):
        """Setup the sidebar for model and plugin selection."""
        model_name = st.sidebar.selectbox(
            '模型选择:', options=['gpt-3.5-turbo','internlm'])
        if model_name != st.session_state['model_selected']:
            model = self.init_model(model_name)
            self.session_state.clear_state()
            st.session_state['model_selected'] = model_name
            if 'chatbot' in st.session_state:
                del st.session_state['chatbot']
        else:
            model = st.session_state['model_map'][model_name]

        plugin_name = st.sidebar.multiselect(
            '插件选择',
            options=list(st.session_state['plugin_map'].keys()),
            default=[list(st.session_state['plugin_map'].keys())[0]],
        )

        plugin_action = [
            st.session_state['plugin_map'][name] for name in plugin_name
        ]
        if 'chatbot' in st.session_state:
            st.session_state['chatbot']._action_executor = ActionExecutor(
                actions=plugin_action)
        if st.sidebar.button('清空对话', key='clear'):
            self.session_state.clear_state()
        uploaded_file = st.sidebar.file_uploader(
            '上传文件', type=['png', 'jpg', 'jpeg', 'mp4', 'mp3', 'wav'])
        return model_name, model, plugin_action, uploaded_file

    def init_model(self, option):
        """Initialize the model based on the selected option."""
        if option not in st.session_state['model_map']:
            if option.startswith('gpt'):
                st.session_state['model_map'][option] = GPTAPI(
                    model_type=option)
            else:
                st.session_state['model_map'][option] = HFTransformerCasualLM(
                    '/root/model/Shanghai_AI_Laboratory/internlm-chat-7b')
        return st.session_state['model_map'][option]

    def initialize_chatbot(self, model, plugin_action):
        """Initialize the chatbot with the given model and plugin actions."""
        return ReAct(
            llm=model, action_executor=ActionExecutor(actions=plugin_action))

    def render_user(self, prompt: str):
        with st.chat_message('user'):
            st.markdown(prompt)

    def render_assistant(self, agent_return):
        with st.chat_message('assistant'):
            for action in agent_return.actions:
                if (action):
                    self.render_action(action)
            st.markdown(agent_return.response)

    def render_action(self, action):
        with st.expander(action.type, expanded=True):
            st.markdown(
                "<p style='text-align: left;display:flex;'> <span style='font-size:14px;font-weight:600;width:70px;text-align-last: justify;'>插    件</span><span style='width:14px;text-align:left;display:block;'>:</span><span style='flex:1;'>"  # noqa E501
                + action.type + '</span></p>',
                unsafe_allow_html=True)
            st.markdown(
                "<p style='text-align: left;display:flex;'> <span style='font-size:14px;font-weight:600;width:70px;text-align-last: justify;'>思考步骤</span><span style='width:14px;text-align:left;display:block;'>:</span><span style='flex:1;'>"  # noqa E501
                + action.thought + '</span></p>',
                unsafe_allow_html=True)
            if (isinstance(action.args, dict) and 'text' in action.args):
                st.markdown(
                    "<p style='text-align: left;display:flex;'><span style='font-size:14px;font-weight:600;width:70px;text-align-last: justify;'> 执行内容</span><span style='width:14px;text-align:left;display:block;'>:</span></p>",  # noqa E501
                    unsafe_allow_html=True)
                st.markdown(action.args['text'])
            self.render_action_results(action)

    def render_action_results(self, action):
        """Render the results of action, including text, images, videos, and
        audios."""
        if (isinstance(action.result, dict)):
            st.markdown(
                "<p style='text-align: left;display:flex;'><span style='font-size:14px;font-weight:600;width:70px;text-align-last: justify;'> 执行结果</span><span style='width:14px;text-align:left;display:block;'>:</span></p>",  # noqa E501
                unsafe_allow_html=True)
            if 'text' in action.result:
                st.markdown(
                    "<p style='text-align: left;display:flex;'><span style='flex:1;'>" +  # noqa E501
                    action.result['text'] + '</span></p>',
                    unsafe_allow_html=True)
            if 'image' in action.result:
                image_path = action.result['image']
                image_data = open(image_path, 'rb').read()
                st.image(image_data, caption='Generated Image')
            if 'video' in action.result:
                video_data = action.result['video']
                video_data = open(video_data, 'rb').read()
                st.video(video_data)
            if 'audio' in action.result:
                audio_data = action.result['audio']
                audio_data = open(audio_data, 'rb').read()
                st.audio(audio_data)


def main():
    logger = get_logger(__name__)
    # Initialize Streamlit UI and setup sidebar
    if 'ui' not in st.session_state:
        session_state = SessionState()
        session_state.init_state()
        st.session_state['ui'] = StreamlitUI(session_state)
    else:
        st.set_page_config(
            layout='wide',
            page_title='lagent-web',
            page_icon='./docs/imgs/lagent_icon.png')
        # st.header(':robot_face: :blue[Lagent] Web Demo ', divider='rainbow')
    model_name, model, plugin_action, uploaded_file = st.session_state[
        'ui'].setup_sidebar()

    # Initialize chatbot if it is not already initialized
    # or if the model has changed
    if 'chatbot' not in st.session_state or model != st.session_state[
            'chatbot']._llm:
        st.session_state['chatbot'] = st.session_state[
            'ui'].initialize_chatbot(model, plugin_action)

    for prompt, agent_return in zip(st.session_state['user'],
                                    st.session_state['assistant']):
        st.session_state['ui'].render_user(prompt)
        st.session_state['ui'].render_assistant(agent_return)
    # User input form at the bottom (this part will be at the bottom)
    # with st.form(key='my_form', clear_on_submit=True):
    if user_input := st.chat_input(''):
        st.session_state['ui'].render_user(user_input)
        st.session_state['user'].append(user_input)
        # Add file uploader to sidebar
        if uploaded_file:
            file_bytes = uploaded_file.read()
            file_type = uploaded_file.type
            if 'image' in file_type:
                st.image(file_bytes, caption='Uploaded Image')
            elif 'video' in file_type:
                st.video(file_bytes, caption='Uploaded Video')
            elif 'audio' in file_type:
                st.audio(file_bytes, caption='Uploaded Audio')
            # Save the file to a temporary location and get the path
            file_path = os.path.join(root_dir, uploaded_file.name)
            with open(file_path, 'wb') as tmpfile:
                tmpfile.write(file_bytes)
            st.write(f'File saved at: {file_path}')
            user_input = '我上传了一个图像,路径为: {file_path}. {user_input}'.format(
                file_path=file_path, user_input=user_input)
        agent_return = st.session_state['chatbot'].chat(user_input)
        st.session_state['assistant'].append(copy.deepcopy(agent_return))
        logger.info(agent_return.inner_steps)
        st.session_state['ui'].render_assistant(agent_return)


if __name__ == '__main__':
    root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    root_dir = os.path.join(root_dir, 'tmp_dir')
    os.makedirs(root_dir, exist_ok=True)
    main()

Run the demo

streamlit run /root/code/lagent/examples/react_web_demo.py --server.address 127.0.0.1 --server.port 6006

On the web page, select the InternLM model and wait for it to finish loading, then enter the math problem 已知 2x+3=10,求x (given 2x+3=10, solve for x). InternLM-Chat-7B understands the problem and generates Python code to solve it, and Lagent dispatches that code to the Python interpreter to compute the answer. A hypothetical sketch of such generated code follows.
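For intuition, the code the agent writes for this problem might look something like the following hypothetical sketch (the actual generated code varies between runs):

from sympy import Symbol, solve

# Hypothetical example of agent-generated code for 2x + 3 = 10
x = Symbol('x')
print(solve(2 * x + 3 - 10, x))  # [7/2]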

浦语·灵笔 (InternLM-XComposer) Image-Text Understanding and Creation Demo

In this part we use an A100 (1/4) * 2 machine on InternStudio and the internlm-xcomposer-7b model to deploy an image-text understanding and creation demo.

Environment setup

First select the A100 (1/4) * 2 configuration on InternStudio with the Cuda11.7-conda image, then set up the environment:

bash
/root/share/install_conda_env_internlm_base.sh xcomposer-demo
conda activate xcomposer-demo 
pip install transformers==4.33.1 timm==0.4.12 sentencepiece==0.1.99 gradio==3.44.4 markdown2==2.4.10 xlsxwriter==3.1.2 einops accelerate

Model download

Install modelscope: pip install modelscope==1.9.5
Create a download.py file under /root/model with the following content, then run python /root/model/download.py to start the download.

import torch
from modelscope import snapshot_download, AutoModel, AutoTokenizer
import os
model_dir = snapshot_download('Shanghai_AI_Laboratory/internlm-xcomposer-7b', cache_dir='/root/model', revision='master')

Code preparation

Clone the InternLM-XComposer repository under /root/code:

cd /root/code
git clone https://gitee.com/internlm/InternLM-XComposer.git
cd /root/code/InternLM-XComposer
git checkout 3e8c79051a1356b9c388a6447867355c0634932d  # keep the commit consistent with the tutorial if possible

Run the demo

Enter the following in the terminal:

cd /root/code/InternLM-XComposer
# num_gpus is 1 because InternStudio still reports the A100 (1/4) * 2 configuration as a single GPU
python examples/web_demo.py  \
    --folder /root/model/Shanghai_AI_Laboratory/internlm-xcomposer-7b \
    --num_gpus 1 \
    --port 6006
