[InternLM (书生·浦语)] Large Model Hands-on Camp — Assignment 6

Evaluating the InternLM2-Chat-7B model on the C-Eval dataset with OpenCompass

Environment Setup

1. Create the virtual environment


conda create --name opencompass --clone=/root/share/conda_envs/internlm-base
source activate opencompass
git clone https://github.com/open-compass/opencompass
cd opencompass
pip install -e .
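As a quick sanity check that the editable install worked (just a sketch; any error here means the pip install -e . step needs revisiting):

python -c "import opencompass"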


2. Prepare the dataset

# Copy the evaluation dataset archive and unzip it into data/
cp /share/temp/datasets/OpenCompassData-core-20231110.zip /root/opencompass/
unzip OpenCompassData-core-20231110.zip
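The archive is assumed to unpack into a data/ directory at the repo root (the default layout OpenCompass expects); listing it should show a ceval/ subfolder among the other datasets:

ls /root/opencompass/data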

3. List supported datasets and models

# List all configs related to internlm and ceval
python tools/list_configs.py internlm ceval

Evaluation

Evaluating a given model is mostly a matter of supplying the right parameters. They can be passed one by one on the command line, or gathered in a config file; a sketch of the command-line form is shown below, while this post uses a config file.
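For reference, a minimal command-line sketch, assuming the standard run.py flags (--datasets, --hf-path, --tokenizer-path, --tokenizer-kwargs, --model-kwargs, --max-seq-len, --max-out-len, --batch-size, --num-gpus, --debug); the model path is the same one used in the config further down:

python run.py \
    --datasets ceval_gen \
    --hf-path /share/model_repos/internlm2-chat-7b \
    --tokenizer-path /share/model_repos/internlm2-chat-7b \
    --tokenizer-kwargs padding_side='left' truncation_side='left' trust_remote_code=True \
    --model-kwargs trust_remote_code=True device_map='auto' \
    --max-seq-len 2048 \
    --max-out-len 100 \
    --batch-size 4 \
    --num-gpus 1 \
    --debug

The config-file route scales better once a chat template is involved, so create configs/eval_internlm2_chat.py with the following contents: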

from mmengine.config import read_base
from opencompass.models import HuggingFaceCausalLM
from opencompass.partitioners import NaivePartitioner
from opencompass.runners import LocalRunner
from opencompass.tasks import OpenICLEvalTask, OpenICLInferTask

with read_base():
    # choose a list of datasets
    from .datasets.ceval.ceval_gen_5f30c7 import ceval_datasets

datasets = [*ceval_datasets]


# Chat template for InternLM2: every turn is wrapped in the model's special
# [UNUSED_TOKEN_146] ... [UNUSED_TOKEN_145] delimiters; the BOT round is the one generated.
_meta_template = dict(
    round=[
        dict(role='HUMAN', begin='[UNUSED_TOKEN_146]user\n', end='[UNUSED_TOKEN_145]\n'),
        dict(role='SYSTEM', begin='[UNUSED_TOKEN_146]system\n', end='[UNUSED_TOKEN_145]\n'),
        dict(role='BOT', begin='[UNUSED_TOKEN_146]assistant\n', end='[UNUSED_TOKEN_145]\n', generate=True),
    ],
    eos_token_id=92542
)

# Model under test: InternLM2-Chat-7B, loaded as a HuggingFace causal LM.
models = [
    dict(
        type=HuggingFaceCausalLM,
        abbr='internlm2-chat-7b',
        path="/share/model_repos/internlm2-chat-7b",
        tokenizer_path='/share/model_repos/internlm2-chat-7b',
        model_kwargs=dict(
            trust_remote_code=True,
            device_map='auto',
        ),
        tokenizer_kwargs=dict(
            padding_side='left',
            truncation_side='left',
            trust_remote_code=True,
        ),
        max_out_len=100,
        max_seq_len=2048,
        batch_size=128,
        meta_template=_meta_template,
        run_cfg=dict(num_gpus=1, num_procs=1),
        end_str='[UNUSED_TOKEN_145]',
    )
]
# Inference stage: split tasks with the naive partitioner and run them locally.
infer = dict(
    partitioner=dict(type=NaivePartitioner),
    runner=dict(
        type=LocalRunner,
        max_num_workers=2,
        task=dict(type=OpenICLInferTask)),
)
# Evaluation stage: score the inference outputs, also run locally.
eval = dict(
    partitioner=dict(type=NaivePartitioner),
    runner=dict(
        type=LocalRunner,
        max_num_workers=2,
        task=dict(type=OpenICLEvalTask)),
)
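With the config saved, launch the run through OpenCompass's entry script; this assumes run.py accepts a config path plus a --debug flag for serialized, verbose execution:

cd /root/opencompass
python run.py configs/eval_internlm2_chat.py --debug

When it finishes, OpenCompass prints a per-subject C-Eval accuracy table and writes the same summary under the outputs/ directory (the default, timestamped work dir).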

Evaluation results:

(Images 1–2: evaluation result screenshots)

Partial results:

(Image 3: partial results screenshot)
