[Paper Notes] ChatGPT Series 2.3: DeepSpeed-Chat SFT Training

Related reading: "Two approaches to multi-node, multi-GPU training with accelerate + deepspeed" (Zhihu)
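
That write-up covers the multi-node case. As a minimal sketch (assuming passwordless SSH between nodes and that main.py accepts the same arguments as in the run_1.3b.sh script below), the deepspeed launcher can be pointed at a hostfile that lists each node and its GPU count:

# hostfile: one node per line, "slots" = number of GPUs on that node
#   worker-1 slots=8
#   worker-2 slots=8

# launch the same SFT entry point across all nodes listed in the hostfile
deepspeed --hostfile=hostfile main.py \
   --model_name_or_path facebook/opt-1.3b \
   --zero_stage 2 \
   --deepspeed \
   --output_dir ./output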

Single-node training:

# Move into the first step of the pipeline
cd training/step1_supervised_finetuning/

# Run the training script
bash training_scripts/single_gpu/run_1.3b.sh

# Evaluate the model
bash evaluation_scripts/run_prompt.sh
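
The training script is just a thin wrapper around the deepspeed launcher, which by default uses every visible GPU on the node. A quick smoke-test variant looks like the sketch below (standard launcher flags; the remaining hyperparameters are assumed to fall back to the defaults in main.py):

# restrict the launcher to a single GPU and write to a throwaway directory
deepspeed --num_gpus 1 main.py \
   --model_name_or_path facebook/opt-1.3b \
   --zero_stage 2 \
   --deepspeed \
   --output_dir ./output_smoke_test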

The run_1.3b.sh script:

#!/bin/bash
# DeepSpeed Team
OUTPUT=$1          # output directory, 1st positional argument
ZERO_STAGE=$2      # ZeRO optimization stage, 2nd positional argument
if [ "$OUTPUT" == "" ]; then
    OUTPUT=./output
fi
if [ "$ZERO_STAGE" == "" ]; then
    ZERO_STAGE=2
fi
mkdir -p $OUTPUT

deepspeed main.py \
   --data_path Dahoas/rm-static Dahoas/full-hh-rlhf Dahoas/synthetic-instruct-gptj-pairwise yitingxie/rlhf-reward-datasets \
   --data_split 2,4,4 \
   --model_name_or_path facebook/opt-1.3b \
   --per_device_train_batch_size 8 \
   --per_device_eval_batch_size 8 \
   --max_seq_len 512 \
   --learning_rate 9.65e-6 \
   --weight_decay 0. \
   --num_train_epochs 16 \
   --gradient_accumulation_steps 1 \
   --lr_scheduler_type cosine \
   --num_warmup_steps 0 \
   --seed 1234 \
   --zero_stage $ZERO_STAGE \
   --deepspeed \
   --output_dir $OUTPUT \
   &> $OUTPUT/training.log
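
The two positional arguments map to OUTPUT and ZERO_STAGE above, and all stdout/stderr is redirected into training.log, so a typical run and the way to watch it look like this (the output directory name here is just an example):

# train with ZeRO stage 3 (parameters, gradients and optimizer states all partitioned) and a custom output dir
bash training_scripts/single_gpu/run_1.3b.sh ./output_sft 3

# the script sends all output to the log file, so follow progress with tail
tail -f ./output_sft/training.log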
