[linux] Training a tokenizer on very long text fails: training data format is incorrect


Traceback (most recent call last):
  File "/xxxtext_generation_train/preprocess/token_preprocess/train_tokenizer.py", line 170, in
    spm.SentencePieceTrainer.train(
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 989, in Train
    SentencePieceTrainer._Train(arg=arg, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 973, in _Train
    model_proto = SentencePieceTrainer._TrainFromMap4(new_kwargs,
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 939, in _TrainFromMap4
    return _sentencepiece.SentencePieceTrainer__TrainFromMap4(args, iter)
RuntimeError: Internal: src/trainer_interface.cc(428) [!sentences_.empty()]  

The assertion `[!sentences_.empty()]` means SentencePiece collected zero usable sentences from the input, so check the training data format. Don't inspect samples with a bare `print`; put each one into a dict and dump it with `json.dumps(d, ensure_ascii=False)`, which makes empty lines and stray whitespace visible.
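A minimal sketch of that inspection step (the `dump_sample` helper and the `"text"` field name are illustrative, not from the original script):

```python
import json

def dump_sample(d: dict) -> str:
    """Serialize one training record so hidden formatting issues are visible.

    ensure_ascii=False keeps CJK text readable instead of \\uXXXX escapes.
    """
    return json.dumps(d, ensure_ascii=False)

# Wrap each raw line in a dict before inspecting it: empty or
# whitespace-only lines stand out clearly in the JSON output.
raw_lines = ["这是一条样本", "", "   "]
for line in raw_lines:
    print(dump_sample({"text": line}))
```

Lines that print as `{"text": ""}` or contain only whitespace contribute nothing to training; if every line is filtered out, the trainer hits the `sentences_.empty()` assertion above.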
