Fixing 'BaichuanTokenizer' object has no attribute 'sp_model' without reinstalling transformers or torch

As described in https://github.com/baichuan-inc/Baichuan2/issues/204:

Edit tokenization_baichuan.py so that the super().__init__() call runs last:

        self.vocab_file = vocab_file
        self.add_bos_token = add_bos_token
        self.add_eos_token = add_eos_token
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)
        super().__init__(
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            add_bos_token=add_bos_token,
            add_eos_token=add_eos_token,
            sp_model_kwargs=self.sp_model_kwargs,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            **kwargs,
        )
        # self.vocab_file = vocab_file
        # self.add_bos_token = add_bos_token
        # self.add_eos_token = add_eos_token
        # self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        # self.sp_model.Load(vocab_file)
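The reason the reordering works: newer transformers versions call methods such as vocab_size from within the base class's __init__, and those methods read self.sp_model — so the attribute must already exist before super().__init__() runs. A minimal sketch with toy classes (not the real transformers API) illustrating the failure and the fix:

```python
# Toy illustration of the bug: the base __init__ calls a subclass method
# that reads an attribute the subclass has not set yet.

class Base:
    def __init__(self):
        # Mimics the base tokenizer querying vocab_size() during __init__.
        self.size = self.vocab_size()

class Broken(Base):
    def __init__(self):
        super().__init__()          # vocab_size() runs before sp_model exists
        self.sp_model = ["a", "b"]

    def vocab_size(self):
        return len(self.sp_model)

class Fixed(Base):
    def __init__(self):
        self.sp_model = ["a", "b"]  # set the attribute first ...
        super().__init__()          # ... then let the base class use it

    def vocab_size(self):
        return len(self.sp_model)

try:
    Broken()
except AttributeError as e:
    print("Broken:", e)  # ... object has no attribute 'sp_model'

print("Fixed vocab size:", Fixed().size)
```

The same ordering principle is what the patched tokenization_baichuan.py applies: initialize sp_model, then call super().__init__().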

Note: you need to download the model files locally first, then edit the tokenization_baichuan.py inside the downloaded model directory.
