Fixing the "AutoConfig is not defined" error when calling a model through the pipeline API in Transformers 4.3.3

Project scenario:

I hit this bug while using the transformers pipeline API for sentiment classification, loading my own locally pre-trained model.


Problem description

The error is as follows:

D:\python36\lib\site-packages\OpenSSL\crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
  from cryptography import utils, x509
Traceback (most recent call last):
  File "D:/python36/pythonProject/study_pytorch/Bert_Study/管线调用Bert.py", line 4, in 
    nlp = pipeline("sentiment-analysis",model='D:\model\chinese_bert_models',config='D:\model\chinese_bert_models',tokenizer=tokenizers)
  File "D:\python36\lib\site-packages\transformers\pipelines\__init__.py", line 377, in pipeline
    config = AutoConfig.from_pretrained(config, revision=revision)
NameError: name 'AutoConfig' is not defined

The failing branch in transformers\pipelines\__init__.py (excerpt):

    # Instantiate config if needed
    if isinstance(config, str):
        config = AutoConfig.from_pretrained(config, revision=revision)

    # Instantiate modelcard if needed
    if isinstance(modelcard, str):
        modelcard = ModelCard.from_pretrained(modelcard, revision=revision)

    # Instantiate model if needed
    if isinstance(model, str):
        # Handle transparent TF/PT model conversion
        model_kwargs = {}

Cause analysis:

The error occurs because the name AutoConfig is not recognized inside transformers' pipelines\__init__.py. My guess is that transformers 4.3.3 was quite new at the time and this import was simply missing from that file; I am not sure of the exact cause.
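The shape of the failure is worth noting: Python resolves names inside a function body at call time, not at import time, so a missing module-level import only surfaces once execution reaches the branch that uses it. The minimal sketch below (a stand-in function, not the real transformers code) reproduces exactly this behavior: the isinstance(config, str) branch raises NameError only when config is actually a string path.

```python
# Names inside a function body are looked up when the function runs,
# not when the module is imported -- so a missing module-level import
# of AutoConfig only surfaces when the str branch is actually taken.

def build_config(config):
    if isinstance(config, str):
        # AutoConfig was never imported in this module
        config = AutoConfig.from_pretrained(config)
    return config

# A non-string config passes through without error:
print(build_config(None))  # None

# A string config triggers the NameError, mirroring the traceback above:
try:
    build_config("D:/model/chinese_bert_models")
except NameError as e:
    print(e)  # name 'AutoConfig' is not defined
```

This is why the library imports fine and the pipeline call only fails once a config path string is passed in.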


Solution:

Rather than importing AutoConfig right away, I first looked at the similar class ModelCard and found that it is imported at the top of the same file. By analogy, I imported AutoConfig from transformers in that file as well, and marked the change with a comment:

# limitations under the License.
from transformers import AutoConfig  # added manually: AutoConfig was missing
import warnings

With that change the problem was solved and the code ran correctly. Why transformers does not import this class itself, I have no idea yet; anyone interested is welcome to look into it together.
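One caveat: editing a file inside site-packages works, but the fix is lost on every reinstall or upgrade. An alternative is to inject the missing name into the already-imported module from your own script, before calling pipeline(). The sketch below demonstrates the idea with a stand-in module and a hypothetical AutoConfig class rather than the real transformers library:

```python
import types

# Stand-in for transformers.pipelines: a module whose function uses a
# name (AutoConfig) that the module itself never imported.
pipelines = types.ModuleType("pipelines_stub")
exec(
    "def pipeline(config):\n"
    "    if isinstance(config, str):\n"
    "        config = AutoConfig.from_pretrained(config)\n"
    "    return config\n",
    pipelines.__dict__,
)

# Stand-in for transformers.AutoConfig (hypothetical, for illustration).
class AutoConfig:
    @staticmethod
    def from_pretrained(path):
        return {"loaded_from": path}

# The patch: assign the missing name onto the module object at runtime.
# With the real library this would be, in your own script:
#   import transformers.pipelines as p
#   from transformers import AutoConfig
#   p.AutoConfig = AutoConfig
pipelines.AutoConfig = AutoConfig

print(pipelines.pipeline("D:/model/chinese_bert_models"))
# {'loaded_from': 'D:/model/chinese_bert_models'}
```

The runtime patch keeps site-packages untouched, at the cost of having to run it before every pipeline() call in your own code.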

Output after the fix:

D:\python36\lib\site-packages\OpenSSL\crypto.py:8: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
  from cryptography import utils, x509
Some weights of the model checkpoint at D:\model\chinese_bert_models were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.decoder.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at D:\model\chinese_bert_models and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[{'label': 'LABEL_1', 'score': 0.5313035249710083}]
