python-transformers Basics Summary (Part 2): How to Fine-Tune a Pretrained Model

How to fine-tune a pretrained model

The example below loads bert-base-uncased, tokenizes two sentences, attaches labels by hand, and runs a single optimization step with AdamW.

import torch
from torch.optim import AdamW  # transformers.AdamW is deprecated; use the PyTorch optimizer
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Same setup as before: load the tokenizer and model from a checkpoint
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = [
    "I've been waiting for a HuggingFace course my whole life.",
    "This course is amazing!",
]
batch = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")

# Attach labels by hand so the model can compute a loss for fine-tuning
batch["labels"] = torch.tensor([1, 1])

optimizer = AdamW(model.parameters(), lr=5e-5)  # 5e-5 is a typical fine-tuning learning rate

# Forward pass: because the batch contains labels, the output includes a loss
loss = model(**batch).loss
loss.backward()   # backpropagate to compute gradients
optimizer.step()  # update the model weights
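
The snippet above performs only a single optimization step on one hand-labeled batch. In practice, fine-tuning iterates over a DataLoader for several epochs. Below is a minimal, self-contained sketch of such a loop; the toy in-memory dataset, the learning rate, and names like train_dataloader and num_epochs are illustrative assumptions, not part of the original example.

import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Toy in-memory dataset (illustrative); in practice you would load a real
# labeled dataset, e.g. with the datasets library.
texts = [
    "I've been waiting for a HuggingFace course my whole life.",
    "This course is amazing!",
]
labels = [1, 1]

encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
dataset = TensorDataset(
    encodings["input_ids"], encodings["attention_mask"], torch.tensor(labels)
)
train_dataloader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = AdamW(model.parameters(), lr=5e-5)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.train()

num_epochs = 3  # illustrative value
for epoch in range(num_epochs):
    for input_ids, attention_mask, batch_labels in train_dataloader:
        outputs = model(
            input_ids=input_ids.to(device),
            attention_mask=attention_mask.to(device),
            labels=batch_labels.to(device),
        )
        outputs.loss.backward()  # backpropagate the loss
        optimizer.step()         # update the weights
        optimizer.zero_grad()    # reset gradients before the next batch

Note the call to optimizer.zero_grad() after each step: it prevents gradients from accumulating across batches, which the single-step example above did not need to worry about.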
