An important detail to note is that this saves the model's parameters, not the entire model. Consequently, when loading the parameters we must first instantiate a model with the same architecture.
import torch
from torch import nn
from torch.nn import functional as F
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(10, 50)
        self.out = nn.Linear(50, 10)

    def forward(self, X):
        return self.out(F.relu(self.hidden(X)))
net = MLP()
print(net)
"""
Output:
MLP(
(hidden): Linear(in_features=10, out_features=50, bias=True)
(out): Linear(in_features=50, out_features=10, bias=True)
)
"""
X = torch.randn(size=(2, 10))
Y = net(X)
print(Y)
"""
Output:
tensor([[ 0.0438,  0.0588,  0.0569, -0.1400, -0.2479,  0.1160, -0.3251, -0.1829,
          0.2035, -0.1680],
        [-0.1223, -0.1221, -0.0059,  0.3528, -0.0057,  0.1553, -0.2157,  0.3413,
          0.2524, -0.2509]], grad_fn=<AddmmBackward0>)
"""
Next, store the model parameters to disk and load them into a fresh instance of the same class:
torch.save(net.state_dict(), 'mlp.params')
clone = MLP()
clone.load_state_dict(torch.load('mlp.params'))
clone.eval()
"""
Output:
MLP(
(hidden): Linear(in_features=10, out_features=50, bias=True)
(out): Linear(in_features=50, out_features=10, bias=True)
)
"""
Since the two instances hold identical parameters, feeding in the same input X should produce identical outputs.
Y_clone = clone(X)
print(Y_clone == Y)
"""
Output:
tensor([[True, True, True, True, True, True, True, True, True, True],
        [True, True, True, True, True, True, True, True, True, True]])
"""