Code for getting the weights of each layer in PyTorch

Method 1: extract the parameters of a given layer
# weight and bias tensors of the layer named fc1
fc1_bias = model.fc1.bias.data
fc1_weight = model.fc1.weight.data
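
For context, a minimal runnable sketch (the Net class and the layer name fc1 are illustrative assumptions, matching the attribute used above):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 10)  # hypothetical layer named fc1

    def forward(self, x):
        return self.fc1(x)

model = Net()
print(model.fc1.weight.data.shape)  # torch.Size([10, 784])
print(model.fc1.bias.data.shape)    # torch.Size([10])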

Method 2: extract all weight parameters in the model (bias works the same way)
for name, par in model.named_parameters():
    if "weight" in name:
        print(name)  # print the names of all weight tensors
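
To keep the tensors themselves rather than only printing their names, a sketch along these lines works (the dict name weights is an illustrative choice):

weights = {name: par.data.clone() for name, par in model.named_parameters()
           if "weight" in name}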

Method 3: extract all parameters
import numpy as np

params = list(model.parameters())
np.set_printoptions(suppress=True)
print(params)

or

np.set_printoptions(suppress=True)
for param_tensor in model.state_dict():
    # call .cpu() before .numpy() if the model lives on the GPU
    print(param_tensor, "\n", model.state_dict()[param_tensor].numpy())
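
An equivalent loop over state_dict().items() avoids looking each tensor up twice; a minimal sketch:

for name, tensor in model.state_dict().items():
    print(name, tensor.detach().cpu().numpy())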

Method 4: total number of model parameters
model_parameters = filter(lambda p: p.requires_grad, model.parameters())
params = sum([np.prod(p.size()) for p in model_parameters])
print(params)
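
A pure-PyTorch alternative that needs no NumPy and counts only trainable parameters (a sketch):

total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(total)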

Method 5: view the output shape of every layer and the corresponding parameter count
from torchsummary import summary
summary(net, (1, 28, 28))

ptflops offers similar functionality: from ptflops import get_model_complexity_info
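
A sketch of a typical ptflops call, reusing the (1, 28, 28) input shape from the summary example above:

from ptflops import get_model_complexity_info

macs, params = get_model_complexity_info(net, (1, 28, 28),
                                          as_strings=True,
                                          print_per_layer_stat=True)
print(macs, params)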

Original article: https://blog.csdn.net/lrzh0123/article/details/106409276
