Freezing specific layers for training in PyTorch

To support certain functionality, some layers of the network need to be frozen. The detailed code is below:

    name_list = ['up1', 'up2', 'up3', 'up4', 'up5']   # prefixes of the layers to freeze
    model = eval(args.model_name)(n_class=n_class)    # load the model
    for name, value in model.named_parameters():
        # parameter names are dotted, e.g. "up1.conv.weight",
        # so match on the leading module name rather than `name in name_list`
        if name.split('.')[0] in name_list:
            value.requires_grad = False
    # hand only the still-trainable parameters to the optimizer
    params = filter(lambda p: p.requires_grad, model.parameters())
    optimizer = torch.optim.SGD(params, lr=lr_start, momentum=0.9, weight_decay=0.0005)
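As a minimal, self-contained sketch of the same idea (the `ToyNet` model and its layer names are made up here purely for illustration), freezing by name prefix and then verifying it might look like:

```python
import torch
import torch.nn as nn

# A toy model whose submodule names mirror the up1/up2 layers above
class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.up1 = nn.Conv2d(3, 8, 3, padding=1)
        self.up2 = nn.Conv2d(8, 8, 3, padding=1)
        self.head = nn.Conv2d(8, 2, 1)

model = ToyNet()
freeze_prefixes = ('up1', 'up2')

# Parameter names look like "up1.weight", so match on the prefix
for name, param in model.named_parameters():
    if name.startswith(freeze_prefixes):
        param.requires_grad = False

# Hand only the trainable parameters to the optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.01, momentum=0.9, weight_decay=0.0005)

frozen = [n for n, p in model.named_parameters() if not p.requires_grad]
print(frozen)  # ['up1.weight', 'up1.bias', 'up2.weight', 'up2.bias']
```

Passing only the `requires_grad` parameters to the optimizer avoids wasting optimizer state (e.g. momentum buffers) on tensors that will never be updated.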

Also, on printing network parameters in PyTorch (reference: https://blog.csdn.net/Jee_King/article/details/87368398):

1.

    for name, param in VGG.named_parameters():
        print(name, '      ', param.size())

This prints module-name.index.weight-name; note that layers that need no backprop, such as ReLU and pooling, are not printed here.

2.

    for i, para in enumerate(model.parameters()):
        print(i, para)

This prints each parameter's index and its concrete values. Note that `parameters()` only yields learnable tensors, so parameter-free layers such as ReLU and pooling do not appear here either.

3.

    for name, param in model.named_parameters():
        print(name, param)

Printing `param` directly, i.e. `print(name, param)`, outputs both the parameter name and its concrete values.
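Putting the three variants together on a small model (the `nn.Sequential` below is a hypothetical example, chosen to mix layers with and without parameters):

```python
import torch.nn as nn

# Hypothetical model: conv layers have parameters, ReLU/MaxPool do not
model = nn.Sequential(
    nn.Conv2d(3, 4, 3),   # index 0: has weight and bias
    nn.ReLU(),            # index 1: no parameters
    nn.MaxPool2d(2),      # index 2: no parameters
    nn.Conv2d(4, 2, 1),   # index 3: has weight and bias
)

# 1. name and shape only
for name, param in model.named_parameters():
    print(name, '      ', param.size())

# 2. running index plus the tensor itself
for i, para in enumerate(model.parameters()):
    print(i, para.shape)

# 3. name plus the tensor itself
names = [name for name, _ in model.named_parameters()]
print(names)  # ['0.weight', '0.bias', '3.weight', '3.bias'] -- ReLU/MaxPool absent
```

The indices in the dotted names (`0.weight`, `3.weight`) come from the positions in the `Sequential`; the parameter-free layers at indices 1 and 2 never show up in any of the three loops.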
