How to freeze layers in PyTorch whose parameters you don't want updated

I guess the following is what you want!

#--------------------------- model definition --------------------------------#
import torch.nn as nn
import torch.optim as optim
from torchvision import models


def weights_init_xavier(m):
    # Xavier-init helper; assumed here, since the original post calls it
    # without defining it.
    if isinstance(m, nn.Linear):
        nn.init.xavier_normal_(m.weight)
        nn.init.constant_(m.bias, 0.0)


class YourModel(nn.Module):
    def __init__(self, feat_dim):   # feat_dim: dim of the backbone's output feature (2048 for ResNet-50)
        super(YourModel, self).__init__()

        BackBone = models.resnet50(pretrained=True)
        BackBone.fc = nn.Identity()  # drop the 1000-way classifier so the backbone outputs the raw 2048-dim feature

        add_block = []
        add_block += [nn.Linear(feat_dim, 512)]
        add_block += [nn.LeakyReLU(inplace=True)]
        add_block = nn.Sequential(*add_block)
        add_block.apply(weights_init_xavier)

        self.BackBone = BackBone
        self.add_block = add_block

    def forward(self, input):   # input: image batch; the backbone output has feat_dim (2048) channels
        x = self.BackBone(input)
        x = self.add_block(x)
        return x
#----------------------------------------------------------------------------#
 
# Model preparation
model = YourModel(feat_dim=2048)
 
# Optimizer, regularization, weight settings, and layer freezing
 
for param in model.parameters():
    param.requires_grad = False
for param in model.add_block.parameters():
    param.requires_grad = True
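
Equivalently, since only the backbone needs freezing here, you can skip the disable-everything step and freeze just that submodule; a minimal variant, assuming the same model instance:

# Equivalent: freeze only the backbone and leave the new head trainable.
for param in model.BackBone.parameters():
    param.requires_grad = False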
 
optimizer = optim.SGD(
            filter(lambda p: p.requires_grad, model.parameters()),  # remember to wrap with filter(), otherwise it will throw an error; filter() usage: https://www.runoob.com/python/python-func-filter.html
            lr=0.01,
            weight_decay=1e-5, momentum=0.9, nesterov=True)
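
After building the optimizer, a quick sanity check helps confirm that only the new head is still trainable; the sketch below assumes the YourModel instance defined above. Note also that requires_grad=False freezes weights but does not stop BatchNorm layers in the backbone from updating their running statistics during training, so it is common to switch the frozen backbone to eval() mode as well.

# Sanity check: only add_block parameters should be printed as trainable.
for name, param in model.named_parameters():
    if param.requires_grad:
        print("trainable:", name)

# requires_grad=False does not freeze BatchNorm running statistics;
# putting the frozen backbone in eval() mode keeps them fixed too.
model.BackBone.eval()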
 
