Flexible parameter passing with PyTorch optim

Training a model obviously needs an optimizer, right? But very often a single model is built from different parts: an encoder, dense layers, and so on, and I want a different learning rate for each layer or each sub-module. What do we do then?

torch.optim already gives us a nice interface for exactly this. First, a look at the docs:
(Screenshot of the torch.optim.Adam signature from the docs.)
These are Adam's arguments. Ignore lr, betas, weight_decay and friends for now and just look at the params argument:
params: iterable of parameters to optimize or dicts defining parameter groups
That is, either an iterable of the parameters to optimize, or an iterable of dicts where each dict defines one parameter group.
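
Concretely, both of the following calls are valid. This is just a minimal sketch with a toy model and made-up numbers to show the two forms:

import torch.nn as nn
from torch.optim import Adam

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))

# form 1: a plain iterable of parameters, one lr for everything
opt_a = Adam(model.parameters(), lr=1e-3)

# form 2: an iterable of dicts, one per parameter group,
# each group can override lr / weight_decay / etc.
opt_b = Adam([{"params": model[0].parameters(), "lr": 1e-3},
              {"params": model[2].parameters(), "lr": 1e-2, "weight_decay": 1e-4}])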

In other words, params doesn't just accept model.parameters(); it also accepts a list of dicts, which lets us configure things very flexibly. For example, like this:

from torch.optim import Adam

# encoder: weights get weight_decay, biases keep Adam's default (0)
params = []
for name, param in self._encoder.named_parameters():
    if param.requires_grad:
        if "weight" in name:
            params += [{"params": param, "lr": self._learning_rate, "weight_decay": self._weight_decay}]
        elif "bias" in name:
            params += [{"params": param, "lr": self._learning_rate}]
# dense layer: 10x the encoder learning rate (index 0 is the weight, index 1 the bias)
params += [{"params": list(dense_layer.parameters())[0], "lr": self._learning_rate * 10,
            "weight_decay": self._weight_decay}]
params += [{"params": list(dense_layer.parameters())[1], "lr": self._learning_rate * 10}]
# each task has its own optimizer
optimizer = Adam(params)

As you can see, every weight gets weight_decay while the biases get none (Adam's default weight_decay is 0), and the dense layer's learning rate is 10x the encoder's. That list of dicts, i.e. the parameter groups, is then handed to Adam.
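
If you want to double-check what each group ended up with, the optimizer exposes them as optimizer.param_groups. This short loop is just a sanity check, not part of the snippet above:

# each group is a dict; options not set explicitly in a group fall back
# to the optimizer's defaults (e.g. weight_decay=0 for Adam)
for i, group in enumerate(optimizer.param_groups):
    print(i, group["lr"], group["weight_decay"], len(group["params"]))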

Or, to be a bit more pythonic, here is the same decay/no-decay split written with comprehensions (this version leaves the learning rate at the optimizer's default):

named_params = {name: param for name, param in self.named_parameters() if param.requires_grad}
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
params = []
# everything except biases and LayerNorm parameters gets weight decay
params += [{"params": [p for n, p in named_params.items() if not any(nd in n for nd in no_decay)],
            "weight_decay": 0.01}]
# biases and LayerNorm parameters are excluded from weight decay
params += [{"params": [p for n, p in named_params.items() if any(nd in n for nd in no_decay)],
            "weight_decay": 0.0}]

optimizer = Adam(params)
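
The two ideas also combine naturally. If you want the per-module learning rates from the first snippet and the decay/no-decay split from the second, a small helper like the sketch below works. build_groups is just a name I made up here, and self._encoder, dense_layer, self._learning_rate and self._weight_decay are the same hypothetical attributes as before:

# hypothetical helper: one decay group and one no-decay group per module,
# so each module can get its own learning rate on top of the decay split
def build_groups(module, lr, weight_decay,
                 no_decay=("bias", "LayerNorm.bias", "LayerNorm.weight")):
    decay, skip = [], []
    for name, param in module.named_parameters():
        if not param.requires_grad:
            continue
        (skip if any(nd in name for nd in no_decay) else decay).append(param)
    return [{"params": decay, "lr": lr, "weight_decay": weight_decay},
            {"params": skip, "lr": lr, "weight_decay": 0.0}]

params = build_groups(self._encoder, self._learning_rate, self._weight_decay)
params += build_groups(dense_layer, self._learning_rate * 10, self._weight_decay)
optimizer = Adam(params)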
