Learning PyTorch for Beginners -- Torch.optim API Base class (1)

torch.optim is a package implementing various optimization algorithms. Most commonly used methods are already supported, and the interface is general enough that more sophisticated ones can easily be integrated in the future.

How to use an optimizer

To use torch.optim, you have to construct an optimizer object that will hold the current state and will update the parameters based on the computed gradients.

Constructing it

To construct an optimizer, you have to give it an iterable containing the parameters (all of which should be Variables) to optimize. Then you can specify optimizer-specific options such as the learning rate, weight decay, etc.

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam([var1, var2], lr=0.0001)

Per-parameter options

Optimizers also support specifying per-parameter options. To do this, instead of passing an iterable of Variables, pass an iterable of dicts. Each dict defines a separate parameter group and should contain a params key holding the list of parameters that belong to it. The other keys should match the keyword arguments accepted by the optimizer and will be used as the optimization options for that group.

Note: you can still pass options as keyword arguments. They will be used as defaults in the groups that do not override them. This is useful when you only want to vary a single option while keeping all the others consistent across parameter groups.

For example, this is very useful when you want to specify per-layer learning rates:

optim.SGD([
                {'params': model.base.parameters()},
                {'params': model.classifier.parameters(), 'lr': 1e-3}
            ], lr=1e-2, momentum=0.9)

This means that the parameters of model.base will use the default learning rate of 1e-2, the parameters of model.classifier will use a learning rate of 1e-3, and a momentum of 0.9 will be used for all parameters.
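For illustration (this snippet is not in the original post), you can inspect optimizer.param_groups of the optimizer constructed above to confirm which options each group ended up with:

optimizer = optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)

for i, group in enumerate(optimizer.param_groups):
    print(i, group['lr'], group['momentum'])
# 0 0.01 0.9   <- model.base falls back to the global defaults
# 1 0.001 0.9  <- model.classifier overrides only the learning rate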

Taking an optimization step

All optimizers implement a step() method that updates the parameters. It can be used in two ways.
optimizer.step()
This is a simplified version supported by most optimizers. The function can be called once the gradients have been computed, e.g. by backward().

For example:

for input, target in dataset:
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()

optimizer.step(closure)
Some optimization algorithms, such as Conjugate Gradient and LBFGS, need to re-evaluate the function multiple times, so you have to pass in a closure that allows them to recompute your model. The closure should clear the gradients, compute the loss, and return it.

for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)
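As a small self-contained sketch (not from the original post), here is the closure form with LBFGS minimizing the simple quadratic f(x) = (x - 3)^2; the hyperparameters are illustrative only:

import torch

x = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.LBFGS([x], lr=0.5, max_iter=20)

def closure():
    optimizer.zero_grad()      # clear gradients
    loss = (x - 3.0) ** 2      # re-evaluate the objective
    loss.backward()            # recompute gradients
    return loss

for _ in range(5):
    optimizer.step(closure)

print(x)  # should be close to 3.0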

Base class

This part references: https://zhuanlan.zhihu.com/p/87209990
PyTorch optimizers basically all inherit from class Optimizer, which is the base class of every optimizer.
Below is the structure of Optimizer:

class Optimizer(object):
    def __init__(self, params, defaults):
        self.defaults = defaults
        self._hook_for_profile()
        if isinstance(params, torch.Tensor):
            raise TypeError("params argument given to the optimizer should be "
                            "an iterable of Tensors or dicts, but got " +
                            torch.typename(params))

        self.state = defaultdict(dict)
        self.param_groups = []

        param_groups = list(params)
        if len(param_groups) == 0:
            raise ValueError("optimizer got an empty parameter list")
        if not isinstance(param_groups[0], dict):
            param_groups = [{'params': param_groups}]
        for param_group in param_groups:
            self.add_param_group(param_group)

    def state_dict(self):
        ...

    def load_state_dict(self, state_dict):
        ...

    def cast(param, value):  # actually a helper nested inside load_state_dict, not a real method
        ...

    def zero_grad(self, set_to_none: bool = False):
        ...

    def step(self, closure):
        ...

    def add_param_group(self, param_group):
        ...

__init__: initialization

params and defaults are the two important arguments: defaults defines the global optimization defaults, while params defines the model parameters together with per-group (local) optimization options.
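To make the two arguments concrete, this is roughly how a subclass builds them (abridged from torch.optim.SGD; argument validation is omitted, and newer versions add further options such as maximize):

class SGD(Optimizer):
    def __init__(self, params, lr=required, momentum=0, dampening=0,
                 weight_decay=0, nesterov=False):
        # the keyword arguments become the global defaults
        # (required is the sentinel checked in add_param_group below)
        defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
                        weight_decay=weight_decay, nesterov=nesterov)
        # params (an iterable of Tensors or of dicts) is handed to the base class
        super(SGD, self).__init__(params, defaults)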

add_param_group

The point of defaultdict is that when a key is looked up but does not exist, it returns a default value instead of raising KeyError; here defaultdict(dict) returns an empty dict as that default. The last line of __init__ calls self.add_param_group(param_group), where param_group is a dict whose key is params and whose value comes from param_groups = list(params).
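A quick demonstration of that behavior (illustrative only, not from the original post):

from collections import defaultdict

state = defaultdict(dict)
print(state['some_param'])  # {} -- a missing key yields an empty dict instead of KeyError
print(len(state))           # 1  -- the lookup also inserted the key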

def add_param_group(self, param_group):
    params = param_group['params']
    if isinstance(params, torch.Tensor):
        param_group['params'] = [params]
    elif isinstance(params, set):
        raise TypeError('optimizer')  # (error message abridged)
    else:
        param_group['params'] = list(params)

    for param in param_group['params']:
        if not isinstance(param, torch.Tensor):
            raise TypeError("optimizer " + torch.typename(param))
        if not param.is_leaf:
            raise ValueError("can't optimize a non-leaf Tensor")

    for name, default in self.defaults.items():
        if default is required and name not in param_group:
            raise ValueError("parameter group didn't specify a value of required optimization parameter " +
                             name)
        else:
            param_group.setdefault(name, default)  # fill in the group's missing options with the defaults

    params = param_group['params']
    if len(params) != len(set(params)):
        warnings.warn("optimizer contains ", stacklevel=3)  # (warns about duplicated parameters; message abridged)

    param_set = set()
    for group in self.param_groups:
        param_set.update(set(group['params']))

    if not param_set.isdisjoint(set(param_group['params'])):  # check whether the two sets share any element
        raise ValueError("some parameters appear in more than one parameter group")

    self.param_groups.append(param_group)
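A typical use of add_param_group is registering the parameters of a module that is added or unfrozen after the optimizer was created, possibly with its own learning rate (new_layer here is a hypothetical module, used only for illustration):

optimizer = optim.SGD(model.base.parameters(), lr=1e-2, momentum=0.9)
optimizer.add_param_group({'params': new_layer.parameters(), 'lr': 1e-4})
print(len(optimizer.param_groups))  # 2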

zero_grad

This simply sets the gradients of all parameters to zero via p.grad.zero_(). detach_() detaches the Tensor from the graph that created it, making it a leaf. self.param_groups is a list whose elements are dicts.

def zero_grad(self):
    r"""Clears the gradients of all optimized :class:`torch.Tensor` s."""
    for group in self.param_groups:
        for p in group['params']:
            if p.grad is not None:
                p.grad.detach_()
                p.grad.zero_()
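The skeleton shown earlier declares the newer signature zero_grad(self, set_to_none: bool = False). As a simplified sketch (the actual source additionally checks grad_fn and has a faster foreach path), the set_to_none branch drops the gradient tensor instead of zeroing it:

def zero_grad(self, set_to_none: bool = False):
    for group in self.param_groups:
        for p in group['params']:
            if p.grad is not None:
                if set_to_none:
                    p.grad = None      # free the gradient tensor entirely
                else:
                    p.grad.detach_()
                    p.grad.zero_()     # keep the tensor, fill it with zeros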

step

step is responsible for updating the parameters. In the parent class Optimizer, step contains only a single line of code: raise NotImplementedError. Both the model parameters and the optimizer options are stored in the elements of the list self.param_groups; each element is a dict through which the concrete model parameters and optimizer options are stored and accessed, so every model parameter p can be visited with two nested loops. Once the gradient is obtained, the update direction is adjusted according to whether momentum or nesterov is enabled in the optimizer options. The parameter update itself is performed by param.add_(d_p, alpha=alpha) inside F.sgd shown below (older versions fetched d_p = p.grad.data and updated inline with p.data.add_(-group['lr'], d_p)). state holds per-parameter bookkeeping for the update, e.g. which iteration the optimizer is on or, for SGD, the momentum buffer.

Below, the SGD optimizer is taken as an example:

def step(self, closure=None):
    loss = None
    if closure is not None:
        with torch.enable_grad():
            loss = closure()

    for group in self.param_groups:
        params_with_grad = []
        d_p_list = []
        momentum_buffer_list = []
        weight_decay = group['weight_decay']
        momentum = group['momentum']
        dampening = group['dampening']
        nesterov = group['nesterov']
        maximize = group['maximize']
        lr = group['lr']

        for p in group['params']:
            if p.grad is not None:
                params_with_grad.append(p)
                d_p_list.append(p.grad)

                state = self.state[p]
                if 'momentum_buffer' not in state:
                    momentum_buffer_list.append(None)
                else:
                    momentum_buffer_list.append(state['momentum_buffer'])

        F.sgd(params_with_grad,
              d_p_list,
              momentum_buffer_list,
              weight_decay=weight_decay,
              momentum=momentum,
              lr=lr,
              dampening=dampening,
              nesterov=nesterov,
              maximize=maximize,)

        # update momentum_buffers in state
        for p, momentum_buffer in zip(params_with_grad, momentum_buffer_list):
            state = self.state[p]  # save the buffer back into the per-parameter state
            state['momentum_buffer'] = momentum_buffer
    return loss

F.sgd

def sgd(params: List[Tensor],
        d_p_list: List[Tensor],
        momentum_buffer_list: List[Optional[Tensor]],
        *,
        weight_decay: float,
        momentum: float,
        lr: float,
        dampening: float,
        nesterov: bool,
        maximize: bool):
    for i, param in enumerate(params):
        d_p = d_p_list[i]
        if weight_decay != 0:
            # L2 penalty: add weight_decay * param to the gradient
            d_p = d_p.add(param, alpha=weight_decay)
        if momentum != 0:
            buf = momentum_buffer_list[i]
            if buf is None:
                # first step: initialize the buffer with the current gradient
                buf = torch.clone(d_p).detach()
                momentum_buffer_list[i] = buf
            else:
                # buf = momentum * buf + (1 - dampening) * d_p
                buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
            if nesterov:
                d_p = d_p.add(buf, alpha=momentum)
            else:
                d_p = buf
        # descent step (ascent when maximize=True)
        alpha = lr if maximize else -lr
        param.add_(d_p, alpha=alpha)
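As a quick sanity check (illustrative, not from the original post), one step taken through the public torch.optim.SGD interface matches the update above; on the very first step the momentum buffer equals the gradient, so the update is simply p - lr * grad:

import torch

p = torch.tensor([1.0, 2.0], requires_grad=True)
opt = torch.optim.SGD([p], lr=0.1, momentum=0.9)

loss = (p ** 2).sum()
loss.backward()
g = p.grad.detach().clone()   # gradient is 2 * p

opt.step()
expected = torch.tensor([1.0, 2.0]) - 0.1 * g
print(torch.allclose(p.detach(), expected))  # True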

On top of vanilla SGD, a Momentum (also called Heavy Ball) improvement is introduced.
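The figure from the original post is not reproduced here; reading the update rule directly off F.sgd above (with g the gradient, θ the parameter, b the momentum buffer, μ = momentum, τ = dampening, λ = weight_decay), the implemented step is:

    g_t ← g_t + λ · θ_{t-1}                      (weight decay)
    b_t ← μ · b_{t-1} + (1 − τ) · g_t            (momentum buffer; on the first step b_1 = g_1)
    d_t = g_t + μ · b_t  if nesterov, else d_t = b_t
    θ_t = θ_{t-1} − lr · d_t                     (or + lr · d_t when maximize=True)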

load_state_dict

Loads the optimizer state.

def load_state_dict(self, state_dict):
    # deepcopy, to be consistent with module API
    state_dict = deepcopy(state_dict)
    # Validate the state_dict
    groups = self.param_groups
    saved_groups = state_dict['param_groups']

    if len(groups) != len(saved_groups):
        raise ValueError("loaded state dict has a different number of "
                         "parameter groups")
    param_lens = (len(g['params']) for g in groups)
    saved_lens = (len(g['params']) for g in saved_groups)
    if any(p_len != s_len for p_len, s_len in zip(param_lens, saved_lens)):
        raise ValueError("loaded state dict contains a parameter group "
                         "that doesn't match the size of optimizer's group")

    # Update the state
    id_map = {old_id: p for old_id, p in
              zip(chain.from_iterable((g['params'] for g in saved_groups)),
                  chain.from_iterable((g['params'] for g in groups)))}

    def cast(param, value):
        r"""Make a deep copy of value, casting all tensors to device of param."""
        if isinstance(value, torch.Tensor):
            # Floating-point types are a bit special here. They are the only ones
            # that are assumed to always match the type of params.
            if param.is_floating_point():
                value = value.to(param.dtype)
            value = value.to(param.device)
            return value
        elif isinstance(value, dict):
            return {k: cast(param, v) for k, v in value.items()}
        elif isinstance(value, container_abcs.Iterable):
            return type(value)(cast(param, v) for v in value)
        else:
            return value

    # Copy state assigned to params (and cast tensors to appropriate types).
    # State that is not assigned to params is copied as is (needed for
    # backward compatibility).
    state = defaultdict(dict)
    for k, v in state_dict['state'].items():
        if k in id_map:
            param = id_map[k]
            state[param] = cast(param, v)
        else:
            state[k] = v
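As quoted above, the snippet stops after rebuilding state; in the PyTorch source the method then rebuilds the parameter groups and writes everything back through __setstate__, roughly as follows:

    # Update parameter groups, setting their 'params' value
    def update_group(group, new_group):
        new_group['params'] = group['params']
        return new_group
    param_groups = [
        update_group(g, ng) for g, ng in zip(groups, saved_groups)]
    self.__setstate__({'state': state, 'param_groups': param_groups})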

state_dict

Returns the state of the optimizer as a dict.

def state_dict(self):
    # Save order indices instead of Tensors
    param_mappings = {}
    start_index = 0

    def pack_group(group):
        nonlocal start_index
        packed = {k: v for k, v in group.items() if k != 'params'}
        param_mappings.update({id(p): i for i, p in enumerate(group['params'], start_index)
                               if id(p) not in param_mappings})
        packed['params'] = [param_mappings[id(p)] for p in group['params']]
        start_index += len(packed['params'])
        return packed

    param_groups = [pack_group(g) for g in self.param_groups]
    # Remap state to use order indices as keys
    packed_state = {(param_mappings[id(k)] if isinstance(k, torch.Tensor) else k): v
                    for k, v in self.state.items()}
    return {
        'state': packed_state,
        'param_groups': param_groups,
    }
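In practice, the two methods above are what make optimizer checkpointing work. A minimal round trip (the file name checkpoint.pth is just an example; model is assumed to exist as in the earlier snippets):

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# ... train for a while ...
torch.save(optimizer.state_dict(), 'checkpoint.pth')

# later, after rebuilding the model and optimizer:
optimizer.load_state_dict(torch.load('checkpoint.pth'))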
