[Reading Notes] DARTS: Differentiable Architecture Search

Authors:

Hanxiao Liu (CMU), Karen Simonyan (DeepMind), Yiming Yang (CMU)

Liu, Hanxiao, Karen Simonyan, and Yiming Yang. “DARTS: Differentiable Architecture Search.” arXiv preprint arXiv:1806.09055 (2018).

Published: 24 Jun 2018

I read this paper yesterday and found it quite interesting: it treats the neural network architecture itself as a parameter to be optimized by gradient descent, putting the choice of architecture on a principled footing.

Abstract

The core idea of the paper is to perform architecture search in a differentiable way.
Unlike conventional approaches that apply evolution or reinforcement learning over a discrete and non-differentiable search space, this method is based on a continuous relaxation of the architecture representation, which allows efficient architecture search using gradient descent.
Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that the algorithm excels at discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable search techniques.

1 Introduction

Discovering state-of-the-art neural network architectures requires substantial effort from human experts.
Recently, there has been growing interest in developing automated algorithms for neural architecture design.
Automated architecture search has been explored extensively for tasks such as image classification (Zoph and Le, 2016; Zoph et al., 2017; Liu et al., 2017b,a; Real et al., 2018) and object detection (Zoph et al., 2017).

The best existing architecture search algorithms, despite their remarkable performance, are computationally demanding.
For example, obtaining a state-of-the-art architecture for CIFAR-10 and ImageNet required 1800 GPU days of reinforcement learning (RL) (Zoph et al., 2017) or 3150 GPU days of evolution (Real et al., 2018).
Several speed-up techniques have been proposed, such as imposing a particular structure on the search space (Liu et al., 2017b,a), predicting the weights or performance of each individual architecture (Brock et al., 2017; Baker et al., 2018), and sharing weights across architectures (Pham et al., 2018b; Cai et al., 2018), but the fundamental challenge of scalability remains.
The inherent cause of inefficiency for the dominant approaches, e.g. those based on RL, evolution, MCTS (Negrinho and Gordon, 2017), SMBO (Liu et al., 2017a) or Bayesian optimization (Kandasamy et al., 2018), is that they treat architecture search as a black-box optimization problem over a discrete domain, which leads to a large number of architecture evaluations.

In this work, the authors approach the problem from a different angle and propose an efficient architecture search method called DARTS (Differentiable Architecture Search).
Instead of searching over a discrete set of candidate architectures, the search space is relaxed to be continuous, so that the architecture can be optimized with respect to its validation performance by gradient descent.
Because it relies on gradient-based optimization, as opposed to inefficient black-box search, DARTS achieves competitive performance using orders of magnitude fewer computational resources than existing techniques.
It also outperforms another recent efficient architecture search method, ENAS (Pham et al., 2018b).
Notably, DARTS is simpler than many existing approaches, as it does not involve any controllers (Zoph and Le, 2016; Baker et al., 2016; Zoph et al., 2017; Pham et al., 2018b), hypernetworks (Brock et al., 2017), or performance predictors (Liu et al., 2017a).

The idea of searching architectures within a continuous domain is not new (Saxena and Verbeek, 2016; Ahmed and Torresani, 2017; Shin et al., 2018), but there are several major differences from prior work:

  • While earlier works attempt to fine-tune specific aspects of an architecture, such as the filter shapes or branching patterns in convolutional networks, DARTS is able to discover high-performance architectures with complex graph topologies within a rich search space.
  • Moreover, DARTS is not restricted to any particular architecture family and can search over both convolutional and recurrent networks.

The contributions of the paper can be summarized as follows:

  • A novel algorithm for differentiable network architecture search is introduced, applicable to both convolutional and recurrent architectures.
  • Experiments show that the approach is highly competitive.
  • Remarkable architecture search efficiency is achieved (with 4 GPUs: 2.83% test error on CIFAR-10 within 1 day; 56.1 perplexity on PTB within 6 hours), which is attributed to the use of gradient-based optimization rather than non-differentiable search techniques.
  • The architectures learned by DARTS on CIFAR-10 and PTB are shown to be transferable to ImageNet and WikiText-2.

An implementation of DARTS is available at https://github.com/quark0/darts.

2 Differentiable Architecture Search

In Sect. 2.1, the search space is described in a general form, where the computation procedure of an architecture (or a cell within it) is represented as a directed acyclic graph.
A simple continuous relaxation scheme is then introduced for the search space, which leads to a differentiable objective for the joint optimization of the architecture and its weights (Sect. 2.2).
Finally, an approximation technique is proposed to make the algorithm computationally feasible and efficient (Sect. 2.3).

2.1 Search Space

Following prior work, the method searches for a computation cell as the building block of the final architecture.
The learned cell can either be stacked to form a convolutional network or recursively connected to form a recurrent network.

A cell is a directed acyclic graph consisting of an ordered sequence of N nodes.
Each node $x^{(i)}$ is a latent representation (e.g., a feature map in a convolutional network), and each directed edge $(i, j)$ is associated with some operation $o^{(i,j)}$ that transforms $x^{(i)}$.
Each cell is assumed to have two input nodes and one output node.
For convolutional cells, the input nodes are defined as the cell outputs of the previous two layers (Zoph et al., 2017).
For recurrent cells, they are defined as the input at the current step and the state carried over from the previous step.
The output of the cell is obtained by applying a reduction operation (e.g., concatenation) to all the intermediate nodes.

Each intermediate node is computed based on all of its predecessors:
$$x^{(j)} = \sum_{i<j} o^{(i,j)}\big(x^{(i)}\big)$$
A special zero operation is also included to indicate the absence of a connection between two nodes.
The task of learning the cell therefore reduces to learning the operations on its edges.
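
To make the DAG formulation concrete, here is a minimal PyTorch sketch of how a cell with two input nodes and several intermediate nodes could be evaluated. The function name `cell_forward`, the `ops` dictionary and the node count are illustrative assumptions, not the authors' code:

```python
import torch

def cell_forward(s0, s1, ops, n_intermediate=4):
    """Evaluate one cell of the DAG.

    s0, s1 : outputs of the two previous cells (the cell's input nodes).
    ops    : ops[(i, j)] is the operation on edge (i, j); node j sums the
             transformed outputs of all of its predecessors i < j.
    """
    states = [s0, s1]  # node 0 and node 1 are the input nodes
    for j in range(2, 2 + n_intermediate):
        # x^(j) = sum_{i<j} o^(i,j)(x^(i))
        states.append(sum(ops[(i, j)](states[i]) for i in range(j)))
    # the cell output reduces the intermediate nodes, here by channel-wise
    # concatenation as in the convolutional case
    return torch.cat(states[2:], dim=1)
```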

2.2 Continuous Relaxation and Optimization

Let $\mathcal{O}$ be a set of candidate operations (e.g., convolution, max pooling, zero), where each operation represents some function $o(\cdot)$ to be applied to $x^{(i)}$.
To make the search space continuous, the categorical choice of a particular operation is relaxed to a softmax over all possible operations (i.e., each edge carries a softmax-weighted mixture of the candidate operations):
$$\bar{o}^{(i,j)}(x) = \sum_{o\in\mathcal{O}} \frac{\exp\big(\alpha_o^{(i,j)}\big)}{\sum_{o'\in\mathcal{O}} \exp\big(\alpha_{o'}^{(i,j)}\big)}\, o(x)$$
where the operation mixing weights for a pair of nodes $(i, j)$ are parameterized by a vector $\alpha^{(i,j)}$ of dimension $|\mathcal{O}|$.
After the relaxation, the task of architecture search reduces to learning a set of continuous variables $\{\alpha^{(i,j)}\}$.
At the end of search, a discrete architecture is obtained by replacing each mixed operation $\bar{o}^{(i,j)}(x)$ with the most likely operation, i.e., $o^{(i,j)} = \operatorname{argmax}_{o\in\mathcal{O}} \alpha_o^{(i,j)}$. In the following, $\alpha$ is referred to as the (encoding of the) architecture.
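
A minimal PyTorch sketch of such a mixed operation follows. The class name `MixedOp` is illustrative; in the actual DARTS implementation the $\alpha$'s for all edges are stored together outside the operation modules and shared across cells, but folding them into the module keeps the sketch short:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of the operation choice on a single edge."""

    def __init__(self, candidates):
        super().__init__()
        self.ops = nn.ModuleList(candidates)          # candidate operations o(.)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidates)))

    def forward(self, x):
        # softmax over the architecture logits gives the mixing weights
        weights = F.softmax(self.alpha, dim=0)
        # o_bar(x) = sum_o softmax(alpha)_o * o(x)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def discretize(self):
        # at the end of search, keep only the most likely operation
        return self.ops[int(self.alpha.argmax())]
```

Note that during the search the whole mixture is evaluated on every forward pass, which is why the search cost grows with the number of candidate operations.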

After relaxation, the goal is to jointly learn the architecture $\alpha$ and the weights $w$ within all the mixed operations (e.g., the weights of the convolution filters).
Analogous to architecture search using RL (Zoph and Le, 2016; Zoph et al., 2017; Pham et al., 2018b) or evolution (Liu et al., 2017b; Real et al., 2018), where the validation set performance is treated as the reward or fitness, DARTS aims to optimize the validation loss, but using gradient descent.

Denote by $L_{train}$ and $L_{val}$ the training and the validation loss, respectively.
Both losses are determined not only by the architecture $\alpha$, but also by the weights $w$ in the network.
The goal of architecture search is to find $\alpha^*$ that minimizes the validation loss $L_{val}(w^*, \alpha^*)$, where the weights $w^*$ associated with the architecture are obtained by minimizing the training loss, $w^* = \operatorname{argmin}_w L_{train}(w, \alpha^*)$.

This implies a bilevel optimization problem (Anandalingam and Friesz, 1992; Colson et al., 2007) with $\alpha$ as the upper-level variable and $w$ as the lower-level variable:
$$\min_{\alpha}\; L_{val}\big(w^*(\alpha), \alpha\big) \quad \text{s.t.} \quad w^*(\alpha) = \operatorname{argmin}_{w}\, L_{train}(w, \alpha)$$

The nested formulation also arises in gradient-based hyperparameter optimization (Maclaurin et al., 2015; Pedregosa, 2016), which is related in the sense that the continuous architecture $\alpha$ can be viewed as a special type of hyperparameter, although its dimension is substantially higher than that of scalar-valued hyperparameters (such as the learning rate), and it is harder to optimize.

2.3 Approximation

Solving the bilevel optimization exactly is prohibitive, as it would require recomputing $w^*(\alpha)$ by solving the inner problem whenever there is any change in $\alpha$.
An approximate iterative optimization procedure is therefore proposed, where $w$ and $\alpha$ are optimized by alternating between gradient descent steps in the weight and architecture spaces, respectively (Alg. 1).
At step $k$, given the current architecture $\alpha_{k-1}$, we obtain $w_k$ by moving $w_{k-1}$ in the direction that minimizes the training loss $L_{train}(w_{k-1}, \alpha_{k-1})$.
Then, keeping the weights $w_k$ fixed, the architecture is updated so as to minimize the validation loss after a single step of gradient descent w.r.t. the weights:
$$L_{val}\big(w_k - \epsilon \nabla_w L_{train}(w_k, \alpha_{k-1}),\; \alpha_{k-1}\big)$$

where $\epsilon$ is the learning rate for this virtual gradient step.
The motivation is to find an architecture that has a low validation loss when its weights are optimized by (a single step of) gradient descent, where the one-step unrolled weights serve as the surrogate for $w^*(\alpha)$.
A related approach has been used in meta-learning for model transfer (Finn et al., 2017).
Notably, the dynamics of the iterative algorithm define a Stackelberg game (Von Stackelberg, 1934) between $\alpha$'s optimizer (the leader) and $w$'s optimizer (the follower), which typically requires the leader to anticipate the follower's next-step move in order to reach an equilibrium.
While no convergence guarantees are currently known for this optimization algorithm, in practice it converges with a suitable choice of $\epsilon$ (a simple working strategy is to set $\epsilon$ equal to the learning rate for $w$'s optimizer). The authors also note that when momentum is enabled for weight optimization, the one-step forward learning objective is modified accordingly and the analysis still applies.
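
A small sketch of the one-step unrolled ("virtual") weights, assuming the network weights are given as a list of tensors and the training loss has already been computed from them on a training batch; the function and argument names are illustrative, not the official API:

```python
import torch

def unrolled_weights(weights, train_loss, eps):
    """One-step virtual gradient step: w' = w - eps * grad_w L_train(w, alpha).

    `weights` is a list of weight tensors and `train_loss` a scalar L_train
    computed from them.  The returned tensors are detached copies, so the
    real weights are left untouched and w' only serves as the surrogate
    for w*(alpha).
    """
    grads = torch.autograd.grad(train_loss, weights)
    return [w.detach() - eps * g for w, g in zip(weights, grads)]
```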

The architecture gradient is given by (omitting the step index $k$ for brevity):
$$\nabla_\alpha L_{val}(w', \alpha) \;-\; \epsilon\, \nabla^2_{\alpha, w} L_{train}(w, \alpha)\, \nabla_{w'} L_{val}(w', \alpha)$$

where $w' = w - \epsilon \nabla_w L_{train}(w, \alpha)$ denotes the weights for a one-step forward model.
The gradient contains a matrix-vector product in its second term, which is expensive to compute.
Fortunately, the complexity can be substantially reduced using a finite difference approximation. Let $\delta$ be a small scalar ($\delta = 0.01 / \lVert\nabla_{w'} L_{val}(w', \alpha)\rVert_2$ was found to be sufficiently accurate in all of the experiments), $w^+ = w + \delta \nabla_{w'} L_{val}(w', \alpha)$ and $w^- = w - \delta \nabla_{w'} L_{val}(w', \alpha)$. Then:
$$\nabla^2_{\alpha,w} L_{train}(w, \alpha)\, \nabla_{w'} L_{val}(w', \alpha) \;\approx\; \frac{\nabla_\alpha L_{train}(w^+, \alpha) - \nabla_\alpha L_{train}(w^-, \alpha)}{2\delta}$$

Evaluating the finite difference requires only two forward passes for the weights and two backward passes for $\alpha$, and the complexity is reduced from $O(|\alpha|\,|w|)$ to $O(|\alpha| + |w|)$.
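
A sketch of this finite-difference approximation, assuming `vector` holds $\nabla_{w'} L_{val}(w', \alpha)$ as a list of tensors and `train_loss_fn` recomputes $L_{train}$ at the current (possibly perturbed) weights; the function and argument names are illustrative:

```python
import torch

def hessian_vector_product(arch_params, weights, train_loss_fn, vector, r=0.01):
    """Finite-difference estimate of  grad^2_{alpha,w} L_train(w, alpha) . vector."""
    # delta = 0.01 / ||grad_{w'} L_val(w', alpha)||_2, as in the text above
    delta = r / torch.cat([v.reshape(-1) for v in vector]).norm()

    # w+ = w + delta * vector
    with torch.no_grad():
        for w, v in zip(weights, vector):
            w.add_(delta * v)
    grads_pos = torch.autograd.grad(train_loss_fn(), arch_params)

    # w- = w - delta * vector  (move 2*delta back from w+)
    with torch.no_grad():
        for w, v in zip(weights, vector):
            w.sub_(2 * delta * v)
    grads_neg = torch.autograd.grad(train_loss_fn(), arch_params)

    # restore the original weights
    with torch.no_grad():
        for w, v in zip(weights, vector):
            w.add_(delta * v)

    # (grad_alpha L_train(w+, alpha) - grad_alpha L_train(w-, alpha)) / (2*delta)
    return [(gp - gn) / (2 * delta) for gp, gn in zip(grads_pos, grads_neg)]
```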

First-order Approximation: When $\epsilon = 0$, the second-order term in the gradient vanishes.
In this case, the architecture gradient reduces to $\nabla_\alpha L_{val}(w, \alpha)$, corresponding to the simple heuristic of optimizing the validation loss by assuming $\alpha$ and $w$ are independent of each other.
This leads to some speed-up but empirically worse performance.
In the following, the case $\epsilon = 0$ is referred to as the first-order approximation, and the gradient formulation with $\epsilon > 0$ as the second-order approximation.

Algorithm 1: DARTS - Differentiable Architecture Search
Create a mixed operation $\bar{o}^{(i,j)}$ parametrized by $\alpha^{(i,j)}$ for each edge $(i, j)$
while not converged do

  1. Update weights $w$ by descending $\nabla_w L_{train}(w, \alpha)$
  2. Update architecture $\alpha$ by descending $\nabla_{\alpha} L_{val}\big(w - \epsilon \nabla_w L_{train}(w, \alpha),\; \alpha\big)$

Replace $\bar{o}^{(i,j)}$ with $o^{(i,j)} = \operatorname{argmax}_{o\in\mathcal{O}}\, \alpha^{(i,j)}_o$ for each edge $(i, j)$
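
For concreteness, a sketch of the alternating search loop is given below. It implements the first-order variant (the architecture step descends $\nabla_\alpha L_{val}(w, \alpha)$ directly); the second-order variant would instead evaluate the validation loss at the unrolled weights and subtract the finite-difference correction sketched earlier. The helper names and the two-optimizer setup are assumptions, not the official API:

```python
def search_epoch(model, w_optimizer, alpha_optimizer,
                 train_loader, val_loader, criterion, device="cpu"):
    """One epoch of alternating DARTS updates (first-order variant).

    `w_optimizer` is assumed to hold only the network weights w, and
    `alpha_optimizer` only the architecture parameters alpha, so each
    step() touches exactly one group of variables.
    """
    for (x_trn, y_trn), (x_val, y_val) in zip(train_loader, val_loader):
        x_trn, y_trn = x_trn.to(device), y_trn.to(device)
        x_val, y_val = x_val.to(device), y_val.to(device)

        # 1. Update weights w by descending grad_w L_train(w, alpha)
        w_optimizer.zero_grad()
        criterion(model(x_trn), y_trn).backward()
        w_optimizer.step()

        # 2. Update architecture alpha on a validation batch; here the loss
        #    is taken at the current w (first-order).  The second-order
        #    variant would use the unrolled weights w' and the
        #    hessian_vector_product sketched above.
        alpha_optimizer.zero_grad()
        criterion(model(x_val), y_val).backward()
        alpha_optimizer.step()
```

After the search converges, each mixed operation is replaced by its argmax operation, as in the last line of Algorithm 1.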

2.4 Deriving Discrete Architectures

After obtaining the continuous architecture encoding $\alpha$, the discrete architecture is derived by:

  1. Retaining the k strongest predecessors for each intermediate node, where the strength of an edge is defined as $\max_{o\in\mathcal{O},\, o\neq \text{zero}} \frac{\exp(\alpha^{(i,j)}_o)}{\sum_{o'\in\mathcal{O}}\exp(\alpha^{(i,j)}_{o'})}$.
    To make the derived architecture comparable with those in existing works, k = 2 is used for convolutional cells (Zoph et al., 2017; Real et al., 2018) and k = 1 for recurrent cells (Pham et al., 2018b).
  2. Replacing every mixed operation with the most likely operation by taking the argmax (a sketch of this derivation follows the list).
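
A sketch of this derivation, assuming the edge logits are stored in a dictionary keyed by `(i, j)` and that the zero operation sits at a known index; these storage details are assumptions (the official implementation keeps the $\alpha$'s in a matrix):

```python
import torch
import torch.nn.functional as F

def derive_discrete_cell(alphas, k=2, zero_index=0):
    """Derive the discrete cell from the architecture encoding.

    alphas[(i, j)] is the logit vector alpha^(i,j) for edge (i, j), i < j.
    For every intermediate node j we keep the k incoming edges whose best
    non-zero operation has the largest softmax weight, and on each kept
    edge we pick that operation (the zero op is excluded, matching the
    edge-strength definition above).
    """
    genotype = {}
    for j in sorted({dst for (_, dst) in alphas}):
        candidates = []
        for (i, dst), logits in alphas.items():
            if dst != j:
                continue
            probs = F.softmax(logits, dim=0).detach()
            probs[zero_index] = -1.0          # never select the zero op
            strength, best_op = probs.max(dim=0)
            candidates.append((strength.item(), i, int(best_op)))
        # retain the k strongest predecessors of node j
        for _, i, best_op in sorted(candidates, reverse=True)[:k]:
            genotype[(i, j)] = best_op
    return genotype
```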

3 Experiments and Results

3.1 Architecture Search

3.2 Architecture Evaluation

3.3 Results Analysis

3.4 Transferability of Learned Architectures

4 Conclusion

We presented DARTS, the first differentiable architecture search algorithm for both convolutional and recurrent networks.
By searching in a continuous space, DARTS is able to match or outperform the state-of-the-art non-differentiable architecture search methods on image classification and language modeling tasks with remarkable efficiency improvement by several orders of magnitude.
In the future, we would like to investigate direct architecture search on larger tasks (e.g. ImageNet) using DARTS.

Afterthoughts

The overall idea is quite clear: treat the building blocks of the network architecture as edges, use a softmax to relax the originally discrete choices into a continuous space, jointly optimize the mixing probabilities and the network weights by solving a bilevel optimization problem, and finally derive the architecture from the learned mixing probabilities. Reading this paper also deepened my understanding of Inception: Inception essentially lays out several convolution and pooling branches and learns the weight of each through backpropagation, which, in the spirit of this paper, amounts to learning which operation is more suitable. The machine-learning principle of training the weights on the training set and training (or rather validating?) the architecture on the validation set is also clearly embodied here.

A few remaining concerns. First, with this many parameters, I suspect a rather large dataset is needed. Second, the reason the softmax values are not kept as the final result, and an argmax is taken instead, is presumably the worry about having too many parameters: it simplifies the model for speed and also acts somewhat like regularization. But if the softmax values are not separated by a large margin, keeping only the largest one might hurt performance in more general settings (the paper does show that the learned architectures transfer reasonably well to other datasets, but I am still not convinced this holds in general). Finally, I hope to find time to try running it; it feels like I have picked up a new way of thinking, which is quite satisfying.
