Deep learning with Elastic Averaging SGD

1. Abstract

  • A new algorithm is proposed in this setting where the communication and coordination of work among concurrent processes (local workers) is based on an elastic force which links the parameters they compute with a center variable stored by the parameter server (master). The elastic force ties each local worker's parameters to the global parameters kept on the parameter server.
  • EASGD enables the local workers to perform more exploration: the algorithm allows the local variables to fluctuate further from the center variable by reducing the amount of communication between local workers and the master. Less frequent communication lets the local parameters explore ahead, further away from the global parameters.
  • Both a synchronous and an asynchronous variant are proposed.
  • We provide the stability analysis of the asynchronous variant in the round-robin scheme and compare it with the more common parallelized method ADMM. The stability (convergence) analysis is carried out in the round-robin scheme and contrasted with parallelized ADMM.
  • We additionally propose the momentum-based version of our algorithm that can be applied in both synchronous and asynchronous settings. The momentum-based variant works in both the synchronous and asynchronous settings.

2. Intro

  • But practical image recognition systems consist of large-scale convolutional neural networks trained on a few GPU cards sitting in a single computer [3, 4]. The main challenge is to devise parallel SGD algorithms to train large-scale deep learning models that yield a significant speedup when run on multiple GPU cards. The setting studied here is a single machine with multiple GPU cards; the challenge is to parallelize SGD efficiently across those cards.
  • In this paper we introduce the Elastic Averaging SGD method (EASGD) and its variants. EASGD is motivated by the quadratic penalty method [5], but is re-interpreted as a parallelized extension of the averaging SGD algorithm [6]. The paper proposes EASGD and its variants, motivated by the quadratic penalty method but reinterpreted as a parallel version of averaging SGD.
  • An elastic force links the local parameters with the center variable stored on the master; the center variable is updated as a moving average, both in time and in space.
  • The main contribution of this paper is a new algorithm that provides fast convergent minimization while outperforming the DOWNPOUR method [2] and other baseline approaches in practice. The main contribution is faster convergence than DOWNPOUR and the other baselines.
  • EASGD reduces the communication overhead between the master and the local workers.

3. Problem setting

  • This paper focuses on the problem of reducing the parameter communication overhead between the master and local workers, i.e. reducing the parameter traffic between the master and each local worker.

4. EASGD update rule

[Figure: EASGD_update_rule.png — the EASGD update rule]
[Figure: move_average.png — the center variable as a moving average]
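For reference, since the images are not reproduced here, the synchronous EASGD update given in the paper can be written as follows (x^i are the local parameters of worker i, x̃ is the center variable, g^i_t the stochastic gradient, η the learning rate, ρ the elastic coefficient, and p the number of workers):

```latex
% Synchronous EASGD: each local worker i and the center variable are updated as
x^i_{t+1} = x^i_t - \eta\,\big(g^i_t(x^i_t) + \rho\,(x^i_t - \tilde{x}_t)\big)
\tilde{x}_{t+1} = \tilde{x}_t + \eta \sum_{i=1}^{p} \rho\,(x^i_t - \tilde{x}_t)
% With \alpha = \eta\rho and \beta = p\alpha, the center update is a moving average
% (in time) of the average (in space) of the local parameters:
\tilde{x}_{t+1} = (1-\beta)\,\tilde{x}_t + \beta\,\Big(\tfrac{1}{p}\sum_{i=1}^{p} x^i_t\Big)
```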
  • Compute the difference between the local parameters and the center variable, and add a term proportional to this difference to the gradient step, so that the local parameters are pulled back toward the center variable (see the sketch after this list).
  • Note that choosing β = pα leads to an elastic symmetry in the update rule, i.e. there exists a symmetric force between the update of each local parameter x^i and the center variable x̃.
  • Note also that α = ηρ, where the magnitude of ρ represents the amount of exploration we allow in the model. In particular, a small ρ allows for more exploration, as it allows x^i to fluctuate further from the center x̃. ρ controls how independently the local parameters may explore: the smaller ρ is, the further they are allowed to drift from the center variable.
  • The distinctive idea of EASGD is to allow the local workers to perform more exploration (small ρ) and the master to perform exploitation. The novelty of EASGD is precisely that the local workers are allowed to explore more.
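A minimal NumPy sketch of one synchronous EASGD round, assuming a user-supplied stochastic gradient function grad(x, batch) (hypothetical); it only illustrates the update rule above, not a full training loop:

```python
import numpy as np

def easgd_round(local_params, center, grad, batches, eta=0.01, rho=0.1):
    """One synchronous EASGD round over p local workers (illustrative sketch).

    local_params : list of p parameter vectors x^i (NumPy arrays)
    center       : center variable x~ kept by the master
    grad         : grad(x, batch) -> stochastic gradient (hypothetical helper)
    batches      : one mini-batch per worker
    """
    alpha = eta * rho                     # elastic coefficient alpha = eta * rho
    new_center = center.copy()
    for i, (x, batch) in enumerate(zip(local_params, batches)):
        diff = x - center                 # elastic difference to the (old) center
        # local step: SGD plus a pull toward the center variable
        local_params[i] = x - eta * grad(x, batch) - alpha * diff
        # master accumulates the opposite pull from every worker
        new_center = new_center + alpha * diff
    return local_params, new_center

if __name__ == "__main__":
    p, d = 4, 10
    rng = np.random.default_rng(0)
    xs = [rng.normal(size=d) for _ in range(p)]
    xt = np.zeros(d)
    fake_grad = lambda x, batch: 2 * x    # gradient of ||x||^2, a stand-in objective
    xs, xt = easgd_round(xs, xt, fake_grad, batches=[None] * p)
    print(xt[:3])
```

With β = pα, the accumulated master update is the same as moving the center toward the current spatial average of the local parameters by a factor β.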

4.1. Asynchronous EASGD

  • The previous section described the synchronous EASGD; this section introduces the asynchronous variant.
  • Each worker maintains its own clock t^i, which starts from 0 and is incremented by 1 after each stochastic gradient update of x^i, as shown in Algorithm 1. The master performs an update whenever a local worker has finished τ steps of its gradient updates, where τ is referred to as the communication period. Each worker keeps its own clock, increments it after every gradient step, and every τ steps communicates with the master to update the center variable and fetch its latest value.
  • The worker waits for the master to send back the center variable, computes the elastic difference, applies it to its own parameters, and sends it back to the master, which uses it to update the center variable (see the sketch after this list).
  • The communication period τ controls the frequency of the communication between every local worker and the master, and thus the trade-off between exploration and exploitation. The communication period therefore controls how often the parameters are synchronized.
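A sketch of the asynchronous worker loop described above. The helpers fetch_center(), send_elastic_diff(), and grad() are hypothetical stand-ins for the master round-trip and the stochastic gradient; this only illustrates the update logic, not the paper's implementation:

```python
def async_easgd_worker(x, grad, data_stream, fetch_center, send_elastic_diff,
                       eta=0.01, rho=0.1, tau=10):
    """Asynchronous EASGD worker loop (illustrative sketch).

    x                 : this worker's local parameter vector x^i
    grad              : grad(x, batch) -> stochastic gradient (hypothetical)
    data_stream       : iterator of mini-batches
    fetch_center      : () -> current center variable from the master (hypothetical)
    send_elastic_diff : diff -> master applies x~ <- x~ + diff (hypothetical)
    """
    alpha = eta * rho
    t = 0                                   # this worker's own clock t^i
    for batch in data_stream:
        if t % tau == 0:                    # every tau steps: talk to the master
            center = fetch_center()         # wait for the master's current x~
            diff = alpha * (x - center)     # elastic difference
            x = x - diff                    # pull the local copy toward the center
            send_elastic_diff(diff)         # master moves x~ by the same amount
        x = x - eta * grad(x, batch)        # ordinary local SGD step
        t += 1                              # increment the local clock
    return x
```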

4.2 Momentum EASGD

  • It is based on Nesterov's momentum scheme [24, 25, 26], where the update of the local worker is replaced by the following update:
[Figure: EAMSGD.png — the EAMSGD local update]
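Written out (with δ denoting the momentum term and v^i the worker's velocity, following the paper's notation), the replaced local update takes the form:

```latex
% EAMSGD: the local SGD step is replaced by a Nesterov momentum step,
% combined with the same elastic pull toward the center variable:
v^i_{t+1} = \delta\, v^i_t - \eta\, g^i_t\!\big(x^i_t + \delta\, v^i_t\big)
x^i_{t+1} = x^i_t + v^i_{t+1} - \eta\rho\,(x^i_t - \tilde{x}_t)
```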

5. Experiments

  • In this section we compare the performance of EASGD and EAMSGD with the parallel method DOWNPOUR and the sequential method SGD, as well as their averaging and momentum variants. Methods compared: EASGD, EAMSGD, DOWNPOUR, plus their averaging and momentum variants.
  • We perform experiments in a deep learning setting on two benchmark datasets: CIFAR-10 (we refer to it as CIFAR) and ImageNet ILSVRC 2013 (we refer to it as ImageNet). Datasets: CIFAR-10 and ImageNet.
  • We focus on the image classification task with deep convolutional neural networks. Task: image classification with deep convolutional neural networks.

6. Conclusion

  • In this paper we describe a new algorithm called EASGD and its variants for training deep neural networks in the stochastic setting when the computations are parallelized over multiple GPUs. In short: SGD parallelized across multiple GPUs.
  • We provide the stability analysis of the asynchronous EASGD in the round-robin scheme, and show the theoretical advantage of the method over ADMM.
