Paper Summary 2: Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization

0 Paper

Rakhlin A, Shamir O, Sridharan K. Making gradient descent optimal for strongly convex stochastic optimization. arXiv preprint arXiv:1109.5647. 2011 Sep 26.

1 Key contribution

The paper proves the following results about the convergence rate of stochastic gradient descent (SGD):

  1. For smooth & strongly convex problems, plain SGD (returning the last iterate $\mathbf{w}_T$) attains the optimal $O(1/T)$ convergence rate.
  2. For strongly convex but non-smooth problems, SGD with averaging of all iterates has an $\Omega(\log(T)/T)$ lower bound, so the commonly used averaging scheme is provably suboptimal.
  3. For non-smooth & strongly convex problems, SGD with $\alpha$-suffix averaging recovers the optimal $O(1/T)$ rate, both in expectation and with high probability.

2 Preliminary knowledge

  1. Problem statement
    Given a convex domain $\mathcal{W}$ and an unknown $\lambda$-strongly convex function $F$, use SGD to update $\mathbf{w}_t$ so as to approach the optimal solution $\mathbf{w}^* = \arg\min_{\mathbf{w} \in \mathcal{W}} F(\mathbf{w})$. The goal is to bound the optimization error $F(\mathbf{w}_T) - F(\mathbf{w}^*)$, either in expectation or with high probability.


  2. $F$ is $\lambda$-strongly convex if for all $\mathbf{w}, \mathbf{w}' \in \mathcal{W}$ and any subgradient $\mathbf{g}$ of $F$ at $\mathbf{w}$,
    $F(\mathbf{w}') \ge F(\mathbf{w}) + \langle \mathbf{g}, \mathbf{w}' - \mathbf{w} \rangle + \frac{\lambda}{2} \|\mathbf{w}' - \mathbf{w}\|^2.$


  3. SGD
    At each step $t$, SGD produces a random vector $\hat{\mathbf{g}}_t$ such that $\mathbb{E}[\hat{\mathbf{g}}_t \mid \mathbf{w}_t]$ is a subgradient of $F$ at $\mathbf{w}_t$. Then $\mathbf{w}_t$ is updated as follows:

    $\mathbf{w}_{t+1} = \Pi_{\mathcal{W}}\!\left(\mathbf{w}_t - \eta_t \hat{\mathbf{g}}_t\right),$

    where $\Pi_{\mathcal{W}}$ is the Euclidean projection onto $\mathcal{W}$ and $\eta_t$ is the learning rate. (A minimal code sketch of this update is given after this list.)

  4. $\alpha$-suffix averaging
    Instead of averaging all $T$ iterates, return the average of only the last $\alpha$-fraction of them: for $\alpha \in (0, 1)$,

    $\bar{\mathbf{w}}^{\alpha}_T = \frac{1}{\alpha T} \sum_{t = (1-\alpha)T + 1}^{T} \mathbf{w}_t.$
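
To make the update rule and the averaging scheme concrete, here is a minimal Python/NumPy sketch (not from the paper); the stochastic subgradient oracle `sgrad`, the projection `project`, and the constants `c` and `alpha` are placeholders that the caller supplies:

```python
import numpy as np

def sgd_suffix_average(sgrad, project, w1, lam, T, c=1.0, alpha=0.5):
    """Projected SGD with step size eta_t = c / (lam * t).

    sgrad(w)   -- oracle returning a noisy (sub)gradient of F at w (assumption)
    project(w) -- Euclidean projection onto the convex set W (assumption)
    Returns the last iterate w_T and the alpha-suffix average.
    """
    w = np.asarray(w1, dtype=float)
    suffix_start = int((1 - alpha) * T) + 1        # first iterate included in the suffix
    suffix_sum = np.zeros_like(w)
    for t in range(1, T + 1):
        eta = c / (lam * t)                        # eta_t = c / (lambda * t)
        w = project(w - eta * sgrad(w))            # w_{t+1} = Pi_W(w_t - eta_t * g_t)
        if t >= suffix_start:
            suffix_sum += w
    return w, suffix_sum / (T - suffix_start + 1)
```

Returning the last iterate `w` corresponds to the smooth-case results below, while the returned suffix average corresponds to the $\alpha$-suffix averaging results.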

3 Main analysis

1) Smooth functions

Suppose $F$ is $\lambda$-strongly convex and $\mu$-smooth with respect to $\mathbf{w}^*$ over a convex set $\mathcal{W}$, and $\mathbb{E}[\|\hat{\mathbf{g}}_t\|^2] \le G^2$. Then if we let $\eta_t = c/(\lambda t)$ for some constant $c > 1/2$, it holds for any $T$ that

$\mathbb{E}[F(\mathbf{w}_T)] - F(\mathbf{w}^*) \le O\!\left(\frac{\mu G^2}{\lambda^2 T}\right),$

where the hidden constant depends only on $c$. That is, the last iterate alone already achieves the optimal $O(1/T)$ rate in expectation.

Suppose $F$ is $\lambda$-strongly convex and $\mu$-smooth with respect to $\mathbf{w}^*$ over a convex set $\mathcal{W}$, and $\|\hat{\mathbf{g}}_t\| \le G$ with probability 1. Then if we let $\eta_t = c/(\lambda t)$ for some constant $c > 1/2$, it holds for any $\delta \in (0, 1/e)$ and $T \ge 4$ that, with probability at least $1 - \delta$,

$F(\mathbf{w}_T) - F(\mathbf{w}^*) \le O\!\left(\frac{\mu G^2 \log(\log(T)/\delta)}{\lambda^2 T}\right),$

so in the smooth case the last iterate is also optimal (up to a doubly-logarithmic factor) with high probability.

Suppose $F$ is $\lambda$-strongly convex and $\mu$-smooth over a convex set $\mathcal{W}$, $\bar{\mathbf{w}}_T$ is the average of $\mathbf{w}_1, \dots, \mathbf{w}_T$, and $\mathbb{E}[\|\hat{\mathbf{g}}_t\|^2] \le G^2$. Then if we let $\eta_t = c/(\lambda t)$ for some constant $c > 1/2$, it holds for any $T$ that $\mathbb{E}[F(\bar{\mathbf{w}}_T)] - F(\mathbf{w}^*) \le O(1/T)$ as well, i.e., averaging is not harmful when $F$ is smooth.
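
As a quick sanity check (not part of the paper), one can run plain SGD with $\eta_t = c/(\lambda t)$ on the smooth, strongly convex toy objective $F(\mathbf{w}) = \frac{\lambda}{2}\|\mathbf{w}\|^2$ with additive Gaussian gradient noise and observe that the last-iterate error scales roughly like $1/T$; the noise level, dimension, and horizons below are arbitrary choices:

```python
import numpy as np

# Toy smooth, strongly convex objective: F(w) = (lam/2) * ||w||^2, minimized at w* = 0.
# The "stochastic gradient" is the true gradient plus Gaussian noise; parameters are arbitrary.
rng = np.random.default_rng(0)
lam, sigma, dim, c = 1.0, 1.0, 5, 1.0

for T in (100, 1_000, 10_000):
    w = np.ones(dim)
    for t in range(1, T + 1):
        g = lam * w + sigma * rng.standard_normal(dim)   # noisy gradient of F
        w = w - (c / (lam * t)) * g                      # unconstrained, so no projection needed
    err = 0.5 * lam * np.linalg.norm(w) ** 2             # F(w_T) - F(w*)
    print(f"T={T:6d}   error={err:.2e}   T*error={T * err:.2f}")
```

The printed values of $T \cdot (F(\mathbf{w}_T) - F(\mathbf{w}^*))$ should stay roughly constant as $T$ grows, which is what the $O(1/T)$ bound predicts.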

2) Non-smooth functions

The first lower-bound construction shows that when the global optimum lies at a corner of $\mathcal{W}$, so that SGD approaches the optimum from one direction only, the convergence rate of SGD with averaging is $\Omega(\log(T)/T)$.

The second construction shows that even when the global optimum lies in the interior of $\mathcal{W}$, as long as SGD approaches the optimum only from one direction, the convergence rate of SGD with averaging is still $\Omega(\log(T)/T)$. (A toy illustration of this separation appears after the suffix-averaging theorem below.)

3) SGD with $\alpha$-suffix averaging

Consider SGD with $\alpha$-suffix averaging and with step size $\eta_t = c/(\lambda t)$, where $c > 1/2$ is a constant. Suppose $F$ is $\lambda$-strongly convex and that $\mathbb{E}[\|\hat{\mathbf{g}}_t\|^2] \le G^2$ for all $t$. Then for any $T$, it holds that

$\mathbb{E}[F(\bar{\mathbf{w}}^{\alpha}_T)] - F(\mathbf{w}^*) \le C_{\alpha, c}\, \frac{G^2}{\lambda T},$

where $C_{\alpha, c}$ is a constant depending only on $\alpha$ and $c$. In other words, $\alpha$-suffix averaging recovers the optimal $O(1/T)$ rate even without smoothness.
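
The following toy experiment (again not from the paper, and not the exact lower-bound construction used there) contrasts the last iterate, the full average, and the $\alpha$-suffix average on a strongly convex objective that is not smooth with respect to its minimizer: the linear term keeps a nonzero slope at the corner $w^* = 0$ of $\mathcal{W} = [0, 1]$, so SGD can only approach the optimum from one side. All parameter values are arbitrary:

```python
import numpy as np

# Toy problem: F(w) = (lam/2) * w^2 + w on W = [0, 1], minimized at the corner w* = 0.
# Because of the linear term, F is not smooth with respect to w*.
# Stochastic subgradient: lam * w + 1 + Gaussian noise.
rng = np.random.default_rng(0)
lam, sigma, c, alpha, T = 1.0, 2.0, 1.0, 0.5, 100_000
F = lambda w: 0.5 * lam * w * w + w                    # F(w*) = F(0) = 0

w, full_sum, suffix_sum = 1.0, 0.0, 0.0
suffix_start = int((1 - alpha) * T) + 1
for t in range(1, T + 1):
    g = lam * w + 1.0 + sigma * rng.standard_normal()
    w = min(1.0, max(0.0, w - (c / (lam * t)) * g))    # projection onto [0, 1]
    full_sum += w
    if t >= suffix_start:
        suffix_sum += w

print("last iterate  :", F(w))
print("full average  :", F(full_sum / T))
print("suffix average:", F(suffix_sum / (T - suffix_start + 1)))
```

With these settings the full average typically comes out roughly a $\log T$ factor worse than the suffix average, which is the separation described by the two results above.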

4) High probability bounds

Let $\delta \in (0, 1/e)$ and $T \ge 4$. Suppose $F$ is $\lambda$-strongly convex over a convex set $\mathcal{W}$, and that $\|\hat{\mathbf{g}}_t\| \le G$ with probability 1. Then if we pick $\eta_t = c/(\lambda t)$ for some constant $c > 1/2$, and use $\alpha$-suffix averaging with $\alpha$ such that $\alpha T$ is a whole number, it holds with probability at least $1 - \delta$ that

$F(\bar{\mathbf{w}}^{\alpha}_T) - F(\mathbf{w}^*) \le O\!\left(\frac{G^2 \log(\log(T)/\delta)}{\lambda T}\right),$

where the hidden constant depends only on $c$ and $\alpha$.
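
To get a feel for the high-probability statement (a rough empirical illustration only, with arbitrary parameters and the same toy objective as before), one can repeat the experiment over many independent runs and look at upper quantiles of the suffix-average error instead of its mean:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, sigma, c, alpha, T, runs = 1.0, 2.0, 1.0, 0.5, 10_000, 200
F = lambda w: 0.5 * lam * w * w + w                    # same toy objective on [0, 1], F(w*) = 0

suffix_start = int((1 - alpha) * T) + 1
errors = []
for _ in range(runs):
    w, suffix_sum = 1.0, 0.0
    for t in range(1, T + 1):
        g = lam * w + 1.0 + sigma * rng.standard_normal()
        w = min(1.0, max(0.0, w - (c / (lam * t)) * g))    # projection onto [0, 1]
        if t >= suffix_start:
            suffix_sum += w
    errors.append(F(suffix_sum / (T - suffix_start + 1)))

errors = np.array(errors)
print("median of T * error         :", T * np.median(errors))
print("95th percentile of T * error:", T * np.quantile(errors, 0.95))
```

Both quantiles, scaled by $T$, should stay of constant order, consistent with an error that is $O(1/T)$ not only on average but also with high probability.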

4 Some thoughts about innovation and writing

1) Innovation:

  1. Extend known results to settings where they have not yet been established
  2. Address special but important cases
  3. Establish a theoretical analysis for an observed phenomenon
  4. Apply theoretical results to real applications

2) Writing:

  1. A title is better if it is under 10 words. Make it concise and interesting!
  2. In the introduction, start with a general topic and narrow it down to the key topic of the paper step by step. Transitions can be made with phrases such as "An important special case…" or "One of the … is that …". Remember, tell a good story!
  3. For the literature review, covering only the most recent papers may be enough. Clarify how your paper differs from related work.
  4. Claim and list the specific contributions of the paper. Even if the innovations are discussed elsewhere in the text, list them in detail anyway so that readers know your exact contributions.
