L1 and L2 regularization

L1 regularization encourages sparsity.

import tensorflow as tf
import tensorflow.contrib as contrib

# l1_regularizer(scale) applied to a tensor returns scale * sum(|weight|).
weight = tf.constant([[1.0, -2.0], [-3.0, 4.0]])
with tf.Session() as sess:
    print(sess.run(contrib.layers.l1_regularizer(0.2)(weight)))
    print(sess.run(contrib.layers.l1_regularizer(0.5)(weight)))
    print(sess.run(contrib.layers.l1_regularizer(1.0)(weight)))

\left\| x \right\|_1 = \sum_i |x_i|

Output:

2.0
5.0
10.0
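
The same numbers can be reproduced by hand; the NumPy sketch below (added here for illustration, not part of the original post) simply multiplies each scale by the sum of absolute values.

import numpy as np

# Hand check of the L1 outputs above: scale * sum(|weight|).
weight = np.array([[1.0, -2.0], [-3.0, 4.0]])
l1_sum = np.abs(weight).sum()       # 1 + 2 + 3 + 4 = 10
for scale in (0.2, 0.5, 1.0):
    print(scale * l1_sum)           # 2.0, 5.0, 10.0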

Keeping the L2 norm of the weights small can help prevent overfitting the training data.

import tensorflow as tf
import tensorflow.contrib as contrib

# l2_regularizer(scale) applied to a tensor returns scale * sum(weight**2) / 2.
weight = tf.constant([[1.0, -2.0], [-3.0, 4.0]])
with tf.Session() as sess:
    print(sess.run(contrib.layers.l2_regularizer(scale=0.2)(weight)))
    print(sess.run(contrib.layers.l2_regularizer(0.5)(weight)))
    print(sess.run(contrib.layers.l2_regularizer(1.0)(weight)))

\left\| x \right\|_2 = \sqrt{\sum_i x_i^2}

However, TensorFlow's function does not take the square root; instead it divides the sum of squares by 2.

Output:

3.0
7.5
15.0
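
In other words, l2_regularizer returns scale * sum(w^2) / 2 rather than the true L2 norm. A quick NumPy check (a sketch added for illustration, not from the original post) reproduces the numbers above:

import numpy as np

# Hand check of the L2 outputs above: scale * sum(weight**2) / 2.
weight = np.array([[1.0, -2.0], [-3.0, 4.0]])
l2_half_sum = (weight ** 2).sum() / 2.0   # (1 + 4 + 9 + 16) / 2 = 15
for scale in (0.2, 0.5, 1.0):
    print(scale * l2_half_sum)            # 3.0, 7.5, 15.0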

The regularization term becomes part of the loss function, and what is regularized is usually the weight tensor w and the bias tensor b,
for example: Loss = cross_entropy + L(w) + L(b)
Minimizing this loss therefore means minimizing not only the cross entropy but also the sum of absolute values (L1) or of squares (L2) of the elements of the w and b tensors. A minimal TF 1.x sketch of this setup follows; the placeholder shapes, the 0.01 scale, and the learning rate are illustrative assumptions, not values from the original post.
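
import tensorflow as tf
import tensorflow.contrib as contrib

# Illustrative shapes: 10 input features, 2 output classes.
x = tf.placeholder(tf.float32, [None, 10])
y_ = tf.placeholder(tf.float32, [None, 2])

w = tf.Variable(tf.truncated_normal([10, 2], stddev=0.1))
b = tf.Variable(tf.zeros([2]))
logits = tf.matmul(x, w) + b

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))

# Loss = cross_entropy + L(w) + L(b): the penalty terms keep w and b small.
regularizer = contrib.layers.l2_regularizer(scale=0.01)
loss = cross_entropy + regularizer(w) + regularizer(b)

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)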