————————————————————————————
Originally published on 夏木青 | JoselynZhao Blog; you are welcome to read the original post there.
————————————————————————————
Deep Learning | An Introduction to TF-Slim
TF-Slim is a lightweight library in TensorFlow for defining, training, and evaluating complex models.
Components of TF-Slim can be freely mixed with native TensorFlow functions, and also with other frameworks such as tf.contrib.learn.
Why use TF-Slim?
TF-Slim combines variables, layers, and scopes so that models can be defined very concisely. Example: define a weight variable, initialize it from a truncated normal distribution, regularize it with an l2 loss, and place it on the CPU:
weights = slim.variable('weights', shape=[10, 10, 3, 3],
                        initializer=tf.truncated_normal_initializer(stddev=0.1),
                        regularizer=slim.l2_regularizer(0.05),
                        device='/CPU:0')
Truncated normal distribution: tf.truncated_normal_initializer().
In TF, variables come in two kinds: regular variables and local (transient) variables.
Regular variables can be saved by the saver; local variables exist only within a session and are not saved.
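A minimal sketch of the difference (TF 1.x API; the variable names here are illustrative, not from the original post):

import tensorflow as tf

# Regular variable: in GLOBAL_VARIABLES, saved by tf.train.Saver by default.
step = tf.Variable(0, name='step')

# Local (transient) variable: placed only in the LOCAL_VARIABLES collection,
# so it lives inside a session and is not written to checkpoints.
counter = tf.Variable(0, name='counter',
                      collections=[tf.GraphKeys.LOCAL_VARIABLES])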
TF-Slim further distinguishes variables by defining model variables.
Model variables are the variables that are trained or fine-tuned and loaded from a checkpoint, e.g. the variables created by slim.fully_connected or slim.conv2d.
Non-model variables are variables needed during training or evaluation but not part of the actual model, e.g. global_step, or the moving-average variables in batch norm.
# Model variables
weights = slim.model_variable('weights', shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()

# Regular variables
my_var = slim.variable('my_var', shape=[20, 1],
                       initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()
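A quick sanity check of the distinction, assuming the snippet above has run: model variables form a subset of all the variables that TF-Slim tracks.

# 'weights' came from slim.model_variable, so it is a model variable;
# 'my_var' came from slim.variable, so it is a regular, non-model variable.
assert weights in slim.get_model_variables()
assert my_var not in slim.get_model_variables()
assert my_var in slim.get_variables()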
When a model variable is created by a TF-Slim layer or by slim.model_variable, TF-Slim adds it to the tf.GraphKeys.MODEL_VARIABLES collection for management.
Variables created by custom layers or by slim.variable can also be added to TF-Slim's management mechanism by hand:
my_model_variable = CreateViaCustomCode()
# Letting TF-Slim know about the additional variable.
slim.add_model_variable(my_model_variable)
Adding a layer in raw TF is tedious and takes many steps. Writing a convolutional layer, for example, requires defining the variables, applying the convolution, adding the bias, applying the activation function, and so on:
input = ...
with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128],
                                             dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
TF-Slim abstracts these common layers at a high level, making them easy to define and create:
input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')
Besides the convolutional layer, TF-Slim defines implementations of many other standard layers, e.g. slim.fully_connected, slim.max_pool2d, slim.batch_norm, slim.dropout, etc.
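For illustration only, a small sketch chaining a few of these layers; the shapes, scope names, and the flatten step are my own additions, not from the original post:

net = ...  # a 4-D feature map, e.g. [batch, height, width, channels]
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.batch_norm(net, is_training=True, scope='bn1')
net = slim.flatten(net)
net = slim.fully_connected(net, 256, scope='fc1')
net = slim.dropout(net, 0.5, is_training=True, scope='dropout1')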
When building a network, TF-Slim provides repeat and stack, which let the user apply the same operation repeatedly, simplifying network construction. For example:
net = ...
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
A loop reduces the repetition:
net = ...
for i in range(3):
    net = slim.conv2d(net, 256, [3, 3], scope='conv3_%d' % (i + 1))
net = slim.max_pool2d(net, [2, 2], scope='pool2')
Or use TF-Slim's repeat operation:
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
slim.repeat automatically names the scope of each convolutional layer 'conv3/conv3_1', 'conv3/conv3_2', and 'conv3/conv3_3'.
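One way to confirm the naming, assuming the repeat call above has been built:

# Expected output: names such as 'conv3/conv3_1/weights:0',
# 'conv3/conv3_2/weights:0', 'conv3/conv3_3/weights:0', ...
for v in slim.get_model_variables('conv3'):
    print(v.name)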
TF-Slim's slim.stack operation lets the user call the same operation repeatedly with different arguments; slim.stack also creates a new tf.variable_scope for each operation it creates.
# Verbose way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')
# Equivalent, TF-Slim way using slim.stack:
slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')
Here slim.stack calls slim.fully_connected three times, passing a different element of the list into each repeated call.
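slim.stack also handles operations that take several positional arguments: each list element is then a tuple, unpacked into one call. A sketch following the pattern in the TF-Slim documentation:

# Verbose way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')
# Equivalent, using slim.stack:
x = slim.stack(x, slim.conv2d,
               [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])],
               scope='core')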
Besides TF's name_scope and variable_scope, TF-Slim adds a scope called arg_scope, which lets the user define default arguments for the operations within it. For example, without arg_scope:
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.5), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.5), scope='conv2')
Operations that share the same arguments can be placed under one arg_scope, simplifying the definition:
with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.5)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
The defaults set in an arg_scope can still be overridden, as padding='VALID' in 'conv2' above shows.
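arg_scopes can also be nested, with an inner scope adding to or overriding the outer defaults; a sketch in the spirit of the TF-Slim documentation:

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        # 'conv1' overrides the padding default; 'conv2' overrides the initializer.
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')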
An example of defining a complex model, VGG-16:
def vgg16(inputs):
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005)):
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        net = slim.max_pool2d(net, [2, 2], scope='pool5')
        net = slim.fully_connected(net, 4096, scope='fc6')
        net = slim.dropout(net, 0.5, scope='dropout6')
        net = slim.fully_connected(net, 4096, scope='fc7')
        net = slim.dropout(net, 0.5, scope='dropout7')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
    return net
Training a TF model requires: 1. a model; 2. a loss function; 3. a train_op.
TF-Slim's losses module gives the user a mechanism that makes defining loss functions simple.
Example: create a VGG, then add a standard classification loss:
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets

slim = tf.contrib.slim
vgg = nets.vgg

# Load the images and labels.
images, labels = ...
# Create the model.
predictions, _ = vgg.vgg_16(images)
# Define the loss functions and get the total loss.
loss = slim.losses.softmax_cross_entropy(predictions, labels)
Example: building the losses of a multi-task model:
# Load the images and labels.
images, scene_labels, depth_labels = ...
# Create the model.
scene_predictions, depth_predictions = CreateMultiTaskModel(images)
# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)
# The following two lines have the same effect:
total_loss = classification_loss + sum_of_squares_loss
total_loss = slim.losses.get_total_loss(add_regularization_losses=False)
Calling slim.losses.get_total_loss() sums only the losses defined through TF-Slim; a custom loss must be added to TF-Slim's collection by hand so that it is managed together with the rest:
# Load the images and labels.
images, scene_labels, depth_labels, pose_labels = ...
# Create the model.
scene_predictions, depth_predictions, pose_predictions = CreateMultiTaskModel(images)
# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)
pose_loss = MyCustomLossFunction(pose_predictions, pose_labels)
slim.losses.add_loss(pose_loss)  # Letting TF-Slim know about the additional loss.
# The following two ways to compute the total loss are equivalent
# (regularization loss is included in the total loss by default):
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = classification_loss + sum_of_squares_loss + pose_loss + regularization_loss
total_loss2 = slim.losses.get_total_loss()
TF-Slim also wraps the training routine: parameter logging and model saving during training are taken care of for us. Given the loss, we call slim.learning.create_train_op to build the train_op and slim.learning.train to run the optimization:
g = tf.Graph()
# Create the model and specify the losses...
...
total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate)

# create_train_op ensures that each time we ask for the loss, the
# update_ops are run and the gradients being computed are applied too.
train_op = slim.learning.create_train_op(total_loss, optimizer)
logdir = ...  # Where checkpoints are stored.

slim.learning.train(
    train_op,
    logdir,
    number_of_steps=1000,
    save_summaries_secs=300,
    save_interval_secs=600)
1) train_op: computes the loss and applies the gradients.
2) logdir: the directory for checkpoints and event files. number_of_steps caps the number of gradient-descent steps; save_summaries_secs=300 computes the summaries every 5 minutes; save_interval_secs=600 saves a checkpoint every 10 minutes.
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets

slim = tf.contrib.slim
vgg = nets.vgg
...
train_log_dir = ...
if not tf.gfile.Exists(train_log_dir):
    tf.gfile.MakeDirs(train_log_dir)

with tf.Graph().as_default():
    # Set up the data loading:
    images, labels = ...

    # Define the model:
    predictions = vgg.vgg_16(images, is_training=True)

    # Specify the loss function:
    slim.losses.softmax_cross_entropy(predictions, labels)
    total_loss = slim.losses.get_total_loss()
    tf.summary.scalar('losses/total_loss', total_loss)

    # Specify the optimization scheme:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)

    # create_train_op ensures that when we evaluate it to get the loss,
    # the update_ops are done and the gradient updates are computed.
    train_tensor = slim.learning.create_train_op(total_loss, optimizer)

    # Actually runs training.
    slim.learning.train(train_tensor, train_log_dir)