TFLearn Getting Started Notes

import tflearn
net = tflearn.conv_2d(x, 32, 5, activation='relu', name='conv1')
fc2 = tflearn.fully_connected(fc1, 32, activation='tanh', regularizer='L2')
# The line above is equivalent to:
# fc2 = tflearn.fully_connected(fc1, 32)
# tflearn.add_weights_regularizer(fc2, loss='L2')
# fc2 = tflearn.tanh(fc2)
'Optimizer, Objective and Metric'
reg = tflearn.regression(fc4, optimizer='rmsprop', metric='accuracy', loss='categorical_crossentropy')
# These ops can also be defined externally for deeper customization:
momentum = tflearn.optimizers.Momentum(learning_rate=0.1, lr_decay=0.96, decay_step=200)
top5 = tflearn.metrics.Top_k(k=5)  # on top_k see https://blog.csdn.net/uestc_c2_403/article/details/73187915 and https://blog.csdn.net/Enchanted_ZhouH/article/details/77200592
reg = tflearn.regression(fc4, optimizer=momentum, metric=top5, loss='categorical_crossentropy')
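For intuition, the Top_k metric counts a prediction as correct when the true class appears among the k highest-scoring classes. A minimal NumPy sketch of that computation (a standalone illustration with my own function name, not TFLearn code):

```python
import numpy as np

def top_k_accuracy(probs, labels, k=5):
    # Indices of the k highest-scoring classes per row (descending order)
    top_k = np.argsort(probs, axis=1)[:, ::-1][:, :k]
    # A sample counts as correct if its true label is among those k classes
    hits = np.any(top_k == labels[:, None], axis=1)
    return hits.mean()

probs = np.array([[0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])
labels = np.array([1, 0])
acc = top_k_accuracy(probs, labels, k=2)  # first row hits, second misses -> 0.5
```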

'Training, evaluating and predicting'
network = ...  # (some layers)
network = regression(network, optimizer='sgd', loss='categorical_crossentropy')
model = DNN(network)
model.fit(X, Y)
# A trained model can also be loaded for evaluation and prediction directly:
network = ...
model = DNN(network)
model.load('model.tflearn')
model.predict(X)

'TFLearn can also manage TensorBoard logs'
model = DNN(network, tensorboard_verbose=3)
# tensorboard_verbose levels:
# 0: Loss & Metric only
# 1: Loss, Metric & Gradients
# 2: Loss, Metric, Gradients & Weights
# 3: Loss, Metric, Gradients, Weights, Activations & Sparsity (best visualization)

'Merging tensors'
tflearn.layers.merge_ops.merge(tensors_list, mode, axis=1, name='Merge')
# merge combines a list of tensors into one; the merge mode is given by the
# `mode` argument, which accepts the following strings:
# 'concat': concatenate outputs along specified axis
# 'elemwise_sum': outputs element-wise sum
# 'elemwise_mul': outputs element-wise multiplication
# 'sum': outputs element-wise sum along specified axis
# 'mean': outputs element-wise average along specified axis
# 'prod': outputs element-wise multiplication along specified axis
# 'max': outputs max elements along specified axis
# 'min': outputs min elements along specified axis
# 'and': `logical and` between outputs elements along specified axis
# 'or': `logical or` between outputs elements along specified axis
#
# axis: int. The axis used by the merging mode. In most cases: 0 for 'concat' and 1 for the other modes.
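The behavior of the main modes can be sketched with plain NumPy (a standalone illustration of the arithmetic, not TFLearn code):

```python
import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = np.array([[10., 20.], [30., 40.]])

concat = np.concatenate([a, b], axis=1)     # 'concat': join along the given axis -> shape (2, 4)
esum   = a + b                              # 'elemwise_sum'
emul   = a * b                              # 'elemwise_mul'
mean   = np.mean(np.stack([a, b]), axis=0)  # 'mean': element-wise average over the list
mmax   = np.maximum(a, b)                   # 'max': element-wise maximum
```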


'Saving and loading a model is simple'
model.save('my_model.tflearn')
model.load('my_model.tflearn')


'Weight values can also be read and written with get_weights and set_weights'
import numpy
input_data = tflearn.input_data(shape=[None, 784])
fc1 = tflearn.fully_connected(input_data, 64)
fc2 = tflearn.fully_connected(fc1, 10, activation='softmax')
net = tflearn.regression(fc2)
model = DNN(net)
# Get weights values of fc2
model.get_weights(fc2.W)
# Assign new random weights to fc2
model.set_weights(fc2.W, numpy.random.rand(64, 10))


'Retrieving layer variables is also simple'
fc1 = tflearn.fully_connected(input_layer, 64, name='fc_layer_1')
fc1_weights_var = fc1.W
fc1_biases_var = fc1.b
'Or by layer name'
fc1_vars = tflearn.get_layer_variables_by_name('fc_layer_1')
fc1_weights_var = fc1_vars[0]
fc1_biases_var = fc1_vars[1]
'When fine-tuning a pretrained model, the restore argument controls whether a layer gets its weights restored from the checkpoint; restore only affects weights'
fc_layer = tflearn.fully_connected(input_layer, 32)  # weights restored
fc_layer = tflearn.fully_connected(input_layer, 32, restore=False)  # weights not restored
# For an example restoring the weights of every layer except the fully connected one, see
# https://github.com/tflearn/tflearn/blob/master/examples/basics/finetuning.py
'Data preprocessing and data augmentation. TFLearn data flow is designed as a computation pipeline: while the GPU trains the model, data is processed on the CPU'
# Real-time data preprocessing
img_prep = tflearn.ImagePreprocessing()
# Zero center (mean computed over the whole dataset)
img_prep.add_featurewise_zero_center()
# Standard normalization (std computed over the whole dataset)
img_prep.add_featurewise_stdnorm()
# Real-time data augmentation
img_aug = tflearn.ImageAugmentation()
# Randomly flip images left/right
img_aug.add_random_flip_leftright()
# Attach these methods to the input layer
network = input_data(shape=[None, 32, 32, 3], data_preprocessing=img_prep, data_augmentation=img_aug)

See Data Preprocessing and Data Augmentation for more details.
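What these preprocessing and augmentation steps compute can be sketched in NumPy (an illustration of the math on fake data, not the TFLearn implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 255, size=(100, 32, 32, 3))  # a fake image dataset

# add_featurewise_zero_center: subtract the mean computed over the whole dataset
X_centered = X - X.mean()
# add_featurewise_stdnorm: divide by the dataset-wide standard deviation
X_norm = X_centered / X.std()
# add_random_flip_leftright: horizontally flip a sample (applied at random during training)
flipped = X[0][:, ::-1, :]
```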


'Mixing TensorFlow and TFLearn'
# Some raw TensorFlow ops
X = tf.placeholder(shape=(None, 784), dtype=tf.float32)
net = tf.reshape(X, [-1, 28, 28, 1])
# A TFLearn convolution layer
net = tflearn.conv_2d(net, 32, 3, activation='relu')
# A raw TensorFlow max-pooling op
net = tf.nn.max_pool(net, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
# Example: https://github.com/tflearn/tflearn/blob/master/examples/extending_tensorflow/layers.py




'Built-in ops: TFLearn built-in ops are compatible with any TensorFlow expression'
# Example: https://github.com/tflearn/tflearn/blob/master/examples/extending_tensorflow/builtin_ops.py


'Trainer / Evaluator / Predictor'
# TFLearn uses a TrainOp class to represent an optimization process
trainop = TrainOp(net=my_network, loss=loss, metric=accuracy)
# Any number of TrainOps can then be passed to a Trainer, which handles the whole
# training process, treating all TrainOps together as one model
model = Trainer(train_ops=trainop, tensorboard_dir='tmp/tflearn')
model.fit(feed_dict={input_placeholder: X, target_placeholder: Y})
# Most models have a single optimization process, but Trainer can also handle
# several of them, for more complex models
model = Trainer(train_ops=[trainop1, trainop2])
model.fit(feed_dict=[{in1: X1, label1: Y1}, {in2: X2, label2: Y2}])
# On TrainOp and Trainer see http://tflearn.org/helpers/trainer/; example:
# https://github.com/tflearn/tflearn/blob/master/examples/extending_tensorflow/builtin_ops.py


'For prediction, TFLearn uses the Evaluator class. It works much like Trainer: it takes the network as argument and returns the predicted values'
model = Evaluator(network)
model.predict(feed_dict={input_placeholder: X})
'For layers with different behavior at training and testing time (such as dropout and batch normalization), Trainer uses a boolean variable (is_training) indicating whether the network is used for training or for testing/prediction. That variable is stored under tf.GraphKeys.IS_TRAINING as its first (and only) element, so such layers should be defined with a conditional op:'
# Dropout example
def apply_dropout():
    return tf.nn.dropout(x, keep_prob)
is_training = tflearn.get_training_mode()  # retrieve the is_training variable
tf.cond(is_training, apply_dropout, lambda: x)  # apply dropout only at training time
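The train/test asymmetry that tf.cond handles above can be sketched in NumPy: inverted dropout keeps each unit with probability keep_prob and scales survivors by 1/keep_prob, while at test time the input passes through unchanged (a standalone sketch, not TFLearn or TensorFlow code):

```python
import numpy as np

def dropout(x, keep_prob, is_training, rng):
    if not is_training:
        return x  # test time: identity, like the lambda branch above
    # training time: drop units with probability 1 - keep_prob,
    # scaling survivors so the expected activation is unchanged
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

rng = np.random.default_rng(0)
x = np.ones(1000)
train_out = dropout(x, 0.5, True, rng)   # values are 0.0 or 2.0, mean close to 1.0
test_out  = dropout(x, 0.5, False, rng)  # identical to x
```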


'For convenience, TFLearn provides functions to retrieve that variable or change its value'
# Turn training mode on
tflearn.is_training(True)
# Turn training mode off
tflearn.is_training(False)
'During the training cycle, TFLearn can track training metrics and interact with them through the set of functions given by the Callback interface. To simplify metric retrieval, each callback method receives a TrainingState, which tracks the state (e.g. current epoch, step, batch iteration) and the metrics (e.g. current validation accuracy, global accuracy, etc.).'

class MonitorCallback(tflearn.callbacks.Callback):
    def __init__(self, api):
        self.my_monitor_api = api
    def on_epoch_end(self, training_state):
        self.my_monitor_api.send({'accuracy': training_state.global_acc,
                                  'loss': training_state.global_loss})
# Then add it to the model.fit call
monitorcallback = MonitorCallback(api)  # api is your own API class
model = ...
model.fit(..., callbacks=monitorcallback)


'Variables: defining variables in TFLearn is simple'
import tflearn.variables as vs
my_var = vs.variable('W', shape=[784, 12], initializer='truncated_normal', regularizer='L2', device='/gpu:0')
# Example: https://github.com/tflearn/tflearn/blob/master/examples/extending_tensorflow/variables.py
'Summaries: when using the Trainer class, managing summaries is also simple. Just store the activations to monitor in tf.GraphKeys.ACTIVATIONS, then specify a verbosity level to control the visualization depth'
model = Trainer(network, loss=loss, metric=acc, tensorboard_verbose=3)
'TFLearn ops can also be used to quickly add summaries to the current TensorFlow graph'
import tflearn.helpers.summarizer as s
s.summarize_variables(train_vars=[...])
'Regularization can be added to a model with TFLearn regularizers; weight and activation regularization are currently supported'
# Add L2 regularization to a variable
W = tf.Variable(tf.random_normal([784, 256]), name="W")
tflearn.add_weights_regularizer(W, 'L2', weight_decay=0.001)
'Data preprocessing utilities: http://tflearn.org/data_utils/'
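For reference, the penalty behind the L2 call above is weight_decay times tf.nn.l2_loss(W), i.e. weight_decay * sum(W**2) / 2, added to the training loss. A NumPy sketch (the helper name is my own):

```python
import numpy as np

def l2_penalty(W, weight_decay=0.001):
    # Mirrors weight_decay * tf.nn.l2_loss(W) = weight_decay * sum(W**2) / 2
    return weight_decay * np.sum(W ** 2) / 2.0

W = np.array([[1.0, -2.0], [3.0, 0.0]])
penalty = l2_penalty(W, weight_decay=0.1)  # 0.1 * (1 + 4 + 9) / 2 = 0.7
```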