On the 1st of this month, Shanghai's "strictest ever" garbage-sorting regulations formally took effect: putting waste in the wrong bin can now draw a fine of up to 200 yuan. Another 46 cities across China are set to enter this new era of mandatory waste sorting, and jokes about people being driven mad by garbage sorting keep popping up on social media.
From an artificial-intelligence perspective, garbage sorting is simply one application of image classification. In the ImageNet classification challenge held since 2012, the SENet model took first place in 2017 with a top-5 test error rate of just 2.25%, outclassing the field and arguably standing as the strongest image classifier of its day.
| Year | Network / Team | Top-5 error | Notes |
| --- | --- | --- | --- |
| 2012 | AlexNet | 16.42% | ensemble of 5 CNNs |
| 2013 | Clarifai | 11.20% | used additional 2011 data |
| 2014 | GoogLeNet v1 | 6.67% | 7 nets, 144 crops |
| 2015 | ResNet | 3.57% | ensemble of 6 models |
| 2016 | Trimps-Soushen | 2.99% | |
| 2017 | SENet | 2.25% | Momenta and University of Oxford |
I just visited the website of Momenta, the company behind SENet, and their latest work has already moved on to 3D object detection and annotation, as the demos on their site show.
It is safe to say that using their SENet for garbage image classification poses no difficulty at all.
SENet is a model proposed jointly by Momenta and the University of Oxford, built around "squeeze" and "excitation" operations: each module embeds information from the global receptive field through a squeeze step and selectively strengthens informative responses through an excitation step. Looking at the ImageNet winners over the years, most improved accuracy by piling on more models and more connections; against that brute-force trend, SENet is a refreshing exception. The paper is available here: http://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.pdf
In detail, the mechanism has three steps:
Squeeze: apply global average pooling to the C×H×W feature map to obtain a 1×1×C descriptor, which can be viewed as having a global receptive field. As the paper puts it, each two-dimensional feature channel is reduced to a single real number that in some sense carries a global receptive field, with the output dimension matching the number of input feature channels. The descriptor characterizes the global distribution of responses across feature channels, and it lets even layers close to the input obtain a global receptive field.
Excitation: a small fully connected network applies a nonlinear transformation to the squeezed descriptor. Its mechanism resembles the gates in recurrent neural networks: a weight is generated for each feature channel via parameters w, which are learned to explicitly model the correlations between channels.
Feature recalibration: the excitation output is treated as a per-channel importance weight and multiplied, channel by channel, onto the original features, completing the recalibration of the input features.
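To make these three steps concrete, here is a minimal sketch of an SE block in TensorFlow 1.x, written in the same API style as the full implementation later in this article; the function name se_block and the default reduction ratio are illustrative choices, not prescribed by the paper:

import tensorflow as tf

def se_block(input_x, ratio=16, name='se_block'):
    # input_x: a feature map of shape [N, H, W, C]
    with tf.name_scope(name):
        channels = int(input_x.shape[-1])
        # Squeeze: global average pooling -> one real number per channel, shape [N, C]
        squeeze = tf.reduce_mean(input_x, axis=[1, 2])
        # Excitation: bottleneck FC pair (C -> C/r -> C), ReLU then sigmoid gate
        excitation = tf.layers.dense(squeeze, units=channels // ratio, activation=tf.nn.relu)
        excitation = tf.layers.dense(excitation, units=channels, activation=tf.nn.sigmoid)
        # Recalibration: broadcast the per-channel weights over H and W and rescale
        excitation = tf.reshape(excitation, [-1, 1, 1, channels])
        return input_x * excitation

Because the block only adds two small fully connected layers per module, its extra parameter and compute cost is modest, which is exactly why it can be bolted onto existing architectures such as the Inception-ResNet used below.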
The model architecture is shown below:
SENet's structure is very simple and easy to deploy; it introduces no new functions or layers. A pretrained Caffe model can be downloaded from Baidu (https://pan.baidu.com/s/1o7HdfAE?errno=0&errmsg=Auth%20Login%20Sucess&&bduss=&ssnerror=0&traceid=).
If you have Caffe deployed, simply download that model and load it. If you run TensorFlow rather than Caffe, that is no problem either: as just noted, SENet introduces no new functions or layers, so it is easy to implement in TensorFlow.
Download an image set: after searching around, I found the following dataset. It is small and will not show SENet at its full strength, but it is convenient to use: https://raw.githubusercontent.com/garythung/trashnet/master/data/dataset-resized.zip
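If you would rather fetch and unpack the archive from a script, something along these lines works; this is just a convenience sketch using the Python standard library, and the target directory e:/test/ is the one assumed by the code below (check where the class folders land after extraction and point data_dir at their parent):

import io
import urllib.request
import zipfile

url = 'https://raw.githubusercontent.com/garythung/trashnet/master/data/dataset-resized.zip'
archive = urllib.request.urlopen(url).read()
with zipfile.ZipFile(io.BytesIO(archive)) as zf:
    zf.extractall('e:/test/')  # one sub-folder per class: cardboard, glass, metal, trash, paper, plastic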
Build the SENet model: an open-source TensorFlow implementation is already available on GitHub at https://github.com/taki0112/SENet-Tensorflow. It targets the CIFAR-10 dataset, but that does not matter: after a git clone, simply modify the prepare_data function in its cifar10.py as follows.
def prepare_data():
    print("======Loading data======")
    download_data()  # downloads CIFAR-10 in the original repo; unnecessary for this dataset
    data_dir = 'e:/test/'
    # data_dir = './cifar-10-batches-py'  # change this to your own data folder
    image_dim = image_size * image_size * img_channels
    # meta = unpickle(data_dir + '/batches.meta')  # this dataset has no meta file, so list the classes explicitly
    label_names = ['cardboard', 'glass', 'metal', 'trash', 'paper', 'plastic']
    label_count = len(label_names)
    # train_files = ['data_batch_%d' % d for d in range(1, 6)]
    train_files = [data_dir + s for s in label_names]  # one image folder per class
    train_data, train_labels = load_data(train_files, data_dir, label_count)
    # note: this dataset ships with no 'test_batch'; split one off yourself (see the sketch below)
    test_data, test_labels = load_data(['test_batch'], data_dir, label_count)
    print("Train data:", np.shape(train_data), np.shape(train_labels))
    print("Test data :", np.shape(test_data), np.shape(test_labels))
    print("======Load finished======")
    print("======Shuffling data======")
    indices = np.random.permutation(len(train_data))
    train_data = train_data[indices]
    train_labels = train_labels[indices]
    print("======Prepare Finished======")
    return train_data, train_labels, test_data, test_labels
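One caveat: the original load_data in cifar10.py unpickles CIFAR-10 batch files, so it cannot read these image folders, and the trash dataset has no test_batch at all. A minimal replacement that walks one folder per class, decodes the images, and splits off a held-out test set might look like the sketch below (it assumes Pillow is installed; the function name load_folder_data and the 10% split are my own choices):

import os
import numpy as np
from PIL import Image

def load_folder_data(class_dirs, label_count, size, test_ratio=0.1):
    # class_dirs: one directory per class, e.g. 'e:/test/cardboard'
    data, labels = [], []
    for label, folder in enumerate(class_dirs):
        for fname in os.listdir(folder):
            img = Image.open(os.path.join(folder, fname)).convert('RGB')
            img = img.resize((size, size))  # square-resize to match the placeholder shape
            data.append(np.asarray(img, dtype=np.float32))
            labels.append(label)
    data = np.stack(data)
    labels = np.eye(label_count)[labels]  # one-hot, as the CIFAR-10 loader produces
    # shuffle once, then carve off a test split
    idx = np.random.permutation(len(data))
    data, labels = data[idx], labels[idx]
    split = int(len(data) * (1 - test_ratio))
    return data[:split], labels[:split], data[split:], labels[split:]

With this in place, prepare_data would call load_folder_data once and return all four arrays, instead of reading a nonexistent 'test_batch'.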
Note that class_num = 6 and image_size = 512 in cifar10.py also need to be changed accordingly, as sketched below.
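For reference, the constants at the top of cifar10.py would then read something like the following; 512 matches the longer side of the resized trashnet images, though a smaller square size (say 224) trains far faster and is purely my suggestion:

class_num = 6      # cardboard, glass, metal, trash, paper, plastic
image_size = 512   # or a smaller square size if memory is tight
img_channels = 3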
The core modeling code follows; its main job is simply to implement the SENet model structure:
import tensorflow as tf
from tflearn.layers.conv import global_avg_pool
from tensorflow.contrib.layers import batch_norm, flatten
from tensorflow.contrib.framework import arg_scope
from cifar10 import *
import numpy as np
weight_decay = 0.0005
momentum = 0.9
init_learning_rate = 0.1
reduction_ratio = 4
batch_size = 128
iteration = 391
# 128 * 391 ~ 50,000, CIFAR-10's training-set size; shrink iteration and test_iteration to match the much smaller trash dataset
test_iteration = 10
total_epochs = 100
def conv_layer(input, filter, kernel, stride=1, padding='SAME', layer_name="conv", activation=True):
    with tf.name_scope(layer_name):
        network = tf.layers.conv2d(inputs=input, use_bias=True, filters=filter, kernel_size=kernel, strides=stride, padding=padding)
        if activation:
            network = Relu(network)
        return network

def Fully_connected(x, units=class_num, layer_name='fully_connected'):
    with tf.name_scope(layer_name):
        return tf.layers.dense(inputs=x, use_bias=True, units=units)

def Relu(x):
    return tf.nn.relu(x)

def Sigmoid(x):
    return tf.nn.sigmoid(x)

def Global_Average_Pooling(x):
    return global_avg_pool(x, name='Global_avg_pooling')

def Max_pooling(x, pool_size=[3,3], stride=2, padding='VALID'):
    return tf.layers.max_pooling2d(inputs=x, pool_size=pool_size, strides=stride, padding=padding)

def Batch_Normalization(x, training, scope):
    with arg_scope([batch_norm],
                   scope=scope,
                   updates_collections=None,
                   decay=0.9,
                   center=True,
                   scale=True,
                   zero_debias_moving_mean=True):
        return tf.cond(training,
                       lambda: batch_norm(inputs=x, is_training=training, reuse=None),
                       lambda: batch_norm(inputs=x, is_training=training, reuse=True))

def Concatenation(layers):
    return tf.concat(layers, axis=3)

def Dropout(x, rate, training):
    return tf.layers.dropout(inputs=x, rate=rate, training=training)
def Evaluate(sess):
    test_acc = 0.0
    test_loss = 0.0
    test_pre_index = 0
    add = 1000
    for it in range(test_iteration):
        test_batch_x = test_x[test_pre_index: test_pre_index + add]
        test_batch_y = test_y[test_pre_index: test_pre_index + add]
        test_pre_index = test_pre_index + add
        test_feed_dict = {
            x: test_batch_x,
            label: test_batch_y,
            learning_rate: epoch_learning_rate,
            training_flag: False
        }
        loss_, acc_ = sess.run([cost, accuracy], feed_dict=test_feed_dict)
        test_loss += loss_
        test_acc += acc_
    test_loss /= test_iteration  # average loss
    test_acc /= test_iteration  # average accuracy
    summary = tf.Summary(value=[tf.Summary.Value(tag='test_loss', simple_value=test_loss),
                                tf.Summary.Value(tag='test_accuracy', simple_value=test_acc)])
    return test_acc, test_loss, summary
class SE_Inception_resnet_v2():
    def __init__(self, x, training):
        self.training = training
        self.model = self.Build_SEnet(x)

    def Stem(self, x, scope):
        with tf.name_scope(scope):
            x = conv_layer(x, filter=32, kernel=[3,3], stride=2, padding='VALID', layer_name=scope+'_conv1')
            x = conv_layer(x, filter=32, kernel=[3,3], padding='VALID', layer_name=scope+'_conv2')
            block_1 = conv_layer(x, filter=64, kernel=[3,3], layer_name=scope+'_conv3')
            split_max_x = Max_pooling(block_1)
            split_conv_x = conv_layer(block_1, filter=96, kernel=[3,3], stride=2, padding='VALID', layer_name=scope+'_split_conv1')
            x = Concatenation([split_max_x, split_conv_x])
            split_conv_x1 = conv_layer(x, filter=64, kernel=[1,1], layer_name=scope+'_split_conv2')
            split_conv_x1 = conv_layer(split_conv_x1, filter=96, kernel=[3,3], padding='VALID', layer_name=scope+'_split_conv3')
            split_conv_x2 = conv_layer(x, filter=64, kernel=[1,1], layer_name=scope+'_split_conv4')
            split_conv_x2 = conv_layer(split_conv_x2, filter=64, kernel=[7,1], layer_name=scope+'_split_conv5')
            split_conv_x2 = conv_layer(split_conv_x2, filter=64, kernel=[1,7], layer_name=scope+'_split_conv6')
            split_conv_x2 = conv_layer(split_conv_x2, filter=96, kernel=[3,3], padding='VALID', layer_name=scope+'_split_conv7')
            x = Concatenation([split_conv_x1, split_conv_x2])
            split_conv_x = conv_layer(x, filter=192, kernel=[3,3], stride=2, padding='VALID', layer_name=scope+'_split_conv8')
            split_max_x = Max_pooling(x)
            x = Concatenation([split_conv_x, split_max_x])
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
            x = Relu(x)
            return x
    def Inception_resnet_A(self, x, scope):
        with tf.name_scope(scope):
            init = x
            split_conv_x1 = conv_layer(x, filter=32, kernel=[1,1], layer_name=scope+'_split_conv1')
            split_conv_x2 = conv_layer(x, filter=32, kernel=[1,1], layer_name=scope+'_split_conv2')
            split_conv_x2 = conv_layer(split_conv_x2, filter=32, kernel=[3,3], layer_name=scope+'_split_conv3')
            split_conv_x3 = conv_layer(x, filter=32, kernel=[1,1], layer_name=scope+'_split_conv4')
            split_conv_x3 = conv_layer(split_conv_x3, filter=48, kernel=[3,3], layer_name=scope+'_split_conv5')
            split_conv_x3 = conv_layer(split_conv_x3, filter=64, kernel=[3,3], layer_name=scope+'_split_conv6')
            x = Concatenation([split_conv_x1, split_conv_x2, split_conv_x3])
            x = conv_layer(x, filter=384, kernel=[1,1], layer_name=scope+'_final_conv1', activation=False)
            x = x * 0.1  # residual scaling, as in Inception-ResNet
            x = init + x
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
            x = Relu(x)
            return x

    def Inception_resnet_B(self, x, scope):
        with tf.name_scope(scope):
            init = x
            split_conv_x1 = conv_layer(x, filter=192, kernel=[1,1], layer_name=scope+'_split_conv1')
            split_conv_x2 = conv_layer(x, filter=128, kernel=[1,1], layer_name=scope+'_split_conv2')
            split_conv_x2 = conv_layer(split_conv_x2, filter=160, kernel=[1,7], layer_name=scope+'_split_conv3')
            split_conv_x2 = conv_layer(split_conv_x2, filter=192, kernel=[7,1], layer_name=scope+'_split_conv4')
            x = Concatenation([split_conv_x1, split_conv_x2])
            x = conv_layer(x, filter=1152, kernel=[1,1], layer_name=scope+'_final_conv1', activation=False)
            # 1154
            x = x * 0.1
            x = init + x
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
            x = Relu(x)
            return x

    def Inception_resnet_C(self, x, scope):
        with tf.name_scope(scope):
            init = x
            split_conv_x1 = conv_layer(x, filter=192, kernel=[1,1], layer_name=scope+'_split_conv1')
            split_conv_x2 = conv_layer(x, filter=192, kernel=[1,1], layer_name=scope+'_split_conv2')
            split_conv_x2 = conv_layer(split_conv_x2, filter=224, kernel=[1,3], layer_name=scope+'_split_conv3')
            split_conv_x2 = conv_layer(split_conv_x2, filter=256, kernel=[3,1], layer_name=scope+'_split_conv4')
            x = Concatenation([split_conv_x1, split_conv_x2])
            x = conv_layer(x, filter=2144, kernel=[1,1], layer_name=scope+'_final_conv2', activation=False)
            # 2048
            x = x * 0.1
            x = init + x
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
            x = Relu(x)
            return x
    def Reduction_A(self, x, scope):
        with tf.name_scope(scope):
            k = 256
            l = 256
            m = 384
            n = 384
            split_max_x = Max_pooling(x)
            split_conv_x1 = conv_layer(x, filter=n, kernel=[3,3], stride=2, padding='VALID', layer_name=scope+'_split_conv1')
            split_conv_x2 = conv_layer(x, filter=k, kernel=[1,1], layer_name=scope+'_split_conv2')
            split_conv_x2 = conv_layer(split_conv_x2, filter=l, kernel=[3,3], layer_name=scope+'_split_conv3')
            split_conv_x2 = conv_layer(split_conv_x2, filter=m, kernel=[3,3], stride=2, padding='VALID', layer_name=scope+'_split_conv4')
            x = Concatenation([split_max_x, split_conv_x1, split_conv_x2])
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
            x = Relu(x)
            return x

    def Reduction_B(self, x, scope):
        with tf.name_scope(scope):
            split_max_x = Max_pooling(x)
            split_conv_x1 = conv_layer(x, filter=256, kernel=[1,1], layer_name=scope+'_split_conv1')
            split_conv_x1 = conv_layer(split_conv_x1, filter=384, kernel=[3,3], stride=2, padding='VALID', layer_name=scope+'_split_conv2')
            split_conv_x2 = conv_layer(x, filter=256, kernel=[1,1], layer_name=scope+'_split_conv3')
            split_conv_x2 = conv_layer(split_conv_x2, filter=288, kernel=[3,3], stride=2, padding='VALID', layer_name=scope+'_split_conv4')
            split_conv_x3 = conv_layer(x, filter=256, kernel=[1,1], layer_name=scope+'_split_conv5')
            split_conv_x3 = conv_layer(split_conv_x3, filter=288, kernel=[3,3], layer_name=scope+'_split_conv6')
            split_conv_x3 = conv_layer(split_conv_x3, filter=320, kernel=[3,3], stride=2, padding='VALID', layer_name=scope+'_split_conv7')
            x = Concatenation([split_max_x, split_conv_x1, split_conv_x2, split_conv_x3])
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
            x = Relu(x)
            return x
    def Squeeze_excitation_layer(self, input_x, out_dim, ratio, layer_name):
        with tf.name_scope(layer_name):
            # Squeeze: global average pooling down to one value per channel
            squeeze = Global_Average_Pooling(input_x)
            # Excitation: bottleneck FC pair; integer division so units is an int in Python 3
            excitation = Fully_connected(squeeze, units=out_dim // ratio, layer_name=layer_name+'_fully_connected1')
            excitation = Relu(excitation)
            excitation = Fully_connected(excitation, units=out_dim, layer_name=layer_name+'_fully_connected2')
            excitation = Sigmoid(excitation)
            # Recalibration: broadcast the channel weights and rescale the input
            excitation = tf.reshape(excitation, [-1, 1, 1, out_dim])
            scale = input_x * excitation
            return scale
    def Build_SEnet(self, input_x):
        input_x = tf.pad(input_x, [[0, 0], [32, 32], [32, 32], [0, 0]])
        # size 32 -> 96
        print(np.shape(input_x))
        # only cifar10 architecture
        x = self.Stem(input_x, scope='stem')

        for i in range(5):
            x = self.Inception_resnet_A(x, scope='Inception_A'+str(i))
            channel = int(np.shape(x)[-1])
            x = self.Squeeze_excitation_layer(x, out_dim=channel, ratio=reduction_ratio, layer_name='SE_A'+str(i))

        x = self.Reduction_A(x, scope='Reduction_A')
        channel = int(np.shape(x)[-1])
        x = self.Squeeze_excitation_layer(x, out_dim=channel, ratio=reduction_ratio, layer_name='SE_A')

        for i in range(10):
            x = self.Inception_resnet_B(x, scope='Inception_B'+str(i))
            channel = int(np.shape(x)[-1])
            x = self.Squeeze_excitation_layer(x, out_dim=channel, ratio=reduction_ratio, layer_name='SE_B'+str(i))

        x = self.Reduction_B(x, scope='Reduction_B')
        channel = int(np.shape(x)[-1])
        x = self.Squeeze_excitation_layer(x, out_dim=channel, ratio=reduction_ratio, layer_name='SE_B')

        for i in range(5):
            x = self.Inception_resnet_C(x, scope='Inception_C'+str(i))
            channel = int(np.shape(x)[-1])
            x = self.Squeeze_excitation_layer(x, out_dim=channel, ratio=reduction_ratio, layer_name='SE_C'+str(i))

        # channel = int(np.shape(x)[-1])
        # x = self.Squeeze_excitation_layer(x, out_dim=channel, ratio=reduction_ratio, layer_name='SE_C')

        x = Global_Average_Pooling(x)
        x = Dropout(x, rate=0.2, training=self.training)
        x = flatten(x)
        x = Fully_connected(x, layer_name='final_fully_connected')
        return x
train_x, train_y, test_x, test_y = prepare_data()
train_x, test_x = color_preprocessing(train_x, test_x)

# image_size, img_channels and class_num come from the modified cifar10.py above
x = tf.placeholder(tf.float32, shape=[None, image_size, image_size, img_channels])
label = tf.placeholder(tf.float32, shape=[None, class_num])
training_flag = tf.placeholder(tf.bool)
learning_rate = tf.placeholder(tf.float32, name='learning_rate')

logits = SE_Inception_resnet_v2(x, training=training_flag).model
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=label, logits=logits))

# L2 regularization over all trainable variables, folded into the training objective
l2_loss = tf.add_n([tf.nn.l2_loss(var) for var in tf.trainable_variables()])
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=momentum, use_nesterov=True)
train = optimizer.minimize(cost + l2_loss * weight_decay)

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(label, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver(tf.global_variables())
with tf.Session() as sess:
    ckpt = tf.train.get_checkpoint_state('./model')
    if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path):
        saver.restore(sess, ckpt.model_checkpoint_path)
    else:
        sess.run(tf.global_variables_initializer())

    summary_writer = tf.summary.FileWriter('./logs', sess.graph)
    epoch_learning_rate = init_learning_rate
    for epoch in range(1, total_epochs + 1):
        if epoch % 30 == 0:
            epoch_learning_rate = epoch_learning_rate / 10

        pre_index = 0
        train_acc = 0.0
        train_loss = 0.0
        for step in range(1, iteration + 1):
            # the original hard-coded 50000 (CIFAR-10's size); use the actual training-set size instead
            if pre_index + batch_size < len(train_x):
                batch_x = train_x[pre_index: pre_index + batch_size]
                batch_y = train_y[pre_index: pre_index + batch_size]
            else:
                batch_x = train_x[pre_index:]
                batch_y = train_y[pre_index:]

            batch_x = data_augmentation(batch_x)

            train_feed_dict = {
                x: batch_x,
                label: batch_y,
                learning_rate: epoch_learning_rate,
                training_flag: True
            }

            _, batch_loss = sess.run([train, cost], feed_dict=train_feed_dict)
            batch_acc = accuracy.eval(feed_dict=train_feed_dict)

            train_loss += batch_loss
            train_acc += batch_acc
            pre_index += batch_size

        train_loss /= iteration  # average loss
        train_acc /= iteration  # average accuracy

        train_summary = tf.Summary(value=[tf.Summary.Value(tag='train_loss', simple_value=train_loss),
                                          tf.Summary.Value(tag='train_accuracy', simple_value=train_acc)])

        test_acc, test_loss, test_summary = Evaluate(sess)

        summary_writer.add_summary(summary=train_summary, global_step=epoch)
        summary_writer.add_summary(summary=test_summary, global_step=epoch)
        summary_writer.flush()

        line = "epoch: %d/%d, train_loss: %.4f, train_acc: %.4f, test_loss: %.4f, test_acc: %.4f \n" % (
            epoch, total_epochs, train_loss, train_acc, test_loss, test_acc)
        print(line)

        with open('logs.txt', 'a') as f:
            f.write(line)

        saver.save(sess=sess, save_path='./model/Inception_resnet_v2.ckpt')
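Once training has finished, classifying a single photo is just a matter of restoring the checkpoint and pushing one preprocessed image through logits. A minimal sketch, assuming the graph-building code above has already run in the same process; note that whatever normalization color_preprocessing applied at training time should also be applied to the image, which is omitted here for brevity:

from PIL import Image

label_names = ['cardboard', 'glass', 'metal', 'trash', 'paper', 'plastic']

def classify_image(image_path):
    img = Image.open(image_path).convert('RGB').resize((image_size, image_size))
    img = np.asarray(img, dtype=np.float32)[np.newaxis]  # shape [1, H, W, C]
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint('./model'))
        probs = sess.run(tf.nn.softmax(logits), feed_dict={x: img, training_flag: False})
    return label_names[int(np.argmax(probs))]

print(classify_image('e:/test/glass/glass1.jpg'))  # hypothetical sample path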
Honestly, using SENet for garbage classification is overkill, like using a sledgehammer to crack a nut, but it does give you a real feel for just how powerful the model is.