Recently I needed an extra experiment for a paper to make it more convincing, and I picked the classic CIFAR-10 classification dataset. Since it's only an auxiliary experiment, not the main one, I figured I'd be lazy and just adapt someone else's code... How naive of me. Bugs are endless; just because someone else can run it doesn't mean you can!!!
First, I downloaded the data from the official site: http://www.cs.toronto.edu/~kriz/cifar.html
Download whichever version fits your needs. If you can't read the English, just use the translate button in the top-right corner of Chrome, okay? I won't bother with screenshots!
Then I searched around online for something that would spit out the accuracy without me having to think, and I found this:
https://blog.csdn.net/weixin_42595515/article/details/82496708#commentsedit — the post includes a brief walkthrough of the code; many thanks to the author for sharing.
The code contains a few small errors, so if you want to use it like I did, read on.
Here are the problems I ran into:
1. Data reading problem
Description: I had mixed up the dataset myself... and of course, confident me was convinced I was fine and the code was wrong. Sorry, code.
It threw this error: TypeError: unhashable type: 'dict'
Fix: take another look at your dataset and make sure you downloaded the right one. Different datasets differ in format, so the reading code differs too; the CIFAR download page linked above gives reading code for each version, so read it carefully. (The snippet below is the Python 3 version.)
def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        dict = pickle.load(fo, encoding='bytes')
    return dict
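If you're not sure you grabbed the right files, print what the loaded dict actually contains before going any further. A minimal sanity check, assuming the standard Python-version batch files (the path below is just an example, point it at your own download):

batch = unpickle('cifar-10-batches-py/data_batch_1')  # example path, adjust to yours
print(batch.keys())           # with encoding='bytes' the keys are byte strings:
                              # dict_keys([b'batch_label', b'labels', b'data', b'filenames'])
print(batch[b'data'].shape)   # (10000, 3072): 10000 images, 32*32*3 flattened
print(len(batch[b'labels']))  # 10000 integer labels in 0..9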
2. One-hot encoding problem
Description: the error was something like IndexError: too many indices for array
Fix: change it to the line below (yes, all it took was one extra pair of parentheses! It's a dimension issue)
onehot_labels = np.zeros((n_sample, n_class))
I also found a very good explanation of one-hot encoding: https://www.cnblogs.com/smartwhite/p/8950600.html
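To see why that one pair of parentheses matters: np.zeros takes the shape as a single tuple, and if you accidentally build a 1-D array you can't index it on two axes. A small self-contained demo:

import numpy as np

labels = [0, 2, 1]
n_sample, n_class = len(labels), max(labels) + 1

ok = np.zeros((n_sample, n_class))      # shape (3, 3): one row per sample
ok[np.arange(n_sample), labels] = 1     # indexing on two axes works
print(ok)

bad = np.zeros(n_sample)                # 1-D array of length 3
# bad[np.arange(n_sample), labels] = 1  # IndexError: too many indices for array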
3. A bunch of variable names were misspelled; just follow the error messages and fix them.
4. A more confusing one (I got it working with the change below, but I'm not certain it's right; if I remember correctly it goes like this)
Description: ValueError: Shape must be rank 2 but is rank 1 for 'MatMul_2' (op: 'MatMul') with input shapes: [?,192], [192].
Fix: this error comes from using tf.matmul incorrectly; for the function's usage see
https://blog.csdn.net/u010591976/article/details/82252522
In this program, change line 112 to:
fc2 = tf.add(tf.matmul(fc1, W_conv['fc2']), b_conv['fc2'])
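For context: tf.matmul needs both operands to be rank-2 matrices, while a bias like b_conv['fc2'] is a rank-1 vector of shape [192], so it can only be broadcast-added, never matrix-multiplied. A minimal TF 1.x illustration using the shapes from this network:

import tensorflow as tf

fc1 = tf.placeholder(tf.float32, [None, 384])     # activations: rank 2
W = tf.Variable(tf.truncated_normal([384, 192]))  # weight matrix: rank 2
b = tf.Variable(tf.zeros([192]))                  # bias: rank 1

fc2 = tf.add(tf.matmul(fc1, W), b)       # OK: matmul of two matrices, then broadcast add
# fc2 = tf.matmul(tf.matmul(fc1, W), b)  # ValueError: Shape must be rank 2 but is rank 1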
The program is still running. I never got the GPU set up properly; I messed around with it a few days ago, broke things, and had to reinstall the OS, so that's on me. This is just a blog post written out of boredom; I'll add the TensorBoard part later.
(If you're a beginner who doesn't know where things went wrong and can't follow the data flow, try adding print statements wherever you suspect a problem and rule things out step by step.)
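In graph-mode TF 1.x you don't even need a session for this: every tensor carries a static shape you can print while building the graph, which is usually enough to spot dimension bugs. A tiny self-contained sketch:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 3072])
x_image = tf.reshape(x, [-1, 32, 32, 3])
print(x_image)              # Tensor("Reshape:0", shape=(?, 32, 32, 3), dtype=float32)
print(x_image.get_shape())  # (?, 32, 32, 3) -- static shape, no sess.run needed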
Here's the code:
# coding:utf-8
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import _pickle as pickle
import time
def unpickle(file):
    with open(file, 'rb') as fo:
        dict = pickle.load(fo, encoding='latin1')
    return dict
def onehot(labels):
    # one-hot encode integer labels into an (n_sample, n_class) matrix
    n_sample = len(labels)
    n_class = max(labels) + 1
    onehot_labels = np.zeros((n_sample, n_class))
    onehot_labels[np.arange(n_sample), labels] = 1
    return onehot_labels
# training set
data1 = unpickle('D:/paper/code/mycifar10/cifar-10-bataset/data_batch_1')
data2 = unpickle('D:/paper/code/mycifar10/cifar-10-bataset/data_batch_2')
data3 = unpickle('D:/paper/code/mycifar10/cifar-10-bataset/data_batch_3')
data4 = unpickle('D:/paper/code/mycifar10/cifar-10-bataset/data_batch_4')
data5 = unpickle('D:/paper/code/mycifar10/cifar-10-bataset/data_batch_5')
X_train = np.concatenate((data1['data'], data2['data'], data3['data'], data4['data'], data5['data']), axis=0)
y_train = np.concatenate((data1['labels'], data2['labels'], data3['labels'], data4['labels'], data5['labels']), axis=0)
y_train = onehot(y_train)
# test set
test = unpickle('D:/paper/code/mycifar10/cifar-10-bataset/test_batch')
X_test = test['data'][:5000, :]
y_test = onehot(test['labels'])[:5000, :]
print("Training dataset shape:", X_train.shape)
print('Training labels shape:', y_train.shape)
print('Testing dataset shape:', X_test.shape)
print('Testing labels shape:', y_test.shape)
learning_rate= 1e-3
training_iters = 200
batch_size = 50
display_step = 5
n_features = 3072 #32*32*3
n_classes = 10
n_fc1 = 384
n_fc2 = 192
# build the model
x = tf.placeholder(tf.float32, [None, n_features])
y = tf.placeholder(tf.float32, [None, n_classes])
W_conv = {
'conv1' : tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.0001)),
'conv2' : tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.01)),
'fc1' : tf.Variable(tf.truncated_normal([8*8*64, n_fc1], stddev=0.1)),
'fc2' : tf.Variable(tf.truncated_normal([n_fc1, n_fc2], stddev=0.1)),
'fc3' : tf.Variable(tf.truncated_normal([n_fc2, n_classes],stddev=0.1))
}
b_conv = {
'conv1' : tf.Variable(tf.constant(0.0, dtype=tf.float32, shape=[32])),
'conv2' : tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[64])),
'fc1' : tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[n_fc1])),
'fc2' : tf.Variable(tf.constant(0.1, dtype=tf.float32, shape=[n_fc2])),
'fc3' : tf.Variable(tf.constant(0.0, dtype=tf.float32, shape=[n_classes]))
}
x_image = tf.reshape(x, [-1, 32, 32, 3])
# convolution layer 1
conv1 = tf.nn.conv2d(x_image, W_conv['conv1'], strides=[1, 1, 1, 1], padding='SAME')
conv1 = tf.nn.bias_add(conv1, b_conv['conv1'])
conv1 = tf.nn.relu(conv1)
# pooling layer 1
pool1 = tf.nn.avg_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
# LRN layer (Local Response Normalization)
norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)
# convolution layer 2
conv2 = tf.nn.conv2d(norm1, W_conv['conv2'], strides=[1, 1, 1, 1], padding='SAME')
conv2 = tf.nn.bias_add(conv2, b_conv['conv2'])
conv2 = tf.nn.relu(conv2)
# LRN layer
norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001/9.0, beta=0.75)
# pooling layer 2
pool2 = tf.nn.avg_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
reshape = tf.reshape(pool2, [-1, 8*8*64])
# fully connected layer 1
fc1 = tf.add(tf.matmul(reshape, W_conv['fc1']), b_conv['fc1'])
fc1 = tf.nn.relu(fc1)
# fully connected layer 2
fc2 = tf.add(tf.matmul(fc1, W_conv['fc2']), b_conv['fc2'])
fc2 = tf.nn.relu(fc2)
# fully connected layer 3, i.e. the classification layer (raw logits)
fc3 = tf.add(tf.matmul(fc2, W_conv['fc3']), b_conv['fc3'])
# define the loss; softmax_cross_entropy_with_logits applies softmax itself,
# so feed it the raw logits, not softmax outputs
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=fc3, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)
# evaluate the model
correct_pred = tf.equal(tf.argmax(fc3, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    c = []
    total_batch = int(X_train.shape[0] / batch_size)
    start_time = time.time()
    for i in range(training_iters):
        for batch in range(total_batch):
            batch_x = X_train[batch*batch_size : (batch+1)*batch_size, :]
            batch_y = y_train[batch*batch_size : (batch+1)*batch_size, :]
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        # accuracy on the last batch of this epoch
        acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
        print(acc)
        c.append(acc)
        end_time = time.time()
        print('time:', (end_time - start_time))
        start_time = end_time
        print("-------------- epoch %d is finished ------------" % i)
    print("Optimization Finished!")
    # TEST
    test_acc = sess.run(accuracy, feed_dict={x: X_test, y: y_test})
    print("Testing Accuracy:", test_acc)
    plt.plot(c)
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')  # c holds per-epoch accuracy, not cost
    plt.title('lr=%f, ti=%d, bs=%d, acc=%f' % (learning_rate, training_iters, batch_size, test_acc))
    plt.tight_layout()
    plt.savefig('cnn-tf-cifar10-%s.png' % test_acc, dpi=200)