Training set: 60000 grayscale images, 28*28 in size, 10 classes (digits 0-9)
Test set: 10000 grayscale images, 28*28 in size
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test: shapes are (60000, 28, 28) and (10000, 28, 28), respectively.
y_train, y_test: digit labels (0-9), with shapes (60000,) and (10000,), respectively.
Download: http://yann.lecun.com/exdb/mnist/
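As a quick sanity check, here is a minimal sketch (assuming matplotlib is available) that prints the shapes and displays one training image with its label:
import matplotlib.pyplot as plt
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)  # expected: (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # expected: (10000, 28, 28) (10000,)

# Show the first training image together with its label
plt.imshow(x_train[0], cmap='gray')
plt.title("label: " + str(y_train[0]))
plt.show()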
CIFAR-10 is drawn from the 80 million tiny images dataset and is summarized below:
Total | Color | Image size | Classes | Training set | Test set
60000 | RGB | 32*32 | 10 | 50000 | 10000
The dataset is divided into 5 training batches and 1 test batch. The test batch contains exactly 1000 randomly selected images from each class (10000 images in total); the training batches contain the remaining images in random order, so the images of a class are not split evenly across batches, but the training batches together hold 50000 images (5000 per class). The classes are completely mutually exclusive.
Download: http://www.cs.toronto.edu/~kriz/cifar.html
After extraction, the archive contains data_batch_1 through data_batch_5, test_batch, and batches.meta.
The following Python 3 code opens these files; each batch file is converted to a dictionary:
def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        dict_ = pickle.load(fo, encoding='bytes')
    return dict_
-----
data = unpickle('test_batch')
data.keys() # dict_keys([b'batch_label', b'labels', b'data', b'filenames'])
data[b'data'][0] # array([158, 159, 165, ..., 124, 129, 110], dtype=uint8)
The batches.meta file contains [b'num_cases_per_batch', b'label_names', b'num_vis']; label_names holds the English names of the ten classes.
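For example, the unpickle function above can read batches.meta and print the class names:
meta = unpickle('batches.meta')
label_names = [name.decode('utf-8') for name in meta[b'label_names']]
print(label_names)
# ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']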
Downloading the data in code:
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test: shapes are (50000, 32, 32, 3) and (10000, 32, 32, 3) with Keras's default channels_last image format (channels_first gives (50000, 3, 32, 32) and (10000, 3, 32, 32)).
y_train, y_test: labels in the range 0-9, with shapes (50000, 1) and (10000, 1).
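A minimal sketch (assuming the default channels_last format and matplotlib) that verifies the shapes and displays one training image with its class name:
import matplotlib.pyplot as plt
from keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape, y_train.shape)  # expected: (50000, 32, 32, 3) (50000, 1)

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
plt.imshow(x_train[0])
plt.title(class_names[int(y_train[0][0])])
plt.show()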
The following script converts the first 100 CIFAR-10 training images into PNG files: the three color channels are saved separately and then merged back into an RGB image named after its class.
import numpy as np
from PIL import Image
import pickle
import os
import matplotlib.image as plimg

CHANNEL = 3
WIDTH = 32
HEIGHT = 32

data = []
labels = []
classification = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

# The extracted dataset folder is assumed to sit next to this script
for i in range(5):
    with open("./cifar-10-batches-py/data_batch_" + str(i + 1), mode='rb') as file:
        data_dict = pickle.load(file, encoding='bytes')
        data += list(data_dict[b'data'])
        labels += list(data_dict[b'labels'])

img = np.reshape(data, [-1, CHANNEL, WIDTH, HEIGHT])

# Create the output folders here (they can also be created manually)
data_path = "./pic3/"       # merged RGB images
channel_path = "./pic4/"    # single-channel images
for path in (data_path, channel_path):
    if not os.path.exists(path):
        os.makedirs(path)

for i in range(100):
    r = img[i][0]
    g = img[i][1]
    b = img[i][2]
    # Save each color channel as a separate image
    plimg.imsave(channel_path + str(i) + "r" + ".png", r)
    plimg.imsave(channel_path + str(i) + "g" + ".png", g)
    plimg.imsave(channel_path + str(i) + "b" + ".png", b)
    # Merge the three channels back into an RGB image
    ir = Image.fromarray(r)
    ig = Image.fromarray(g)
    ib = Image.fromarray(b)
    rgb = Image.merge("RGB", (ir, ig, ib))
    name = "img-" + str(i) + "-" + classification[labels[i]] + ".png"
    rgb.save(data_path + name, "PNG")
CIFAR-100 has 100 classes, each containing 600 images: 500 training images and 100 test images per class. The 100 classes are grouped into 20 superclasses. Each image carries a "fine" label (the class it belongs to) and a "coarse" label (the superclass it belongs to).
Download: http://www.cs.toronto.edu/~kriz/cifar.html
def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        dict_ = pickle.load(fo, encoding='bytes')
    return dict_

data = unpickle('./cifar-100-python/train')
data.keys()  # includes b'data', b'fine_labels', b'coarse_labels'
# -*- coding:utf-8 -*-
import os
import pickle as p
import numpy as np
import matplotlib.image as plimg
from PIL import Image


def load_CIFAR_batch(filename):
    """Load the CIFAR-100 training batch."""
    with open(filename, 'rb') as f:
        datadict = p.load(f, encoding='bytes')
        # For a CIFAR-10 batch, use b'data' / b'labels' and reshape to (10000, 3, 32, 32)
        X = datadict[b'data']
        # Use the fine (class) labels; the coarse (superclass) labels are under b'coarse_labels'
        Y = datadict[b'fine_labels']
        X = X.reshape(50000, 3, 32, 32)
        Y = np.array(Y)
        return X, Y


if __name__ == "__main__":
    # imgX, imgY = load_CIFAR_batch("./cifar-10-batches-py/data_batch_1")
    imgX, imgY = load_CIFAR_batch("./cifar-100-python/train")
    print(imgX.shape)
    print("Saving images:")
    # Create the output folders if they do not exist yet
    for path in ("./pic1/", "./pic2/"):
        if not os.path.exists(path):
            os.makedirs(path)
    for i in range(imgX.shape[0]):
        imgs = imgX[i]
        if i < 100:  # only the first 100 images; drop this check to iterate over all images (this may take a while)
            img0 = imgs[0]
            img1 = imgs[1]
            img2 = imgs[2]
            i0 = Image.fromarray(img0)
            i1 = Image.fromarray(img1)
            i2 = Image.fromarray(img2)
            img = Image.merge("RGB", (i0, i1, i2))
            name = "img" + str(i) + ".png"
            img.save("./pic1/" + name, "PNG")  # ./pic1/ holds the merged RGB images
            for j in range(imgs.shape[0]):
                img = imgs[j]
                name = "img" + str(i) + str(j) + ".jpg"
                print("Saving image " + name)
                plimg.imsave("./pic2/" + name, img)  # ./pic2/ holds the separated single-channel images
    print("Done.")
Downloading the data in code:
from keras.datasets import cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data(label_mode='fine')
x_train, x_test: shapes are (50000, 32, 32, 3) and (10000, 32, 32, 3) with Keras's default channels_last image format (channels_first gives (50000, 3, 32, 32) and (10000, 3, 32, 32)).
y_train, y_test: labels in the range 0-99, with shapes (50000, 1) and (10000, 1).
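Passing label_mode='coarse' loads the 20 superclass labels instead of the 100 fine labels; a minimal sketch:
from keras.datasets import cifar100

(x_train, y_train), (x_test, y_test) = cifar100.load_data(label_mode='coarse')
print(x_train.shape, y_train.shape)  # expected: (50000, 32, 32, 3) (50000, 1)
print(y_train.max())                 # coarse labels range from 0 to 19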
SVHN (Street View House Numbers) is a real-world image dataset for developing machine learning and object recognition algorithms, with minimal requirements on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but it contains an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved real-world problem: recognizing digits and numbers in natural scene images. SVHN is obtained from house numbers in Google Street View images.
SVHN is a dataset for recognizing Arabic digits in images; the images come from real-world house numbers, and each image contains a group of digits 0-9. The training set contains 73257 digits, the test set contains 26032 digits, and there are 531131 additional digits.
Download: http://ufldl.stanford.edu/housenumbers/
For convenient conversion, you can download train_32x32.mat and test_32x32.mat. Each .mat file contains two variables: X is a 4-D matrix with dimensions (32, 32, 3, n), where n is the number of samples, and y is the label variable.
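A minimal sketch for reading the .mat files with scipy.io.loadmat (assuming scipy is installed and train_32x32.mat has been downloaded into the current directory); note that in these files the digit 0 is stored with label 10:
import numpy as np
from scipy.io import loadmat

mat = loadmat('train_32x32.mat')
X = mat['X']            # shape (32, 32, 3, n)
y = mat['y'].flatten()  # shape (n,); the digit 0 is stored as label 10
X = np.transpose(X, (3, 0, 1, 2))  # reorder to (n, 32, 32, 3) for convenience
print(X.shape, y.shape)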
Training set: 60000 grayscale images, 28*28 in size, 10 classes (labels 0-9, each corresponding to a clothing category)
Test set: 10000 grayscale images, 28*28 in size
Each image is a 28*28 pixel array; each pixel value is an 8-bit unsigned integer (uint8) between 0 and 255, stored as a three-dimensional NDArray whose last dimension is the number of channels. Since the images are grayscale, the channel count is 1.
from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test: shapes are (60000, 28, 28) and (10000, 28, 28), respectively.
y_train, y_test: integer labels (0-9), with shapes (60000,) and (10000,), respectively.
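The labels 0-9 map to clothing categories; a minimal sketch that displays one training image with its category name (the names below are taken from the Fashion-MNIST repository, listed in label order):
import matplotlib.pyplot as plt
from keras.datasets import fashion_mnist

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
plt.imshow(x_train[0], cmap='gray')
plt.title(class_names[int(y_train[0])])
plt.show()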
Download:
Training set images (60000): http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Training set labels (60000): http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
Test set images (10000): http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
Test set labels (10000): http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz