Contents
CIFAR-10
Obtaining and organizing the dataset
Downloading the dataset
Organizing the dataset
A more general way to organize the dataset
Image augmentation
Reading the dataset
Characteristics of torchvision.datasets.ImageFolder()
Defining the model
Defining the training function
Training and validating the model
Classifying the test set and submitting results
(Supplement) Hyperparameter tuning
Dog Breed Identification
Characteristics
Obtaining the dataset
Organizing the dataset
Converting string class labels to integer class labels
Splitting out a validation set
A custom Dataset class
Image augmentation
Reading the dataset
Defining the model
Defining the training function
Training and validating the model
Classifying the test set and submitting to Kaggle
Functions worth noting
pd.read_csv
torchvision.datasets.ImageFolder()
collections.Counter()
** and *
sort
Series.apply()
StratifiedShuffleSplit and train_test_split
nn.CrossEntropyLoss
sort and sorted
transpose in torch vs. numpy
The exercise itself is not hard; the highlight is the handling of the dataset, which is worth studying.
The competition page is CIFAR-10 - Object Recognition in Images | Kaggle.
Original tutorial: 13.13. 实战 Kaggle 比赛:图像分类 (CIFAR-10) — 动手学深度学习 (Dive into Deep Learning) 2.0.0-beta1 documentation.
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import pandas as pd
import os
import collections
import shutil
import math
from d2l import torch as d2l
data_dir = './data/cifar-10'  # forward slashes avoid backslash-escape issues and also work outside Windows
The competition dataset is split into a training set of 50000 images and a test set of 300000 images. In the test set, 10000 images are used for evaluation; the remaining 290000 are never evaluated and are included only to discourage labeling the test set by hand and submitting those labels. The images in both sets are png files, 32 pixels high and wide, with three color channels (RGB). They cover 10 categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.
After logging in to Kaggle, click the “Data” tab on the CIFAR-10 competition page and download the dataset via the “Download All” button. After unzipping the downloaded file in ../data, you will find the entire dataset in the following paths:
../data/cifar-10/train/[1-50000].png
../data/cifar-10/test/[1-300000].png
../data/cifar-10/trainLabels.csv
../data/cifar-10/sampleSubmission.csv
The train and test folders contain the training and test images respectively, trainLabels.csv holds the labels of the training images, and sampleSubmission.csv is a sample submission file. Every image in the train folder is named with a plain numeric id (the screenshot is omitted here). Be careful when reading the images: image 1.png, for example, corresponds to row 0 of the label table.
Reading trainLabels.csv with pandas:
          id       label
0          1        frog
1          2       truck
2          3       truck
...      ...         ...
49998  49999  automobile
49999  50000  automobile
The dataset only contains train and test sets, but training usually also involves a validation set, so we need to split one out. When working on a platform like Google Colab, we often compress and upload the training, test, and validation sets, so it is sometimes necessary to split them into separate folders on disk.
Define the reorg_train_valid function to split the validation set out of the original training set. Its parameter valid_ratio is the ratio of the number of validation samples to the number of original training samples. More concretely, let n be the number of images in the class with the fewest samples and r the ratio; the validation set takes max(⌊nr⌋, 1) images per class. With valid_ratio=0.1, for example, since the original training set has 50000 images, 45000 images end up in train_valid_test/train for training, and the remaining 5000 images form the validation set in train_valid_test/valid. After organizing the dataset, images of the same class are placed in the same folder.
def copyfile(filename, target_dir):
    """Copy a file to the target directory"""
    os.makedirs(target_dir, exist_ok=True)  # create the folder if it does not exist
    shutil.copy(filename, target_dir)
def reorg_train_valid(data_dir, labels, valid_ratio):
    """Split the validation set out of the original training set"""
    # Number of samples in the least-populated class; labels is a Series, which Counter() can count
    n = collections.Counter(labels).most_common()[-1][1]
    # Number of validation samples per class
    n_valid_per_label = max(1, math.floor(n * valid_ratio))
    label_count = {}
    for train_file in os.listdir(os.path.join(data_dir, 'train')):
        idx = train_file.split('.')[0]  # the image name; here a number in 1-50000
        label = labels[int(idx)-1]  # the label that the csv file assigns to this number
        fname = os.path.join(data_dir, 'train', train_file)  # full path of the image
        # Create one folder per label and copy every image into the matching
        # folder; this layout is what ImageFolder expects later
        copyfile(fname, os.path.join(data_dir, 'train_valid_test',
                                     'train_valid', label))
        # Each class folder under valid receives exactly n_valid_per_label samples
        if label not in label_count or label_count[label] < n_valid_per_label:
            copyfile(fname, os.path.join(data_dir, 'train_valid_test',
                                         'valid', label))
            label_count[label] = label_count.get(label, 0) + 1
        else:
            copyfile(fname, os.path.join(data_dir, 'train_valid_test',
                                         'train', label))
    return n_valid_per_label
def reorg_test(data_dir):
"""在预测期间整理测试集,以方便读取"""
for test_file in os.listdir(os.path.join(data_dir, 'test')):
copyfile(os.path.join(data_dir, 'test', test_file),
os.path.join(data_dir, 'train_valid_test', 'test',
'unknown'))
Read the labels and run the functions above:
def reorg_cifar10_data(data_dir, valid_ratio):
labels = pd.read_csv(os.path.join(data_dir, 'trainLabels.csv'))['label']
reorg_train_valid(data_dir, labels, valid_ratio)
reorg_test(data_dir)
batch_size = 64
valid_ratio = 0.1
reorg_cifar10_data(data_dir, valid_ratio)
Running this code creates four folders: test, train (45000 samples), valid (5000 samples), and train_valid, where train_valid is the union of train and valid. The train_valid folder exists because, after the validation set has been used to pick the best hyperparameters, we retrain once on train_valid to obtain the final model.
Each of these folders contains 10 class subfolders, as torchvision.datasets.ImageFolder() requires. Even the unclassified test folder must contain one subfolder (unknown) acting as a class folder, otherwise torchvision.datasets.ImageFolder() raises an error: its find_classes() function reads the folder names under the root folder to build the class list, and without that list it fails.
To create class subfolders under the training and validation folders, we need the label of each sample while reading its image file name. However, pandas usually selects data by the table's index or row number, and I did not find a built-in way to look up one column by the values of another. The previous section used a trick: the sample names (image file names) are numbered from 1, so the number minus one is the row number in the table. But sometimes the file names are irregular, and then a more general method of mapping one column's values to another's is needed. The read_csv_labels() function below does exactly that.
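(As an aside, pandas can in fact do this lookup by turning the id column into the index; a minimal sketch, assuming the same trainLabels.csv as above:)
labels_by_id = pd.read_csv(os.path.join(data_dir, 'trainLabels.csv')).set_index('id')['label']
print(labels_by_id[1])  # 'frog' -- the label of 1.png, looked up by id rather than row number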
def read_csv_labels(fname):
    """Read fname to return a filename-to-label dictionary"""
    with open(fname, 'r') as f:
        # Skip the header line (the column names)
        lines = f.readlines()[1:]
    tokens = [l.rstrip().split(',') for l in lines]
    return dict(((name, label) for name, label in tokens))
def copyfile(filename, target_dir):
"""将文件复制到目标目录"""
os.makedirs(target_dir, exist_ok=True)
shutil.copy(filename, target_dir)
def reorg_train_valid(data_dir, labels, valid_ratio):
"""将验证集从原始的训练集中拆分出来"""
    # Number of samples in the least-populated class of the training set
n = collections.Counter(labels.values()).most_common()[-1][1]
    # Number of validation samples per class
n_valid_per_label = max(1, math.floor(n * valid_ratio))
label_count = {}
for train_file in os.listdir(os.path.join(data_dir, 'train')):
label = labels[train_file.split('.')[0]]
fname = os.path.join(data_dir, 'train', train_file)
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'train_valid', label))
if label not in label_count or label_count[label] < n_valid_per_label:
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'valid', label))
label_count[label] = label_count.get(label, 0) + 1
else:
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'train', label))
return n_valid_per_label
def reorg_test(data_dir):
"""在预测期间整理测试集,以方便读取"""
for test_file in os.listdir(os.path.join(data_dir, 'test')):
copyfile(os.path.join(data_dir, 'test', test_file),
os.path.join(data_dir, 'train_valid_test', 'test',
'unknown'))
read_csv_labels returns a dictionary, so a label can be looked up by name: label = labels[train_file.split('.')[0]]. And labels.values() is a view of the label strings, which Counter can count directly.
def reorg_cifar10_data(data_dir, valid_ratio):
labels = read_csv_labels(os.path.join(data_dir, 'trainLabels.csv'))
reorg_train_valid(data_dir, labels, valid_ratio)
reorg_test(data_dir)
batch_size = 128
valid_ratio = 0.1
reorg_cifar10_data(data_dir, valid_ratio)
Image augmentation is used to combat overfitting; for example, images can be randomly flipped horizontally during training. At test time, only normalization is applied, to remove randomness from the evaluation.
This example uses a residual network. Because ResNet applies global average pooling before its final fully connected layer, in PyTorch its input can be of any size (even 1×1 pixels). DataLoader, however, requires all samples in a batch to have the same size. The images in this dataset are all 32×32, so they could be used directly; but for augmentation, each image is first enlarged to 40×40, randomly cropped, and rescaled back to 32×32. For other data the initial Resize(40) is therefore not required: RandomResizedCrop can crop to the target size in one step (see the sketch after the transforms below).
transform_train = torchvision.transforms.Compose([
    # Enlarge the image to a 40-pixel square in height and width
    torchvision.transforms.Resize(40),
    # Randomly crop a square with area 0.64 to 1 times the original
    # and aspect ratio 1.0, then rescale it to a 32x32 square
    torchvision.transforms.RandomResizedCrop(32, scale=(0.64, 1.0),
                                             ratio=(1.0, 1.0)),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor(),
    # Normalize each channel of the image
    torchvision.transforms.Normalize([0.4914, 0.4822, 0.4465],
                                     [0.2023, 0.1994, 0.2010])])
transforms_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.4914, 0.4822, 0.4465],
[0.2023, 0.1994, 0.2010])
])
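As mentioned above, the initial Resize(40) is not strictly necessary. A one-step alternative (a sketch, not what the tutorial uses; the crop then samples from the original 32×32 image rather than the enlarged one):
transform_train_alt = torchvision.transforms.Compose([
    # crop and rescale directly to 32x32, skipping the enlargement step
    torchvision.transforms.RandomResizedCrop(32, scale=(0.64, 1.0),
                                             ratio=(1.0, 1.0)),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize([0.4914, 0.4822, 0.4465],
                                     [0.2023, 0.1994, 0.2010])])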
Read the dataset of raw images; each sample is an image plus a label. Note that the validation set, used to evaluate the model during hyperparameter tuning, must not introduce augmentation randomness, so the valid dataset uses transform_test.
train_ds, train_valid_ds = [torchvision.datasets.ImageFolder(
os.path.join(data_dir, 'train_valid_test', folder),
transform=transform_train) for folder in ['train', 'train_valid']]
valid_ds, test_ds = [torchvision.datasets.ImageFolder(
os.path.join(data_dir, 'train_valid_test', folder),
transform=transform_test) for folder in ['valid', 'test']]
train_iter, train_valid_iter = [torch.utils.data.DataLoader(
dataset, batch_size, shuffle=True, drop_last=True)
for dataset in (train_ds, train_valid_ds)]
valid_iter = torch.utils.data.DataLoader(valid_ds, batch_size, shuffle=False,
drop_last=True)
test_iter = torch.utils.data.DataLoader(test_ds, batch_size, shuffle=False,
drop_last=False)
A detailed explanation is given in the last part of this post.
>>len(train_ds)
45000
>>type(train_ds[0])  # each element of train_ds is a tuple
<class 'tuple'>
>>train_ds[0][0].shape  # the first element of the tuple is the image tensor
torch.Size([3, 32, 32])
>>train_ds[0][1]  # the second element is an int label
0
>>train_ds.classes
['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
Since this is a small model, transfer learning is not used; all of the model's parameters are trained. For transfer learning, the model would be downloaded with pretrained=True and its parameters frozen (see the sketch after the code below).
def get_net():
num_classes = 10
net = torchvision.models.resnet18()
net.fc = nn.Linear(512, num_classes)
return net
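For comparison, a transfer-learning version of get_net as just described might look like this (a sketch; the dog-breed section later does the same thing with ResNet-34):
def get_net_finetune():
    net = torchvision.models.resnet18(pretrained=True)  # load ImageNet weights
    for param in net.parameters():
        param.requires_grad = False  # freeze the backbone
    # a newly constructed layer has requires_grad=True by default
    net.fc = nn.Linear(net.fc.in_features, 10)
    return net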
loss = nn.CrossEntropyLoss(reduction="none")  # so that the plotted loss has a magnitude comparable to the accuracy
Select the model and tune the hyperparameters according to its performance on the validation set.
def train(net, train_iter, num_epochs, lr, wd, lr_period, lr_decay, devices, valid_iter=None):
    optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9, weight_decay=wd)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, lr_period, lr_decay)
    # timer measures the running time; calling timer.start() again after
    # timer.stop() accumulates the elapsed time
    num_batches, timer = len(train_iter), d2l.Timer()
    legend = ['train loss', 'train acc']
    if valid_iter is not None:
        legend.append('valid acc')
    animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs], legend=legend)
    net = nn.DataParallel(net, device_ids=devices).to(devices[0])
    for epoch in range(num_epochs):
        net.train()
        running_loss, num, acc = 0, 0, 0
        for i, (data, label) in enumerate(train_iter):
            timer.start()
            data, label = data.to(devices[0]), label.to(devices[0])
            pred = net(data)
            optimizer.zero_grad()
            l = loss(pred, label).sum()  # `loss` is the global CrossEntropyLoss defined above
            l.backward()
            optimizer.step()
            timer.stop()
            running_loss += l.item()
            num += len(data)
            acc += (pred.argmax(1) == label).sum().item()
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches, (running_loss / num, acc / num, None))
                running_loss, num, acc = 0, 0, 0
        scheduler.step()
        if valid_iter is not None:
            net.eval()
            val_num, val_acc = 0, 0
            with torch.no_grad():
                for data, label in valid_iter:
                    data, label = data.to(devices[0]), label.to(devices[0])
                    pred = net(data).argmax(dim=1)
                    val_acc += (pred == label).sum().item()
                    val_num += len(data)
            animator.add(epoch + 1, (None, None, val_acc / val_num))
    measures = ''
    if valid_iter is not None:
        measures += f'valid acc {val_acc / val_num:.3f}'
    print(measures + f'\n{len(train_ds) * num_epochs / timer.sum():1f} examples/sec')
Below is the training function provided by the tutorial, which uses several helpers from the d2l module (note that evaluate_loss is defined later, in the dog-breed section):
def train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
lr_decay):
    # Only parameters with requires_grad=True are updated (for CIFAR-10, that is the whole network)
net = nn.DataParallel(net, device_ids=devices).to(devices[0])
trainer = torch.optim.SGD((param for param in net.parameters()
if param.requires_grad), lr=lr,
momentum=0.9, weight_decay=wd)
scheduler = torch.optim.lr_scheduler.StepLR(trainer, lr_period, lr_decay)
num_batches, timer = len(train_iter), d2l.Timer()
legend = ['train loss']
if valid_iter is not None:
legend.append('valid loss')
animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
legend=legend)
for epoch in range(num_epochs):
metric = d2l.Accumulator(2)
for i, (features, labels) in enumerate(train_iter):
timer.start()
features, labels = features.to(devices[0]), labels.to(devices[0])
trainer.zero_grad()
output = net(features)
l = loss(output, labels).sum()
l.backward()
trainer.step()
metric.add(l, labels.shape[0])
timer.stop()
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches,
(metric[0] / metric[1], None))
measures = f'train loss {metric[0] / metric[1]:.3f}'
if valid_iter is not None:
valid_loss = evaluate_loss(valid_iter, net, devices)
animator.add(epoch + 1, (None, valid_loss.detach().cpu()))
scheduler.step()
if valid_iter is not None:
measures += f', valid loss {valid_loss:.3f}'
print(measures + f'\n{metric[1] * num_epochs / timer.sum():.1f}'
f' examples/sec on {str(devices)}')
All of the hyperparameters below can be tuned; for example, we can increase the number of epochs. With lr_period and lr_decay set to 4 and 0.9, the learning rate of the optimization algorithm is multiplied by 0.9 every 4 epochs.
params = {
'net': get_net(),
'train_iter': train_iter,
'num_epochs': 50,
'lr': 2e-4,
'wd': 5e-4,
'devices': d2l.try_all_gpus(),
'lr_period': 4,
'lr_decay': 0.9,
'valid_iter': valid_iter
}
train(**params)
valid acc 0.734
1635.301572 examples/sec
After obtaining a satisfactory model with these hyperparameters, we use all the labeled data (including the validation set) to retrain the model and classify the test set. The code produces a submission.csv file in the format required by the Kaggle competition.
Note that preds is grown with extend rather than append: append would add its argument as a single element of the list, while extend adds the elements of the argument one by one.
I initially made a mistake here: after sorted_ids = list(range(1, len(test_ds) + 1)) I forgot the line sorted_ids.sort(key=lambda x: str(x)), and the accuracy stayed around 0.1, with the predictions barely matching the samples. It turns out that torchvision.datasets.ImageFolder() reads samples in string order: for ids 0-200 the order is [0, 1, 10, 100, 101, 102, ...], so the ids must be sorted the same way. A detailed explanation is given in the last part of this post.
net, preds = get_net(), []
devices = d2l.try_all_gpus()
train(net, train_valid_iter, 1, 2e-4, 5e-4, 4, 0.9, devices)
with torch.no_grad():
for data, _ in test_iter:
y_hat = net(data.to(devices[0]))
preds.extend(y_hat.argmax(dim=1).type(torch.int32).cpu().numpy())
sorted_ids = list(range(1, len(test_ds) + 1))
sorted_ids.sort(key=lambda x: str(x))
df = pd.DataFrame({'id': sorted_ids, 'label': preds})
df['label'] = df['label'].apply(lambda x: train_valid_ds.classes[x])
df.to_csv('submission.csv', index=False)
The tutorial has no section on hyperparameter tuning, so I add one here. The training function needs some changes:
best_acc = 0
def train(net, train_iter, num_epochs, lr, wd, lr_period, lr_decay, devices, valid_iter=None):
    optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9, weight_decay=wd)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, lr_period, lr_decay)
    net = nn.DataParallel(net, device_ids=devices).to(devices[0])
    for epoch in range(num_epochs):
        net.train()
        for data, label in train_iter:
            data, label = data.to(devices[0]), label.to(devices[0])
            pred = net(data)
            optimizer.zero_grad()
            l = loss(pred, label).sum()  # `loss` is the global CrossEntropyLoss
            l.backward()
            optimizer.step()
        scheduler.step()
    if valid_iter is not None:
        net.eval()
        acc, num = 0, 0
        with torch.no_grad():
            for data, label in valid_iter:
                data, label = data.to(devices[0]), label.to(devices[0])
                pred = net(data).argmax(dim=1)
                acc += (pred == label).sum().item()
                num += len(data)
        val_acc = acc / num
        # if val_acc > best_acc:
        #     best_acc = val_acc
        print('lr %e wd %e lr_period %e lr_decay %e val acc:%f' %
              (lr, wd, lr_period, lr_decay, val_acc))
Run the random search:
import numpy as np
def random_select_para(lr_min, lr_max, wd_min, wd_max, period_min,
period_max, decay_min, decay_max):
lr = 10 ** np.random.uniform(lr_min, lr_max)
wd = 10 ** np.random.uniform(wd_min, wd_max)
lr_period = np.random.randint(period_min, period_max,)
lr_decay = 10 ** np.random.uniform(decay_min, decay_max)
return lr, wd, lr_period, lr_decay
lr_min, lr_max = -5, -3
wd_min, wd_max = -4, -1
period_min, period_max = 1, 5
decay_min, decay_max = -0.2, 0
for i in range(20):
lr, wd, lr_period, lr_decay = random_select_para(lr_min, lr_max, wd_min, wd_max, period_min,
period_max, decay_min, decay_max)
params = {
'net': get_net(),
'train_iter': train_iter,
'num_epochs': 10,
'lr': lr,
'wd': wd,
'devices': d2l.try_all_gpus(),
'lr_period': lr_period,
'lr_decay': lr_decay,
'valid_iter': valid_iter
}
train(**params)
The competition page is Dog Breed Identification | Kaggle. In this competition we identify 120 different dog breeds; the dataset is in fact a subset of the famous ImageNet dataset.
Original tutorial: 13.14. 实战Kaggle比赛:狗的品种识别(ImageNet Dogs) — 动手学深度学习 (Dive into Deep Learning) 2.0.0-beta1 documentation.
import torch
import torchvision
from torchvision import transforms, models
from torch.utils.data import DataLoader, Dataset
from torch import nn
import os
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedShuffleSplit
import math
import collections
from d2l import torch as d2l
from PIL import Image
Compared with the CIFAR-10 example above, this tutorial differs in several respects, described in the sections below.
The competition dataset is divided into a training set and a test set, containing 10222 and 10357 RGB JPEG images respectively. The training set covers 120 dog breeds such as Labrador, Poodle, Dachshund, Samoyed, Husky, Chihuahua, and Yorkshire Terrier.
Download the dataset; after unzipping the downloaded file in ../data, you will find the entire dataset in the following paths:
../data/dog-breed-identification/labels.csv
../data/dog-breed-identification/sample_submission.csv
../data/dog-breed-identification/train
../data/dog-breed-identification/test
The folders train/ and test/ contain the training and test dog images, and labels.csv holds the labels of the training images. The image file names in the train folder are irregular strings (the screenshot is omitted here).
Start by reading the label file of the training images:
data_dir = './data/dog-breed-identification'
df = pd.read_csv(os.path.join(data_dir, 'labels.csv'))
df.head()
|   | id                               | breed            |
|---|----------------------------------|------------------|
| 0 | 000bec180eb18c7604dcecc8fe0dba07 | boston_bull      |
| 1 | 001513dfcb2ffafc82cccf4d8bbaba97 | dingo            |
| 2 | 001cdf01b096e06d78e9e5112d419397 | pekinese         |
| 3 | 00214f311d5d2247d5dfe4fe24b2303d | bluetick         |
| 4 | 0021f9ceb3235effd7fcde7f7538ed62 | golden_retriever |
The labels are strings and need to be converted to ints. We build the breed list breeds, create a breed-to-index dictionary from it, and derive the integer label column label_idx from the breed column.
breeds = df.breed.unique()  # length 120, i.e., the number of classes
breeds.sort()
breed2idx = dict((breed, i) for i, breed in enumerate(breeds))
df['label_idx'] = [breed2idx[b] for b in df.breed]
Before sorting, breeds is:
array(['boston_bull', 'dingo', 'pekinese', 'bluetick', 'golden_retriever',……])
The order of its elements is the order in which they first appear in df.breed, so boston_bull maps to 0, dingo to 1, and so on. In this Kaggle competition, the order of the class columns in the submission does not matter, because the backend matches classes to probabilities automatically, so using breeds unsorted would also work. But it is usually nicer to have the classes in natural order, hence the sort. The resulting df:
                                     id                     breed  label_idx
0      000bec180eb18c7604dcecc8fe0dba07               boston_bull         19
1      001513dfcb2ffafc82cccf4d8bbaba97                     dingo         37
2      001cdf01b096e06d78e9e5112d419397                  pekinese         85
...                                 ...                       ...        ...
10219  ffe2ca6c940cddfee68fa3cc6c63213f                  airedale          3
10220  ffe5f6d8e2bff356e9482a80a6e29aac        miniature_pinscher         75
10221  fff43b07992508bc822f33d8ffd902ae  chesapeake_bay_retriever         28
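As an aside, the same encoding can be obtained with a pandas one-liner (a sketch; pd.Categorical sorts the inferred categories, which matches the sorted breeds above):
df['label_idx'] = pd.Categorical(df.breed).codes  # codes follow the sorted category order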
We could organize the dataset as in the CIFAR-10 example: split a validation set out of the original training set, then move the images into subfolders grouped by label. But that approach has a drawback here, so a different one is used.
First, check how many samples each breed has in the training set:
count_train = collections.Counter(df['breed'])
count_train.most_common()
[('scottish_deerhound', 126), ('maltese_dog', 117), ('afghan_hound', 116), …… ('komondor', 67), ('brabancon_griffon', 67), ('eskimo_dog', 66), ('briard', 66)]
The most frequent breed has almost twice as many samples as the least frequent one. With the CIFAR-10 approach, the same number of samples would be drawn from every breed, so the class distributions of the validation and training sets would differ. Often we instead want to draw more samples from well-represented breeds and fewer from sparse ones.
In this situation we can use StratifiedShuffleSplit to carve the validation set out of the training set; the results train_df and val_df are subsets of df. A detailed explanation is given in the last part of this post.
stratified_split = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
splits = stratified_split.split(df.id, df.breed)
train_split_id, val_split_id = next(iter(splits))
train_val_df = df
train_df = df.iloc[train_split_id].reset_index()
val_df = df.iloc[val_split_id].reset_index()
Note that reset_index() is called on both subsets after the split. For train_df, before reset_index():
                                    id            breed  label_idx
9556  efbabde6fc97bb48c8c8b6b75bfaea59       eskimo_dog         78
2055  332c413119b474653ecca0f358c85e1f  giant_schnauzer         29
5652  8e7256b23446acbd33967122787c1eb3  tibetan_mastiff        116
After reset_index():
   index                                id            breed  label_idx
0   9556  efbabde6fc97bb48c8c8b6b75bfaea59       eskimo_dog         78
1   2055  332c413119b474653ecca0f358c85e1f  giant_schnauzer         29
2   5652  8e7256b23446acbd33967122787c1eb3  tibetan_mastiff        116
If the index is not reset, the DataLoader raises an error. After creating the dataset train_dataset from train_df (see below), run the following:
>>for i in range(100):train_dataset[i]
This fails at train_dataset[12].
Look at the indices of train_df and val_df:
>>train_df.index.sort_values()[0:25]
Int64Index([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 25, 27], dtype='int64')
>>val_df.index.sort_values()[0:25]
Int64Index([ 12, 23, 26, 36, 46, 53, 67, 70, 75, 80, 102, 103, 110, 121, 122, 125, 133, 137, 145, 154, 165, 169, 177, 181, 209], dtype='int64')
train_df has no index 12, so train_df.id[i] fails there. After reset_index() the problem disappears.
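A side note: reset_index(drop=True) discards the old index instead of storing it as a new index column, avoiding the extra column seen above:
train_df = df.iloc[train_split_id].reset_index(drop=True)
val_df = df.iloc[val_split_id].reset_index(drop=True)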
Based on train_df and val_df, read images from the training-set folder (via img_path), returning each image with its label:
class DogDataset(Dataset):
def __init__(self, df, img_path, transform=None):
self.df = df
self.img_path = img_path
self.transform = transform
def __len__(self):
return self.df.shape[0]
def __getitem__(self, idx):
path = os.path.join(self.img_path, self.df.id[idx]) + '.jpg'
img = Image.open(path)
if self.transform:
img = self.transform(img)
label = self.df.label_idx[idx]
return img, label
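A quick sanity check of the custom Dataset (a sketch; only ToTensor is applied here, so the image sizes still vary):
ds = DogDataset(train_df, os.path.join(data_dir, 'train'), transforms.ToTensor())
img, label = ds[0]
print(img.shape, label)  # e.g. torch.Size([3, H, W]) and an integer label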
For the test set, os.listdir() provides the list of image names, and images are fetched from that list by idx. The list must be sorted so that, when saving predictions, the (sorted) image names line up with the model outputs.
class DogDatasetTest(Dataset):
def __init__(self, img_path, transform=None):
self.img_path = img_path
self.img_list = os.listdir(img_path)
self.img_list.sort()
self.transform = transform
def __len__(self):
return len(self.img_list)
def __getitem__(self, idx):
path = os.path.join(self.img_path, self.img_list[idx])
img = Image.open(path)
if self.transform:
img = self.transform(img)
return img
img_size = 224  # other values would work too
train_transform = transforms.Compose([
transforms.RandomResizedCrop(img_size, ratio=(3.0/4.0, 4.0/3.0)),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(30),
transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
test_transform = transforms.Compose([
transforms.Resize(img_size),
transforms.CenterCrop(img_size),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
The transforms were illustrated with demo images (omitted here), starting from the original image:
transforms.RandomResizedCrop(224, scale=(0.08, 0.3), ratio=(3.0/4.0, 4.0/3.0))
transforms.RandomHorizontalFlip() — since I re-ran the whole pipeline each time I added a transform, the demo image is not simply the previous one flipped left-right; the same applies below.
transforms.RandomRotation(30)
transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4)
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
transforms.Resize(224)
transforms.CenterCrop(img_size) — this crop is deterministic: applying CenterCrop to the previous image always yields the same result.
train_val_dataset = DogDataset(train_val_df, os.path.join(data_dir, 'train'), train_transform)
train_dataset = DogDataset(train_df, os.path.join(data_dir, 'train'), train_transform)
val_dataset = DogDataset(val_df, os.path.join(data_dir, 'train'), test_transform)
test_dataset = DogDatasetTest(os.path.join(data_dir, 'test'), test_transform)
batch_size = 32
train_val_iter = DataLoader(train_val_dataset, batch_size, shuffle=True, drop_last = True)
train_iter = DataLoader(train_dataset, batch_size, shuffle=True, drop_last = True)  # shuffle the training set
val_iter = DataLoader(val_dataset, batch_size, shuffle=False, drop_last = True)
test_iter = DataLoader(test_dataset, batch_size, shuffle=False, drop_last = False)
The competition dataset is a subset of ImageNet, so we can transfer-learn from a model pre-trained on the full ImageNet dataset. Here we choose a pre-trained ResNet-34 to extract image features, replace the original output layer with a small custom output network, and train only that output network.
Recall that we normalize the images with the per-channel means and standard deviations of the full ImageNet dataset; this matches the normalization used by models pre-trained on ImageNet.
The book stacks two fully connected layers after the original output layer (which has 1000 classes) to produce the predictions:
def get_net(devices):
    finetune_net = nn.Sequential()
    finetune_net.features = torchvision.models.resnet34(pretrained=True)
    # Define a new output network with 120 output classes
    finetune_net.output_new = nn.Sequential(nn.Linear(1000, 256),
                                            nn.ReLU(),
                                            nn.Linear(256, 120))
    # Move the model parameters to the CPU/GPU used for computation
    finetune_net = finetune_net.to(devices[0])
    # Freeze the feature-extractor parameters
    for param in finetune_net.features.parameters():
        param.requires_grad = False
    return finetune_net
A more common approach is to replace the pre-trained model's fc layer:
def get_net(devices):
net = models.resnet34(pretrained=True)
for param in net.parameters():
param.requires_grad = False
num_class = 120
# Parameters of newly constructed modules have requires_grad=True by default
net.fc = nn.Linear(net.fc.in_features, num_class)
net = net.to(devices[0])
return net
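To confirm that only the new fc layer will be trained, the trainable parameters can be listed (a quick check):
net = get_net(d2l.try_all_gpus())
print([name for name, p in net.named_parameters() if p.requires_grad])
# expected: ['fc.weight', 'fc.bias']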
The official metric of this competition is cross entropy, unlike the prediction accuracy of the CIFAR-10 example.
loss = nn.CrossEntropyLoss(reduction='none')
def evaluate_loss(data_iter, net, devices):
    net.eval()
    l_sum, n = 0.0, 0
    with torch.no_grad():
        for data, label in data_iter:
            data, label = data.to(devices[0]), label.to(devices[0])
            pred = net(data)
            l = loss(pred, label)  # the argument order is (input, target)
            l_sum += l.sum()
            n += len(data)
    return (l_sum / n).to('cpu')
def train(net, train_iter, num_epochs, lr, wd, lr_period, lr_decay, devices, valid_iter=None):
    net = nn.DataParallel(net, device_ids=devices).to(devices[0])
    # Passing net.parameters() directly would also work; I do not know whether
    # the filter affects training speed
    optimizer = torch.optim.SGD((param for param in net.parameters() if param.requires_grad),
                                lr=lr, momentum=0.9, weight_decay=wd)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, lr_period, lr_decay)
    # timer measures the running time; calling timer.start() again after
    # timer.stop() accumulates the elapsed time
    num_batches, timer = len(train_iter), d2l.Timer()
    legend = ['train loss']
    if valid_iter is not None:
        legend.append('valid loss')
    animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs], legend=legend)
    cal_sum = 0
    running_loss, num = 0.0, 0
    for epoch in range(num_epochs):
        net.train()
        for i, (data, label) in enumerate(train_iter):
            timer.start()
            data, label = data.to(devices[0]), label.to(devices[0])
            pred = net(data)
            optimizer.zero_grad()
            l = loss(pred, label).sum()
            l.backward()
            optimizer.step()
            timer.stop()
            running_loss += l.item()
            num += len(data)
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches, (running_loss / num, None))
                cal_sum += num
                running_loss, num = 0.0, 0
        scheduler.step()
        if valid_iter is not None:
            net.eval()
            val_l = evaluate_loss(valid_iter, net, devices)
            animator.add(epoch + 1, (None, val_l.detach().cpu()))
    measures = ''
    if valid_iter is not None:
        measures += f'valid loss {val_l:.3f}'
    # cal_sum already counts examples across all epochs, so no extra factor of num_epochs
    print(measures + f'\n{cal_sum / timer.sum():1f} examples/sec')
All the hyperparameters are tunable; for example, we can increase the number of epochs. With lr_period and lr_decay set to 4 and 0.9 as below, the learning rate is multiplied by 0.9 every 4 epochs.
devices = d2l.try_all_gpus()
net = get_net(devices)
params = {
'net': net,
    'train_iter': train_iter,
'num_epochs': 5,
'lr': 2e-4,
'wd': 5e-4,
'devices': devices,
'lr_period': 4,
'lr_decay': 0.9,
'valid_iter': val_iter
}
train(**params)
The custom test Dataset class already sorts the image names of the test set, so there is no risk of sample ids and predictions getting out of step.
net = get_net(devices)
# Retrain on train+valid with the chosen hyperparameters (the call matches
# the custom train signature defined above)
train(net, train_val_iter, num_epochs=5, lr=2e-4, wd=5e-4,
      lr_period=4, lr_decay=0.9, devices=devices)
preds = []
with torch.no_grad():
    for data in test_iter:
        res = net(data.to(devices[0]))
        output = nn.functional.softmax(res, dim=1)
        preds.extend(output.cpu().numpy())
ids = sorted(os.listdir(os.path.join(data_dir, 'test')))  # same order as in DogDatasetTest
with open('submission.csv', 'w') as f:
    f.write('id,' + ','.join(breeds) + '\n')
    for i, output in zip(ids, preds):
        f.write(i.split('.')[0] + ',' + ','.join([str(num) for num in output]) + '\n')
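Equivalently, the submission can be assembled with pandas (a sketch reusing preds, ids, and breeds from above):
sub = pd.DataFrame(preds, columns=breeds)  # one probability column per breed
sub.insert(0, 'id', [name.split('.')[0] for name in ids])  # ids already sorted as above
sub.to_csv('submission.csv', index=False)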
The labels are read with the following code; train_label is a DataFrame:
train_label = pd.read_csv(os.path.join(data_dir, 'trainLabels.csv'))
          id       label
0          1        frog
1          2       truck
...      ...         ...
49997  49998       truck
49998  49999  automobile
49999  50000  automobile

[50000 rows x 2 columns]
We take the label column out of it with []:
>>train_label['label']
0              frog
1             truck
2             truck
3              deer
4        automobile
            ...
49995          bird
49996          frog
49997         truck
49998    automobile
49999    automobile
Name: label, Length: 50000, dtype: object
Its type is pandas.Series.
>>train_label['label'].values
['frog' 'truck' 'truck' ... 'truck' 'automobile' 'automobile']
Its type is numpy.ndarray.
To take a single value from the label column, the following four expressions all return the same str 'truck':
train_label.iloc[49997].values[1]
train_label.iloc[49997][1]
train_label.iloc[49997, 1]
train_label['label'][49997]
This post uses the train_label['label'] form, which stores only the label column and saves space.
Reference: 从pytorch的transfer learning tutorial讲分类任务的数据读取(深入分析torchvision.datasets.ImageFolder源码) (gaishi_hero, CSDN).
ImageFolder is a subclass of a class called DatasetFolder:
IMG_EXTENSIONS = (".jpg", ".jpeg", ".png", ".ppm", ".bmp", ".pgm", ".tif", ".tiff", ".webp")
def pil_loader(path: str) -> Image.Image:  # read an image from the given path
with open(path, "rb") as f:
img = Image.open(f)
return img.convert("RGB")
class ImageFolder(DatasetFolder):
def __init__(
self,
root: str,
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
loader: Callable[[str], Any] = default_loader,
is_valid_file: Optional[Callable[[str], bool]] = None,
):
super().__init__(
root,
loader,
IMG_EXTENSIONS if is_valid_file is None else None,
transform=transform,
target_transform=target_transform,
is_valid_file=is_valid_file,
)
self.imgs = self.samples
loader is a reference to the pil_loader() function defined above, which reads an image from the given path; IMG_EXTENSIONS lists the accepted image file extensions. The remaining arguments passed to the parent's __init__ come from the outermost call, including root (the path) and transform (the transformations to apply to each image). (See the arguments passed in the first code snippet.)
Next, look at the definition (source) of the DatasetFolder class:
class DatasetFolder(VisionDataset):
def __init__(
self,
root: str,
loader: Callable[[str], Any],
extensions: Optional[Tuple[str, ...]] = None,
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
is_valid_file: Optional[Callable[[str], bool]] = None,
) -> None:
super().__init__(root, transform=transform, target_transform=target_transform)
classes, class_to_idx = self.find_classes(self.root)
samples = self.make_dataset(self.root, class_to_idx, extensions, is_valid_file)
self.loader = loader
self.extensions = extensions
self.classes = classes
self.class_to_idx = class_to_idx
self.samples = samples
self.targets = [s[1] for s in samples]
@staticmethod
def make_dataset(
directory: str,
class_to_idx: Dict[str, int],
extensions: Optional[Tuple[str, ...]] = None,
is_valid_file: Optional[Callable[[str], bool]] = None,
) -> List[Tuple[str, int]]:
if class_to_idx is None:
raise ValueError("The class_to_idx parameter cannot be None.")
return make_dataset(directory, class_to_idx, extensions=extensions, is_valid_file=is_valid_file)
def find_classes(self, directory: str) -> Tuple[List[str], Dict[str, int]]:
return find_classes(directory)
def __getitem__(self, index: int) -> Tuple[Any, Any]:
path, target = self.samples[index]
sample = self.loader(path)
if self.transform is not None:
sample = self.transform(sample)
if self.target_transform is not None:
target = self.target_transform(target)
return sample, target
def __len__(self) -> int:
return len(self.samples)
Below is the source of the helper functions used:
# has_file_allowed_extension checks whether a filename ends with one of the allowed image extensions
def has_file_allowed_extension(filename: str, extensions: Union[str, Tuple[str, ...]]) -> bool:
return filename.lower().endswith(extensions if isinstance(extensions, str) else tuple(extensions))
# Checks if a file is an allowed image extension
def is_image_file(filename: str) -> bool:
return has_file_allowed_extension(filename, IMG_EXTENSIONS)
# find_classes lists the class folders under a directory and assigns each class a number
def find_classes(directory: str) -> Tuple[List[str], Dict[str, int]]:
classes = sorted(entry.name for entry in os.scandir(directory) if entry.is_dir())
if not classes:
raise FileNotFoundError(f"Couldn't find any class folder in {directory}.")
class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
return classes, class_to_idx
# make_dataset builds a list of (path_to_sample, class) tuples from the directory, the class dictionary, and the extension list
def make_dataset(
directory: str,
class_to_idx: Optional[Dict[str, int]] = None,
extensions: Optional[Union[str, Tuple[str, ...]]] = None,
is_valid_file: Optional[Callable[[str], bool]] = None,
) -> List[Tuple[str, int]]:
directory = os.path.expanduser(directory)
if class_to_idx is None:
_, class_to_idx = find_classes(directory)
elif not class_to_idx:
raise ValueError("'class_to_index' must have at least one entry to collect any samples.")
both_none = extensions is None and is_valid_file is None
both_something = extensions is not None and is_valid_file is not None
if both_none or both_something:
raise ValueError("Both extensions and is_valid_file cannot be None or not None at the same time")
if extensions is not None:
def is_valid_file(x: str) -> bool:
return has_file_allowed_extension(x, extensions) # type: ignore[arg-type]
is_valid_file = cast(Callable[[str], bool], is_valid_file)
instances = []
available_classes = set()
for target_class in sorted(class_to_idx.keys()):
        # the 1st for iterates over the class names, entering each class folder
class_index = class_to_idx[target_class]
target_dir = os.path.join(directory, target_class)
if not os.path.isdir(target_dir):
continue
for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
            # the 2nd for walks each class folder and its subfolders; fnames are the files inside
for fname in sorted(fnames):
                # the 3rd for reads each file name
path = os.path.join(root, fname)
if is_valid_file(path):
item = path, class_index
instances.append(item)
if target_class not in available_classes:
available_classes.add(target_class)
empty_classes = set(class_to_idx.keys()) - available_classes
if empty_classes:
msg = f"Found no valid file for the classes {', '.join(sorted(empty_classes))}. "
if extensions is not None:
msg += f"Supported extensions are: {extensions if isinstance(extensions, str) else ', '.join(extensions)}"
raise FileNotFoundError(msg)
return instances
The source is a bit involved; here is a simplified version:
def find_classes(dir):
classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]
classes.sort()
class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
return classes, class_to_idx
def make_dataset(directory, class_to_idx, extensions) :
directory = os.path.expanduser(directory)
instances = []
available_classes = set()
for target_class in sorted(class_to_idx.keys()):
        # the 1st for iterates over the class names, entering each class folder
class_index = class_to_idx[target_class]
target_dir = os.path.join(directory, target_class)
if not os.path.isdir(target_dir):
continue
for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
            # the 2nd for walks each class folder and its subfolders; fnames are the files inside
for fname in sorted(fnames):
                # the 3rd for reads each file name
path = os.path.join(root, fname)
if has_file_allowed_extension(path, IMG_EXTENSIONS):
item = path, class_index
instances.append(item)
if target_class not in available_classes:
available_classes.add(target_class)
    # raise an error if no file was found for some class
empty_classes = set(class_to_idx.keys()) - available_classes
if empty_classes:
msg = f"Found no valid file for the classes {', '.join(sorted(empty_classes))}. "
if extensions is not None:
msg += f"Supported extensions are: {extensions if isinstance(extensions, str) else ', '.join(extensions)}"
raise FileNotFoundError(msg)
return instances
IMG_EXTENSIONS = (".jpg", ".jpeg", ".png", ".ppm", ".bmp", ".pgm", ".tif", ".tiff", ".webp")
def has_file_allowed_extension(filename, extensions):
return filename.lower().endswith(extensions if isinstance(extensions, str) else tuple(extensions))
It uses the os.walk() function; for details see the CSDN post os.walk()的详细理解(秒懂) (不堪沉沦).
>>classes, class_to_idx = find_classes(os.path.join(data_dir, 'train_valid_test', 'train'))
>>classes
['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
>>class_to_idx
{'airplane': 0, 'automobile': 1, 'bird': 2, 'cat': 3, 'deer': 4, 'dog': 5, 'frog': 6, 'horse': 7, 'ship': 8, 'truck': 9}
>>samples = make_dataset(os.path.join(data_dir, 'train_valid_test', 'train'), class_to_idx, IMG_EXTENSIONS)
>>samples[:4]
[('.\\data\\cifar-10\\train_valid_test\\train\\airplane\\14469.png', 0), ('.\\data\\cifar-10\\train_valid_test\\train\\airplane\\14480.png', 0), ('.\\data\\cifar-10\\train_valid_test\\train\\airplane\\14483.png', 0), ('.\\data\\cifar-10\\train_valid_test\\train\\airplane\\14487.png', 0)]
The output shows how classes, class_to_idx, and samples are organized.
With this information, the first two lines of the __getitem__ method:
path, target = self.samples[index]
sample = self.loader(path)
retrieve an image and its class. The code also shows that ImageFolder sorts the files of each folder before reading them, which explains the order in which test samples are read.
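The effect of string ordering on numeric file names is easy to reproduce:
>>names = [str(i) + '.png' for i in range(1, 13)]
>>sorted(names)
['1.png', '10.png', '11.png', '12.png', '2.png', '3.png', '4.png', '5.png', '6.png', '7.png', '8.png', '9.png']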
Counter counts the occurrences of each element in a sequence, storing them as key-value pairs in a dict-like object whose actual type is Counter:
>>n = collections.Counter(train_label['label'])  # n behaves like a dict of counts
Counter({'frog': 5000, 'truck': 5000, 'deer': 5000, 'automobile': 5000, 'bird': 5000, 'horse': 5000, 'ship': 5000, 'cat': 5000, 'dog': 5000, 'airplane': 5000})
>>sum(n.values())
50000
>>n['frog']
5000
>>n.most_common()  # elements (with counts) from most to least common; each entry is a tuple
[('frog', 5000), ('truck', 5000), ('deer', 5000), ('automobile', 5000), ('bird', 5000), ('horse', 5000), ('ship', 5000), ('cat', 5000), ('dog', 5000), ('airplane', 5000)]
>>n.most_common(3)
[('frog', 5000), ('truck', 5000), ('deer', 5000)]
>>n.most_common()[-1]
('airplane', 5000)
>>n.most_common()[-1][1]  # [-1] is the least common element, [1] its count (tuple indexing)
5000
>>c = collections.Counter('gallahad')
Counter({'a': 3, 'd': 1, 'g': 1, 'h': 1, 'l': 2})
>>params = {'a': 10, 'b': 2e-4}
>>def test(a, b):
      print(type(a))
      print(type(b))
>>test(**params)  # ** unpacks the dict into keyword arguments a=10, b=2e-4
<class 'int'>
<class 'float'>
>>test(*params)  # * unpacks the dict itself, i.e. its keys 'a' and 'b', as positional str arguments
<class 'str'>
<class 'str'>
>>sorted_ids = [2,5,8,7,5,9,5,4,8,7]
>>sorted_ids.sort()
>>sorted_ids
[2, 4, 5, 5, 5, 7, 7, 8, 8, 9]
>>sorted_ids = [2,5,8,7,5,9,5,4,8,7]
>>sorted_ids.sort(key=lambda x: str(x))
>>sorted_ids
[2, 4, 5, 5, 5, 7, 7, 8, 8, 9]
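The two results coincide here only because every element is a single digit; with multi-digit numbers the difference appears:
>>ids = [2, 10, 5, 100]
>>ids.sort()
>>ids
[2, 5, 10, 100]
>>ids.sort(key=lambda x: str(x))
>>ids
[10, 100, 2, 5]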
>>a = torch.tensor([1,2,3,4,5,6])
>>frame = pd.DataFrame({'id': a})
>>frame['id'] = frame['id'].apply(lambda x: x**2)
>>frame
   id
0   1
1   4
2   9
3  16
4  25
5  36
>>a = [1,2,3]
>>b = [4,5,6]
>>a.extend(b)
>>a
[1, 2, 3, 4, 5, 6]
>>a.append(b)
>>a
[1, 2, 3, 4, 5, 6, [4, 5, 6]]
StratifiedShuffleSplit and train_test_split both come from sklearn.model_selection and both split a dataset (here, the training set into training and validation parts); the difference is that one uses stratified sampling and the other plain random sampling. For reference, see the CSDN post StratifiedShuffleSplit()函数的详细理解 (wang_xuecheng); here are the concrete results.
n_splits is the number of train-validation pairs to generate, and test_size is the validation fraction. The code below splits the dataset df once, with 10% going to validation.
>>stratified_split = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
>>splits = stratified_split.split(df.id, df.breed)
>>train_split_id, val_split_id = next(iter(splits))
>>train_split_id.shape
(9199,)
>>val_split_id.shape
(1023,)
>>train_split_id
[9556 2055 5652 ... 7133 366 4846]
train_split_id contains the indices of the training split in the original dataset. Only df.id (really, only the number of samples) would seem necessary, so why does StratifiedShuffleSplit also take the breed column?
>>df_val = df.iloc[val_split_id]
>>count_val = collections.Counter(df_val['breed'])
>>count_val.most_common(5)
[('scottish_deerhound', 13), ('maltese_dog', 12), ('entlebucher', 12), …… ('golden_retriever', 7), ('english_springer', 7), ('eskimo_dog', 7)]
However many times this code runs, the number drawn from each breed stays exactly the same. Now look at the ratio of each breed's validation count to its total count:
>>ratio = []
>>for i in count_train:
      ratio.append(count_val[i] / count_train[i])
>>print(min(ratio), max(ratio))
0.09333333333333334 0.10606060606060606
They are all close to 0.1, the chosen test_size. This is why StratifiedShuffleSplit uses the breed column.
For comparison, use train_test_split:
>>from sklearn.model_selection import train_test_split
>>train_id, val_id, train_breed, val_breed= train_test_split(df.id.values, df.breed.values, test_size=0.1)
>>len(val_id)
1023
>>len(train_id)
9199
>>val_id
['890efbec7147c2887c460be0af763381' 'c7441fba1f18864b59b1d474936def91' '63dd3e15f7fe4b3b3e9a69530e8d36b3' ... 'e24af0affe6c7a51b3e8ed9c30b090b7' '3d78ff549552e90b9a01eefb12548283' 'cc7ae3da3bebcc4acb10128078cdf29a']
This function returns values from the id column itself, not row indices.
>>df_val = pd.DataFrame({'breed':val_breed})
>>count_val_2 = collections.Counter(df_val['breed'])
>>common = count_val_2.most_common(5)  # the split is not reproducible
>>common
[('newfoundland', 16), ('sealyham_terrier', 15), ('entlebucher', 14), …… ('giant_schnauzer', 3), ('flat-coated_retriever', 3), ('eskimo_dog', 3)]
Similarly, compute each breed's total-to-validation count ratio (inverted relative to the check above; a stratified 10% split would give roughly 10 for every breed):
>>ratio = []
>>for i in count_train:
ratio.append(count_train[i]/ count_val_2[i])
>>print(min(ratio), max(ratio))
5.461538461538462 24.0
Clearly this split ignores each breed's relative frequency in the training set. Moreover, every run produces different train_id, val_id, train_breed, and val_breed.
F.cross_entropy / nn.CrossEntropyLoss shape conventions (from the PyTorch docs):
Input: shape (C), (N, C), or (N, C, d_1, d_2, ..., d_K) in the case of K-dimensional loss.
Target: shape (), (N), or (N, d_1, d_2, ..., d_K) containing class indices in [0, C), or the same shape as the input containing class probabilities.
>>criterion = nn.CrossEntropyLoss(reduction='none')
One case: target is a vector of per-sample class indices with shape (N), each value in [0, C); input is the prediction matrix whose rows hold each sample's scores over the C classes, with shape (N, C).
>>target = torch.tensor([1,2], dtype=torch.long)
tensor([1, 2])
>>input = torch.tensor([[0.1,0.2,0.3],[0.4,0.35,0.6]])
tensor([[0.1000, 0.2000, 0.3000], [0.4000, 0.3500, 0.6000]])
>>print(target.shape, input.shape)
torch.Size([2]) torch.Size([2, 3])
>>criterion(input, target)
tensor([1.1019, 0.9546])
Another case: target is a tensor of shape (N, d_1, d_2, ..., d_K) with each value in [0, C), and input is the corresponding prediction tensor whose dimension 1 holds each sample's scores over the C classes; for example, semantic segmentation puts the per-pixel class predictions on the channel dimension:
>>input = torch.randn([64,10,24,24], dtype = torch.float)
>>target = torch.randint(10, [64,24,24] , dtype = torch.int64)
>>criterion(input, target).shape
torch.Size([64, 24, 24])
The last case: target holds each sample's probability for every class, with each probability expected to lie in [0, 1]:
>>input = torch.randn([64,10], dtype = torch.float)
>>target = torch.randn([64,10], dtype = torch.float)
>>print(input.shape, target.shape)
torch.Size([64, 10]) torch.Size([64, 10])
>>criterion(input, target).shape
torch.Size([64])
>>b = ['0062.jpg', '0102.jpg', '012a.jpg', '0151.jpg']
>>print(b.sort())  # printing .sort() directly gives None
None
>>a = b.sort()
>>print(a)
None
>>b.sort()
>>print(b)  # sort first, then print the data
['0062.jpg', '0102.jpg', '012a.jpg', '0151.jpg']
>>print(sorted(b))  # sorted() can be printed directly
['0062.jpg', '0102.jpg', '012a.jpg', '0151.jpg']
>>print(sorted(b, key=lambda x: x[3], reverse=True))  # sort by the 4th character, descending
['012a.jpg', '0062.jpg', '0102.jpg', '0151.jpg']
In pytorch, tensor.transpose(dim0, dim1) swaps exactly two dimensions, while numpy's transpose(*axes) can permute several dimensions at once:
>>ten = torch.randn([3, 200, 250])
>>ten.transpose(2,0).shape
torch.Size([250, 200, 3])
>>num = ten.numpy()
>>num.transpose(1,2,0).shape
(200, 250, 3)
When plotted with plt, the two results differ by a 90° rotation, because the torch version also swapped height and width.
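torch can permute several dimensions at once with permute, which reproduces the numpy result:
>>ten.permute(1, 2, 0).shape
torch.Size([200, 250, 3])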