PyTorch Learning Notes 2: DATASETS & DATALOADERS

    • **Pytorch Learning Notes**
      • 2.DATASETS & DATALOADERS
        • 2.1 Loading a Dataset ([Fashion-MNIST](https://research.zalando.com/project/fashion_mnist/fashion_mnist/) dataset)
        • 2.2 Iterating and Visualizing the Dataset
        • 2.3 Creating a Custom Dataset for your files
        • 2.4 Preparing your data for training with DataLoaders
        • 2.5 Further reading: TORCH.UTILS.DATA
          • 2.5.1 Map-style datasets
          • 2.5.2 Iterable-style datasets

Pytorch Learning Notes

Reference: the official PyTorch documentation.
The material above comes from the official PyTorch tutorial; this note is mainly a translation and reorganization of it, with some of my own understanding added, and is intended only for later review.

2.DATASETS & DATALOADERS

A Dataset stores the samples and their corresponding labels, while a DataLoader wraps an iterable around the Dataset to provide easy access to those samples.

2.1 Loading a Dataset (Fashion-MNIST dataset)

import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)
  • root: the path where the data is stored.

  • train: specifies whether to load the training data or the test data.

  • download: whether to download the data from the internet if it is not found under root.

  • transform / target_transform: specify the transformations applied to the data / labels (see the sketch after this list).
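
For example, a minimal sketch of setting both transform and target_transform (the one-hot label encoding via Lambda is just one common choice, not something used elsewhere in this note):

import torch
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda

# ToTensor scales the PIL image into a FloatTensor in [0., 1.];
# the Lambda turns the integer label into a one-hot vector of length 10.
ds = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
    target_transform=Lambda(
        lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
    ),
)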

2.2 Iterating and Visualizing the Dataset

labels_map = {
    0: "T-Shirt",
    1: "Trouser",
    2: "Pullover",
    3: "Dress",
    4: "Coat",
    5: "Sandal",
    6: "Shirt",
    7: "Sneaker",
    8: "Bag",
    9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
    sample_idx = torch.randint(len(training_data), size=(1,)).item()
    img, label = training_data[sample_idx]
    figure.add_subplot(rows, cols, i)
    plt.title(labels_map[label])
    plt.axis("off")
    plt.imshow(img.squeeze(), cmap="gray")
plt.show()

2.3 Creating a Custom Dataset for your files

A custom Dataset must implement three functions: __init__, __len__, and __getitem__.

import os
import pandas as pd
from torchvision.io import read_image

class CustomImageDataset(Dataset):
    def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
        self.img_labels = pd.read_csv(annotations_file)
        self.img_dir = img_dir
        self.transform = transform
        self.target_transform = target_transform

    def __len__(self):
        return len(self.img_labels)

    def __getitem__(self, idx):
        img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
        image = read_image(img_path)
        label = self.img_labels.iloc[idx, 1]
        if self.transform:
            image = self.transform(image)
        if self.target_transform:
            label = self.target_transform(label)
        return image, label
  • __init__: runs once when the Dataset object is instantiated. It initializes the directory containing the images (img_dir), the CSV file holding the corresponding labels (annotations_file), and the two transforms. The labels CSV file looks like this:

    tshirt1.jpg, 0
    tshirt2.jpg, 0
    ...
    ankleboot999.jpg, 9

  • __len__: returns the number of samples in the dataset.

  • __getitem__: loads and returns the sample at the given index idx, using read_image to convert the image to a tensor (a usage sketch follows this list).
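
A usage sketch of the class above (the paths labels.csv and images/ are hypothetical placeholders for your own annotation file and image folder):

# Hypothetical paths; replace them with your own annotation CSV and image directory.
dataset = CustomImageDataset(annotations_file="labels.csv", img_dir="images/")

print(len(dataset))           # calls __len__
image, label = dataset[0]     # calls __getitem__
print(image.shape, label)     # read_image returns a uint8 tensor of shape (C, H, W)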

2.4 Preparing your data for training with DataLoaders

With a Dataset we can retrieve one sample and its label at a time, but when training a model we usually want to pass samples in minibatches, reshuffle the data at every epoch to reduce overfitting, and use Python's multiprocessing to speed up data retrieval. DataLoader is an iterable that abstracts this complexity for us.

  • Load the dataset into a DataLoader (this code continues from the Fashion-MNIST dataset above):

    from torch.utils.data import DataLoader
    
    train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
    test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
    
  • Then iterate through the DataLoader as needed:

    # Display image and label.
    train_features, train_labels = next(iter(train_dataloader))
    print(f"Feature batch shape: {train_features.size()}")
    print(f"Labels batch shape: {train_labels.size()}")
    img = train_features[0].squeeze()
    label = train_labels[0]
    plt.imshow(img, cmap="gray")
    plt.show()
    print(f"Label: {label}")
    
    Out:
    
    Feature batch shape: torch.Size([64, 1, 28, 28])
    Labels batch shape: torch.Size([64])
    Label: 0
    

2.5 Further reading: TORCH.UTILS.DATA

DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
           batch_sampler=None, num_workers=0, collate_fn=None,
           pin_memory=False, drop_last=False, timeout=0,
           worker_init_fn=None, *, prefetch_factor=2,
           persistent_workers=False)
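
A minimal sketch of a few of these arguments in use (training_data is the Fashion-MNIST training set from 2.1; the particular values here are only illustrative):

from torch.utils.data import DataLoader

loader = DataLoader(
    training_data,
    batch_size=64,
    shuffle=True,       # reshuffle at every epoch
    num_workers=4,      # load data in 4 worker subprocesses
    pin_memory=True,    # page-locked memory speeds up host-to-GPU copies
    drop_last=True,     # discard the final batch if it has fewer than 64 samples
)
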
  • There are two kinds of dataset in PyTorch: map-style datasets and iterable-style datasets.
  • Full details of the torch.utils.data module can be found in the official TORCH.UTILS.DATA documentation.

2.5.1 Map-style datasets
  • Definition: a dataset that implements the __getitem__() and __len__() protocols and represents a map from (possibly non-integer) indices/keys to data samples. For example, dataset[idx] reads the idx-th image and its corresponding label. A minimal sketch follows below.
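
A minimal sketch of a map-style dataset (SquaresDataset is an invented toy example, not from the tutorial):

import torch
from torch.utils.data import Dataset

class SquaresDataset(Dataset):
    """Maps index i to the pair (i, i * i)."""
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        return torch.tensor(idx), torch.tensor(idx * idx)

ds = SquaresDataset(5)
print(len(ds))    # 5
print(ds[3])      # (tensor(3), tensor(9))
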
2.5.2 Iterable-style datasets
  • Definition: an instance of a subclass of IterableDataset that implements the __iter__() protocol and represents an iterable over data samples. This kind of dataset is suitable when random reads are expensive or impossible, and when the batch size depends on the fetched data. For example, iter(dataset) could return a stream of data read from a database or a remote server. A minimal sketch follows below.
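
A minimal sketch of an iterable-style dataset (RangeStream is an invented example; in practice __iter__ would read from a file, database, or remote stream):

import torch
from torch.utils.data import IterableDataset, DataLoader

class RangeStream(IterableDataset):
    """Yields a stream of numbers; there is no __getitem__ or __len__."""
    def __init__(self, start, end):
        self.start = start
        self.end = end

    def __iter__(self):
        # In practice this could yield records read from a file or a network socket.
        return iter(range(self.start, self.end))

loader = DataLoader(RangeStream(0, 8), batch_size=3)
for batch in loader:
    print(batch)    # tensor([0, 1, 2]), then tensor([3, 4, 5]), then tensor([6, 7])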
