PyTorch Dataset and DataLoader: Loading Datasets

Overview: PyTorch provides two utilities for loading datasets, Dataset and DataLoader. With a small amount of subclassing, Dataset can be adapted to load your own data.

Contents

      • 1. The Dataset Class
      • 2. A Custom Dataset Example
      • 3. DataLoader
      • 4. Complete Example

1. The Dataset Class

To use the Dataset class that PyTorch provides, you need to override two key methods:

  • __len__(self) – returns the size of the dataset
  • __getitem__(self, idx) – returns the sample at index idx

Once these are overridden, DataLoader can be used directly to load batches of data.

In other words, virtually every custom data-loading class follows this skeleton:

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self):
        # Set up file paths, transforms, and other state here.
        pass

    def __len__(self):
        # Return the number of samples in the dataset.
        pass

    def __getitem__(self, idx):
        # Return the sample (and label) at index idx.
        pass
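
For instance, a minimal toy sketch (an illustration, not from the original post) wrapping an in-memory list is already a complete, working Dataset:

from torch.utils.data import Dataset, DataLoader

class ListDataset(Dataset):
    """Toy example: wrap a Python list so DataLoader can batch it."""
    def __init__(self, items):
        self.items = items

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]

# Default collation turns each batch of integers into a tensor:
# tensor([0, 1, 2, 3]), then tensor([4, 5, 6, 7])
for batch in DataLoader(ListDataset(list(range(8))), batch_size=4):
    print(batch)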

2. A Custom Dataset Example

Take the Dogs vs. Cats dataset from the Kaggle competition as an example. The training set contains 12,500 images of cats and 12,500 images of dogs, with filenames of the form class.index.jpg (e.g. dog.9983.jpg).
Override the Dataset class's __init__, __len__, and __getitem__ methods:

class CatsAndDogs(Dataset):
    def __init__(self, root, transforms=None, size=(224, 224)):
        # Collect the paths of all images under the root directory.
        self.images = [os.path.join(root, item) for item in os.listdir(root)]
        self.transforms = transforms
        self.size = size

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # Resize here because all images in a batch produced by DataLoader
        # must have the same dimensions.
        image = np.array(Image.open(self.images[idx]).resize(self.size))
        if self.transforms is not None:
            image = self.transforms(image)
        # Path format: "K:\\imageData\\dogAndCat\\train\\dog.9983.jpg",
        # so the class name is the first dot-separated part of the filename.
        label = self.images[idx].split("\\")[-1].split(".")[0]
        return image, label

Note: classification problems usually use a CrossEntropy loss, so the label returned by __getitem__ should be an integer with 0 <= label <= numClasses - 1; with 3 classes, for example, the label values are 0, 1, and 2.
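
A minimal sketch of a __getitem__ that returns integer labels instead of strings (the label_map dict below is an assumption for illustration, not part of the original example):

# Hypothetical mapping from class name to integer label, suitable for
# nn.CrossEntropyLoss.
label_map = {"cat": 0, "dog": 1}

def __getitem__(self, idx):
    image = np.array(Image.open(self.images[idx]).resize(self.size))
    name = self.images[idx].split("\\")[-1].split(".")[0]
    return image, label_map[name]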

3. DataLoader

Once Dataset has been subclassed, use DataLoader to get batches:

trainLoader = DataLoader(mydataset, batch_size=32, num_workers=2, shuffle=True)
for images, labels in trainLoader:
    pass  # training step goes here

The DataLoader signature:
torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None, *, prefetch_factor=2, persistent_workers=False)

  • dataset (Dataset) – dataset from which to load the data.

  • batch_size (int, optional) – how many samples per batch to load (default: 1).

  • shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).

  • sampler (Sampler or Iterable, optional) – defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.

  • batch_sampler (Sampler or Iterable, optional) – like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.

  • num_workers (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)

  • collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. (See the sketch after this list.)

  • pin_memory (bool, optional) – if True, the data loader will copy Tensors into CUDA pinned memory before returning them.

  • drop_last (bool, optional) – set to True to drop the last incomplete batch if the dataset size is not divisible by the batch size. If False and the size of the dataset is not divisible by the batch size, the last batch will be smaller. (default: False)

  • timeout (numeric, optional) – if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)

  • worker_init_fn (callable, optional) – if not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)

  • prefetch_factor (int, optional, keyword-only arg) – number of samples loaded in advance by each worker. 2 means there will be a total of 2 * num_workers samples prefetched across all workers. (default: 2)

  • persistent_workers (bool, optional) – if True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the worker Dataset instances alive. (default: False)
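
To make collate_fn concrete, here is a minimal sketch (the name my_collate is hypothetical) that stacks the equally-sized numpy images from CatsAndDogs into one tensor and leaves the string labels as a tuple, which is essentially what the default collate already does:

import torch
from torch.utils.data import DataLoader

def my_collate(batch):
    # batch is a list of (image, label) pairs returned by __getitem__.
    images, labels = zip(*batch)
    # Stack the equally-sized images into a single (N, H, W, C) tensor.
    images = torch.stack([torch.as_tensor(img) for img in images])
    return images, labels

trainLoader = DataLoader(mydataset, batch_size=32, collate_fn=my_collate)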

4. Complete Example

from torch.utils.data import Dataset, DataLoader
import os
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

class CatsAndDogs(Dataset):
    def __init__(self, root, transforms=None, size=(224, 224)):
        # Collect the paths of all images under the root directory.
        self.images = [os.path.join(root, item) for item in os.listdir(root)]
        self.transforms = transforms
        self.size = size

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # Resize here because all images in a batch produced by DataLoader
        # must have the same dimensions.
        image = np.array(Image.open(self.images[idx]).resize(self.size))
        if self.transforms is not None:
            image = self.transforms(image)
        # Path format: "K:\\imageData\\dogAndCat\\train\\dog.9983.jpg",
        # so the class name is the first dot-separated part of the filename.
        label = self.images[idx].split("\\")[-1].split(".")[0]
        return image, label

if __name__ == "__main__":
    mydataset = CatsAndDogs(r"K:\imageData\dogAndCat\train")
    dataloader = DataLoader(mydataset, batch_size=8, shuffle=True)
    num = 0

    # Print the labels and the batch shape for the first three batches.
    for imgs, labels in dataloader:
        print(labels)
        print(imgs.size())
        num += 1
        if num > 2:
            break

    # Display the last batch in a 3x3 grid.
    for i in range(8):
        ax = plt.subplot(3, 3, i + 1)
        ax.imshow(imgs[i])
        ax.set_title(labels[i])
        ax.axis("off")
    plt.show()
  • Output
('dog', 'dog', 'dog', 'dog', 'cat', 'dog', 'dog', 'cat')
torch.Size([8, 224, 224, 3])
('dog', 'cat', 'dog', 'dog', 'dog', 'cat', 'dog', 'dog')
torch.Size([8, 224, 224, 3])
('dog', 'cat', 'cat', 'dog', 'cat', 'dog', 'cat', 'dog')
torch.Size([8, 224, 224, 3])
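
Note that each batch arrives as a uint8 tensor in (N, H, W, C) layout, because __getitem__ returns raw numpy arrays. A typical CNN expects float input in (N, C, H, W), so a conversion along these lines (a sketch) would be needed before feeding a model:

# Convert a batch from uint8 (N, H, W, C) to float (N, C, H, W) in [0, 1].
imgs = imgs.permute(0, 3, 1, 2).float() / 255.0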

