Using a Keras dataset in PyTorch

Project background:

I recently moved from Keras to PyTorch and am still learning. I had already downloaded the MNIST dataset for Keras (it took a whole night, painfully slow). Now PyTorch needs MNIST as well, so I wanted to reuse the copy I already have instead of downloading it again. The same approach also works for similar data-preprocessing scenarios.

Solution:

First, the required imports:

import torch
from torch.utils.data import TensorDataset, DataLoader
from keras.datasets import mnist

Load the Keras data and inspect its shape and data type:

(Xtrain, ytrain), (Xtest, ytest) = mnist.load_data()
print('Xtrain,ytrain shape and type',Xtrain.shape,ytrain.shape,type(Xtrain),type(ytrain))
print('Xtest,ytest shape and type',Xtest.shape,ytest.shape,type(Xtest),type(ytest))

The output is:

Using TensorFlow backend.
Xtrain,ytrain shape and type (60000, 28, 28) (60000,) <class 'numpy.ndarray'> <class 'numpy.ndarray'>
Xtest,ytest shape and type (10000, 28, 28) (10000,) <class 'numpy.ndarray'> <class 'numpy.ndarray'>

From the documentation, PyTorch expects the images as a float Tensor of shape (60000, 1, 28, 28), and the labels as a Tensor of shape (60000,) with type LongTensor. So the conversion looks like this:

# Convert images to a float Tensor and add the channel dimension: (60000, 28, 28) -> (60000, 1, 28, 28)
X_train = torch.from_numpy(Xtrain).float().view(60000, 1, 28, 28)

# Labels must be LongTensor (e.g. for CrossEntropyLoss)
y_train = torch.from_numpy(ytrain).view(60000)
y_train = y_train.type(torch.LongTensor)

# Same conversion for the test set
X_test = torch.from_numpy(Xtest).float().view(10000, 1, 28, 28)

y_test = torch.from_numpy(ytest).view(10000)
y_test = y_test.type(torch.LongTensor)
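One extra step that is not part of the original post, but commonly done: after .float() the pixel values are still in the 0–255 range, so it is usual to scale them to [0, 1] before training. A minimal sketch:

# optional: scale pixel values from 0–255 to 0–1
X_train = X_train / 255.0
X_test = X_test / 255.0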

Then pack the tensors into a PyTorch dataset and manage the data with a DataLoader:

BATCH_SIZE = 64  # define the batch size first; 64 is just an example value

Train_ds = TensorDataset(X_train, y_train)
Train_dl = DataLoader(Train_ds, batch_size=BATCH_SIZE, shuffle=True)

Test_ds = TensorDataset(X_test, y_test)
Test_dl = DataLoader(Test_ds, batch_size=BATCH_SIZE, shuffle=True)  # shuffling the test set is optional
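For reference, a minimal sketch of how these loaders would be consumed in a training loop; the names model, loss_fn and optimizer are placeholders I'm assuming, not something defined in this post.

# minimal training-loop sketch; model, loss_fn, optimizer are assumed to exist
for xb, yb in Train_dl:
    # xb: (BATCH_SIZE, 1, 28, 28) float tensor, yb: (BATCH_SIZE,) long tensor
    pred = model(xb)
    loss = loss_fn(pred, yb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()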

The steps above are written out one by one and are rather verbose. If anyone knows a better way, please let me know!
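One possible, more compact write-up of the same conversion (just a sketch under the same shape assumptions; unsqueeze(1) replaces the hard-coded view calls):

def to_dataset(X, y):
    # add the channel dimension with unsqueeze instead of a hard-coded view
    X = torch.from_numpy(X).float().unsqueeze(1)
    y = torch.from_numpy(y).long()
    return TensorDataset(X, y)

Train_dl = DataLoader(to_dataset(Xtrain, ytrain), batch_size=BATCH_SIZE, shuffle=True)
Test_dl = DataLoader(to_dataset(Xtest, ytest), batch_size=BATCH_SIZE, shuffle=True)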
