There are plenty of tutorials for this step, so I won't repeat them here; search Baidu or Google yourself.
Recommended reading: reference 【1】, "Installing and using PyTorch with CUDA 9 and cuDNN 7.0 on Windows 10" — its Anaconda3 virtual-environment setup is excellent. My virtual environment is configured as follows, using Python 3.6, located at E:\ProgramingTools\Anaconda\Anaconda3\envs\my_pytorch.
$conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
$conda config --set show_channel_urls yes
# -n my_pytorch refers to your virtual environment
$conda install -n my_pytorch numpy pyyaml mkl setuptools cmake cffi
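The two conda config commands above write their settings into your ~/.condarc file; after running them it should look roughly like this (your channel list may differ):

```
channels:
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
  - defaults
show_channel_urls: true
```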
Baidu Netdisk link for the PyTorch package: https://pan.baidu.com/s/1nvaamrn#list/path=%2F . I downloaded pytorch-0.2.1-py36he6bf560_0.2.1cu80.tar.bz2, i.e. the Python 3.6 / CUDA 8 build. After the download finishes, cd into the directory containing the file, then:
$conda install -n my_pytorch pytorch-0.2.1-py36he6bf560_0.2.1cu80.tar.bz2
After waiting a while, you should be able to import torch.
Also note the install-order dependency among numpy, scipy, and matplotlib: since numpy is already installed, only scipy and matplotlib still need to be added, like so:
$conda install -n my_pytorch scipy matplotlib
# Enter the virtual environment and pip install directly; you need a decent network connection (mine completed fine)
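To confirm the conda install actually worked, a quick sanity check (run inside the my_pytorch environment) is to print the version and check CUDA availability — the version string should match whatever build you installed (0.2.1 in my case):

```python
import torch

print(torch.__version__)         # version of the installed build
print(torch.cuda.is_available()) # True only if the CUDA build found a GPU
```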
$activate my_pytorch
(my_pytorch)$pip install torchvision
Run the following code to verify CUDA and cuDNN:
# CUDA TEST
import torch
x = torch.Tensor([1.0])
xx = x.cuda()
print(xx)
# CUDNN TEST
from torch.backends import cudnn
print(cudnn.is_acceptable(xx))
Also, a LogisticRegression test script:
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable
# Hyper Parameters
input_size = 784
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001
# MNIST Dataset (Images and Labels)
train_dataset = dsets.MNIST(root='./data',
                            train=True,
                            transform=transforms.ToTensor(),
                            download=True)
test_dataset = dsets.MNIST(root='./data',
                           train=False,
                           transform=transforms.ToTensor())
# Dataset Loader (Input Pipline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)
# Model
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        out = self.linear(x)
        return out
model = LogisticRegression(input_size, num_classes)
# Loss and Optimizer
# Softmax is internally computed.
# Set parameters to be updated.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# Training the Model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, 28 * 28))
        labels = Variable(labels)
        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        if (i + 1) % 100 == 0:
            print('Epoch: [%d/%d], Step: [%d/%d], Loss: %.4f'
                  % (epoch + 1, num_epochs, i + 1, len(train_dataset) // batch_size, loss.data[0]))
# Test the Model
correct = 0
total = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28 * 28))
    outputs = model(images)
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()
print('Accuracy of the model on the 10000 test images: %d %%' % (100 * correct / total))
# Save the Model
torch.save(model.state_dict(), 'model.pkl')
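Since the script only saves the state_dict (not the whole model object), reusing the weights later means rebuilding the same architecture and loading the state dict back in. A minimal sketch of that round trip — using a bare nn.Linear(784, 10), which has the same parameters as the LogisticRegression model above:

```python
import torch
import torch.nn as nn

# Stand-in for the trained model above (same layer shape).
model = nn.Linear(784, 10)
torch.save(model.state_dict(), 'model.pkl')        # what the script does

# Later / in another script: rebuild the architecture, then load weights.
restored = nn.Linear(784, 10)
restored.load_state_dict(torch.load('model.pkl'))  # weights now match
```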
【1】Installing and using PyTorch with CUDA 9 and cuDNN 7.0 on Windows 10
【2】Super-simple installation of Anaconda on Windows, with environment and virtual-environment configuration