Exploring how to construct a loss function and compute the error with it
Part 1 of the series: https://blog.csdn.net/qq_37385726/article/details/81740386
Part 2 of the series: https://blog.csdn.net/qq_37385726/article/details/81742247
Part 3 of the series: https://blog.csdn.net/qq_37385726/article/details/81744802
Part 4 of the series: https://blog.csdn.net/qq_37385726/article/details/81745510
Part 5 of the series: https://blog.csdn.net/qq_37385726/article/details/81748635
1. Pre-building the network
Network structure
2. Passing input into the network to get the output
3. Constructing the loss function and computing the error
The detours in between (skip if you like)

Construction
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 32/64 output channels, 5*5 square convolution kernel
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=1, padding=2)
        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, stride=1, padding=2)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(64 * 8 * 8, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # max pooling over a (2, 2) window
        x = self.conv1(x)
        x = F.max_pool2d(F.relu(x), (2, 2))  # 32*16*16
        # if the window is square you can pass a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 64*8*8
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
Net(
(conv1): Conv2d(1, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(fc1): Linear(in_features=4096, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
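Why does fc1 have in_features = 4096? With kernel_size=5 and padding=2 each convolution preserves the spatial size, and each (2, 2) max pooling halves it, so a 1*32*32 input ends up as 64*8*8 = 4096 flattened features. A quick sanity-check sketch, reusing the net and F imported above:

x = torch.rand(1, 1, 32, 32)
x = F.max_pool2d(F.relu(net.conv1(x)), 2)
print(x.shape)  # torch.Size([1, 32, 16, 16])
x = F.max_pool2d(F.relu(net.conv2(x)), 2)
print(x.shape)  # torch.Size([1, 64, 8, 8])  -> flattened: 64*8*8 = 4096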
Pass the input Variable into net as an argument, i.e. net(input).
The output is the return value of the net(input) call.
input = Variable(torch.rand(1, 1, 32, 32), requires_grad=True)  # use rand here: torch.Tensor(1,1,32,32) would be uninitialized memory
out = net(input)  # pass the input through the network; the return value is the output
print(out)
The output is:
tensor([[-0.1163,  0.0099,  0.0055, -0.0484,  0.1090, -0.0102, -0.1381,  0.0693,
         -0.0400, -0.0166]], grad_fn=<AddmmBackward>)
Loss functions are defined in nn. Here we use MSELoss(), i.e. mean squared error, as the loss function.

Simply & generally:

For regression problems, use nn.MSELoss() as the loss function.
For classification problems, use nn.CrossEntropyLoss as the loss function (a quick sketch follows).
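For the classification case, note that nn.CrossEntropyLoss expects raw, unnormalized scores (logits) of shape [N, C] and integer class indices of shape [N], not one-hot targets. A minimal sketch; the names logits and label are just illustrative:

import torch
import torch.nn as nn

ce_loss = nn.CrossEntropyLoss()
logits = torch.rand(1, 10)     # raw scores for 10 classes, shape [N, C]
label = torch.tensor([3])      # ground-truth class index, a LongTensor of shape [N]
print(ce_loss(logits, label))  # applies log-softmax + negative log-likelihood internally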
target = torch.arange(1, 11)  # use arange to generate a tensor of shape [10], then wrap it in a Variable
target = Variable(target, requires_grad=True)

This looks fine, but on my PyCharm it raised an error (whether it does may depend on your setup): the error says only a FloatTensor can have requires_grad=True. In other words, torch.arange(1, 11) produced an integer tensor here, even though other people's examples clearly produce float ones... (My setup: PyTorch on Windows without GPU, Python 3.6, Anaconda 4.0.)
In short, if it errors, you can't write it this way.
So I changed it to:

target = torch.Tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
target = Variable(target, requires_grad=True)

Well, one look at this "fix" of mine and I admit defeat, lol.
Later I found that arange does work, as long as its arguments are floats:

target = torch.arange(1.0, 11.0)
target = Variable(target, requires_grad=True)
In fact, target does not need requires_grad at all; but since the loss function expects a FloatTensor, this is still the way to write it.
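You can confirm the dtype difference directly (on PyTorch versions where arange infers its dtype from the arguments):

import torch

print(torch.arange(1, 11).dtype)      # torch.int64 -- integer tensors cannot require grad
print(torch.arange(1.0, 11.0).dtype)  # torch.float32 -- what MSELoss expects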
# model prediction
input = Variable(torch.rand(1, 1, 32, 32))  # the network defines in_channels=1, so the input has 1 channel and spatial shape (32, 32)
out = net(input)  # pass the input through the network; the return value is the output
print('out:\n', out)

# target output for this input
target = torch.arange(1.0, 11.0).reshape(1, 10)
print('\n\ntarget:\n', target)

# loss function: choose MSELoss, i.e. mean squared error
loss_func = nn.MSELoss()

# the error is what the loss function computes from out and target
loss = loss_func(out, target)
print('\n\nloss:\n', loss)
out:
 tensor([[-0.0423,  0.0614,  0.0607,  0.0778,  0.0255, -0.0705,  0.0049,  0.0573,
         -0.1050,  0.0537]], grad_fn=<AddmmBackward>)

target:
 tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])

loss:
 tensor(384.3105, grad_fn=<MseLossBackward>)
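nn.MSELoss() with its default reduction averages the element-wise squared differences, so you can verify what it computes by hand; a quick sketch reusing the out and target from above:

manual = ((out - target) ** 2).mean()
print(manual)  # equals loss_func(out, target) under the default reduction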