Fitting a Linear Function with PyTorch

  • 1. Custom network
  • 2. Using a convolutional network

  Fit the function y = a × x + b, where a = 1 and b = 2.

1. Custom network

import torch
import numpy as np


class Net:
    """A hand-rolled 'network': just two scalar parameters a and b."""
    def __init__(self):
        # Both parameters require gradients so the optimizer can update them.
        self.a = torch.rand(1, requires_grad=True)
        self.b = torch.rand(1, requires_grad=True)
        self.__parameters = dict(a=self.a, b=self.b)

    def forward(self, inputs):
        return self.a * inputs + self.b

    def parameters(self):
        # Mimic nn.Module.parameters() so torch.optim can consume this object.
        for name, param in self.__parameters.items():
            yield param


if __name__ == '__main__':
    # 50 sample points on the line y = x + 2 (i.e. a = 1, b = 2)
    x = np.linspace(1, 50, 50)
    y = x + 2
    x = torch.from_numpy(x.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))

    net = Net()
    optimizer = torch.optim.Adam(net.parameters(), lr=0.001, weight_decay=0.0005)
    loss_op = torch.nn.MSELoss(reduction='sum')

    for i in range(1, 20001):
        out = net.forward(x)      # no __call__ hook, so call forward() explicitly
        loss = loss_op(out, y)    # (input, target) order; MSE is symmetric anyway
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # report progress
        loss_numpy = loss.detach().numpy()
        if i % 1000 == 0:
            print(i, loss_numpy)
        if loss_numpy < 1e-5:
            a = net.a.detach().numpy()
            b = net.b.detach().numpy()
            print(a, b)
            exit()

  This approach defines the network without inheriting from torch.nn.Module; everything is written by hand, so Net's forward method must be called explicitly (there is no __call__ dispatch). The loss function is the L2 loss (squared error, summed over all samples).
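
  For comparison, here is a minimal sketch (my addition, not from the original post) of the same model written the idiomatic way: subclass torch.nn.Module and register a and b as torch.nn.Parameter, so that net(x) and net.parameters() work without any hand-written plumbing. The class name LinearNet is made up for illustration.

import torch

class LinearNet(torch.nn.Module):
    """Hypothetical nn.Module equivalent of the hand-rolled Net above."""
    def __init__(self):
        super().__init__()
        # nn.Parameter registers the tensors so parameters() yields them
        self.a = torch.nn.Parameter(torch.rand(1))
        self.b = torch.nn.Parameter(torch.rand(1))

    def forward(self, inputs):
        return self.a * inputs + self.b

net = LinearNet()
out = net(torch.tensor([1.0, 2.0, 3.0]))  # __call__ dispatches to forward()
print([name for name, _ in net.named_parameters()])  # ['a', 'b']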

2. Using a convolutional network

import torch
import numpy as np


class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # A 1x1 convolution with one input and one output channel computes
        # y = w * x + b, which is exactly the linear function we want to fit.
        layers = [torch.nn.Conv2d(1, 1, kernel_size=1, stride=1, bias=True)]
        self.net = torch.nn.ModuleList(layers)

    def forward(self, x):
        return self.net[0](x)


if __name__ == '__main__':
    x = np.linspace(1, 50, 50)
    y = x + 2  # a = 1, b = 2
    x = torch.from_numpy(x.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))

    net = Net()
    optimizer = torch.optim.Adam(net.parameters(), lr=0.001, weight_decay=0.0005)
    # reduce=True/size_average=True are deprecated; reduction='mean' is equivalent
    loss_op = torch.nn.L1Loss(reduction='mean')

    for i in range(20000):
        # one sample per step, shaped (N, C, H, W) = (1, 1, 1, 1)
        x_batch = x[i % 50].reshape(1, 1, 1, 1)
        y_batch = y[i % 50].reshape(1, 1, 1, 1)

        out = net(x_batch)
        loss = loss_op(out, y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # report progress
        loss_numpy = loss.detach().numpy()
        if i % 1000 == 0:
            print('--iteration:', i, 'loss:', loss_numpy)
        if loss_numpy < 1e-10:
            break

    for k, v in net.named_parameters():
        print(k, v.detach().numpy())

  This approach fits the function with a convolutional network. Fitting with an L2 loss works poorly here, so L1 is used instead. The reason is that L2 measures the squared error: once the error drops below 1, the loss value is orders of magnitude smaller than the actual error (an error of 10⁻³ gives an L2 loss of 10⁻⁶), so the training signal near the optimum is very weak. In this example batch_size = 1, and after enough iterations each per-sample error is bound to fall below 1, so with an L1 loss training reaches the stopping threshold more reliably.
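
  A quick numeric check (my addition, not from the original post) makes the magnitude argument concrete: once the residual is well below 1, the MSE value is the square of the L1 value, hence orders of magnitude smaller.

import torch

pred = torch.tensor([2.001])
target = torch.tensor([2.0])  # residual is about 1e-3

l1 = torch.nn.L1Loss()(pred, target)
l2 = torch.nn.MSELoss()(pred, target)
print(l1.item())  # ~1e-3
print(l2.item())  # ~1e-6, three orders of magnitude smaller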
