Self-study notes: Learning PyTorch with Python 01 — Univariate Linear Regression

The code is adapted from teacher Liu Er's video tutorial *PyTorch Deep Learning Practice* on Bilibili.

import numpy as np
import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]


def forward(x):
    # Linear model without bias: y_hat = x * w (w is the global being swept)
    return x * w


def loss_func(x, y):
    # Squared error for a single sample
    y_pred = forward(x)
    return (y_pred - y) * (y_pred - y)


w_list = []
mse_list = []

# Sweep w over [0, 4] and record the mean squared error at each value
for w in np.arange(0.0, 4.1, 0.1):
    print('w=', w)
    loss_sum = 0  # avoid shadowing the built-in sum()
    for x_val, y_val in zip(x_data, y_data):
        y_pred_val = forward(x_val)
        loss_val = loss_func(x_val, y_val)
        loss_sum += loss_val
        print('\t', x_val, y_val, y_pred_val, loss_val)
    print('MSE=', loss_sum / len(x_data))
    w_list.append(w)
    mse_list.append(loss_sum / len(x_data))

plt.plot(w_list, mse_list)
plt.ylabel('loss')
plt.xlabel('w')
plt.show()
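The plot makes the minimum visible by eye; the same sweep can also report the best w numerically. A minimal sketch (recomputing the MSE values rather than reusing the lists above, so it stands alone):

```python
import numpy as np

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

# Sweep w over [0, 4] and record the mean squared error at each value
w_values = np.arange(0.0, 4.1, 0.1)
mse_values = [
    np.mean([(w * x - y) ** 2 for x, y in zip(x_data, y_data)])
    for w in w_values
]

# The minimum of the loss curve recovers the true slope w = 2
best_w = w_values[np.argmin(mse_values)]
print('best w =', best_w)
```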

[Figure 1: plot of MSE loss as a function of w]
A model implemented with PyTorch generalizes much more readily.

import torch

x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])


# Define the model
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # One input feature, one output feature: y = w * x + b
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred


model = LinearModel()

# Loss and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop
for epoch in range(1000):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    optimizer.zero_grad()  # clear accumulated gradients
    loss.backward()        # backpropagate
    optimizer.step()       # update parameters

# Print the learned weight and bias
print('w=', model.linear.weight.item())
print('b=', model.linear.bias.item())

# Test the model on an unseen input
x_test = torch.tensor([[4.0]])
y_test = model(x_test)
print('y_pred=', y_test.item())

Result:
[Figure 2: training log showing the learned w and b and the prediction for x = 4]
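Since the three data points fit y = 2x exactly, SGD should drive w toward 2 and b toward 0. As a cross-check on the training result, the same fit can be obtained in closed form with ordinary least squares (a sketch using NumPy rather than PyTorch):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

# Design matrix with a bias column: each row is [x_i, 1]
A = np.column_stack([x, np.ones_like(x)])

# Solve the least-squares problem A @ [w, b] ≈ y
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print('w =', w, 'b =', b)  # w ≈ 2.0, b ≈ 0.0
```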

A brief look at the Module class
A brief look at the Linear class
The __init__, __call__, and forward methods
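The reason `model(x_data)` works without calling `forward` explicitly is that `nn.Module` defines `__call__`, which dispatches to the subclass's `forward` (along with hook handling). A minimal pure-Python sketch of that dispatch, without any of the hook machinery:

```python
class Module:
    # Simplified stand-in for torch.nn.Module: calling the instance
    # dispatches to its forward() method
    def __call__(self, *args, **kwargs):
        return self.forward(*args, **kwargs)


class Doubler(Module):
    def forward(self, x):
        return 2 * x


m = Doubler()
print(m(3.0))  # → 6.0, same result as m.forward(3.0)
```

This is why subclasses of `nn.Module` only need to override `forward`; the call syntax comes for free from the base class.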
