Least squares is the most basic linear regression method; this section implements it in PyTorch in three ways.

1. The first approach solves the least-squares problem directly with torch.lstsq:
import torch
x = torch.tensor([[1., 2., 1.], [2., 4., 1.], [3., 5., 1.], [4., 2., 1.], [4., 4., 1.]])
y = torch.tensor([-12., 13., 15., 14., 18.])
wr, _ = torch.lstsq(y.unsqueeze(1), x)  # returns (solution, QR); y must be a 2-D column
w = wr[:3]  # x has shape (5, 3), so the first 3 rows of wr hold the weights
print(w)
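Note that torch.lstsq has since been deprecated and removed in recent PyTorch releases; its replacement is torch.linalg.lstsq, which takes its arguments in the opposite order. A minimal sketch, assuming a PyTorch version that includes torch.linalg:

import torch
x = torch.tensor([[1., 2., 1.], [2., 4., 1.], [3., 5., 1.], [4., 2., 1.], [4., 4., 1.]])
y = torch.tensor([-12., 13., 15., 14., 18.])
# torch.linalg.lstsq(A, B): note the (A, B) order, reversed from torch.lstsq(B, A)
w = torch.linalg.lstsq(x, y.unsqueeze(1)).solution
print(w)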
2. The second approach solves the same problem iteratively, by minimizing the MSE loss with a gradient-based optimizer:

import torch
import torch.nn
import torch.optim
x = torch.tensor([[1., 2., 1.], [2., 4., 1.], [3., 5., 1.], [4., 2., 1.], [4., 4., 1.]])
y = torch.tensor([-12., 13., 15., 14., 18.])
w = torch.zeros(3, requires_grad=True)  # weights to be learned by gradient descent
L = torch.nn.MSELoss()
optimizer = torch.optim.Adam([w])
for step in range(30001):
    if step:  # on step 0, loss is not yet defined, so skip the update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    pred = torch.mv(x, w)  # predictions: matrix-vector product x @ w
    loss = L(pred, y)
    if step % 1000 == 0:
        print('step = {}, loss = {}, W = {}'.format(step, loss, w.tolist()))
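The `if step:` guard skips the parameter update on step 0, so that loss is computed once before the first backward() call. After enough Adam steps, w should approach the exact least-squares solution, which can be checked against the closed form from the normal equations; a small sketch:

import torch
x = torch.tensor([[1., 2., 1.], [2., 4., 1.], [3., 5., 1.], [4., 2., 1.], [4., 4., 1.]])
y = torch.tensor([-12., 13., 15., 14., 18.])
# normal equations: (x^T x) w = x^T y gives the exact least-squares weights
w_exact = torch.linalg.solve(x.T @ x, x.T @ y)
print(w_exact)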
3. The third approach replaces the hand-written model with a torch.nn.Linear layer and lets Adam optimize the layer's parameters directly:

import torch
import torch.nn
import torch.optim
x = torch.tensor([[1., 2., 1.], [2., 4., 1.], [3., 5., 1.], [4., 2., 1.], [4., 4., 1.]])
y = torch.tensor([-12., 13., 15., 14., 18.]).reshape(-1, 1)  # MSELoss expects matching shapes
fc = torch.nn.Linear(3, 1)  # fully connected layer: 3 inputs, 1 output
L = torch.nn.MSELoss()
optimizer = torch.optim.Adam(fc.parameters())
w, b = fc.parameters()  # keep handles to the weight and bias for printing
for step in range(30001):
    if step:
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    pred = fc(x)
    loss = L(pred, y)
    if step % 1000 == 0:
        print('step = {}, loss = {}, w = {}, b = {}'.format(step, loss, w.tolist(), b.item()))
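Once trained, fc can be applied to new inputs directly. A minimal usage sketch, reusing fc from the code above (the sample row is hypothetical, chosen only for illustration):

new_x = torch.tensor([[2., 3., 1.]])  # hypothetical input, same 3-column layout as x
with torch.no_grad():  # no gradients needed at inference time
    print(fc(new_x))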
Approaches 2 and 3 both use the MSE loss torch.nn.MSELoss(), which is the loss that corresponds to least squares. PyTorch also provides other regression losses, such as the L1 loss torch.nn.L1Loss and the Smooth L1 loss torch.nn.SmoothL1Loss.
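To compare how these losses behave on the same predictions, a small sketch (the pred/target values are made up for illustration):

import torch
pred = torch.tensor([1., 2., 3.])
target = torch.tensor([1.5, 2., 5.])
# all three losses average over elements by default (reduction='mean')
for loss_fn in (torch.nn.MSELoss(), torch.nn.L1Loss(), torch.nn.SmoothL1Loss()):
    print(type(loss_fn).__name__, loss_fn(pred, target).item())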