B站刘二大人 - Multivariate Logistic Regression (Lecture 7)


Articles in this series:

《PyTorch深度学习实践》 complete playlist - B站刘二大人

Details to watch for in PyTorch code, and places that are easy to mistype

B站刘二大人 - Linear Regression and Gradient Descent (Lecture 3)

B站刘二大人 - Backpropagation (Lecture 4)

B站刘二大人 - Linear Regression in PyTorch (Lecture 5)

B站刘二大人 - Multivariate Logistic Regression (Lecture 7)

B站刘二大人 - Datasets and Data Loading (Lecture 8)

B站刘二大人 - Softmax Classifier and MNIST Implementation (Lecture 9)



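The complete example from the lecture: a three-layer network (8 → 6 → 4 → 1) with a sigmoid after every layer, trained on the diabetes dataset with binary cross-entropy loss; the loss curve is plotted at the end.
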
import torch
import matplotlib.pyplot as plt
import numpy as np
class LogisticRegressionModel(torch.nn.Module):
    def __init__(self):
        super(LogisticRegressionModel, self).__init__()
        # Three linear layers narrow the features: 8 in -> 6 -> 4 -> 1 out
        self.lay1 = torch.nn.Linear(8, 6)
        self.lay2 = torch.nn.Linear(6, 4)
        self.lay3 = torch.nn.Linear(4, 1)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.lay1(x))
        x = self.sigmoid(self.lay2(x))
        # the final sigmoid squashes the output to a probability in (0, 1)
        x = self.sigmoid(self.lay3(x))
        return x

model = LogisticRegressionModel()
# BCELoss expects probabilities, which the final sigmoid supplies
criterion = torch.nn.BCELoss(reduction='mean')
optimizer = torch.optim.SGD(model.parameters(), lr=0.005)

# Load the data: the first 8 columns are features, the last column is the label
xy = np.loadtxt('./datasets/diabetes.csv.gz', delimiter=',', dtype=np.float32)
x_data = torch.from_numpy(xy[:, :-1])
y_data = torch.from_numpy(xy[:, [-1]])  # [-1] keeps the label as an (N, 1) column
epoch_list = []
loss_list = []
for epoch in range(1000):
    # Full-batch training: mini-batching is not used here (see the DataLoader sketch below)
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    loss_list.append(loss.item())
    epoch_list.append(epoch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
plt.plot(epoch_list, loss_list)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.show()
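
The loop above only tracks the loss. Since the final sigmoid outputs a probability, one quick sanity check, not part of the lecture code, is to threshold the predictions at the conventional 0.5 and measure accuracy on the training set:

with torch.no_grad():                  # no gradients needed for evaluation
    y_prob = model(x_data)             # predicted probabilities in (0, 1)
    y_label = (y_prob >= 0.5).float()  # threshold -> hard 0/1 labels
    accuracy = (y_label == y_data).float().mean().item()
print('training accuracy:', accuracy)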
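
As the comment in the training loop notes, every epoch pushes the entire dataset through the model in one step. For reference, here is a minimal sketch of the mini-batch alternative built on Dataset and DataLoader (the topic of Lecture 8); batch_size=32 and 100 epochs are arbitrary choices, and model, criterion, and optimizer are the objects defined above:

from torch.utils.data import Dataset, DataLoader

class DiabetesDataset(Dataset):
    # Wraps the CSV so DataLoader can index individual samples
    def __init__(self, filepath):
        xy = np.loadtxt(filepath, delimiter=',', dtype=np.float32)
        self.x_data = torch.from_numpy(xy[:, :-1])
        self.y_data = torch.from_numpy(xy[:, [-1]])

    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]

    def __len__(self):
        return self.x_data.shape[0]

dataset = DiabetesDataset('./datasets/diabetes.csv.gz')
train_loader = DataLoader(dataset=dataset, batch_size=32, shuffle=True)

for epoch in range(100):
    for x_batch, y_batch in train_loader:  # one optimizer step per mini-batch
        y_pred = model(x_batch)
        loss = criterion(y_pred, y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()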
