[PyTorch] Implementing logistic regression with torch.nn

Contents

  • Overview
  • Experiment requirements
  • Code
  • Experiment results

Overview

  1. Create a data loader data_iter with PyTorch's torch.utils.data module
  2. Build the network by subclassing nn.Module
  3. Initialize the model parameters
  4. Use torch.nn.BCELoss() as the loss function and build the optimizer with torch.optim.SGD
  5. Train the model and compute the accuracy on the training set

Experiment requirements

  • Implement logistic regression with torch.nn, train and test it on a synthetically constructed dataset, and analyze the results from multiple angles such as the loss and the accuracy on the training set.
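For reference, the per-sample loss that torch.nn.BCELoss computes in the code below is the binary cross-entropy loss(ŷ, y) = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)], averaged over the mini-batch by default; a falling loss together with a rising training-set accuracy are the two signals to watch when analyzing the results.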

Code

import torch
from torch import tensor
import numpy as np
import torch.utils.data as Data
from torch.nn import init
import torch.optim as optim

# 1. Generate the training set: h_k(x) = 1/(1 + e^(-k·x)), where k is the parameter vector, set here to [1.3, -1.0]
num_inputs = 2  # number of features
num_examples = 1000  # number of training examples
true_k = [1.3, -1.0]
features = tensor(np.random.normal(0, 1, (num_examples, num_inputs)), dtype=torch.float)
labels = 1 / (1 + torch.exp(-1 * (true_k[0] * features[:, 0] + true_k[1] * features[:, 1])))
# add noise
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)
num0 = 0
num1 = 0
for i in range(num_examples):
    if labels[i] < 0.5:
        labels[i] = 0
        num0 += 1
    else:
        labels[i] = 1
        num1 += 1
labels = labels.view(num_examples, 1)
print(num0, num1)
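# Thresholding at 0.5 above turns the (noisy) sigmoid outputs into hard 0/1
# class labels, so the data is linearly separable up to the injected noise;
# num0 and num1 report the resulting class balance.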

# 2. Read the data using PyTorch's torch.utils.data module.
dataset = Data.TensorDataset(features, labels)

lr = 0.01
num_epochs = 20
batch_size = 10
# wrap the dataset in a DataLoader
data_iter = Data.DataLoader(
    dataset=dataset,  # torch TensorDataset format
    batch_size=batch_size,
    shuffle=True,  # shuffle the data
    num_workers=0,  # number of worker processes for loading; must be 0 on Windows
)
test_iter = Data.DataLoader(
    dataset=dataset,
    batch_size=batch_size,
    shuffle=True,
    num_workers=0,
)
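# Note: test_iter iterates over the same dataset as data_iter (there is no
# held-out test set), so the accuracy computed below is training accuracy;
# shuffle=False would also suffice here.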


# Build the model; the most common approach is to subclass nn.Module and define your own network
class LogisticNet(torch.nn.Module):
    def __init__(self, n_feature):
        super(LogisticNet, self).__init__()
        self.linear = torch.nn.Linear(n_feature, 1)

    # forward pass
    def forward(self, x):
        y_hat = 1 / (1 + torch.exp(-self.linear(x)))  # sigmoid; note the minus sign
        return y_hat
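# An equivalent and numerically stabler way to write the forward pass is
# torch.sigmoid(self.linear(x)); the explicit formula above mirrors h_k(x).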


net = LogisticNet(num_inputs)

# Initialize the model parameters
init.normal_(net.linear.weight, mean=0, std=1.0)
init.constant_(net.linear.bias, val=0)  # or modify the data directly: net.linear.bias.data.fill_(0)

# Loss function and optimizer
loss = torch.nn.BCELoss()
optimizer = optim.SGD(net.parameters(), lr=lr)
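# BCELoss expects probabilities in [0, 1], which the sigmoid in forward()
# guarantees; with raw logits one would use BCEWithLogitsLoss instead
# (see the sketch after the code).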

# Different learning rates can be set for different sub-networks
# (subnet1/subnet2 below are placeholders; this particular net has no such attributes):
# optimizer = optim.SGD([
#     # parameter groups without an explicit lr fall back to the outermost lr
#     {'params': net.subnet1.parameters()},  # lr=0.03
#     {'params': net.subnet2.parameters(), 'lr': 0.01}
# ], lr=0.03)


# Model training
for epoch in range(num_epochs):
    for x, y in data_iter:
        y_hat = net(x)
        l = loss(y_hat, y)
        optimizer.zero_grad()  # clear gradients, equivalent to net.zero_grad()
        l.backward()
        optimizer.step()  # update all parameters
    print('epoch %d, loss %f' % (epoch + 1, l.item()))  # loss of the last mini-batch
    # accuracy on the training set
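    # For a proper evaluation this loop would normally run under
    # `with torch.no_grad():` to skip gradient tracking; omitted here for simplicity.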
    allTrain = 0
    rightTrain = 0
    for train_x, train_y in test_iter:
        allTrain += len(train_y)
        train_out = net(train_x)
        mask = train_out.ge(0.5).float()  # predicted class: 1 if p >= 0.5 else 0
        rightTrain += (mask == train_y).float().sum().item()
    print('train accuracy: %f' % (rightTrain/allTrain))
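As a side note, the pattern PyTorch recommends for binary classification is to output raw logits and use torch.nn.BCEWithLogitsLoss, which fuses the sigmoid into the loss for numerical stability. Below is a minimal sketch reusing num_inputs, data_iter, lr, and num_epochs from the code above; the names logit_net, logit_loss, and logit_opt are introduced here purely for illustration.

# Sketch: the same model trained with BCEWithLogitsLoss instead of sigmoid + BCELoss.
logit_net = torch.nn.Linear(num_inputs, 1)  # outputs raw logits, no sigmoid
logit_loss = torch.nn.BCEWithLogitsLoss()   # sigmoid + binary cross-entropy fused in one op
logit_opt = optim.SGD(logit_net.parameters(), lr=lr)
for epoch in range(num_epochs):
    for x, y in data_iter:
        l = logit_loss(logit_net(x), y)  # y must be float, which labels already are
        logit_opt.zero_grad()
        l.backward()
        logit_opt.step()
    print('epoch %d, loss %f' % (epoch + 1, l.item()))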

Experiment results

[Figure 1: experiment results]
