PyTorch Learning Notes (4) -- The Logistic Model

Preface

In the previous post we built a linear regression model; this time we look at a simple classification model: logistic regression. It is essentially a linear model with a sigmoid function applied on top. The theory PDF and the data used in this experiment are on the cloud drive linked below. You are welcome to learn PyTorch and deep learning along with me; feel free to leave comments and discuss~
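Concretely, given an input vector $x$, weights $w$, and bias $b$, the model outputs a probability

$$p = \sigma(w^\top x + b), \qquad \sigma(z) = \frac{1}{1 + e^{-z}},$$

and predicts class 1 when $p \ge 0.5$. Training finds the $w$ and $b$ that make these probabilities match the 0/1 labels.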

Theory and Data

https://pan.baidu.com/s/1FoV6V7n0ljBHLB5jF-FBeQ
Data sample (two feature values per line, followed by a 0/1 class label):
34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
60.18259938620976,86.30855209546826,1

Full Code

import torch
import numpy as np

o_data = []
#read the raw data
with open('./data.txt','r') as f:
    for i in f.readlines():
        o_data.append(i.strip('\n').split(','))
data = np.array(o_data,dtype='float32')

#min-max normalization
x_max = data.max(0)
x_min = data.min(0)
data = (data - x_min)/(x_max-x_min)
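#note: this also rescales the label column, which is harmless here since
#the labels are already 0/1 (min 0, max 1 leaves them unchanged);
#keep x_min/x_max around to scale new samples the same way at inference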

#dataset: inputs and targets
x_train = torch.tensor(data[:,:2]) #(100,2)
y_train = torch.tensor(data[:,-1]).unsqueeze(1) #(100,1)

#define the logistic model: sigmoid(x*w + b)
w = torch.randn((2,1),requires_grad=True)
b = torch.zeros(1,requires_grad=True)
def logistic_model(x):
    return torch.sigmoid(torch.mm(x,w)+b)
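#shapes: x (100,2) mm w (2,1) -> (100,1) probabilities in (0,1)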

#loss function: binary cross-entropy
loss_func = torch.nn.BCELoss()
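#for reference, BCE computes -mean(y*log(y_) + (1-y)*log(1-y_));
#BCELoss expects probabilities in (0,1), which the sigmoid guarantees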

#parameter-update algorithm (SGD, Adam, etc.)
optimizer = torch.optim.Adam([w,b],lr=1e-1)
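#e.g. torch.optim.SGD([w,b], lr=1e-1) would also work, typically converging more slowly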

for i in range(100):
    #forward pass: predicted probabilities
    y_ = logistic_model(x_train) #(100,1)
    #compute the loss
    loss = loss_func(y_, y_train)
    #clear the old gradients
    optimizer.zero_grad()
    #backward pass: compute the gradients
    loss.backward()
    #update the parameters w and b
    optimizer.step()

    #accuracy: treat y_ >= 0.5 as class 1 and y_ < 0.5 as class 0
    pre_y = y_.ge(0.5).float()
    acc = 1 - (pre_y - y_train).abs().sum().item() / y_train.size(0)

    print('epoch: {}, loss: {}, acc: {}'.format(i, loss.detach().numpy(), acc))
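Once training finishes, the learned w and b can score new samples. Below is a minimal sketch with made-up raw feature values; the key point is to reuse the training x_min/x_max when scaling the new input:

#classify a new raw sample (hypothetical values) with the trained parameters
x_new = np.array([[50.0, 75.0]], dtype='float32') #made-up raw scores
x_new = (x_new - x_min[:2]) / (x_max[:2] - x_min[:2]) #same scaling as training
x_new = torch.tensor(x_new)
with torch.no_grad(): #no gradients needed at inference
    prob = logistic_model(x_new)
print('class 1 probability: {:.3f}'.format(prob.item()))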

Results

epoch: 0, loss: 0.6000279784202576, acc: 0.6
epoch: 1, loss: 0.5880877375602722, acc: 0.6
epoch: 2, loss: 0.5764607191085815, acc: 0.6
epoch: 3, loss: 0.5651463866233826, acc: 0.61
epoch: 4, loss: 0.5541413426399231, acc: 0.65
......
epoch: 95, loss: 0.24912481009960175, acc: 0.89
epoch: 96, loss: 0.24845609068870544, acc: 0.89
epoch: 97, loss: 0.24780069291591644, acc: 0.89
epoch: 98, loss: 0.2471579611301422, acc: 0.89
epoch: 99, loss: 0.2465277910232544, acc: 0.89
The final accuracy settles at 89%.
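As a side note, the same classifier can also be written with PyTorch's built-in layers, which is the more idiomatic style. This is a sketch of that alternative, not the code used above: torch.nn.Linear replaces the hand-rolled w and b, and BCEWithLogitsLoss folds the sigmoid into the loss for numerical stability. It reuses x_train and y_train from the listing above.

import torch

model = torch.nn.Linear(2, 1) #learns w and b internally
loss_func = torch.nn.BCEWithLogitsLoss() #sigmoid fused into the loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-1)

for i in range(100):
    logits = model(x_train) #raw scores; no explicit sigmoid here
    loss = loss_func(logits, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Since the sigmoid now lives inside the loss, probabilities at inference come from torch.sigmoid(model(x)).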
