(刘二大人) PyTorch Deep Learning Practice: Handling Multi-Dimensional Input Features

1. Code implementation (SGD with momentum, momentum=0.9)

import torch
import numpy as np
from torch.utils.tensorboard import SummaryWriter
import time

# TensorBoard writer for tracking the training loss
writer = SummaryWriter("../LEDR")

# Load the dataset (raw string so the backslashes in the Windows path are not treated as escapes)
xy = np.loadtxt(r'E:\learn_pytorch\LE\diabetes\diabetes.csv.gz', dtype=np.float32, delimiter=',')
x_data = torch.from_numpy(xy[:, :-1])   # all rows, every column except the last (the 8 features)
y_data = torch.from_numpy(xy[:, [-1]])  # keep the brackets around -1 so y is an (N, 1) matrix rather than a 1-D vector; both x and y must be matrices (see the shape check after the listing)

# Build the network model
class MultiLogistic(torch.nn.Module):
    def __init__(self):
        super(MultiLogistic, self).__init__()
        self.linear1 = torch.nn.Linear(8, 6)
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.activation = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.activation(self.linear1(x))
        x = self.activation(self.linear2(x))
        x = self.activation(self.linear3(x))  # even if the hidden activations are switched to ReLU, keep sigmoid here: ReLU clamps negative outputs to 0 and the gradient vanishes (a ReLU variant is sketched after the listing)
        return x

model = MultiLogistic()

# Loss and optimizer
criterion = torch.nn.BCELoss()
optimizer = torch.optim.SGD(params=model.parameters(), lr=0.1, momentum=0.9)

# Start timing
start = time.time()

# Train for 100 epochs (full batch: the whole dataset is fed through in every epoch)
for epoch in range(100):
    y_pred = model(x_data)  # call the module itself rather than forward() so hooks are honoured
    loss = criterion(y_pred, y_data)
    writer.add_scalar('Multi_loss4', loss.item(), epoch)
    print("\tEpoch:", epoch, loss.item())

    # zero the gradients, backpropagate, update the weights
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# End timing
end = time.time()
writer.close()
print('time:', end - start)
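
As the comment on the label slice points out, xy[:, [-1]] keeps the labels two-dimensional while xy[:, -1] flattens them. A standalone shape check (the 100-row array below is a made-up stand-in, not the real diabetes data):

import numpy as np

dummy = np.zeros((100, 9), dtype=np.float32)  # 8 feature columns + 1 label column
print(dummy[:, -1].shape)    # (100,)   -> 1-D vector
print(dummy[:, [-1]].shape)  # (100, 1) -> matrix, matching the (N, 1) predictions that BCELoss compares against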
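
The comment in forward() mentions swapping the hidden activations for ReLU while keeping sigmoid at the output. A sketch of what that variant could look like (my own illustration, not code from the lecture):

class MultiLogisticReLU(torch.nn.Module):
    def __init__(self):
        super(MultiLogisticReLU, self).__init__()
        self.linear1 = torch.nn.Linear(8, 6)
        self.linear2 = torch.nn.Linear(6, 4)
        self.linear3 = torch.nn.Linear(4, 1)
        self.relu = torch.nn.ReLU()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.relu(self.linear1(x))
        x = self.relu(self.linear2(x))
        # the output activation stays sigmoid: BCELoss needs values in (0, 1),
        # and a final ReLU would clamp every negative logit to exactly 0
        return self.sigmoid(self.linear3(x))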

1.1 Partial training output

    Epoch: 90 0.6448379158973694
    Epoch: 91 0.6448283195495605
    Epoch: 92 0.6448185443878174
    Epoch: 93 0.6448085904121399
    Epoch: 94 0.6447984576225281
    Epoch: 95 0.6447881460189819
    Epoch: 96 0.6447776556015015
    Epoch: 97 0.644767165184021
    Epoch: 98 0.6447566151618958
    Epoch: 99 0.6447460055351257
    time: 0.15403103828430176

1.2 Loss curve (TensorBoard)

[Figure 1: TensorBoard loss curve, SGD with momentum=0.9]

2. Without momentum
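
For this run the assumption is that everything stays the same as the listing in section 1 except the optimizer, which drops the momentum argument:

# plain SGD, same learning rate, no momentum buffer
optimizer = torch.optim.SGD(params=model.parameters(), lr=0.1)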

2.1 Partial training output

    Epoch: 90 0.6436880826950073
    Epoch: 91 0.6436854600906372
    Epoch: 92 0.6436827778816223
    Epoch: 93 0.643680214881897
    Epoch: 94 0.6436775326728821
    Epoch: 95 0.6436749696731567
    Epoch: 96 0.6436724066734314
    Epoch: 97 0.6436697840690613
    Epoch: 98 0.6436671614646912
    Epoch: 99 0.6436646580696106
    time: 0.13448810577392578

2.2 Loss curve (convergence is smoother)

[Figure 2: TensorBoard loss curve, plain SGD without momentum]

So when is it actually better to use momentum?
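
A rough answer: momentum helps when successive gradients point in a consistent direction, because the velocity buffer accumulates speed along that direction and averages out oscillations across it; on very noisy or nearly flat regions it can overshoot, which is consistent with the slightly bumpier curve in section 1.2. With the default dampening of 0, torch.optim.SGD keeps one velocity buffer per parameter and applies v <- momentum * v + g, then p <- p - lr * v. A tiny standalone sketch of that update rule (illustration only, scalars instead of tensors):

def sgd_momentum_step(param, grad, velocity, lr=0.1, momentum=0.9):
    # one SGD-with-momentum update: v <- momentum * v + g, p <- p - lr * v
    velocity = momentum * velocity + grad
    return param - lr * velocity, velocity

# with a constant gradient the velocity approaches 1 / (1 - momentum) = 10 times
# the raw gradient, so the effective step grows toward 10x the plain-SGD step
p, v = 1.0, 0.0
for step in range(5):
    p, v = sgd_momentum_step(p, 1.0, v)
    print(step, round(p, 4), round(v, 4))

In practice, running both configurations and comparing the TensorBoard curves, as done in sections 1 and 2, is a reasonable way to decide for a given problem.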
