PyTorch neural networks


1. Gradient descent
2. Artificial neural networks

Input - processing - output

3. PyTorch
  1. Interprets neural networks better

PyTorch is a Python derivative of Torch. Torch is a neural network library written in Lua; Torch itself is very good to use, but Lua is not particularly popular, so the development team ported Torch from Lua to the more popular Python.

  1. numpy and torch (a conversion sketch follows the imports)
import torch
import numpy as np
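
numpy arrays and torch tensors convert into each other directly; a minimal sketch using the imports above (variable names are illustrative):

np_data = np.arange(6).reshape((2, 3))
torch_data = torch.from_numpy(np_data)   # tensor shares memory with np_data
back_to_np = torch_data.numpy()          # and back to a numpy array

print('numpy:\n', np_data)
print('torch:\n', torch_data)
print('back to numpy:\n', back_to_np)
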
  1. Activation functions

Linear vs. nonlinear: the activation introduces the nonlinearity.

y = AF(Wx)

AF is the activation function: relu (common in CNNs), sigmoid, tanh (common in RNNs), softplus. A sketch of all four follows.
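
A minimal sketch applying the four activations to the same input (these are the standard torch functions; the plotting code is omitted):

import torch
import torch.nn.functional as F

x = torch.linspace(-5, 5, 200)

y_relu = F.relu(x)            # common in CNNs
y_sigmoid = torch.sigmoid(x)
y_tanh = torch.tanh(x)        # common in RNNs
y_softplus = F.softplus(x)    # smooth version of relu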

  1. Building a neural network with activation functions (regression; a classification variant is sketched after the training loop below)
import torch
from torch.autograd import Variable   # Variable is a no-op wrapper in PyTorch >= 0.4
import torch.nn.functional as F
import matplotlib.pyplot as plt

# fake regression data: y = x^2 plus noise
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)   # shape (100, 1)
y = x.pow(2) + 0.2 * torch.rand(x.size())

x, y = Variable(x), Variable(y)

plt.scatter(x.data.numpy(), y.data.numpy())
plt.show()


class Net(torch.nn.Module):
    def __init__(self, n_features, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_features, n_hidden)   # hidden layer
        self.predict = torch.nn.Linear(n_hidden, n_output)    # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))   # activation on the hidden layer
        x = self.predict(x)          # linear output (regression)
        return x

net = Net(1, 10, 1)
print(net)

plt.ion()    # interactive mode: the figure is re-drawn inside the training loop
plt.show()


optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
loss_func = torch.nn.MSELoss()   # mean squared error for regression

for t in range(100):
    prediction = net(x)

    loss = loss_func(prediction, y)

    optimizer.zero_grad()   # zero the gradients from the previous step
    loss.backward()         # backpropagate to compute new gradients
    optimizer.step()        # apply the gradients
    if t % 5 == 0:
        plt.cla()
        plt.scatter(x.data.numpy(), y.data.numpy())
        plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=5)
        # loss.data[0] breaks on newer PyTorch; loss.item() gives the scalar value
        plt.text(0.5, 0, 'Loss=%.4f' % loss.item(), fontdict={'size': 20, 'color': 'red'})
        plt.pause(0.1)
plt.ioff()
plt.show()
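
The heading above also mentions classification. A hedged sketch reusing the same Net class on fake two-class data (the data recipe and the CrossEntropyLoss choice are standard tutorial practice, assumed here rather than taken from these notes):

# two clusters of 2-D points, labels 0 and 1
n_data = torch.ones(100, 2)
x0 = torch.normal(2 * n_data, 1)            # cluster around (2, 2)
y0 = torch.zeros(100, dtype=torch.long)
x1 = torch.normal(-2 * n_data, 1)           # cluster around (-2, -2)
y1 = torch.ones(100, dtype=torch.long)
xc = torch.cat((x0, x1), 0)
yc = torch.cat((y0, y1), 0)

net2 = Net(2, 10, 2)                         # 2 features in, 2 class scores out
optimizer = torch.optim.SGD(net2.parameters(), lr=0.02)
loss_func = torch.nn.CrossEntropyLoss()      # raw scores + integer labels

for t in range(100):
    out = net2(xc)
    loss = loss_func(out, yc)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()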
4. PyTorch batch training (DataLoader)
import torch
import torch.utils.data as Data

BATCH_SIZE = 5

x = torch.linspace(1, 10, 10)    # fake input data
y = torch.linspace(10, 1, 10)    # fake target data

torch_dataset = Data.TensorDataset(x, y)   # the dataset must hold both x and y
loader = Data.DataLoader(
    dataset=torch_dataset,
    batch_size=BATCH_SIZE,
    shuffle=True,        # reshuffle every epoch
    num_workers=2,       # on Windows, guard the loop with `if __name__ == '__main__':`
)

for epoch in range(3):
    for step, (batch_x, batch_y) in enumerate(loader):
        # training...
        print('Epoch:', epoch, '| Step:', step, '| batch x:',
              batch_x.numpy(), '| batch y:', batch_y.numpy())
5. Optimizers





Momentum: adds inertia to the parameter updates.

AdaGrad: adapts the learning rate.

RMSProp: combines the two ideas.

Adam: a further refinement of that combination (a construction sketch follows).
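
A minimal sketch constructing all four optimizers for the same kind of network (the hyperparameter values are illustrative assumptions):

net_m = Net(1, 10, 1)   # reusing the Net class from the regression example
LR = 0.01

opt_momentum = torch.optim.SGD(net_m.parameters(), lr=LR, momentum=0.8)
opt_adagrad = torch.optim.Adagrad(net_m.parameters(), lr=LR)
opt_rmsprop = torch.optim.RMSprop(net_m.parameters(), lr=LR, alpha=0.9)
opt_adam = torch.optim.Adam(net_m.parameters(), lr=LR, betas=(0.9, 0.99))

In practice one would train a separate copy of the network with each optimizer and compare the loss curves.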

6. Convolutional neural networks (CNN)

Input - convolution - pooling - fully connected - classifier

import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(      # input: (1, 28, 28)
            nn.Conv2d(
                in_channels=1,           # one input channel (grayscale image)
                out_channels=16,         # 16 filters
                kernel_size=5,
                stride=1,
                padding=2,               # keeps the 28x28 spatial size
            ),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2), # 2-D pooling (MaxPool1d was a typo) -> (16, 14, 14)
        )
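        # continuation (assumed): the original snippet stops after conv1;
        # the sizes below assume MNIST 1x28x28 input and 10 output classes,
        # the usual tutorial choice rather than something stated in these notes
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),             # -> (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)         # 10 digit classes

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)                    # flatten to (batch, 32*7*7)
        return self.out(x)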

NN (analyzes each input independently)

RNN (the error is obtained only when the signal reaches the end of the sequence and is then passed backwards; vanishing gradients: during backpropagation the error is repeatedly multiplied by a factor less than 1 and shrinks toward zero; exploding gradients: when that factor is greater than 1)

LSTM RNN (long short-term memory): adds input, output, and forget gates

7. RNN

(Regression: the network predicts a continuous value at each step along the time dimension; a minimal sketch follows.)
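
A minimal RNN regression sketch (the layer sizes and the sine input are illustrative assumptions, not from the original notes):

import torch
import torch.nn as nn

class RNNReg(nn.Module):
    def __init__(self):
        super(RNNReg, self).__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=32, num_layers=1,
                          batch_first=True)      # input: (batch, time, 1)
        self.out = nn.Linear(32, 1)

    def forward(self, x, h_state):
        r_out, h_state = self.rnn(x, h_state)    # r_out: (batch, time, 32)
        return self.out(r_out), h_state          # one prediction per time step

rnn = RNNReg()
x_seq = torch.sin(torch.linspace(0, 6.28, 10)).view(1, 10, 1)
pred, h = rnn(x_seq, None)                       # hidden state starts as None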

8. Autoencoder (unsupervised learning)

Compress, then decompress (encode - decode).

PCA (dimensionality reduction of feature attributes).

Autoencoder:

class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 128),   # 784 -> 128 (mirrors the decoder; 12 was a typo)
            nn.Tanh(),
            nn.Linear(128, 64),
            nn.Tanh(),
            nn.Linear(64, 12),
            nn.Tanh(),
            nn.Linear(12, 3),          # compress down to 3 features
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, 12),
            nn.Tanh(),
            nn.Linear(12, 64),
            nn.Tanh(),
            nn.Linear(64, 128),
            nn.Tanh(),
            nn.Linear(128, 28 * 28),
            nn.Sigmoid(),              # pixel values back in (0, 1)
        )

    def forward(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return encoded, decoded
LR = 0.005   # learning rate; not defined in the original notes, value assumed

autoencoder = AutoEncoder()
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=LR)
loss_func = nn.MSELoss()
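
A hedged sketch of the training loop that the stray `encoded, decoded = autoencoder(b_x)` line in the original belonged to (the MNIST `train_loader` and the flattening step are assumptions):

for epoch in range(10):
    for step, (b_img, _) in enumerate(train_loader):   # labels are unused
        b_x = b_img.view(-1, 28 * 28)                  # flatten to (batch, 784)
        encoded, decoded = autoencoder(b_x)
        loss = loss_func(decoded, b_x)                 # reconstruct the input itself
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()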
9.DQN

Q-learning (off-policy: learns from stored past experience)

Q-values are approximated by a neural network (the core target update is sketched below)
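
The update at the heart of DQN, as a hedged sketch (the two-network setup, batch shapes, and GAMMA value are illustrative assumptions):

import torch
import torch.nn.functional as F

GAMMA = 0.9   # discount factor (assumed)

# eval_net and target_net are two copies of the same Q-network (hypothetical);
# b_s, b_s_ are state batches, b_a is a (batch, 1) long tensor of actions,
# b_r is a (batch, 1) tensor of rewards
def dqn_loss(eval_net, target_net, b_s, b_a, b_r, b_s_):
    q_eval = eval_net(b_s).gather(1, b_a)              # Q(s, a) for actions taken
    q_next = target_net(b_s_).detach()                 # no gradient into the target net
    q_target = b_r + GAMMA * q_next.max(1)[0].view(-1, 1)
    return F.mse_loss(q_eval, q_target)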

10. GAN (generative adversarial networks)

Takes in random input and generates new data; a discriminator learns to tell real from fake (a minimal pair is sketched below).
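
A minimal generator/discriminator pair as a sketch (all sizes are illustrative; the adversarial training loop is omitted):

import torch
import torch.nn as nn

G = nn.Sequential(              # generator: noise -> fake sample
    nn.Linear(5, 128),
    nn.ReLU(),
    nn.Linear(128, 15),
)
D = nn.Sequential(              # discriminator: sample -> probability it is real
    nn.Linear(15, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)

noise = torch.randn(64, 5)
fake = G(noise)
prob_fake = D(fake)             # discriminator's belief that the fake is real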

11. Dynamic vs. static computation graphs
12. GPU acceleration with CUDA (device sketch below)
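
Moving a model and its data onto the GPU with the standard device idiom (Net and x are reused from the regression example above):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = Net(1, 10, 1).to(device)   # move the parameters to the GPU if available
x_gpu = x.to(device)             # data must live on the same device as the model
prediction = net(x_gpu)
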
13. Overfitting

The model is too confident about its training data (overconfident: it fits the noise rather than the underlying pattern).

Fixes: increase the amount of training data; regularization (for y = Wx, add a penalty on the weights W to the cost, such as |W| for L1 or W^2 for L2).

Dropout: randomly disable neurons during training so the network cannot over-rely on any of them (sketch below).
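
A sketch of a two-layer net with Dropout inserted (the 0.5 drop probability is the common default, assumed here):

import torch.nn as nn

net_dropped = nn.Sequential(
    nn.Linear(1, 100),
    nn.Dropout(0.5),        # randomly zero 50% of activations while training
    nn.ReLU(),
    nn.Linear(100, 1),
)

net_dropped.train()         # dropout active during training
# ... training ...
net_dropped.eval()          # dropout disabled for evaluation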

14. Standardization

Batch normalization.

Preprocess the data so its distribution stays well behaved for each layer.

x - fully connected layer - (batch normalization) - activation function: BN is inserted between the linear layer and the activation, as sketched below.
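
BN placed between the fully connected layer and the activation, as described above (sizes illustrative):

import torch.nn as nn

net_bn = nn.Sequential(
    nn.Linear(1, 100),
    nn.BatchNorm1d(100),    # normalize this layer's output over each batch
    nn.ReLU(),
    nn.Linear(100, 1),
)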
