A detailed look at the parameters of PyTorch's nn.LSTM module

Checking the official documentation directly is usually the best option, though the site is sometimes inaccessible.

Official docs: https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM

Usage examples below, with each parameter explained inline.

Unidirectional LSTM

import torch
import torch.nn as nn

batch, seq, num_of_feature = 100, 10, 25
data = torch.randn(size=(batch, seq, num_of_feature))
lstm = nn.LSTM(
    input_size=num_of_feature,  # dimensionality of each input feature vector (scalar)
    hidden_size=20,             # dimensionality of the hidden state h (scalar)
    num_layers=3,               # number of stacked LSTM layers, default 1 (scalar)
    bias=True,                  # whether to use bias weights, default True
    batch_first=True,           # if True, input is (batch, seq, input_size); default False, i.e. (seq_len, batch, input_size)
    bidirectional=False,        # whether the LSTM is bidirectional, default False
)

output, (h, c) = lstm(data)
print(output.size())  # [batch, seq, hidden_size]
print(h.size())       # [num_layers, batch, hidden_size], hidden state h at the last time step
print(c.size())       # [num_layers, batch, hidden_size], cell state c at the last time step
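Two points worth verifying in code: you can also pass explicit initial states `(h0, c0)` to the forward call, and for a unidirectional LSTM the last time step of `output` is exactly the top layer's slice of `h`. A small sketch (same shapes as above):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

batch, seq, num_of_feature = 100, 10, 25
data = torch.randn(batch, seq, num_of_feature)
lstm = nn.LSTM(input_size=num_of_feature, hidden_size=20,
               num_layers=3, batch_first=True)

# Optional explicit initial states; their shape is always
# (num_layers, batch, hidden_size), regardless of batch_first.
h0 = torch.zeros(3, batch, 20)
c0 = torch.zeros(3, batch, 20)
output, (h, c) = lstm(data, (h0, c0))

# `output` holds the top layer's hidden state at every time step,
# so its last time step equals the last layer's entry in `h`.
assert torch.allclose(output[:, -1, :], h[-1])
```

If you omit the `(h0, c0)` argument, both states default to zeros, so the result is identical here.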

Bidirectional LSTM

import torch
import torch.nn as nn

batch, seq, num_of_feature = 100, 10, 25
data = torch.randn(size=(batch, seq, num_of_feature))
lstm = nn.LSTM(
    input_size=num_of_feature,  # dimensionality of each input feature vector (scalar)
    hidden_size=20,             # dimensionality of the hidden state h (scalar)
    num_layers=3,               # number of stacked LSTM layers, default 1 (scalar)
    bias=True,                  # whether to use bias weights, default True
    batch_first=True,           # if True, input is (batch, seq, input_size); default False, i.e. (seq_len, batch, input_size)
    bidirectional=True,         # whether the LSTM is bidirectional, default False
)

output, (h, c) = lstm(data)
print(output.size())  # [batch, seq, hidden_size*2]
print(h.size())       # [num_layers*2, batch, hidden_size], hidden state h at the last time step
print(c.size())       # [num_layers*2, batch, hidden_size], cell state c at the last time step
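The `hidden_size*2` and `num_layers*2` factors come from the two directions. In `output`, forward and backward hidden states are concatenated along the last dimension; in `h`, the directions are interleaved per layer, with the forward direction first. A sketch checking how the two tensors line up:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

batch, seq, feat, hid = 100, 10, 25, 20
data = torch.randn(batch, seq, feat)
lstm = nn.LSTM(input_size=feat, hidden_size=hid, num_layers=3,
               batch_first=True, bidirectional=True)
output, (h, c) = lstm(data)

# output[..., :hid] is the forward direction, output[..., hid:] the backward one.
# h[-2] is the forward state of the top layer, h[-1] the backward state.
assert torch.allclose(output[:, -1, :hid], h[-2])  # forward: last time step
assert torch.allclose(output[:, 0, hid:], h[-1])   # backward: its "last" step is t=0
```

Note that the backward direction's final state corresponds to time step 0 of the sequence, which is easy to get wrong when extracting a sentence representation from a bidirectional LSTM.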

Reference: https://blog.csdn.net/foneone/article/details/104002372
