Author's homepage (文火冰糖的硅基工坊): 文火冰糖(王文兵)的博客_文火冰糖的硅基工坊_CSDN博客
Article URL: https://blog.csdn.net/HiWangWenBing/article/details/121665139
Table of Contents
Chapter 1: Theoretical Foundations of LSTM Neural Networks
Chapter 2: Problem Description
2.1 Problem Description
2.2 Environment Setup
Chapter 3: Building the Training and Test Datasets
3.1 Downloading and Inspecting the Stock Data
3.2 Restructuring the Dataset into Sequences of a Given Length
3.3 Splitting into Training and Validation Sets
3.4 Building the DataLoader Iterators
Chapter 4: Building the LSTM Network
4.1 Defining the Network
4.2 Instantiating the Network
4.3 Loss and Optimizer
Chapter 5: Training the LSTM Network
5.1 Preparation Before Training
5.2 Running the Training Loop
5.3 Loss over the Iterations
Chapter 6: Evaluating the Trained Model
6.1 Prediction on the Validation Set
6.2 Denormalization
6.3 Plotting Predictions Against Actual Prices
[PyTorch series #53]: Recurrent neural networks - torch.nn.LSTM() parameter details, 文火冰糖(王文兵)的博客-CSDN博客: https://blog.csdn.net/HiWangWenBing/article/details/121644547
Tushare is a free, open-source Python package for accessing financial data.
%matplotlib inline
import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.autograd import Variable
from torch import optim
from torchvision import transforms
import numpy as np
import math, random
import matplotlib.pyplot as plt
import pandas as pd
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
#from icecream import ic
import tushare as ts  # Tushare is a free, open-source Python package for financial data
# Fetch the historical data of a given stock
stock_id = '000001'
stock_start_date = '2020-11-01'
stock_end_date = ''
# https://waditu.com/document/1?doc_id=131
# Open a connection to the Tushare data center
connect = ts.get_apis()
# Fetch one stock's data for the specified time range over the connection
# bar: K-line (candlestick) data
# asset = 'INDEX': index quotes, 'X': futures
# freq = '1min': 1-minute bars, 'D': daily bars, 'W': weekly bars
# Price adjustment: adj='qfq' forward-adjusted, 'bfq' backward-adjusted, None (default) unadjusted
df = ts.bar(stock_id, conn=connect, asset='INDEX', start_date=stock_start_date, end_date=stock_end_date)
# Close the session
ts.close_apis(connect)
# Display the retrieved data
# code:   stock code
# open:   opening price
# close:  closing price
# high:   highest price
# low:    lowest price
# vol:    trading volume (number of shares)
# amount: trading amount
df
datetime | code | open | close | high | low | vol | amount | p_change
---|---|---|---|---|---|---|---|---
2021-12-01 | 000001 | 3561.89 | 3576.89 | 3576.89 | 3558.69 | 3298735.0 | 4.675228e+11 | 0.36 |
2021-11-30 | 000001 | 3570.75 | 3563.89 | 3582.12 | 3546.36 | 3491879.0 | 4.936055e+11 | 0.03 |
2021-11-29 | 000001 | 3528.67 | 3562.70 | 3563.68 | 3526.36 | 3336183.0 | 4.900382e+11 | -0.04 |
2021-11-26 | 000001 | 3576.11 | 3564.09 | 3576.11 | 3554.88 | 3010000.0 | 4.346252e+11 | -0.56 |
2021-11-25 | 000001 | 3593.39 | 3584.18 | 3597.15 | 3579.53 | 3064515.0 | 4.505589e+11 | -0.24 |
... | ... | ... | ... | ... | ... | ... | ... | ... |
2020-11-06 | 000001 | 3326.46 | 3312.16 | 3326.46 | 3292.15 | 2348641.0 | 3.254851e+11 | -0.24 |
2020-11-05 | 000001 | 3305.58 | 3320.13 | 3320.41 | 3291.60 | 2268014.0 | 3.162927e+11 | 1.30 |
2020-11-04 | 000001 | 3273.43 | 3277.44 | 3286.62 | 3254.11 | 1885141.0 | 2.707238e+11 | 0.19 |
2020-11-03 | 000001 | 3239.81 | 3271.07 | 3278.38 | 3237.85 | 2152978.0 | 3.037302e+11 | 1.42 |
2020-11-02 | 000001 | 3228.72 | 3225.12 | 3242.80 | 3209.91 | 2267791.0 | 3.253219e+11 | NaN |
265 rows × 8 columns
# Sort the dataset rows by time (ascending)
df = df.sort_index(ascending=True)
df
datetime | code | open | close | high | low | vol | amount | p_change
---|---|---|---|---|---|---|---|---
2020-11-02 | 000001 | 3228.72 | 3225.12 | 3242.80 | 3209.91 | 2267791.0 | 3.253219e+11 | NaN |
2020-11-03 | 000001 | 3239.81 | 3271.07 | 3278.38 | 3237.85 | 2152978.0 | 3.037302e+11 | 1.42 |
2020-11-04 | 000001 | 3273.43 | 3277.44 | 3286.62 | 3254.11 | 1885141.0 | 2.707238e+11 | 0.19 |
2020-11-05 | 000001 | 3305.58 | 3320.13 | 3320.41 | 3291.60 | 2268014.0 | 3.162927e+11 | 1.30 |
2020-11-06 | 000001 | 3326.46 | 3312.16 | 3326.46 | 3292.15 | 2348641.0 | 3.254851e+11 | -0.24 |
... | ... | ... | ... | ... | ... | ... | ... | ... |
2021-11-25 | 000001 | 3593.39 | 3584.18 | 3597.15 | 3579.53 | 3064515.0 | 4.505589e+11 | -0.24 |
2021-11-26 | 000001 | 3576.11 | 3564.09 | 3576.11 | 3554.88 | 3010000.0 | 4.346252e+11 | -0.56 |
2021-11-29 | 000001 | 3528.67 | 3562.70 | 3563.68 | 3526.36 | 3336183.0 | 4.900382e+11 | -0.04 |
2021-11-30 | 000001 | 3570.75 | 3563.89 | 3582.12 | 3546.36 | 3491879.0 | 4.936055e+11 | 0.03 |
2021-12-01 | 000001 | 3561.89 | 3576.89 | 3576.89 | 3558.69 | 3298735.0 | 4.675228e+11 | 0.36 |
265 rows × 8 columns
# Select the feature dimensions used for training and prediction
# Use the following 5 features per sample:
# open, close, high, low, volume
df = df[["open", "close", "high", "low", "vol"]]
df
datetime | open | close | high | low | vol
---|---|---|---|---|---
2020-11-02 | 3228.72 | 3225.12 | 3242.80 | 3209.91 | 2267791.0 |
2020-11-03 | 3239.81 | 3271.07 | 3278.38 | 3237.85 | 2152978.0 |
2020-11-04 | 3273.43 | 3277.44 | 3286.62 | 3254.11 | 1885141.0 |
2020-11-05 | 3305.58 | 3320.13 | 3320.41 | 3291.60 | 2268014.0 |
2020-11-06 | 3326.46 | 3312.16 | 3326.46 | 3292.15 | 2348641.0 |
... | ... | ... | ... | ... | ... |
2021-11-25 | 3593.39 | 3584.18 | 3597.15 | 3579.53 | 3064515.0 |
2021-11-26 | 3576.11 | 3564.09 | 3576.11 | 3554.88 | 3010000.0 |
2021-11-29 | 3528.67 | 3562.70 | 3563.68 | 3526.36 | 3336183.0 |
2021-11-30 | 3570.75 | 3563.89 | 3582.12 | 3546.36 | 3491879.0 |
2021-12-01 | 3561.89 | 3576.89 | 3576.89 | 3558.69 | 3298735.0 |
265 rows × 5 columns
# Get the range of the stock's movement
close_max = df["close"].max()
close_min = df['close'].min()
print("max close =", close_max)
print("min close =", close_min)
print("range =", close_max - close_min)
print("rise ratio =", (close_max - close_min) / close_min)
print("fall ratio =", (close_max - close_min) / close_max)
max close = 3715.37
min close = 3225.12
range = 490.25
rise ratio = 0.15200984769558962
fall ratio = 0.13195186482100033
# Min-max normalize each input column to [0, 1]
df = df.apply(lambda x: (x - min(x)) / (max(x) - min(x)))
df
open close high low vol
datetime
2020-11-02 0.000000 0.000000 0.000000 0.000000 0.079925
2020-11-03 0.022524 0.093728 0.072777 0.057858 0.055944
2020-11-04 0.090806 0.106721 0.089632 0.091528 0.000000
2020-11-05 0.156102 0.193799 0.158747 0.169162 0.079971
2020-11-06 0.198509 0.177542 0.171122 0.170301 0.096812
... ... ... ... ... ...
2021-11-25 0.740642 0.732402 0.724805 0.765401 0.246338
2021-11-26 0.705547 0.691423 0.681769 0.714357 0.234952
2021-11-29 0.609196 0.688587 0.656344 0.655298 0.303082
2021-11-30 0.694661 0.691015 0.694062 0.696714 0.335603
2021-12-01 0.676666 0.717532 0.683364 0.722246 0.295260
265 rows × 5 columns
# Check the normalized data range: it should lie within [0, 1]
close_max_n = df["close"].max()
close_min_n = df['close'].min()
print("max =", close_max_n)
print("min =", close_min_n)
max = 1.0
min = 0.0
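The denormalization step in Chapter 6 relies on this min-max transform being invertible. A standalone sketch of the round trip, using a few made-up closing prices rather than the real dataset:

```python
import numpy as np

# Hypothetical closing prices (stand-ins, not the article's data)
close = np.array([3225.12, 3320.13, 3576.89, 3715.37])

c_min, c_max = close.min(), close.max()
normalized = (close - c_min) / (c_max - c_min)   # maps into [0, 1]
restored = c_min + (c_max - c_min) * normalized  # inverse transform

print(normalized.min(), normalized.max())        # 0.0 1.0
```

Keeping `c_min` and `c_max` around (as the article does with `close_min`/`close_max`) is what makes the inverse transform in section 6.2 possible.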
# Idea:
# Use the previous n days of data to predict that day's closing price (close).
# For example, use the data of Jan 1-10 (5 features per day) to predict the close of Jan 11 (a single value).
# All features of the previous n days form the sample; the close of day n+1 is the label.
# The sequence length defines the span of correlated data, i.e. the "chunk" length.
# In this example, chunks are not chained to one another; if they were, the correlation
# would extend over the whole dataset rather than just the sequence length.
# In this example:
# sequence length = 10: length of each input sequence
# input_size = 5: dimensionality of the input data
total_len = df.shape[0]
print("df shape =", df.shape)
print("df len =", total_len)
print("")
print("Restructure the dataset according to the sequence length")
sequence = 10
X = []
Y = []
# Each run of `sequence` consecutive rows forms one input sequence,
# and each sequence gets one label (the next day's close, column 1)
for i in range(df.shape[0] - sequence):
    X.append(np.array(df.iloc[i:(i + sequence), ].values, dtype=np.float32))
    Y.append(np.array(df.iloc[(i + sequence), 1], dtype=np.float32))
print("train data of item 0: \n", X[0])
print("train label of item 0: \n", Y[0])
# After sequencing, the sample count shrinks by `sequence` rows
print("\nShapes after sequencing:")
X = np.array(X)
Y = np.array(Y)
Y = np.expand_dims(Y, 1)
print("X.shape =", X.shape)
print("Y.shape =", Y.shape)
df shape = (265, 5)
df len = 265

Restructure the dataset according to the sequence length
train data of item 0:
 [[0.         0.         0.         0.         0.07992486]
 [0.02252371 0.09372769 0.07277711 0.05785757 0.05594364]
 [0.09080569 0.10672106 0.08963162 0.09152845 0.        ]
 [0.15610212 0.19379908 0.15874736 0.16916196 0.07997143]
 [0.19850925 0.17754208 0.17112234 0.17030089 0.09681215]
 [0.2045413  0.30313104 0.282313   0.24749954 0.25420886]
 [0.3227248  0.2754309  0.29622206 0.28216437 0.18837535]
 [0.25448343 0.23881693 0.25167215 0.26739973 0.17271727]
 [0.23506713 0.23163693 0.21986541 0.24762379 0.05845742]
 [0.20007312 0.17334013 0.17269734 0.16924478 0.08226255]]
train label of item 0:
 0.24854666

Shapes after sequencing:
X.shape = (255, 10, 5)
Y.shape = (255, 1)
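The windowing above can be sanity-checked on a toy array. This standalone sketch uses a 12×5 dummy frame in place of the real 265×5 data, but builds the windows exactly the same way:

```python
import numpy as np

# Toy frame: 12 rows, 5 features (the real 265x5 frame maps the same way)
data = np.arange(60, dtype=np.float32).reshape(12, 5)
sequence = 3

X, Y = [], []
for i in range(len(data) - sequence):
    X.append(data[i:i + sequence])   # window of `sequence` rows, all features
    Y.append(data[i + sequence, 1])  # column 1 ("close") of the next row
X = np.array(X)
Y = np.expand_dims(np.array(Y), 1)

print(X.shape, Y.shape)  # (9, 3, 5) (9, 1) -- 12 - 3 = 9 windows
```

The sample count drops by exactly `sequence`, matching the 265 → 255 shrink seen above.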
# Split the dataset into a training set + validation set by slicing
# X[start:end:step]
# The first 70% of the dataset is the training set
# Note: the split index is computed from total_len (265), not len(X) (255),
# so the actual split is 185/70 rather than an exact 70/30 of the sequences
train_x = X[:int(0.7 * total_len)]
train_y = Y[:int(0.7 * total_len)]
# The remaining data (about 30%) is the validation set
valid_x = X[int(0.7 * total_len):]
valid_y = Y[int(0.7 * total_len):]
print(train_x.shape)
print(train_y.shape)
print(valid_x.shape)
print(valid_y.shape)
(185, 10, 5)
(185, 1)
(70, 10, 5)
(70, 1)
# Wrap the stock data read above into Dataset objects
# (the train/test split was already done manually above)
class Mydataset(Dataset):
    def __init__(self, x, y, transform=None):
        self.x = x
        self.y = y

    def __getitem__(self, index):
        x1 = self.x[index]
        y1 = self.y[index]
        return x1, y1

    def __len__(self):
        return len(self.x)

# Build datasets suitable for the DataLoader
#dataset_train = Mydataset(train_x, train_y, transform=transforms.ToTensor())
dataset_train = Mydataset(train_x, train_y)
dataset_valid = Mydataset(valid_x, valid_y)

# Create the DataLoaders
batch_size = 8
# shuffle is disabled so that the temporal order of the data matches the actual price history
train_loader = DataLoader(dataset=dataset_train, batch_size=batch_size, shuffle=False)
test_loader = DataLoader(dataset=dataset_valid, batch_size=batch_size, shuffle=False)
print(train_loader)
print(test_loader)
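A quick way to confirm the loaders deliver the expected tensor shapes is to pull one batch. This standalone sketch uses random stand-in arrays of the same shapes (the `ToyDataset` class mirrors `Mydataset` above):

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Minimal stand-in for Mydataset, fed with random data of the same shapes."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __getitem__(self, index):
        return self.x[index], self.y[index]
    def __len__(self):
        return len(self.x)

x = np.random.rand(185, 10, 5).astype(np.float32)
y = np.random.rand(185, 1).astype(np.float32)
loader = DataLoader(ToyDataset(x, y), batch_size=8, shuffle=False)

xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([8, 10, 5]) torch.Size([8, 1])
```

Note that 185 = 23 × 8 + 1, so the final batch holds a single sample; this is why the training loop below skips any batch whose size is not `batch_size`.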
# The model (with optional closed-loop state)
class LSTM(nn.Module):
    # input_size:  length of the input feature vector per time step
    # hidden_size: length of the hidden layer's output feature vector
    # num_layers:  number of stacked LSTM layers
    # output_size: length of the network's final output
    def __init__(self, input_size=5, hidden_size=32, num_layers=1, output_size=1,
                 batch_first=True, batch_size=batch_size, is_close_loop=False):
        super(LSTM, self).__init__()
        # LSTM input: (batch, seq_len, input_size)
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.batch_first = batch_first
        self.is_close_loop = is_close_loop
        self.hidden0 = torch.zeros(num_layers, batch_size, hidden_size)
        self.cell0 = torch.zeros(num_layers, batch_size, hidden_size)

        # Define the LSTM network
        # input_size:  length of the input feature vector per time step
        # hidden_size: length of the hidden layer's output feature vector
        # batch_first=True: data format is (batch, sequence, input_size)
        self.lstm = nn.LSTM(input_size=self.input_size, hidden_size=self.hidden_size, batch_first=batch_first)

        # Define the output layer:
        # hidden_size: the output layer's input, i.e. the hidden layer's feature output
        # output_size: the output layer's output
        self.linear = nn.Linear(in_features=self.hidden_size, out_features=self.output_size, bias=True)

    # Define the forward pass, chaining the layers together
    def forward(self, x):
        # Feed the input directly into the LSTM
        # Input format:         x.shape  = [batch, seq_len, input_size]
        # Hidden state output:  hn.shape = [num_layers * num_directions, batch, hidden_size]
        # Cell state output:    cn.shape = [num_layers * num_directions, batch, hidden_size]
        out, (hidden, cell) = self.lstm(x, (self.hidden0, self.cell0))

        # Closed loop: carry the state over to the next sequence
        if self.is_close_loop:
            self.hidden0 = hidden
            self.cell0 = cell

        # Shape of the hidden state
        a, b, c = hidden.shape

        # The hidden state is the input of the fully connected layer:
        # flatten it with hidden.reshape(a * b, c) and feed it to the output layer
        out = self.linear(hidden.reshape(a * b, c))

        # Return the output features
        return out, (hidden, cell)
# Instantiate the LSTM network
seq_length = 10
input_size = 5
hidden_size = 32
n_layers = 1
output_size = 1
lstm_model = LSTM(input_size=input_size,
                  hidden_size=hidden_size,
                  num_layers=1,
                  output_size=1,
                  batch_first=True,
                  batch_size=batch_size,
                  is_close_loop=False)
print(lstm_model)
LSTM(
  (lstm): LSTM(5, 32, batch_first=True)
  (linear): Linear(in_features=32, out_features=1, bias=True)
)
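The shapes flowing through `forward()` can be checked with the raw `nn.LSTM` and `nn.Linear` modules alone. This sketch assumes the same sizes as above (batch 8, sequence 10, 5 features, 32 hidden units):

```python
import torch
import torch.nn as nn

# Same sizes as the model above: input_size=5, hidden_size=32, batch=8, seq=10
lstm = nn.LSTM(input_size=5, hidden_size=32, batch_first=True)
linear = nn.Linear(32, 1)

x = torch.randn(8, 10, 5)    # one batch of input sequences
h0 = torch.zeros(1, 8, 32)   # (num_layers, batch, hidden_size)
c0 = torch.zeros(1, 8, 32)

out, (hn, cn) = lstm(x, (h0, c0))
pred = linear(hn.reshape(1 * 8, 32))  # last hidden state -> one price per sample

print(out.shape, hn.shape, pred.shape)
# torch.Size([8, 10, 32]) torch.Size([1, 8, 32]) torch.Size([8, 1])
```

Feeding the final hidden state `hn` (rather than the per-step outputs `out`) into the linear layer yields exactly one predicted close per sequence, which matches the `(batch, 1)` labels.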
# Define the loss
criterion = nn.MSELoss()

# Define the optimizer
Learning_rate = 0.001
optimizer = optim.Adam(lstm_model.parameters(), lr=Learning_rate)  # Adam is more stable here than plain SGD
# Preparation before training
n_epochs = 100
lstm_losses = []

# Start training
for epoch in range(n_epochs):
    for iter_, (x, label) in enumerate(train_loader):
        # Skip the last, incomplete batch: the initial hidden state is sized for batch_size
        if x.shape[0] != batch_size:
            continue

        pred, (h1, c1) = lstm_model(x)

        # Reset the gradients
        optimizer.zero_grad()

        # Compute the loss
        loss = criterion(pred, label)

        # Backpropagate
        loss.backward(retain_graph=True)

        # Update the parameters
        optimizer.step()

        # Record the loss
        lstm_losses.append(loss.item())
# Plot the per-batch loss history
plt.grid()
plt.xlabel("iters")
plt.ylabel("loss")
plt.title("loss", fontsize=12)
plt.plot(lstm_losses, "r")
plt.show()
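The raw per-batch loss curve is noisy; a moving average makes the overall trend easier to read. A standalone sketch on a synthetic loss curve (stand-in for `lstm_losses`):

```python
import numpy as np

# Synthetic decaying, noisy loss curve (stand-in for lstm_losses)
losses = np.abs(np.random.randn(200)) * np.exp(-np.arange(200) / 80)

window = 10
smoothed = np.convolve(losses, np.ones(window) / window, mode="valid")

print(len(smoothed))  # 200 - 10 + 1 = 191 points
```

Plotting `smoothed` alongside the raw curve with the same `plt.plot` calls as above shows the decay without the batch-to-batch jitter.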
6.1 Prediction on the Validation Set
# Predict on the validation set
# Note:
# shuffle=False was set on the DataLoader, so the batches are read in temporal
# order and the predictions line up with the actual price history
data_loader = test_loader

# Predictions for the test sequences
predicts = []
# Actual outcomes for the test sequences
labels = []
for idx, (x, label) in enumerate(data_loader):
    # Skip the last, incomplete batch, as in training
    # (70 samples -> 8 full batches of 8 = 64 predictions; the final 6 are skipped)
    if x.shape[0] != batch_size:
        continue

    # Batch-predict on the test samples; the results are kept in the `predict` tensor
    # Open-loop prediction: each sequence's prediction is independent of the others
    predict, (h, c) = lstm_model(x)

    # Convert the batched predictions from a tensor to a list
    predicts.extend(predict.data.squeeze(1).tolist())

    # Convert the batched labels from a tensor to a list
    labels.extend(label.data.squeeze(1).tolist())

predicts = np.array(predicts)
labels = np.array(labels)
print(predicts.shape)
print(labels.shape)
(64,)
(64,)
# Restore the validation predictions to real prices (denormalize)
predicts_unnormalized = close_min + (close_max - close_min) * predicts
labels_unnormalized = close_min + (close_max - close_min) * labels
print("shape:", predicts_unnormalized.shape)
print("Normalized predictions:\n", predicts)
print("")
print("Denormalized predictions:\n", predicts_unnormalized)
shape: (64,)
Normalized predictions:
 [0.59260374 0.48211056 0.48565215 0.51228839 0.43483108 0.50473827
 0.59726918 0.64267856 0.56620198 0.55954456 0.59402353 0.60205871
 0.61043543 0.70416963 0.72788131 0.78897893 0.8944447  0.9154039
 0.88287348 0.89241737 0.94133753 0.89059108 0.85315037 0.77660173
 0.71607655 0.76088321 0.86307532 0.81184721 0.71191126 0.72858405
 0.64720601 0.69567633 0.78986424 0.7996456  0.66029733 0.65682721
 0.7190302  0.73600054 0.70432627 0.7330122  0.74353516 0.75153458
 0.73850805 0.75767523 0.78296894 0.69691545 0.57360643 0.60051584
 0.65143973 0.59263283 0.57826018 0.63923877 0.59906304 0.57885391
 0.60330349 0.55292267 0.60539132 0.67029965 0.66517466 0.63271612
 0.63754815 0.62359625 0.66278106 0.73554069]

Denormalized predictions:
 [3515.64398504 3461.47470201 3463.21096591 3476.26938398 3438.29593835
 3472.56793747 3517.93121424 3540.19316346 3502.70052309 3499.43672215
 3516.34003348 3520.27928192 3524.38596771 3570.3391616  3581.96381339
 3611.91692253 3663.62151616 3673.89676322 3657.94872139 3662.62761627
 3686.61072205 3661.7322794  3643.37696778 3605.848999   3576.17653004
 3598.14299473 3648.24267365 3623.12809467 3574.13449575 3582.30833107
 3542.41274563 3566.17531919 3612.35094468 3617.14625673 3548.83076809
 3547.12954038 3577.62455625 3585.94426369 3570.41595485 3584.47923076
 3589.63811269 3593.55982843 3587.17356939 3596.57028175 3608.97052203
 3566.78279824 3506.33055304 3519.52289176 3544.48832584 3515.65824498
 3508.61205488 3538.50680933 3518.81065478 3508.90312702 3520.88953699
 3496.19033704 3521.91309638 3553.73440304 3551.22187857 3535.30907748
 3537.67797987 3530.83806206 3550.04841454 3585.71882231]
# Plot the predicted vs. actual closing prices
plt.plot(predicts_unnormalized, "r", label="pred")
plt.plot(labels_unnormalized, "b", label="real")
plt.legend()
plt.show()
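The visual fit can also be quantified with error metrics. A standalone sketch with made-up prices standing in for `predicts_unnormalized` and `labels_unnormalized`:

```python
import numpy as np

# Hypothetical predicted vs. actual prices (stand-ins, not the article's data)
pred = np.array([3515.6, 3461.5, 3463.2, 3476.3])
real = np.array([3520.0, 3458.0, 3470.0, 3480.0])

rmse = np.sqrt(np.mean((pred - real) ** 2))  # root mean squared error
mae = np.mean(np.abs(pred - real))           # mean absolute error

print(f"RMSE={rmse:.2f}  MAE={mae:.2f}")
```

A low RMSE here would still not imply trading value, for the reasons discussed below: every prediction is anchored on 10 days of actual prices.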
(1) The case above is open-loop: the LSTM's state output is never fed into the next sequence. Here the LSTM's long-term memory only spans the sequence length, and sequences are independent of each other. To support long-term memory across the whole dataset, the closed-loop option must be enabled.
(2) Judging from the plot, the predictions match the actual trend very closely, yet they have little practical value, because the prediction above has one fatal flaw:
This case predicts day 11's price from the previous 10 days of actual prices. The individual 10-day predictions are unrelated to each other, i.e. the open-loop prediction described in this article. A prediction is never fed back in to make further predictions.
The close fit in the plot looks like a predicted trajectory, but every point is actually predicted from 10 days of real prices, so the overall fit naturally tracks the actual trend.
Even if a prediction were wrong and trades were made on it, the next day's prediction would again use the previous 10 days of actual data, including that day's actual price, rather than the earlier prediction.
In other words, the apparent fit comes mainly from feeding actual data into every prediction: even when a prediction is off, the actual input data still reflects the overall trend!
Moreover, the red prediction line clearly lags behind the actual trend.
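Genuine multi-step forecasting would have to feed each prediction back into the input window. A standalone sketch of that autoregressive loop, with a trivial stand-in function (`model_predict`, hypothetical) in place of the trained LSTM:

```python
import numpy as np

def model_predict(window):
    """Stand-in for the trained LSTM: returns the mean of the window's
    close column, just so the autoregressive loop below is runnable."""
    return window[:, 1].mean()

# Start from the last `sequence` days of real (normalized) data
window = np.random.rand(10, 5).astype(np.float32)

preds = []
for _ in range(5):                # predict 5 future days
    p = model_predict(window)
    preds.append(p)
    next_row = window[-1].copy()  # naive: reuse the other features,
    next_row[1] = p               # overwrite close with the prediction
    window = np.vstack([window[1:], next_row])

print(len(preds))  # 5 multi-step predictions, each built on the previous
```

Under this scheme the prediction error compounds from step to step, which is exactly why the flat open-loop plot above overstates how well the model really forecasts.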