2017-6-25 Daily Collection

Time series / Finance

Deeptrader: Deep Learning in Finance (Auto-encoder)

『Hitoshi Harada, CTO at Alpaca - Deep Learning in Finance Summit, London, 2016 #reworkfin - YouTube』

《On Feature Reduction using Deep Learning for Trend Prediction in Finance》 L Troiano, E Mejuto, P Kriplani [University of Sannio] (2017)

Dropout

Dropout, deep learning's heavy weapon (II): viewing dropout learning as ensemble learning (Yunqi Community blog, Alibaba Cloud)

  • To avoid overfitting, the usual approach is to add regularization to the algorithm, which is the technique Hinton proposed in reference [2], "dropout learning". Dropout learning has two steps: in the learning phase, hidden units are ignored with probability p, which shrinks the network; in the test phase, the learned units and the units that were not learned are summed and the result is multiplied by the dropout probability p to obtain the network output. This summing of learned and unlearned units followed by multiplication by p can be viewed as ensemble learning.
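A minimal NumPy sketch (not from the article) of this ensemble view, for a toy one-hidden-layer network: averaging the outputs of many randomly masked sub-networks is approximately the same as keeping all hidden units and scaling by the selection probability p. All names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, H = 20, 100                      # input dimension, number of hidden units (illustrative)
p = 0.5                             # probability that a hidden unit is selected
W = rng.normal(size=(H, N))         # input-to-hidden weights
v = rng.normal(size=H)              # hidden-to-output weights
x = rng.normal(size=N)              # one input sample

h = np.tanh(W @ x)                  # hidden activations of the full network

# Ensemble view: every random mask defines one sub-network; average their outputs.
outputs = []
for _ in range(10000):
    mask = rng.random(H) < p        # select each hidden unit with probability p
    outputs.append(v @ (mask * h))  # output of this sub-network
ensemble_average = np.mean(outputs)

# Dropout's test-time rule: keep all hidden units and scale the output by p.
dropout_test_output = p * (v @ h)

print(ensemble_average, dropout_test_output)   # the two values are close
```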

4. Results
4.1 Comparing dropout learning with ensemble learning
For ensemble learning the number of hidden units is set to 50; for dropout learning the number of hidden units is set to 100 with dropout probability p = 0.5, so that dropout learning selects 50 hidden units as D(m) and leaves the remaining 50 unselected. The input dimension is N = 1000 and the learning rate is η = 0.01.


As shown in Figure 5(a), ensemble learning achieves a lower MSE than a single network; dropout learning, however, achieves an even lower MSE than ensemble learning. Therefore, ensemble learning that uses a different set of hidden units at each iteration performs better than ensemble learning that reuses the same set of hidden units.

  • 4.2 Comparing dropout learning with stochastic gradient descent with an L2 penalty
    The learning rule for stochastic gradient descent with an L2 penalty can be written in the standard weight-decay form (see the sketch after this item),
    where α is the coefficient of the L2 norm, also called the penalty coefficient.
    Figure 6 shows the results for stochastic gradient descent with an L2 penalty (under the same experimental conditions as Figure 5):
    Comparing Figure 6 with Figure 5(b), the results of dropout learning and of stochastic gradient descent with an L2 penalty are almost identical; the regularization effect of dropout learning is therefore the same as that of the L2 norm. Note that for the L2-penalized algorithm the parameter α has to be chosen for every trial, whereas dropout learning requires no parameter tuning.
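As a sketch of the update rule referenced above: the standard form of stochastic gradient descent with an L2 penalty, written with a generic weight w, loss ε, learning rate η, and penalty coefficient α (these symbols are assumptions, not necessarily the paper's notation), is:

```latex
w_{t+1} = w_t - \eta \left( \frac{\partial \varepsilon(w_t)}{\partial w} + \alpha\, w_t \right)
```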

Dropout, deep learning's heavy weapon: a deep dive into Dropout (Yunqi Community blog, Alibaba Cloud)

Abstract: This article explains the idea behind the dropout technique in deep learning in detail, analyzes the two variants Dropout and Inverted Dropout, and offers the refreshing view of linking a single neuron to a Bernoulli random variable.

  • The idea of Dropout is to train the whole ensemble of DNNs and average the results of the whole collection, instead of training a single DNN. The DNN drops neurons with probability p; the other neurons are kept with probability q = 1 − p, and the outputs of the dropped neurons are set to zero.

In a standard neural network, the derivative of each parameter tells it how it should change so that the loss is ultimately reduced; a neuron can therefore end up correcting the mistakes of other units. This can lead to complex co-adaptations, which in turn cause overfitting because those co-adaptations do not generalize to unseen data. Dropout prevents co-adaptation by making the presence of other hidden units unreliable.

In short: Dropout works well in practice because it prevents the co-adaptation of neurons during the training phase.
Since neurons are kept with probability q during training, at test time we must emulate the behavior of the ensemble of networks used during training.
To do this, the author suggests scaling the activation function by the factor q, giving the training-phase and test-phase formulas sketched below.
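A sketch of the two formulas, assuming the standard (direct) dropout formulation, with a the activation function, w and b a unit's weights and bias, and q the keep probability; the symbols are assumptions rather than the blog's exact notation:

```latex
% Training phase: the unit's output is masked by a Bernoulli(q) variable
O_i = X_i \, a\!\left(\textstyle\sum_k w_k x_k + b\right), \qquad X_i \sim \mathrm{Bernoulli}(q)

% Test phase: every unit is active and its activation is scaled by q
O_i = q \, a\!\left(\textstyle\sum_k w_k x_k + b\right)
```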

Inverted Dropout
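A small NumPy sketch (illustrative, not the blog's code) of how the two variants differ: direct dropout scales activations by q at test time, while inverted dropout scales by 1/q during training so the test-time forward pass is left untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.8                               # keep probability (q = 1 - p)
h = rng.random(1000)                  # some layer's (non-negative) activations

mask = rng.random(h.shape) < q        # keep each unit with probability q

# Direct dropout: mask during training, scale by q at test time.
h_train_direct = mask * h
h_test_direct = q * h

# Inverted dropout: mask and scale by 1/q during training; test time is just h.
h_train_inverted = (mask * h) / q
h_test_inverted = h

# The expected activation level matches within each scheme:
print(h_train_direct.mean(), h_test_direct.mean())       # both around q * 0.5
print(h_train_inverted.mean(), h_test_inverted.mean())   # both around 0.5
```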
Dropout over a set of neurons


It can also be noted that the distribution is symmetric around the value p = 0.5.
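Since neurons are dropped independently, the number of dropped neurons in a layer of n units follows a Binomial(n, p) distribution, which is what summary points 3-4 below refer to; a quick check with illustrative numbers (n = 1024, p = 0.5):

```python
from scipy.stats import binom

n, p = 1024, 0.5
dropped = binom(n, p)                      # number of dropped neurons in the layer

print(dropped.mean())                      # on average n*p = 512 neurons are dropped
print(dropped.pmf(512))                    # yet exactly 512 drops happen only ~2.5% of the time
print(dropped.pmf(400), dropped.pmf(624))  # the distribution is symmetric around n*p when p = 0.5
```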
**Summary**
1. Dropout comes in two versions: direct (rarely used) and inverted.
2. Dropout on a single neuron can be modeled with a Bernoulli random variable.
3. Dropout on a set of neurons can be modeled with a binomial random variable.
4. Even though the probability that exactly np neurons are dropped is low, on average np neurons are dropped.
5. Inverted Dropout boosts the learning rate.
6. Inverted Dropout should be used together with other normalization techniques that limit parameter values, so as to simplify the learning-rate selection process.
7. Dropout helps prevent overfitting in deep neural networks.
Blog:
https://pgaleone.eu/

LSTM time series

LSTM Neural Network for Time Series Prediction | Jakob Aungiers


seq2seq / speeding up training

Accelerating Deep LSTM Net's Training Time : MachineLearning

  • Hey guys, lately I've been experimenting with some pretty substantially deep neural nets:
    Six to Ten LSTM Layers
    Anywhere from 512 to 1024 Hidden Variables.
    To clarify, chars are not characters, they are words. I take all the words in the text and cluster them. From that I assign a cluster id and a word id. This allows chars to be around 400. I used to do chars which is why the variable is named that way.
    For reference, my Keras model settings are: number_of_layers = 6, hidden_variables = 512, chars = 400 (see the sketch after this thread).
  • I am inputting sentences and predicting the next word. I have a lot of training data (about 10M samples), and I end up doing a softmax over ~400 outputs. However, these large neural nets take about 9 days to train for 100 epochs on a 980 Ti. I'm trying to figure out ways to speed up experimentation so that I can estimate which parameters are best, and then do the full 9 training days.
    My goal is to have each test take at most 48 hours. Once the most promising configuration is observed, we'll run the full 9-day LSTM training.
  • Here are a few ideas to speed things up:
    Idea 1: Change all the LSTMs to GRUs -- GRUs are about 2x faster, but LSTMs seem to outperform them in the long run (as the number of epochs increases).
    Idea 2: Change the learning rate of the Adam optimizer from the standard 0.001 to 0.010. After about epoch 10, decrease the learning rate by half each time. Stop learning after epoch 20.
    Idea 3: Decrease the number of hidden variables or LSTM layers -- I dislike this option the most because I feel that the level of abstraction reached will be significantly lower.
    Are there any other ideas you guys can think of? Maybe I'm missing something obvious. I'm also considering buying another 980 Ti to run two separate configurations at once.
    As it currently stands with the config above, it takes me about 9 days to get through 100 epochs.
  • I was using Adam. I am not sure how many epochs I was running (I had way too much data, so I went by iterations / weight updates instead of epochs). My method was to set the learning rate as high as I could while keeping training stable, run the model, and save out weights every hour or so. Once the cost went funky (very noisy or just exploded) I would stop the old model, drop the learning rate in my code, and restart the model loading the previously saved weights instead of random initializations. Not great for reproducibility, but it did work for me. I was also doing speech-to-text, so it's a completely different problem and might not transfer.
  • So I haven't done as much testing as I would like, but in that one application I dropped it by 10x: 0.002 --> 0.0002. That was the first and only schedule I tried.
    In terms of timing it was fairly clear: the cost function completely exploded after about 6 hours, so I jumped back to roughly the 5-hour checkpoint and dropped the learning rate. Sorry I don't have exact numbers, the runs are on a different computer...
  • (1) Not sure it makes sense to have an embedding layer with characters. Perhaps you should try to reproduce http://arxiv.org/abs/1509.01626
    (2) Also, consider using fewer layers. More layers can reduce performance in some cases. Start with 1-2 LSTM layers and add layers until the results start getting worse. It should also speed up your training.
    (3) Adding more fully connected layers can be more effective than more LSTM layers (you already have one - consider more). This should also speed things up. (4) Now that you have reduced your GPU memory consumption significantly, you can run multiple models on the same GPU simultaneously.
  • Bardelaz, thanks for the pointers.
    1) I'm actually embedding words, so it helps me reduce memory consumption. It is just that each word is represented by two numbers: a cluster id and a word id.
    2) I have done this in the past, and it works pretty well. I have had more success with more layers (6 to 10), which is what has brought me to the 9-day training time. Good suggestion though.
    3) I did not think of the fully connected layer idea. Can you explain why they could potentially be more effective? I do a fully connected layer at the end to condense the number of variables down to 400 so I can do a softmax on them.
    4) Sounds like a good strategy using the fully connected layers. Thanks for the tips!
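The poster's actual Keras code was not included above; the following is a hedged sketch of what a model with those settings (6 stacked LSTM layers, 512 hidden units, a softmax over 400 outputs) might look like in the Keras 2 API, together with the learning-rate schedule from Idea 2. The sequence length, the embedding setup (the post uses a cluster id plus a word id, simplified here to a single embedding), and the commented-out training call are assumptions.

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.callbacks import LearningRateScheduler

number_of_layers = 6
hidden_variables = 512
chars = 400                 # vocabulary size after clustering (words, despite the name)
seq_len = 30                # assumed input sequence length

model = Sequential()
model.add(Embedding(input_dim=chars, output_dim=hidden_variables, input_length=seq_len))
for i in range(number_of_layers):
    # all but the last LSTM return the full sequence so the next LSTM can consume it
    model.add(LSTM(hidden_variables, return_sequences=(i < number_of_layers - 1)))
model.add(Dense(chars, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')

# Idea 2: start at 0.010, halve the rate each epoch after epoch 10, stop by epoch 20.
def schedule(epoch):
    base_lr = 0.010
    return base_lr if epoch < 10 else base_lr * (0.5 ** (epoch - 9))

# Training (x_train, y_train stand for the pickled matrices described further down):
# model.fit(x_train, y_train, batch_size=128, epochs=20,
#           callbacks=[LearningRateScheduler(schedule)])
```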

How are you loading and transforming your data for training? With a dataset that large, I'm assuming it is probably not all in memory. You may be starving the GPU computation if you are not pipelining the data properly.

  • meep, you're right. I do not put it all into RAM at once. I build about two to three separate matrices and pickle them. Then I load an entire matrix into regular RAM and train the model on that matrix. Afterwards, I load the next matrix into RAM, train on it, and the cycle continues. I think this is the most efficient way to do this? It takes about 20 seconds to load each matrix, but each epoch takes about 3 hours, so it's pretty insubstantial, I believe.
  • Interesting -- I haven't tried neon yet. The memory required for the whole dataset would probably be around 20 GB of RAM; I load about 7 GB at a time. Keep in mind that the y labels are a 2D one-hot matrix of 400 x number of samples = 400 x 500 million = ~20 GB. The x train is not that much because I don't do any one-hotting (at most x_train is 2 GB). I have an embedding layer that lets me use integers instead of one-hot vectors.
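A hedged sketch of the pipelining idea raised above: a Python generator that streams batches from the pickled chunks, so Keras can pull the next batch from a background queue instead of waiting for a whole matrix to load. File names, shapes, and the fit_generator arguments are illustrative assumptions.

```python
import pickle


def batch_generator(matrix_paths, batch_size=128):
    """Yield (x, y) batches forever, cycling over a list of pickled matrices."""
    while True:
        for path in matrix_paths:                  # e.g. ["part0.pkl", "part1.pkl", "part2.pkl"]
            with open(path, "rb") as f:
                x_chunk, y_chunk = pickle.load(f)  # one ~7 GB chunk at a time
            for start in range(0, len(x_chunk), batch_size):
                yield (x_chunk[start:start + batch_size],
                       y_chunk[start:start + batch_size])


# With Keras 2 this plugs into fit_generator, which fills a queue in the background:
# model.fit_generator(batch_generator(["part0.pkl", "part1.pkl", "part2.pkl"]),
#                     steps_per_epoch=10000, epochs=100)
```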

Factorization Tricks for LSTM Networks

practical methods for training LSTM using large corpus of data with keras · Issue 5264 · fchollet/keras · GitHub

I'm training a large LSTM network with Keras on around 10 million sequences of 6-12 characters, using a one-hot representation for input and output. With a large learning rate it doesn't converge; with a very small learning rate it converges and the learning curve looks fine, but the validation result is worse than with the large learning rate. Can anyone suggest a good way to find good hyperparameters, and a way to decide whether I have gotten the best result the model can achieve?

Just some ideas for general LSTM debugging techniques:
Start with a non-LSTM network on fixed-length sequences. Even an MLP will probably give a better-than-average result.
Start with the minimal sequence length for train, validation and test on some LSTM network and see how it behaves. From my experience, it is a good starting point.
You mentioned you only managed to converge with a very small learning rate. Try building the network so that it contains far fewer parameters. For example, you can make the first layer a temporal convolution in order to make the input to the LSTM smaller (see the sketch below). Only after you are able to achieve good results with the "simpler" network should you start to increase the network complexity gradually. You can post your network architecture here if you want.
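A minimal Keras sketch of the last suggestion: a temporal convolution plus pooling shrink the sequence before a deliberately small LSTM. The layer sizes, vocabulary size, and Keras 2 layer names are assumptions, not code from the issue.

```python
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

vocab_size, seq_len = 5000, 12      # illustrative values

model = Sequential()
model.add(Embedding(vocab_size, 64, input_length=seq_len))
# The temporal convolution + pooling halve the sequence the LSTM has to scan.
model.add(Conv1D(filters=64, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(150))                # a small LSTM, as in the 150-unit example mentioned below
model.add(Dense(vocab_size, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()                     # check the parameter count before scaling anything up
```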

I think the first LSTM layer with 4000 cells is huge and probably causes the number of parameters to explode. You can use model.summary() to see the number of parameters. Have you tried the full dataset with an LSTM of 150 like in the first example? BTW, can you explain what you are trying to do? Predict characters?

I read Karpathy's blog. I still suggest you try a much smaller LSTM layer. Maybe the problem is that, due to the big LSTM layer, training becomes very unstable (this is something that happened to me). My other suggestion is just general development advice: if something works in A and doesn't work in B, move gradually and slowly from A to B and check the intermediate A-B results. I will be happy to hear what you find.

LSTM training is really slow · Issue #1063 · fchollet/keras · GitHub

It may be, and it is not only a matter of GPU: in these cases most of the overhead is inside the scan loop of the recurrent neural network. Try to profile your run and you will see where the overhead is. Set the number of training epochs to 1 and the Theano flag profile=1. It will return a profile indicating where the overhead of your model is; probably most of the time is spent inside the loops. Unfortunately there is no easy solution for that in Keras at the moment, because the Theano scan is really slow.
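A sketch of one way to apply that advice (the profile option is Theano's documented config flag; everything else here is illustrative): the flag has to be set before Theano is first imported, and the profile is printed when the process exits.

```python
import os

# Set before the first `import theano` (and therefore before importing Keras).
os.environ["THEANO_FLAGS"] = "profile=True"

import keras  # imported after setting the flag on purpose

# ... build the model as usual, train for a single epoch, e.g.
# model.fit(x_train, y_train, nb_epoch=1)
# and Theano prints a per-op profile on exit, showing where the time (scan, gemm, ...) goes.
```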
I don't know how old your Theano version is; there have been other optimizations recently, so maybe you can gain something more. In my model, which uses 2 LSTMs, I obtained a speedup from 38-40s per epoch to 35s per epoch (~10%), as pointed out by @nouiz. If you cannot live with these times, you can try to unroll the scan as was done in the Lasagne library, but it only works in some cases and you need to partially modify Keras.

The problem isn't the scan: you spend 93% of the time in scan, but scan itself has only a ~2% overhead. Inside scan, you spend 80% of your time in gemm. So the gemms are your real bottleneck, not scan.
But there is a trick that can be done. In the DLT LSTM [1] example, a trick was used to speed up the computation: bundle some of the weights into a single shared variable and do one big gemm instead of a few smaller ones. I don't remember the exact speed difference, but it was significant; I'm just not sure of the magnitude. There is also another thread/PR to Keras that gave a 4x speedup for a model with scan, from my memory of quickly reading Keras-related emails. Check it, maybe you can reuse the same tricks.
Following up on @elanmart's comment: @pranv's suggestion was the same as the one proposed by @nouiz, which is simply to concatenate into a big tensor and run a single big gemm (sketched below). @fchollet seemed interested in that approach in the discussion about generalized backends as well. At some point we will have to go back into our RNN models (LSTM, GRU, and Neural Turing Machines) and unify the multiplications for speed. This will make the code less readable, but people usually feel happier with performance; also, good comments in the code can mitigate the problem. But for now, @Vict0rSch please try updating your Theano and let us know what happens.
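A NumPy sketch of the weight-bundling idea discussed above: instead of four separate gemms for the LSTM's input, forget, cell, and output gate projections, concatenate the weight matrices and run one big gemm, then slice the result. Names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 512, 512                      # input and hidden sizes (illustrative)
x = rng.normal(size=(128, d))        # a batch of 128 inputs for one time step

# Four separate gate projections -> four small gemms per step.
W_i, W_f, W_c, W_o = (rng.normal(size=(d, h)) for _ in range(4))
gates_separate = [x @ W for W in (W_i, W_f, W_c, W_o)]

# Bundled version: one (d, 4h) weight matrix -> a single big gemm, then slice.
W_all = np.concatenate([W_i, W_f, W_c, W_o], axis=1)
big = x @ W_all
gates_bundled = [big[:, k * h:(k + 1) * h] for k in range(4)]

# Same numbers, but fewer and larger gemm calls, which the GPU handles more efficiently.
assert all(np.allclose(a, b) for a, b in zip(gates_separate, gates_bundled))
```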

slow training of LSTM · Issue #415 · fchollet/keras · GitHub

  • And I compared the efficiency with char-rnn, and found that the implementation in Keras is about 4 times slower than Karpathy's (with the same batch size). Am I doing something wrong? I've attached the Theano profile result. Thank you!

lstm - How to speedup rnn training speed of tensorflow? - Stack Overflow

Tradeoff batch size vs. number of iterations to train a neural network - Cross Validated

When training a neural network, what difference does it make to set the batch size to a and the number of iterations to b, versus the batch size to c and the number of iterations to d, where ab = cd?
To put it otherwise, assuming that we train the neural network on the same number of training examples, how do we set the optimal batch size and number of iterations? (Here batch size × number of iterations = number of training examples shown to the neural network, with the same training example potentially shown several times.)
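For a concrete (illustrative) instance: a batch size of 100 with 1,000 iterations and a batch size of 1,000 with 100 iterations both show the network 100,000 training examples; the question is which split trains the better model.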

Reference: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
I am aware that the higher the batch size, the more memory space one needs, and it often makes computations faster. But in terms of performance of the trained network, what difference does it make?

**It has been observed in practice that when using a larger batch there is a significant degradation in the quality of the model, as measured by its ability to generalize.**
**The lack of generalization ability is due to the fact that large-batch methods tend to converge to sharp minimizers of the training function.**



Xu Cui » Deep learning speed test, my laptop vs AWS g2.2xlarge vs AWS g2.8xlarge vs AWS p2.xlarge vs Paperspace p5000
