Causes of NaN when training deep learning models

When training a deep learning network, after a certain number of iterations the loss becomes NaN, the accuracy quickly drops to 0.1, and training cannot continue. What causes this? One explanation mentions a "scale-imbalanced initialization". What does that mean, and how can it be fixed?

There are lots of things I have seen make a model diverge.

A learning rate that is too high. You can often tell this is the case when the loss begins to increase and then diverges to infinity.
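This failure mode can be seen on a toy problem. A minimal sketch, using plain gradient descent on f(w) = w², where any step size above 1.0 makes the iterate overshoot the minimum and grow without bound (the function and constants here are illustrative, not from the original question):

```python
# Minimal sketch: gradient descent on f(w) = w^2.
# With lr > 1.0 the update overshoots the minimum and |w| grows
# every step, so the loss keeps increasing and would eventually
# overflow to inf and then produce NaN.
def descend(lr, steps=50, w=1.0):
    for _ in range(steps):
        w = w - lr * 2.0 * w   # gradient of w^2 is 2w
    return w

w_good = descend(lr=0.1)   # |w| shrinks by 0.8x each step: converges
w_bad = descend(lr=1.5)    # w -> -2w each step: |w| doubles, diverges
```

The same overshoot happens in a real network, just in many dimensions at once, which is why lowering the learning rate is usually the first thing to try.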

I am not too familiar with the DNNClassifier, but I am guessing it uses the categorical cross-entropy cost function. This involves taking the log of the prediction, which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier does this or uses the TensorFlow op for it. Probably not the issue.
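A small numpy sketch of the epsilon trick described above; the value 1e-7 is an illustrative choice, not what DNNClassifier actually uses:

```python
import numpy as np

# Sketch of why an epsilon is added before the log: if a predicted
# probability hits exactly 0, log(0) = -inf and the loss becomes inf/NaN.
def cross_entropy(probs, labels, eps=1e-7):
    probs = np.clip(probs, eps, 1.0 - eps)  # keep log() finite
    return -np.sum(labels * np.log(probs))

labels = np.array([0.0, 1.0, 0.0])
probs = np.array([0.3, 0.0, 0.7])          # degenerate: p = 0 for the true class

naive = -np.sum(labels * np.log(probs))    # inf (numpy warns about log(0))
safe = cross_entropy(probs, labels)        # finite: -log(1e-7) ~= 16.1
```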

Other numerical stability issues can exist, such as division by zero, where adding an epsilon can help. Another, less obvious one is the square root, whose derivative can diverge if it is not properly simplified when dealing with finite-precision numbers. Yet again, I doubt this is the issue in the case of the DNNClassifier.

You may have an issue with the input data. Try calling assert not np.any(np.isnan(x)) on the input data to make sure you are not introducing NaNs yourself. Also make sure all of the target values are valid. Finally, make sure the data is properly normalized: you probably want the pixels in the range [-1, 1] rather than [0, 255].
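These input checks can be combined into one helper. A sketch, assuming x is a batch of image pixels stored in [0, 255] (the function name and scaling constants are illustrative):

```python
import numpy as np

# Sketch of the input sanity checks suggested above.
def prepare(x):
    x = np.asarray(x, dtype=np.float32)
    assert not np.any(np.isnan(x)), "NaN in input data"
    assert not np.any(np.isinf(x)), "inf in input data"
    return x / 127.5 - 1.0          # rescale [0, 255] -> [-1, 1]

scaled = prepare([[0.0, 127.5, 255.0]])   # -> [[-1., 0., 1.]]
```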

The labels must be in the domain of the loss function, so if you are using a logarithm-based loss function, all labels must be non-negative (as noted by evan pu).

This means training is not converging. A learning rate that is too large, steps so big that the gradients explode, and so on are all possible causes; it may also be a problem with the network itself, i.e. a badly designed architecture. My current approach is:
1. Weaken the scenario: simplify your samples and use typical settings for the learning rate and other parameters. For example, make 100,000 copies of the same sample and let the network fit them. If it fails, the problem is the network; otherwise it is the hyper-parameters.
2. If it is the network, fix it by gradually increasing the sample complexity and adjusting the network (its fitting capacity).
3. Fine-tune the hyper-parameters: in my experience this is for when the network's capacity already matches the sample complexity, training reaches a certain level, and you want to push further.
4. For that fine-tuning, the points given above are one line of thinking; the rest you accumulate yourself. Visualizing the weights is also useful when tuning carefully; both DIGITS and TensorFlow ship related tools.
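Step 1 above can be sketched with a toy model: train a logistic regression on one sample copied many times. A working model should drive the loss to near zero on such a trivial set; if it cannot, suspect the model or the code rather than the hyper-parameters. The sizes, seed, and learning rate here are all illustrative:

```python
import numpy as np

# "Weaken the scenario": one sample duplicated 100 times, one label.
rng = np.random.default_rng(0)
x = np.tile(rng.normal(size=(1, 4)), (100, 1))   # 100 copies of one sample
y = np.ones((100, 1))                            # single repeated label

# Plain logistic regression trained by gradient descent on log-loss.
w = np.zeros((4, 1))
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))       # sigmoid
    grad = p - y                                 # d(loss)/d(logit) for log-loss
    w -= 0.1 * x.T @ grad / len(x)
    b -= 0.1 * grad.mean()

loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
# On this degenerate data the loss should be small and finite;
# a NaN or a stuck loss here points at the model/code, not the data.
```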

If the loss becomes NaN, it has diverged. What follows is personal experience with no theoretical backing; criticism is welcome. Remedies:
1. Reduce the overall learning rate. With a large learning rate the parameters may overshoot and never find the minimum; a smaller learning rate lets them keep moving toward it.
2. Change the network width. The parameters of the later layers may be updating abnormally; try widening those layers.
3. Add more layers.
4. Change per-layer learning rates. Each layer can be given its own learning rate; try reducing it for the later layers.

1. Normalize the data (subtract the mean and divide by the variance, or add normalization layers such as BN or L2 norm);
2. Change the parameter initialization (for CNNs, the Xavier or MSRA initialization is typical);
3. Reduce the learning rate and the batch size;
4. Add gradient clipping;
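The last point can be sketched in numpy. This mirrors the idea behind TensorFlow's tf.clip_by_global_norm: when the combined norm of all gradients exceeds a threshold, rescale every gradient by the same factor so one bad batch cannot blow up the weights (the function name and threshold here are illustrative):

```python
import numpy as np

# Gradient clipping by global norm: rescale all gradients together
# by max_norm / global_norm whenever global_norm exceeds max_norm.
def clip_by_global_norm(grads, max_norm):
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > max_norm:
        scale = max_norm / global_norm
        grads = [g * scale for g in grads]
    return grads, global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global norm = 13
clipped, norm = clip_by_global_norm(grads, max_norm=5.0)
# After clipping, the combined norm of `clipped` is exactly 5.
```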
