PyTorch: Bug Log and Solutions

BUG 1

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/THCGeneral.cpp line=844 error=11 : invalid argument

BUG 2

ValueError: Expected more than 1 value per channel when training, got input size [1, 512, 1, 1]

This error means the batch size cannot be 1 when using BatchNorm. With a single sample, in the computation y = (x - mean(x)) / (std(x) + eps) we get x = mean(x), so the normalized output would be all zeros, which is why PyTorch raises the error. Note that this situation only arises when the feature map is 1×1; only then does x = mean(x) hold.

Most likely you have a nn.BatchNorm layer somewhere in your model, which expects more than 1 value to calculate the running mean and std of the current batch.
In case you want to validate your data, call model.eval() before feeding the data, as this will change the behavior of the BatchNorm layer to use the running estimates instead of calculating them for the current batch.
If you want to train your model and can't use a bigger batch size, you could switch e.g. to InstanceNorm.
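The advice above can be reproduced in a short sketch: a BatchNorm2d layer fed an input of size [1, 512, 1, 1] raises the ValueError in training mode, while switching to eval mode (which uses the running estimates) works. The layer and input shapes here are taken from the error message; the rest is a minimal illustration.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(512)
x = torch.randn(1, 512, 1, 1)  # batch size 1, 1x1 feature map

# Training mode: only one value per channel, so the batch
# statistics are degenerate and PyTorch raises a ValueError.
bn.train()
try:
    bn(x)
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training...

# Eval mode: uses the running mean/std instead of batch
# statistics, so a single sample is fine.
bn.eval()
out = bn(x)
print(out.shape)  # torch.Size([1, 512, 1, 1])
```

For actual training with batch size 1, increasing the batch size or replacing the layer (e.g. with a norm that does not depend on batch statistics, as suggested above) is the usual way out.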


Reference: https://blog.csdn.net/u011276025/article/details/73826562
