A collection of common PyTorch errors

(1)IndentationError: unexpected indent

The indentation is wrong.
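As a quick illustration (the snippet below is a made-up example, not from the original post), the error can be reproduced by compiling a deliberately mis-indented source string:

```python
# A top-level statement must not be indented; compiling this raises
# "IndentationError: unexpected indent".
bad_source = "    x = 1\nprint(x)\n"

try:
    compile(bad_source, "<example>", "exec")
    msg = None
except IndentationError as e:
    msg = e.msg
```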

(2)IndexError: list index out of range

Case 1:
The index in list[index] is out of range, so the access goes out of bounds.

Case 2:
The list itself is empty (it has no elements), so even accessing list[0] raises this error.
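Both situations can be reproduced with a short, hypothetical example:

```python
# Case 1: index out of range on a non-empty list.
items = [10, 20, 30]
try:
    items[3]                 # valid indices are 0, 1, 2
    case1 = None
except IndexError as e:
    case1 = str(e)

# Case 2: the list is empty, so even index 0 is invalid.
empty = []
try:
    empty[0]
    case2 = None
except IndexError as e:
    case2 = str(e)

# A defensive pattern that avoids both cases:
first = items[0] if items else None
```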

(3)IndexError: too many indices for tensor of dimension 0

The indexing does not match the tensor's dimensionality; check the tensor's dimensions.
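A typical trigger is indexing a 0-dimensional (scalar) tensor. A minimal sketch:

```python
import torch

scalar = torch.tensor(3.14)   # a 0-dimensional tensor
assert scalar.dim() == 0

try:
    scalar[0]                 # IndexError: too many indices for tensor of dimension 0
    msg = None
except IndexError as e:
    msg = str(e)

# Read the value with .item(), or add a dimension before indexing:
value = scalar.item()
vector = scalar.unsqueeze(0)  # shape (1,), so vector[0] is valid
```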

(4)IndentationError: unindent does not match any outer indentation level

Another indentation problem! There may be an extra space, or a mix of tabs and spaces.
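This one can also be reproduced with compile(), using a made-up snippet whose dedent does not line up with any enclosing block:

```python
# The last line dedents to 4 spaces, a level that never opened a block.
bad_source = (
    "if True:\n"
    "        x = 1\n"
    "    y = 2\n"
)

try:
    compile(bad_source, "<example>", "exec")
    msg = None
except IndentationError as e:
    msg = e.msg  # "unindent does not match any outer indentation level"
```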

(5)ConnectionResetError: [Errno 104] Connection reset by peer

A network issue; just restart the training and it goes away.
So far I have only run into this on the Geek leaderboard (极客打榜) platform.

(6)RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed)

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

The fix is a one-line change:

loss1.backward(retain_graph=True)

In short, just add the retain_graph=True argument to the first backward() call!
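A minimal sketch of the two-backward pattern (the toy tensors below are assumed here for illustration):

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x
loss1 = y.sum()
loss2 = (2 * y).sum()

# retain_graph=True keeps the saved intermediate values alive...
loss1.backward(retain_graph=True)

# ...so a second backward through the same graph succeeds.
loss2.backward()

# Gradients accumulate: d(loss1)/dx = 2x = 4, d(loss2)/dx = 4x = 8.
print(x.grad)  # tensor([12.])
```

Without retain_graph=True on the first call, the second backward() raises exactly the error quoted above.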

(7)CUDA out of memory

CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 10.76 GiB total capacity; 9.32 GiB already allocated; 3.44 MiB free; 9.59 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

[Reference] a post on CUDA out-of-memory errors
In practice, simply reducing the batch size fixed it for me; I went down to 4.
Note that batch size can affect recall, so it is best to pick the largest value that still fits in GPU memory.
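As a sketch of the batch-size fix (the dataset shapes below are hypothetical), reducing batch_size shrinks the activation memory each training step needs:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical dataset, just to show where batch_size is set.
data = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))
dataset = TensorDataset(data, labels)

# Dropping batch_size (e.g. from 32 down to 4) is the quickest OOM workaround;
# trade it off against metrics such as recall, as noted above.
loader = DataLoader(dataset, batch_size=4, shuffle=True)

batch, _ = next(iter(loader))
print(batch.shape)  # torch.Size([4, 3, 32, 32])
```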


That said, this list is still being updated! It is mostly a note to myself, since some of these mistakes are honestly a bit silly.
