PyTorch RuntimeError & TypeError Issues Explained

  • TypeError: img should be PIL Image.
    Make sure Resize comes before ToTensor in the transform pipeline: Resize expects a PIL Image, while ToTensor converts the image into a Tensor.

  • RuntimeError: stack expects each tensor to be equal size, but got []….
    Add a Resize to the transform so that every image in a batch has the same shape. Note that Resize(224) (or the equivalent Resize((224)), which is not a tuple) only rescales the shorter edge to 224 while preserving the aspect ratio, so images can still end up with different sizes; use Resize((224, 224)) to force a fixed shape. 224 is not mandatory; pick a size appropriate for your images.

  • TypeError: 'module' object is not callable.
    Check whether you are calling a module as if it were a function or class; mind the difference between import and from ... import.
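A sketch of the two import styles, using torch.nn as the example (assumes PyTorch is installed):

```python
import torch.nn              # binds the *module* torch.nn
from torch.nn import Linear  # binds the *class* Linear

# Wrong: torch.nn is a module, not a callable class.
try:
    layer = torch.nn(10, 5)
except TypeError as e:
    print(e)  # 'module' object is not callable

# Right: call the class, either fully qualified or imported directly.
layer = torch.nn.Linear(10, 5)
layer = Linear(10, 5)
```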

  • RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target'.
    Convert the relevant data with long(), or create the tensor with dtype=torch.int64 (i.e. torch.long).
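This typically comes up with loss functions such as nn.CrossEntropyLoss, which expect class-index targets of type Long. A minimal sketch (the logits and labels are placeholders):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 10)  # batch of 4 samples, 10 classes

targets = torch.tensor([1.0, 0.0, 3.0, 2.0])  # Float: triggers the error
try:
    criterion(logits, targets)
except RuntimeError as e:
    print(e)

# Fix: make the class indices Long (int64).
loss = criterion(logits, targets.long())
# Equivalent: torch.tensor([1, 0, 3, 2], dtype=torch.int64)
print(loss.item())
```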

  • RuntimeError: view size is not compatible with input tensor's size and stride.
    view() requires the tensor's elements to be laid out contiguously in memory, but operations such as transpose() or permute() can leave a tensor non-contiguous. Call contiguous() first to copy it into a contiguous layout: a = a.contiguous().view(...). Alternatively, reshape() handles both cases automatically.
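A small sketch that reproduces the error with a transpose and then fixes it:

```python
import torch

a = torch.arange(6).reshape(2, 3)
b = a.t()  # transpose shares storage but is no longer contiguous

print(b.is_contiguous())  # False
try:
    b.view(6)
except RuntimeError as e:
    print(e)

# Fix: make the memory layout contiguous before calling view().
flat = b.contiguous().view(6)
print(flat)  # tensor([0, 3, 1, 4, 2, 5])
```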

  • RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
    When a network has two losses that are backpropagated separately through the same graph, pass retain_graph=True to the first backward() so the graph is kept alive for the second call:

d_loss.backward(retain_graph=True)
g_loss.backward()
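A self-contained runnable sketch of the pattern; the model and the two losses here are placeholders that merely share one forward graph:

```python
import torch

x = torch.randn(8, 4)
w = torch.randn(4, 1, requires_grad=True)

out = x @ w                   # shared forward graph
d_loss = out.pow(2).mean()    # placeholder "discriminator" loss
g_loss = out.abs().mean()     # placeholder "generator" loss

d_loss.backward(retain_graph=True)  # keep the graph for the second pass
g_loss.backward()                   # would fail without retain_graph above
print(w.grad.shape)  # torch.Size([4, 1])
```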
  • RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.
    Reorder the code so that the optimizer steps (which modify the parameters in place) run only after all backward() calls have finished:
optimizerD.zero_grad()
optimizerG.zero_grad()

d_loss.backward(retain_graph=True)
g_loss.backward()

optimizerD.step()
optimizerG.step()
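A runnable concretization of the ordering above, with placeholder parameters, losses, and two SGD optimizers standing in for a real GAN setup:

```python
import torch

x = torch.randn(8, 4)
w1 = torch.randn(4, 3, requires_grad=True)  # placeholder "discriminator" params
w2 = torch.randn(3, 1, requires_grad=True)  # placeholder "generator" params

optimizerD = torch.optim.SGD([w1], lr=0.1)
optimizerG = torch.optim.SGD([w2], lr=0.1)

out = (x @ w1) @ w2
d_loss = out.pow(2).mean()
g_loss = out.abs().mean()

optimizerD.zero_grad()
optimizerG.zero_grad()

# All backward passes first...
d_loss.backward(retain_graph=True)
g_loss.backward()

# ...then the in-place parameter updates.
optimizerD.step()
optimizerG.step()
```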
  • RuntimeError: grad can be implicitly created only for scalar outputs.
    backward() can only be called without arguments on a scalar; check that your loss is reduced to a single value (e.g. with mean() or sum()).
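A sketch that reproduces the error on a vector-valued loss and shows the two common fixes:

```python
import torch

x = torch.randn(4, requires_grad=True)
loss_vec = x ** 2        # shape (4,): not a scalar

try:
    loss_vec.backward()  # fails: grad is only implicit for scalars
except RuntimeError as e:
    print(e)

# Fix 1: reduce the loss to a scalar first.
loss_vec.mean().backward()

# Fix 2: pass an explicit gradient of matching shape.
x.grad = None
loss_vec = x ** 2
loss_vec.backward(torch.ones_like(loss_vec))
print(x.grad)  # equals 2 * x, the elementwise derivative of x**2
```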
