pytorch-errors

0.

RuntimeError: save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition


When using a custom Function inside a Module that needs backward, the inputs passed to it should be Variables, not raw Tensors.
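
A minimal sketch of the pattern, assuming the old (pre-0.4) Variable-era Function API; MyMul is a made-up example:

import torch
from torch.autograd import Variable, Function

class MyMul(Function):
    def forward(self, x, w):
        # only tensors that are inputs or outputs of forward may be saved here
        self.save_for_backward(x, w)
        return x * w

    def backward(self, grad_output):
        x, w = self.saved_tensors
        return grad_output * w, grad_output * x

# call the Function with Variables so autograd can track the inputs;
# passing raw Tensors can trigger the save_for_backward error above
x = Variable(torch.randn(3), requires_grad=True)
w = Variable(torch.randn(3), requires_grad=True)
y = MyMul()(x, w)
y.sum().backward()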


1. 

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed

Every target label must satisfy 0 <= target < n_classes. Remap or clamp out-of-range labels before computing the loss, e.g. lab[lab >= n_classes] = 0
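
A small sketch of the remapping, assuming integer labels coming from numpy (the values are made up; 255 / -1 stand for "void" ids):

import numpy as np
import torch

n_classes = 21
label = np.array([[0, 5, 255], [20, 21, -1]])   # hypothetical raw labels
lab = torch.from_numpy(label).long()
lab[lab >= n_classes] = 0                       # out-of-range ids trip the assertion
lab[lab < 0] = 0                                # negative ids do as well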


2. RuntimeError: std::bad_cast

Check the data type of each tensor (e.g. float inputs, long labels):

e.g.

Variable( torch.from_numpy(data) ).float().cuda()

Variable( torch.from_numpy(label) ).long().cuda()


3. RuntimeError: tensors are on different GPUs

Some part of the computation (e.g. the model) is not on the GPU while the data is; move both to the same device.
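
A minimal sketch with a stand-in nn.Linear model:

import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(10, 2)
model.cuda()                               # move the parameters to the GPU ...
x = Variable(torch.randn(4, 10)).cuda()    # ... and the inputs too; leaving one of them
y = model(x)                               # behind (CPU or another GPU) raises this error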


4. RuntimeError: CUDNN_STATUS_BAD_PARAM

Check that the input and output channel counts of each layer match the data flowing through them.
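
For example, feeding a 1-channel image into a convolution declared with 3 input channels (layer and sizes are made up):

import torch
import torch.nn as nn
from torch.autograd import Variable

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1).cuda()
x = Variable(torch.randn(1, 1, 32, 32)).cuda()   # only 1 channel, but the layer expects 3
y = conv(x)                                       # -> CUDNN_STATUS_BAD_PARAM / shape error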


5. THCudaCheck FAIL file=/b/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.c line=79 error=2 : out of memory
Segmentation fault

The GPU runs out of memory (often followed by a segmentation fault); see:

https://discuss.pytorch.org/t/segmentation-fault-when-loading-weight/1381/8
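
One common workaround when this happens while loading a checkpoint is to map the storages to the CPU during torch.load and move the model to the GPU afterwards (a sketch; the model and 'model.pth' are placeholders):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                    # placeholder model
state = torch.load('model.pth',             # 'model.pth' is a hypothetical checkpoint path
                   map_location=lambda storage, loc: storage)  # keep storages on the CPU while loading
model.load_state_dict(state)
model.cuda()                                # move to the GPU only after loading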


6. RuntimeError: CHECK_ARG(input->nDimension == output->nDimension) failed at torch/csrc/cudnn/Conv.cpp:275

The input data shape differs from the input shape the model expects (for example, a missing batch dimension).
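
A sketch of the usual fix when a single image is passed without the batch dimension:

import torch
from torch.autograd import Variable

img = torch.randn(3, 224, 224)            # one image: C x H x W
x = Variable(img.unsqueeze(0)).cuda()     # conv layers expect N x C x H x W, so add the batch dim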


7. torch.utils.data.Dataset ...

File "//anaconda3/lib/python3.6/site-packages/torch/functional.py", line 60, in stack
    return torch.cat(inputs, dim, out=out)
TypeError: cat received an invalid combination of arguments - got (list, int, out=torch.ByteTensor), but expected one of:
 * (sequence[torch.ByteTensor] seq)
 * (sequence[torch.ByteTensor] seq, int dim)


TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:
 * (sequence[torch.ByteTensor] seq)
 * (sequence[torch.ByteTensor] seq, int dim)
      didn't match because some of the arguments have invalid types: (list, int)


Important: each iteration (__getitem__) should return the same data types.

Convert everything to a common dtype inside the Dataset, then convert it to the desired dtype in the training loop.

The default collate function concatenates/stacks the items, which only works when they all share the same dtype; see the sketch below.
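
A sketch of a Dataset that always returns the same types (FloatTensor image, LongTensor label); the data itself is made up:

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):                      # hypothetical dataset
    def __init__(self):
        self.images = [np.random.rand(3, 8, 8) for _ in range(4)]
        self.labels = [0, 1, 0, 1]

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # always return the same dtypes; mixing ByteTensor and FloatTensor
        # across items breaks the default collate (the torch.cat/stack above)
        img = torch.from_numpy(self.images[idx]).float()
        lab = torch.LongTensor([self.labels[idx]])
        return img, lab

loader = DataLoader(MyDataset(), batch_size=2)
for img, lab in loader:
    img = img.cuda()          # convert/move to the desired dtype/device inside the training loop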


8.   File "/home/wenyu/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/wenyu/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)

RuntimeError: CUDNN_STATUS_MAPPING_ERROR


The number of classes in the model's output may not match what the loss function expects.
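
A sketch that keeps the model's output size, the loss, and the targets consistent (sizes and targets are made up):

import torch
import torch.nn as nn
from torch.autograd import Variable

n_classes = 10
model = nn.Linear(32, n_classes).cuda()    # output size must equal the number of classes
criterion = nn.CrossEntropyLoss()

x = Variable(torch.randn(4, 32)).cuda()
target = Variable(torch.LongTensor([0, 3, 9, 1])).cuda()   # every target in [0, n_classes)
loss = criterion(model(x), target)
loss.backward()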


9. RuntimeError: CUDNN_STATUS_INTERNAL_ERROR

The model and the data may be on different GPUs; put them on the same device.
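
A sketch that pins both to the same device id:

import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(10, 2).cuda(0)            # keep the model and the data on the same GPU
x = Variable(torch.randn(4, 10)).cuda(0)    # .cuda(1) here would put the input on another GPU
y = model(x)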



