RuntimeError: grad can be implicitly created only for scalar outputs

 Traceback (most recent call last):
  File "E:\deep-learning-for-image-processing-master\deep-learning-for-image-processing-master\pytorch_classification\grad_cam\main_deep.py", line 40, in
    main()
  File "E:\deep-learning-for-image-processing-master\deep-learning-for-image-processing-master\pytorch_classification\grad_cam\main_deep.py", line 32, in main
    grayscale_cam = cam(input_tensor=input_tensor, target_category=target_category)
  File "E:\deep-learning-for-image-processing-master\deep-learning-for-image-processing-master\pytorch_classification\grad_cam\utils.py", line 148, in __call__
    loss.backward(retain_graph = True)
  File "E:\Anaconda3\envs\pytorch\lib\site-packages\torch\_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "E:\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 166, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
  File "E:\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 67, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs

Problem solved, so I am recording it here. This error appeared while using Grad-CAM to visualize a DeeplabV3+ model. Translated, it means that `backward()` can be called without arguments only on a scalar output, because PyTorch can only create the initial gradient implicitly for scalars. Following answers found online, I passed `torch.ones_like(loss)` as the gradient argument. The original code was:

loss.backward(retain_graph = True)

After the change:

loss.backward(torch.ones_like(loss), retain_graph=True)
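To see why the fix works, here is a minimal sketch (independent of the Grad-CAM code, variable names are illustrative): calling `backward()` on a non-scalar tensor raises the same `RuntimeError`, while passing an explicit gradient of ones succeeds and is equivalent to calling `backward()` on `loss.sum()`.

```python
import torch

# A toy non-scalar "loss": backward() with no argument only works on scalars.
x = torch.randn(3, requires_grad=True)
loss = x * 2  # shape (3,), not a scalar

try:
    loss.backward()  # fails before the graph is traversed
except RuntimeError as e:
    print(e)  # grad can be implicitly created only for scalar outputs

# Supplying an explicit initial gradient of ones fixes it; the result is the
# same as (loss.sum()).backward():
loss.backward(torch.ones_like(loss))
print(x.grad)  # d(2 * x_i) / dx_i = 2 for every element
```

If summing the per-element gradients is acceptable for your use case (it usually is for Grad-CAM, where the per-pixel scores are aggregated anyway), `loss.sum().backward(retain_graph=True)` is an equivalent alternative.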
