When loading a model, it matters whether the checkpoint gets mapped into CPU or GPU memory. On a CPU-only machine you may run into this error:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
# Passing map_location='cpu' to torch.load resolves the error
model.load_state_dict(torch.load('model.pth', map_location='cpu'))
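A minimal end-to-end sketch of the fix, assuming a tiny nn.Linear model and a checkpoint file named model.pth whose state_dict matches it (both names are placeholders for whatever you actually trained and saved):

import torch
import torch.nn as nn

# Hypothetical model; in practice this is the architecture that produced model.pth.
model = nn.Linear(10, 2)

# Map a checkpoint that was saved on a GPU machine onto CPU memory.
state_dict = torch.load('model.pth', map_location='cpu')
model.load_state_dict(state_dict)
model.eval()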
Other map_location variants cover different remapping needs:

torch.load('model.pth', map_location=lambda storage, loc: storage.cuda(0))  # move all tensors onto GPU 0
torch.load('model.pth', map_location={'cuda:0': 'cuda:1'})                  # remap tensors saved on GPU 0 onto GPU 1
torch.load('model.pth', map_location=lambda storage, loc: storage)          # keep tensors on CPU
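If the same script has to run on both GPU and CPU machines, a common device-agnostic pattern (a sketch, not from the original text) is to pick the device at runtime and pass it as map_location:

import torch

# Use the GPU when one is available, otherwise fall back to CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
state_dict = torch.load('model.pth', map_location=device)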