PyTorch parallel computing: fixing a CUDA out-of-memory error

While running a torch model, I hit the following error:

    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: out of memory
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
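The message itself points at a debugging aid: with `CUDA_LAUNCH_BLOCKING=1`, kernel launches become synchronous, so the stack trace points at the real failing call instead of a later API call. A minimal sketch of setting it from inside a script (it must be set before CUDA is first initialized; exporting it in the shell before launching works too):

```python
import os

# Force synchronous CUDA kernel launches so errors surface at the real
# call site. Must run before the first CUDA operation in the process.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```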

After some investigation, the cause turned out to be a multi-GPU issue: the checkpoint was being restored onto a GPU whose memory was already full, which exhausted its memory and triggered the error.

Solution: pass `map_location` to `torch.load` so the checkpoint is mapped onto a GPU that has free memory:

model.load_state_dict(torch.load(args.model_path, map_location='cuda:1'))
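To see what `map_location` does end to end, here is a minimal, self-contained sketch. The `nn.Linear` model and the `model.pt` path are placeholders standing in for the real network and `args.model_path`; the target device falls back to `'cpu'` so the sketch runs on any machine, but on a multi-GPU box you would pass `'cuda:1'` (or whichever GPU has free memory):

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the real network.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "model.pt")

# map_location remaps every tensor in the checkpoint to the given device
# as it is loaded, instead of restoring to the GPU it was saved from.
target = "cuda:1" if torch.cuda.device_count() > 1 else "cpu"
state = torch.load("model.pt", map_location=target)

model.load_state_dict(state)
print(next(model.parameters()).device)
```

Without `map_location`, `torch.load` tries to put each tensor back on the device recorded in the checkpoint, which fails (or overflows memory) when that device is busy or absent.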
