[Debug] PyTorch distributed training error

While training MoBY (configs/moby_swin_tiny.yaml) through torch.distributed.launch, the run aborted with:

Traceback (most recent call last):
  File "/opt/conda/envs/py39_torch12/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/py39_torch12/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/py39_torch12/lib/python3.9/site-packages/torch/distributed/launch.py", line 340, in 
    main()
  File "/opt/conda/envs/py39_torch12/lib/python3.9/site-packages/torch/distributed/launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/opt/conda/envs/py39_torch12/lib/python3.9/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/py39_torch12/bin/python', '-u', 'moby_main.py', '--local_rank=0', '--cfg', 'configs/moby_swin_tiny.yaml', '--data-path', '/cheung/docker/project/int8/qat/classification/data/loadingrate', '--batch-size', '8']' died with <Signals.SIGTERM: 15>.
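
Note that this CalledProcessError comes from the torch.distributed.launch wrapper: it only reports that the rank-0 worker exited abnormally, and the worker's real traceback appears earlier in the log. As a debugging aid, here is a sketch of running the worker directly with the rendezvous variables set by hand (this assumes moby_main.py initializes its process group with init_method='env://', which is what the launcher provides), so the worker's own traceback prints straight to the terminal:

    MASTER_ADDR=127.0.0.1 MASTER_PORT=29500 RANK=0 WORLD_SIZE=1 \
    python -u moby_main.py --local_rank=0 --cfg configs/moby_swin_tiny.yaml \
        --data-path /cheung/docker/project/int8/qat/classification/data/loadingrate \
        --batch-size 8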

The fix from the original post is to pass find_unused_parameters=True when wrapping the model in torch.nn.parallel.DistributedDataParallel:

    model = torch.nn.parallel.DistributedDataParallel(
        model,
        device_ids=[config.LOCAL_RANK],
        broadcast_buffers=False,
        find_unused_parameters=True,
    )

DDP expects every parameter that requires gradients to receive one on each backward pass. When a submodule is skipped during a forward pass, its parameters never get gradients, the bucketed gradient reduction never completes, and the worker dies. find_unused_parameters=True makes DDP traverse the autograd graph from the forward output each iteration and mark unreached parameters as ready for reduction, at the cost of some per-step overhead.
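
For context, a minimal self-contained sketch of the failure mode this flag addresses (not from the original post: the TwoBranch model and every name in it are illustrative), runnable as e.g. python -m torch.distributed.launch --nproc_per_node=1 ddp_unused_params.py:

    import argparse
    import os

    import torch
    import torch.distributed as dist
    import torch.nn as nn

    class TwoBranch(nn.Module):
        """Toy model whose second branch is skipped on some steps."""
        def __init__(self):
            super().__init__()
            self.a = nn.Linear(16, 16)
            self.b = nn.Linear(16, 16)

        def forward(self, x, use_b):
            out = self.a(x)
            if use_b:
                out = out + self.b(x)
            return out

    def main():
        # torch.distributed.launch passes --local_rank; newer launchers
        # set the LOCAL_RANK environment variable instead.
        parser = argparse.ArgumentParser()
        parser.add_argument("--local_rank", type=int,
                            default=int(os.environ.get("LOCAL_RANK", 0)))
        args = parser.parse_args()

        torch.cuda.set_device(args.local_rank)
        dist.init_process_group(backend="nccl", init_method="env://")

        model = TwoBranch().cuda(args.local_rank)
        # Without find_unused_parameters=True, the steps where use_b is
        # False break DDP: self.b's parameters never receive gradients,
        # so gradient reduction never finishes.
        model = nn.parallel.DistributedDataParallel(
            model,
            device_ids=[args.local_rank],
            broadcast_buffers=False,
            find_unused_parameters=True,
        )

        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        for step in range(4):
            x = torch.randn(8, 16, device=args.local_rank)
            loss = model(x, use_b=(step % 2 == 0)).sum()
            opt.zero_grad()
            loss.backward()  # reduces gradients across ranks
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()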
