torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.

Table of Contents

What does "If this was unintended please pass in `--num_processes=1`" mean?

Setting gpu_id

Setting the multi-GPU training backend


The error:

The following values were not passed to `accelerate launch` and had defaults used instead:
	`--num_processes` was set to a value of `2`
		More than one GPU was found, enabling multi-GPU training.
		If this was unintended please pass in `--num_processes=1`.
	`--num_machines` was set to a value of `1`
	`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.

torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.

What does "If this was unintended please pass in `--num_processes=1`" mean?

ChatGPT's answer:

This warning means that certain parameters were not passed explicitly, so `accelerate launch` fell back to default values. In particular, it detected more than one GPU and therefore defaulted `--num_processes` to `2`, enabling multi-GPU training. If your intention is to train on a single GPU, pass `--num_processes=1` to request exactly one process.

In other words, the warning is alerting you that multi-GPU training may have been enabled by accident: the system found multiple GPUs, and you never said you only wanted one. If single-GPU training is what you want, make it explicit with `--num_processes=1`.
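For example, assuming your training script is called train.py (a placeholder name here), the values the launcher complains about can be passed explicitly so that nothing falls back to a default:

    accelerate launch --num_processes=1 --num_machines=1 --dynamo_backend=no train.py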

Setting gpu_id

    gpu_ids = getattr(args, "gpu_ids", "all")
    # e.g. hard-code args.gpu_ids = gpu_ids = '0,1' to pin training to GPUs 0 and 1
    if gpu_ids != "all" and args.gpu_ids is not None:
        # in accelerate's launcher this branch restricts the visible devices
        # by exporting the chosen ids through CUDA_VISIBLE_DEVICES
        current_env["CUDA_VISIBLE_DEVICES"] = gpu_ids

Setting the multi-GPU training backend (this is the actual fix):

Use the gloo backend. NCCL, the backend accelerate defaults to for multi-GPU training, is not available on Windows, which is why the distributed launch fails there; switching the backend to gloo lets the process group initialize.

Location: Lib\site-packages\accelerate\state.py

            elif int(os.environ.get("LOCAL_RANK", -1)) != -1 and not cpu and torch.cuda.is_available():
                self.distributed_type = DistributedType.MULTI_GPU
                if not torch.distributed.is_initialized():
                    # default backend; on Windows "nccl" is unavailable, so "gloo" may be needed here too
                    self.backend = kwargs.pop("backend", "nccl")
                    # Special case for `TrainingArguments`, where `backend` will be `None`
                    if self.backend is None:
                        self.backend = "gloo"  # use "gloo" instead of "nccl" so the process group can initialize on Windows
