1. nn.DataParallel
model = nn.DataParallel(model.cuda(1), device_ids=[1,2,3,4,5])
criterion = nn.Loss()  # placeholder loss. GPU memory observed: i. criterion.cuda(1): 20G-21G  ii. criterion.cuda(): 18.5G-12.7G  iii. not moved at all: 16.5G-12.7G. all three take almost the same time per batch
data = data.cuda(1)
label = label.cuda(1)

out = model(data)

or:

model = nn.DataParallel(model, device_ids=[1,2,3,4,5]).cuda(1)
note:
- the original module can still be accessed as model.module
- if device_ids[0] uses much more memory than the others, move data and label to one of the other GPUs and pass output_device=<that gpu> to nn.DataParallel, so the gathered outputs live there instead of on device_ids[0] (see the sketch below)
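A minimal end-to-end sketch of the pattern above, assuming at least six visible GPUs; the toy model, nn.CrossEntropyLoss (substituted for the placeholder nn.Loss()), and tensor shapes are hypothetical stand-ins for the real ones:

import torch
import torch.nn as nn

# hypothetical toy model standing in for the real network
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# parameters and the output gather live on device_ids[0] (cuda:1)
model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3, 4, 5])
criterion = nn.CrossEntropyLoss()            # leaving the criterion unmoved was cheapest in the notes above

data = torch.randn(32, 128).cuda(1)          # inputs on device_ids[0]
label = torch.randint(0, 10, (32,)).cuda(1)

out = model(data)                            # scattered across GPUs 1-5, gathered back on cuda:1
loss = criterion(out, label)
loss.backward()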
2. new API: CUDA version / device name
torch.version.cuda
torch.cuda.get_device_name(0)
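Quick usage of the two calls above (torch.cuda.device_count() is added here as a related call; printed values depend on the local setup):

import torch

print(torch.version.cuda)              # CUDA version PyTorch was built with, e.g. '9.0'
print(torch.cuda.device_count())       # number of visible GPUs
print(torch.cuda.get_device_name(0))   # name of GPU 0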
-------------------errors---
1.
data = data.cuda()  # .cuda() with no index puts data on the default GPU (cuda:0), while the model and label sit on cuda:1
RuntimeError: Assertion `THCTensor_(checkGPU)(state, 4, input, target, output, total_weight)' failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /b/wheel/pytorch-src/torch/lib/THCUNN/generic/SpatialClassNLLCriterion.cu:46
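fix (following the setup in section 1, where the model lives on cuda:1):

data = data.cuda(1)    # keep data and label on the same device as the model, device_ids[0]
label = label.cuda(1)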
2.
nn.DataParallel(model.cuda(), device_ids=[1,2,3,4,5])  # .cuda() puts the parameters on cuda:0, but devices[0] here is cuda:1
result = self.forward(*input, **kwargs)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 65, in replicate
return replicate(module, device_ids)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
param_copies = Broadcast(devices)(*params)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 18, in forward
outputs = comm.broadcast_coalesced(inputs, self.target_gpus)
File "/anaconda3/lib/python3.6/site-packages/torch/cuda/comm.py", line 52, in broadcast_coalesced
raise RuntimeError('all tensors must be on devices[0]')
RuntimeError: all tensors must be on devices[0]
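fix: the parameters must start on device_ids[0], so move the model to cuda:1 before wrapping it:

model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3, 4, 5])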
3.
nn.DataParallel(model, device_ids=[1,2,3,4,5])  # the model was never moved to a GPU, so its parameters are still CPU tensors
out = model(data, train_seqs.index(name))
File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/data1/ailab_view/wenyulv/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 65, in replicate
return replicate(module, device_ids)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
param_copies = Broadcast(devices)(*params)
File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 14, in forward
raise TypeError('Broadcast function not implemented for CPU tensors')
TypeError: Broadcast function not implemented for CPU tensors
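fix: the module has to be on a GPU before the replicate/broadcast step; either form from section 1 works:

model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3, 4, 5])
# or
model = nn.DataParallel(model, device_ids=[1, 2, 3, 4, 5]).cuda(1)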
-------------reference-----------
1. https://github.com/GunhoChoi/Kind_PyTorch_Tutorial/blob/master/09_GAN_LayerName_MultiGPU/GAN_LayerName_MultiGPU.py
2. http://pytorch.org/docs/master/nn.html#dataparallel