id | text
---|---
st177168 | paepcke:
The difference ends up to be multiple epochs, even though both processes do finish.
NCCL allreduce calls (basically the backward sync point you mention) work by enqueuing the ops on the GPU and then letting the CPU host continue, so this kind of desynchronization, while unlikely, can occur.
Are you noticing any significant performance issues when this happens? If you do need to synchronize the GPUs, you can use torch.cuda.synchronize() or dist.barrier(), though this might affect performance, especially if called very frequently.
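For example, here is a minimal sketch of an explicit end-of-epoch sync (my own illustration, not code from this thread; it assumes the process group is already initialized and each rank has set its CUDA device):
import torch
import torch.distributed as dist

def end_of_epoch_sync():
    torch.cuda.synchronize()   # wait for the work this rank has enqueued on its GPU
    dist.barrier()             # then wait until every rank reaches this point

Calling something like this once per epoch keeps the ranks in lockstep while adding far less overhead than synchronizing every iteration. |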
st177169 | Sorry, Rohan; I needed to move forward, and
will run on a single GPU for now. I did try the synchronize()
and barrier(), but somehow one process ends up taking
100% of a CPU, and all memory on its GPU. So something
is wrong; I’ll have to go through my own code again when I
get the chance. Thank you nonetheless! |
st177170 | Hi, I’m writing an algorithm where the data pool is affected by the outcome of the trained model at each epoch. This means that while I can use DDP to create copies of the model on GPUs, the data pool where the training samples are drawn from should be shared among all processes. Is there a good way to achieve this? How do I collect results from multiple processes to update the shared pool, before continuing to the next training epoch?
Many thanks! |
st177171 | Hi, if you’re within a single node you can probably coordinate these updates with something like a multiprocessing.Manager (see the sketch at the end of this post, and multiprocessing — Process-based parallelism — Python 3.9.1 documentation).
Alternatively, if your training is across different nodes entirely, you have a few options:
Have a per-node data pool, and just proceed with the above approach
Have a global data pool that is replicated across each node with the MP manager. You can then probably use PyTorch APIs such as dist.broadcast_object_list and dist.scatter_object_list to share the required data.
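Here is a minimal single-node sketch of the Manager idea (an illustration with made-up names, not code from this thread): each worker process writes its results into a Manager-backed dict, and the parent reads everything back before the next epoch.
import multiprocessing as mp

def worker(rank, shared_pool):
    # each process contributes its results to the shared pool
    shared_pool[rank] = [rank * 10 + i for i in range(3)]

if __name__ == "__main__":
    manager = mp.Manager()
    shared_pool = manager.dict()          # proxy object visible to all child processes
    procs = [mp.Process(target=worker, args=(rank, shared_pool)) for rank in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(dict(shared_pool))              # parent sees every worker's update

The same pattern works with torch.multiprocessing, since it is a thin wrapper around the standard module. |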
st177172 | Thank you for the suggestion! So far I’m satisfied with the single-machine, multi-GPU setting. I did a bit of research and here’s a good post on how to use mp.Manager: https://stackoverflow.com/questions/10415028/how-can-i-recover-the-return-value-of-a-function-passed-to-multiprocessing-proce, in case anyone else is interested. BTW I should point out that torch.multiprocessing is just a thin wrapper around Python’s native multiprocessing, so most things are similar and can be used directly in the same style.
Basically a dict can be shared among the spawned processes, which suits my case. I guess any picklable primitive like a List would also do, though.
Will be back and update this thread once I’ve extended to a multi-machine setting. |
st177173 | CDhere:
Will be back and update this thread once I’ve extended to a multi-machine setting.
For multi-machine data sharing, one option would be to let one process serve as the data store and use torch.distributed.rpc to push/pull data across processes.
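A rough sketch of that idea (names and helpers are my own, not from this thread; it assumes rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size) has already been called on every process, and new_samples stands in for whatever this rank produced):
import torch.distributed.rpc as rpc

POOL = []                          # lives in the process acting as the data store (worker0)

def push_samples(samples):
    POOL.extend(samples)

def pull_pool():
    return list(POOL)

# on the trainer processes:
rpc.rpc_sync("worker0", push_samples, args=(new_samples,))   # push this rank's results
pool_copy = rpc.rpc_sync("worker0", pull_pool)               # pull the updated pool

The functions run inside the store process, so the pool itself never needs to be replicated. |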
st177174 | Hello,
I am trying to train a VGG model using DataParallel on a new computer with multiple GPUs. Before, my code was working properly when training on my other computer with a single GPU. I have changed the code to include:
class MyDataParallel(nn.DataParallel):
    def __getattr__(self, name):
        try:
            return super().__getattr__(name)
        except AttributeError:
            return getattr(self.module, name)
...
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    net = MyDataParallel(net, device_ids=range(torch.cuda.device_count()))
net.to(device)
The subclass at the top is just there to let my custom forward method access model attributes (I am training on CIFAR-100 instead of ImageNet, so I replace the FC layers with a single FC layer and also don’t need the average pool). With these changes, I get this error:
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Expected tensor for argument #1 ‘input’ to have the same device as tensor for argument #2 ‘weight’; but device 1 does not equal 0 (while checking arguments for cudnn_convolution)
When looking at the library files for torch.nn.modules.conv, I noticed that the forward function for conv2d is different from those of conv1d and conv3d:
def forward(self, input: Tensor) -> Tensor:
    if self.padding_mode != 'zeros':
        return F.conv1d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
                        self.weight, self.bias, self.stride,
                        _single(0), self.dilation, self.groups)
    return F.conv1d(input, self.weight, self.bias, self.stride,
                    self.padding, self.dilation, self.groups)
...
def _conv_forward(self, input, weight):
    if self.padding_mode != 'zeros':
        return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
                        weight, self.bias, self.stride,
                        _pair(0), self.dilation, self.groups)
    return F.conv2d(input, weight, self.bias, self.stride,
                    self.padding, self.dilation, self.groups)
...
def forward(self, input: Tensor) -> Tensor:
    if self.padding_mode != 'zeros':
        return F.conv3d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
                        self.weight, self.bias, self.stride, _triple(0),
                        self.dilation, self.groups)
    return F.conv3d(input, self.weight, self.bias, self.stride,
                    self.padding, self.dilation, self.groups)
Is there a particular reason why conv2d is the only one that uses weight instead of self.weight in its forward function? I feel like that might be the reason why DataParallel is erroring at runtime. I am using Spyder as an IDE, and I have already put breakpoints in dataparallel.py. I can confirm that when the code hits
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
the individual replicas are on their correct devices, and that each of the model’s conv layers has a weight tensor that lives on its proper device, but for some reason the forward method keeps trying to get the weight tensor on device 0 |
st177175 | Solved by randophilus in post #2
The conv2d library was not the problem. I found out the problem was the one described here: Since I was running VGG on CIFAR-100, I had to rewrite the forward method of PyTorch’s default VGG network, since it’s built for ImageNet and includes an average-pool layer that errors with CIFAR-100’s data size. Using types… |
st177176 | The conv2d library was not the problem. I found out the problem was the one described here: since I was running VGG on CIFAR-100, I had to rewrite the forward method of PyTorch’s default VGG network, since it’s built for ImageNet and includes an average-pool layer that errors with CIFAR-100’s data size. Using types.MethodType to replace methods in a network is incompatible with DataParallel. My solution was to create my own “MyVGG” class that takes a VGG model as input, reuses all of its parameters, and defines its own forward function within that class.
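A rough sketch of what such a wrapper could look like (my own illustration, not the poster’s actual class; the 512-feature size assumes VGG-16 on 32x32 CIFAR inputs):
import torch
import torch.nn as nn
from torchvision.models import vgg16

class MyVGG(nn.Module):
    """Wraps a torchvision VGG, skipping the avgpool and using one FC layer."""
    def __init__(self, vgg):
        super().__init__()
        self.features = vgg.features            # reuse the convolutional stack as-is
        self.classifier = nn.Linear(512, 100)   # single FC layer for CIFAR-100

    def forward(self, x):                       # 32x32 input -> 512x1x1 feature map
        x = torch.flatten(self.features(x), 1)
        return self.classifier(x)

net = nn.DataParallel(MyVGG(vgg16()))

Because forward is defined on the class itself (rather than patched onto an instance), DataParallel can replicate it normally. |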
st177177 | Hi, I’m having trouble with distributed.barrier(). I use it to make the other ranks wait for rank 0 to run the test and save the parameters; with DDP training all ranks share the parameters, so I see no need for every rank to test and save.
Some code is below.
distributed.barrier()  # first barrier
for epoch in range(resume_epoch, epochs):
    tic = time.time()
    if not cfg.data.transform.dali_pipe:
        train_sampler.set_epoch(epoch)
    train_one_epoch(model, train_loader, Loss, optimizer, epoch, lr_scheduler, logger, (top1_acc, loss_record, *train_dc),
                    scaler, gpu, args, cfg)
    if is_first_rank:
        one_epoch_time_cost = int(time.time() - tic)
        train_speed = cfg.data.num_training_samples // one_epoch_time_cost
        train_time_cost = "%02d:%02d:%02d" % seconds_to_time(one_epoch_time_cost)
        logger.info(f'Finish one epoch cost {train_time_cost}, speed: {train_speed} samples/s.')
        if not cfg.test.no_test:
            test(model, val_loader, Loss, epoch, logger, (top1_acc, top5_acc, loss_record), gpu)
            acc = top1_acc.get()
            checkpoint = {
                'model': model.state_dict(),
                'optimizer': optimizer.state_dict(),
                'scaler': scaler.state_dict(),
                'lr_scheduler': lr_scheduler.state_dict(),
            }
            torch.save(checkpoint, '{}/{}_{}_{:.5}.pt'.format(args.out_path, cfg.model.network, epoch, acc))
            if acc > best_top1_acc:
                old_backbone = '{}/{}_backbone_{:.5}.pth'.format(args.out_path, cfg.model.network, best_top1_acc)
                if os.path.exists(old_backbone):
                    os.remove(old_backbone)
                best_top1_acc = acc
                torch.save(checkpoint['model'], '{}/{}_backbone_{:.5}.pth'.format(args.out_path, cfg.model.network, acc))
    if cfg.data.transform.dali_pipe.enable:
        train_loader.reset()
    logger.info(f"rank:{gpu} got here.")
    distributed.barrier()
    logger.info(f"rank:{gpu} pass here.")
My issue is that all ranks pass the first barrier, and all ranks reach the second barrier, but none of them get past it.
Could you please give me some advice? |
st177178 | Would it be possible to come up with a minimal script that reproduces the issue that we can run to reproduce it on our end? Any logs you may also have would be helpful.
Also - which distributed comm. backend are you using (Gloo, NCCL, MPI)? If all ranks indeed call into the second barrier but none make it out, I’m guessing there is likely a hang or some other form of desynchronization going on. |
st177179 | Hi, @rvarm1 , I can give you the full log of this, but a minimal script may take some time to prepare.
I am using the NCCL backend.
Use GPU: 2 for training
Use GPU: 5 for training
Use GPU: 4 for training
Use GPU: 6 for training
Use GPU: 0 for training
Use GPU: 3 for training
Use GPU: 7 for training
Use GPU: 1 for training
Namespace(data=(classes=1000; num_training_samples=1281167; input_size=224; transform=(type=normal; color_jit=0.4; dali_pipe=(enable=True; dali_cpu=False)); dataloader=(num_workers=40; sampler=distributed_sampler)); model=(network=GhostNetRE; model_setting=width=0.5,dropout=0.1; model_info=True); trainer=(epochs=360; batch_size=256; dtype=float16; mix_precision_training=True; lr_scheduler=(type=cosine; warmup_lr=0; warmup_epochs=5); optimizer=(learning_rate=2.6; momentum=0.9; weight_decay=3e-05; nesterov=True)); test=(no_test=False; crop_ratio=0.875); tricks=(label_smoothing=(enable=True; smoothing=0.1); no_weight_decay=True; lookahead=False; mixup=(enable=False; alpha=0.2); last_gamma=False; sgd_gc=False); logs=(tensorboard=True; log_interval=200; logging_file_name=distribute_train_imagenet.log); resume=(resume_epoch=0; resume_param=None))
Train with FP16.
Reducer buckets have been rebuilt in this iteration.
Reducer buckets have been rebuilt in this iteration.
Reducer buckets have been rebuilt in this iteration.
Reducer buckets have been rebuilt in this iteration.
Reducer buckets have been rebuilt in this iteration.
Reducer buckets have been rebuilt in this iteration.
Reducer buckets have been rebuilt in this iteration.
Reducer buckets have been rebuilt in this iteration.
Epoch 0, Node 0, GPU 7, Iter 200, Top1 Accuracy:0.0029928, Loss:6.847, 622 samples/s. lr: 0.16806.
Epoch 0, Node 0, GPU 3, Iter 200, Top1 Accuracy:0.0031095, Loss:6.8501, 622 samples/s. lr: 0.16806.
Epoch 0, Node 0, GPU 2, Iter 200, Top1 Accuracy:0.0022932, Loss:6.8461, 622 samples/s. lr: 0.16806.
Epoch 0, Node 0, GPU 6, Iter 200, Top1 Accuracy:0.0026625, Loss:6.8474, 622 samples/s. lr: 0.16806.
Epoch 0, Node 0, GPU 0, Iter 200, Top1 Accuracy:0.0027596, Loss:6.8466, 622 samples/s. lr: 0.16806.
Epoch 0, Node 0, GPU 5, Iter 200, Top1 Accuracy:0.0026819, Loss:6.8472, 622 samples/s. lr: 0.16806.
Epoch 0, Node 0, GPU 4, Iter 200, Top1 Accuracy:0.0026819, Loss:6.8492, 622 samples/s. lr: 0.16806.
Epoch 0, Node 0, GPU 1, Iter 200, Top1 Accuracy:0.0023515, Loss:6.8483, 622 samples/s. lr: 0.16806.
Epoch 0, Node 0, GPU 4, Iter 400, Top1 Accuracy:0.012313, Loss:6.5364, 587 samples/s. lr: 0.33446.
Epoch 0, Node 0, GPU 3, Iter 400, Top1 Accuracy:0.012235, Loss:6.5393, 587 samples/s. lr: 0.33446.
Epoch 0, Node 0, GPU 6, Iter 400, Top1 Accuracy:0.012293, Loss:6.5403, 587 samples/s. lr: 0.33446.
Epoch 0, Node 0, GPU 1, Iter 400, Top1 Accuracy:0.012595, Loss:6.5376, 587 samples/s. lr: 0.33446.
Epoch 0, Node 0, GPU 5, Iter 400, Top1 Accuracy:0.012089, Loss:6.5356, 587 samples/s. lr: 0.33446.
Epoch 0, Node 0, GPU 7, Iter 400, Top1 Accuracy:0.012118, Loss:6.5386, 587 samples/s. lr: 0.33446.
Epoch 0, Node 0, GPU 0, Iter 400, Top1 Accuracy:0.012313, Loss:6.5372, 587 samples/s. lr: 0.33446.
Epoch 0, Node 0, GPU 2, Iter 400, Top1 Accuracy:0.01169, Loss:6.5363, 587 samples/s. lr: 0.33446.
Epoch 0, Node 0, GPU 2, Iter 600, Top1 Accuracy:0.027331, Loss:6.2461, 567 samples/s. lr: 0.50086.
Epoch 0, Node 0, GPU 4, Iter 600, Top1 Accuracy:0.02813, Loss:6.2486, 567 samples/s. lr: 0.50086.
Epoch 0, Node 0, GPU 5, Iter 600, Top1 Accuracy:0.027961, Loss:6.2488, 567 samples/s. lr: 0.50086.
Epoch 0, Node 0, GPU 6, Iter 600, Top1 Accuracy:0.027968, Loss:6.2517, 567 samples/s. lr: 0.50086.
Epoch 0, Node 0, GPU 1, Iter 600, Top1 Accuracy:0.028065, Loss:6.2518, 567 samples/s. lr: 0.50086.
Epoch 0, Node 0, GPU 7, Iter 600, Top1 Accuracy:0.027786, Loss:6.2526, 567 samples/s. lr: 0.50086.
Epoch 0, Node 0, GPU 3, Iter 600, Top1 Accuracy:0.027428, Loss:6.2534, 567 samples/s. lr: 0.50086.
Epoch 0, Node 0, GPU 0, Iter 600, Top1 Accuracy:0.027636, Loss:6.2534, 567 samples/s. lr: 0.50086.
Finish one epoch cost 00:04:43, speed: 4527 samples/s.
rank:3 get second barrier.
rank:4 get second barrier.
rank:1 get second barrier.
rank:6 get second barrier.
rank:7 get second barrier.
rank:5 get second barrier.
rank:2 get second barrier.
Test Epoch 0, Top1 Accuracy:0.08558, Top5 Accuracy:0.22848, Loss:5.3841
rank:0 get second barrier. |
st177180 | I see only one log line for Test Epoch 0, Top1 Accuracy:0.08558, Top5 Accuracy:0.22848, Loss:5.3841 , is it possible that rank 0 is issuing some additional collective comm. as part of this (e.g. all_reduce) that isn’t matched by the other ranks? That may explain the hang. |
st177181 | Hi, @rvarm1 .
I don’t use any distributed op in the test; the whole test code is below. There is only one log line because only rank 0 runs the test.
@torch.no_grad()
def test(model, val_loader, criterion, epoch, logger, collectors, gpu):
    top1_acc, top5_acc, loss_record = collectors
    top1_acc.reset()
    top5_acc.reset()
    loss_record.reset()
    model.eval()
    for meta_data in val_loader:
        data = meta_data[0].cuda(gpu, non_blocking=True)
        labels = meta_data[1].cuda(gpu, non_blocking=True)
        outputs = model(data)
        losses = criterion(outputs, labels)
        top1_acc.update(outputs, labels)
        top5_acc.update(outputs, labels)
        loss_record.update(losses)
    test_msg = 'Test Epoch {}, {}:{:.5}, {}:{:.5}, {}:{:.5}'.format(epoch, top1_acc.name, top1_acc.get(), top5_acc.name,
                                                                    top5_acc.get(), loss_record.name, loss_record.get())
    logger.info(test_msg)
    top1_acc.write_tb(name='Val Top1 Accuracy', iteration=epoch)
    top5_acc.write_tb(name='Val Top5 Accuracy', iteration=epoch)
    loss_record.write_tb(name='Val Loss', iteration=epoch)
And if I did use a distributed op, I should not get past it until all ranks executed the same op, right? |
st177182 | I am using pytorch-lightning as my training framework, and I have tried training on 1, 2, and 4 GPUs (all T4). My model, a video action classification network, hangs at the same spot each time. It only hangs when I set the trainer flags
Trainer(
    gpus=(something greater than 1),
    sync_batchnorm=True,
    accelerator="ddp",
)
I noticed that when it hangs GPU utilization stays pinned at 100% with no power fluctuations.
I am able to train my model with sync_batchnorm=False.
Does anyone have experience or tips on what a solution might be or how to properly debug this?
I have also tested this on 2 V100s; it hangs at a slightly different spot, but it’s the same issue.
Version/OS:
Ubuntu 18.04LTS
CUDA 11.1
Driver Version 455.45.01
Pytorch 1.7.1
Pytorch-Lightning 1.1.5 |
st177183 | Solved by ChickenTarm in post #2
[Solved] My problem was that I have random alternating training that goes down different branches of my model. I needed to set the random seed that samples the probability of which alternating loss it will perform. This is probably because when PyTorch does its all_reduce somewhere, it notices a differ… |
st177184 | [Solved] My problem was that I have random alternating training that goes down different branches of my model. I needed to set the random seed that samples which alternating loss will be performed. This is probably because when PyTorch does its all_reduce somewhere, it notices a difference in batch norm statistics, since I believe it assumes some ordering on the statistics.
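One way to guarantee every rank picks the same branch (a sketch of my own under the assumption that the branch is sampled with torch's RNG; device is this rank's GPU): either call torch.manual_seed with the same seed on every rank, or sample the choice on rank 0 and broadcast it.
import torch
import torch.distributed as dist

def pick_branch(device):
    choice = torch.zeros(1, device=device)
    if dist.get_rank() == 0:
        choice = torch.randint(0, 2, (1,), device=device).float()
    dist.broadcast(choice, src=0)       # every rank now holds rank 0's sample
    return int(choice.item())

Either approach keeps the per-rank collectives structurally identical, which is what DDP expects. |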
st177185 | ChickenTarm:
I needed to set the random seed that samples the probability of which alternating loss it will perform
This sounds correct; there would probably be a hang if a different alternating loss were used on different ranks. |
st177186 | I think this should probably be noted in the docs since alternating training isn’t uncommon. |
st177187 | I’m using Gloo along with CPU. It gives me the following warning when I submit a job, and it doesn’t continue after the warning:
UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /opt/conda/conda-bld/pytorch_1607370172916/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
However when I run in an “interact” mode without submitting jobs it runs fine. Any clue how to get the jobs running? |
st177188 | Hi,
Would it be possible to get a minimal script that reproduces the issue so we can investigate? Information about your environment (PyTorch version, etc.) would be great as well. Also, this seems like it may be an unexpected bug, so the GitHub issue tracker (Issues · pytorch/pytorch · GitHub) might be a better place for it. |
st177189 | Sure. In the meantime, is there a way to suppress the use of the GPU for Gloo, like --mca mpi_cuda_support 0 does for MPI?
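Two things that might be worth trying (guesses on my part, not documented Gloo switches): hide the GPUs from the process and/or silence the CUDA-initialization warning. Neither should affect Gloo on CPU tensors.
import os
import warnings

os.environ["CUDA_VISIBLE_DEVICES"] = ""                      # hide any GPUs before torch is imported
warnings.filterwarnings("ignore", message="CUDA initialization.*")

import torch
import torch.distributed as dist

Both are only guesses, since the underlying cause of the stall isn't clear from the warning alone. |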
st177190 | Hi,
I am training a model where the learning rate is automatically updated (decay on plateau).
This works fine on single GPU.
I am porting my code to work with DDP.
After every epoch, I run a validation loop (only if local_rank == 0) and then decide whether to update or not the learning rate.
Pseudocode:
if local_rank == 0:
    new_lr = run_validation_and_get_lr()
    for param_group in optim.param_groups:
        param_group["lr"] = new_lr
However, this seems to only update the learning rate in process 0, and not in the other processes.
How can I ensure the LR is updated in all processes? |
st177191 | Solved by rvarm1 in post #2
Hi,
I’m guessing this is because your update is conditioned on if local_rank == 0. If you only want to run this validation on rank 0, you can broadcast the result to the rest of the ranks as follows:
if local_rank == 0:
    new_lr = compute_new_lr()
    dist.broadcast(torch.tensor(new_lr), src=0)
… |
st177192 | Hi,
I’m guessing this is because your update is conditioned on if local_rank == 0. If you only want to run this validation on rank 0, you can broadcast the result to the rest of the ranks as follows:
if local_rank == 0:
    new_lr = compute_new_lr()
    new_lr = torch.tensor([new_lr], device=local_rank)
else:
    new_lr = torch.zeros(1, device=local_rank)
dist.broadcast(new_lr, src=0)
# all ranks then set param_group["lr"] to new_lr.item() |
st177193 | Hi PyTorch Team,
I’m trying to use AWS P4 instances to train a neural machine translation (NMT) model using fairseq. I am training on a single node, so this is not multi-node distributed training. I am able to train the NMT model with a single GPU, but when I try to use multiple GPUs on the single instance it throws the following error log.
| distributed init (rank 1): tcp://localhost:13852
| distributed init (rank 2): tcp://localhost:13852
| distributed init (rank 5): tcp://localhost:13852
| distributed init (rank 3): tcp://localhost:13852
| distributed init (rank 6): tcp://localhost:13852
| distributed init (rank 4): tcp://localhost:13852
| distributed init (rank 0): tcp://localhost:13852
| distributed init (rank 7): tcp://localhost:13852
| initialized host ip-10-7-6-34 as rank 7
| initialized host ip-10-7-6-34 as rank 1
| initialized host ip-10-7-6-34 as rank 2
| initialized host ip-10-7-6-34 as rank 5
| initialized host ip-10-7-6-34 as rank 3
| initialized host ip-10-7-6-34 as rank 6
| initialized host ip-10-7-6-34 as rank 4
| initialized host ip-10-7-6-34 as rank 0
ip-10-7-6-34:26807:26807 [0] NCCL INFO Bootstrap : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26807:26807 [0] NCCL INFO NET/OFI Running on P4d platform, Setting NCCL_TOPO_FILE environment variable to /usr/local/cuda-10.1/efa/s hare/aws-ofi-nccl/xml/p4d-24xl-topo.xml
ip-10-7-6-34:26807:26807 [0] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
ip-10-7-6-34:26807:26807 [0] ofi_init:1136 NCCL WARN NET/OFI Only EFA provider is supported
ip-10-7-6-34:26807:26807 [0] NCCL INFO NET/IB : No device found.
ip-10-7-6-34:26807:26807 [0] NCCL INFO NET/Socket : Using [0]ens32:10.7.6.34<0>
NCCL version 2.4.8+cuda10.1
ip-10-7-6-34:26807:27073 [0] NCCL INFO Setting affinity for GPU 0 to ff,ffff0000,00ffffff
ip-10-7-6-34:26814:26814 [7] NCCL INFO Bootstrap : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26814:26814 [7] NCCL INFO NET/OFI Running on P4d platform, Setting NCCL_TOPO_FILE environment variable to /usr/local/cuda-10.1/efa/s hare/aws-ofi-nccl/xml/p4d-24xl-topo.xml
ip-10-7-6-34:26814:26814 [7] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
ip-10-7-6-34:26814:26814 [7] ofi_init:1136 NCCL WARN NET/OFI Only EFA provider is supported
ip-10-7-6-34:26814:26814 [7] NCCL INFO NET/IB : No device found.
ip-10-7-6-34:26814:26814 [7] NCCL INFO NET/Socket : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26814:27100 [7] NCCL INFO Setting affinity for GPU 7 to ffffff00,0000ffff,ff000000
ip-10-7-6-34:26810:26810 [3] NCCL INFO Bootstrap : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26810:26810 [3] NCCL INFO NET/OFI Running on P4d platform, Setting NCCL_TOPO_FILE environment variable to /usr/local/cuda-10.1/efa/s hare/aws-ofi-nccl/xml/p4d-24xl-topo.xml
ip-10-7-6-34:26810:26810 [3] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
ip-10-7-6-34:26810:26810 [3] ofi_init:1136 NCCL WARN NET/OFI Only EFA provider is supported
ip-10-7-6-34:26810:26810 [3] NCCL INFO NET/IB : No device found.
ip-10-7-6-34:26810:26810 [3] NCCL INFO NET/Socket : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26810:27104 [3] NCCL INFO Setting affinity for GPU 3 to ff,ffff0000,00ffffff
ip-10-7-6-34:26811:26811 [4] NCCL INFO Bootstrap : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26811:26811 [4] NCCL INFO NET/OFI Running on P4d platform, Setting NCCL_TOPO_FILE environment variable to /usr/local/cuda-10.1/efa/s hare/aws-ofi-nccl/xml/p4d-24xl-topo.xml
ip-10-7-6-34:26811:26811 [4] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
ip-10-7-6-34:26811:26811 [4] ofi_init:1136 NCCL WARN NET/OFI Only EFA provider is supported
ip-10-7-6-34:26811:26811 [4] NCCL INFO NET/IB : No device found.
ip-10-7-6-34:26811:26811 [4] NCCL INFO NET/Socket : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26811:27118 [4] NCCL INFO Setting affinity for GPU 4 to ffffff00,0000ffff,ff000000
ip-10-7-6-34:26813:26813 [6] NCCL INFO Bootstrap : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26813:26813 [6] NCCL INFO NET/OFI Running on P4d platform, Setting NCCL_TOPO_FILE environment variable to /usr/local/cuda-10.1/efa/s hare/aws-ofi-nccl/xml/p4d-24xl-topo.xml
ip-10-7-6-34:26813:26813 [6] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
ip-10-7-6-34:26813:26813 [6] ofi_init:1136 NCCL WARN NET/OFI Only EFA provider is supported
ip-10-7-6-34:26813:26813 [6] NCCL INFO NET/IB : No device found.
ip-10-7-6-34:26813:26813 [6] NCCL INFO NET/Socket : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26813:27129 [6] NCCL INFO Setting affinity for GPU 6 to ffffff00,0000ffff,ff000000
ip-10-7-6-34:26809:26809 [2] NCCL INFO Bootstrap : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26809:26809 [2] NCCL INFO NET/OFI Running on P4d platform, Setting NCCL_TOPO_FILE environment variable to /usr/local/cuda-10.1/efa/s hare/aws-ofi-nccl/xml/p4d-24xl-topo.xml
ip-10-7-6-34:26809:26809 [2] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
ip-10-7-6-34:26809:26809 [2] ofi_init:1136 NCCL WARN NET/OFI Only EFA provider is supported
ip-10-7-6-34:26809:26809 [2] NCCL INFO NET/IB : No device found.
ip-10-7-6-34:26809:26809 [2] NCCL INFO NET/Socket : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26809:27139 [2] NCCL INFO Setting affinity for GPU 2 to ff,ffff0000,00ffffff
ip-10-7-6-34:26808:26808 [1] NCCL INFO Bootstrap : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26808:26808 [1] NCCL INFO NET/OFI Running on P4d platform, Setting NCCL_TOPO_FILE environment variable to /usr/local/cuda-10.1/efa/s hare/aws-ofi-nccl/xml/p4d-24xl-topo.xml
ip-10-7-6-34:26808:26808 [1] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
ip-10-7-6-34:26808:26808 [1] ofi_init:1136 NCCL WARN NET/OFI Only EFA provider is supported
ip-10-7-6-34:26808:26808 [1] NCCL INFO NET/IB : No device found.
ip-10-7-6-34:26808:26808 [1] NCCL INFO NET/Socket : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26808:27144 [1] NCCL INFO Setting affinity for GPU 1 to ff,ffff0000,00ffffff
ip-10-7-6-34:26812:26812 [5] NCCL INFO Bootstrap : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26812:26812 [5] NCCL INFO NET/OFI Running on P4d platform, Setting NCCL_TOPO_FILE environment variable to /usr/local/cuda-10.1/efa/s hare/aws-ofi-nccl/xml/p4d-24xl-topo.xml
ip-10-7-6-34:26812:26812 [5] NCCL INFO NET/OFI Setting RDMAV_FORK_SAFE environment variable to 1.
ip-10-7-6-34:26812:26812 [5] ofi_init:1136 NCCL WARN NET/OFI Only EFA provider is supported
ip-10-7-6-34:26812:26812 [5] NCCL INFO NET/IB : No device found.
ip-10-7-6-34:26812:26812 [5] NCCL INFO NET/Socket : Using [0]ens32:10.7.6.34<0>
ip-10-7-6-34:26812:27154 [5] NCCL INFO Setting affinity for GPU 5 to ffffff00,0000ffff,ff000000
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 00 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 01 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 02 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 03 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 04 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 05 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 06 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 07 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 08 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 09 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 10 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26807:27073 [0] NCCL INFO Channel 11 : 0 1 2 3 4 5 6 7
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 00 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 00 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 00 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 00 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 00 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 00 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 00 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 00 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 01 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 01 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 01 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 01 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 01 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 01 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 01 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 01 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 02 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 02 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 02 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 02 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 02 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 02 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 02 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 02 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 03 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 03 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 03 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 03 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 03 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 03 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 03 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 03 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 04 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 04 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 04 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 04 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 04 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 04 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 04 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 04 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 05 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 05 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 05 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 05 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 05 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 05 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 05 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 05 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 06 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 06 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 06 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 06 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 06 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 06 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 06 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 06 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 07 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 07 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 07 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 07 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 07 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 07 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 07 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 07 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 08 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 08 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 08 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 08 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 08 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 08 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 08 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 08 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 09 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 09 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 09 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 09 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 09 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 09 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 09 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 09 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 10 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 10 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 10 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 10 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 10 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 10 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 10 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 10 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26808:27144 [1] NCCL INFO Ring 11 : 1[1] -> 2[2] via P2P/IPC
ip-10-7-6-34:26810:27104 [3] NCCL INFO Ring 11 : 3[3] -> 4[4] via P2P/IPC
ip-10-7-6-34:26809:27139 [2] NCCL INFO Ring 11 : 2[2] -> 3[3] via P2P/IPC
ip-10-7-6-34:26812:27154 [5] NCCL INFO Ring 11 : 5[5] -> 6[6] via P2P/IPC
ip-10-7-6-34:26811:27118 [4] NCCL INFO Ring 11 : 4[4] -> 5[5] via P2P/IPC
ip-10-7-6-34:26813:27129 [6] NCCL INFO Ring 11 : 6[6] -> 7[7] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Ring 11 : 0[0] -> 1[1] via P2P/IPC
ip-10-7-6-34:26814:27100 [7] NCCL INFO Ring 11 : 7[7] -> 0[0] via P2P/IPC
ip-10-7-6-34:26807:27073 [0] NCCL INFO Using 256 threads, Min Comp Cap 8, Trees disabled
ip-10-7-6-34:26808:27144 [1] NCCL INFO comm 0x7fa7e40028a0 rank 1 nranks 8 cudaDev 1 nvmlDev 1 - Init COMPLETE
ip-10-7-6-34:26808:26808 [1] enqueue.cc:197 NCCL WARN Cuda failure 'invalid device function'
ip-10-7-6-34:26808:26808 [1] NCCL INFO misc/group.cc:148 -> 1
ip-10-7-6-34:26812:27154 [5] NCCL INFO comm 0x7f21f40028a0 rank 5 nranks 8 cudaDev 5 nvmlDev 5 - Init COMPLETE
ip-10-7-6-34:26813:27129 [6] NCCL INFO comm 0x7f74c40028a0 rank 6 nranks 8 cudaDev 6 nvmlDev 6 - Init COMPLETE
ip-10-7-6-34:26813:26813 [6] enqueue.cc:197 NCCL WARN Cuda failure 'invalid device function'
ip-10-7-6-34:26813:26813 [6] NCCL INFO misc/group.cc:148 -> 1
ip-10-7-6-34:26810:27104 [3] NCCL INFO comm 0x7f2d640028a0 rank 3 nranks 8 cudaDev 3 nvmlDev 3 - Init COMPLETE
ip-10-7-6-34:26812:26812 [5] enqueue.cc:197 NCCL WARN Cuda failure 'invalid device function'
ip-10-7-6-34:26812:26812 [5] NCCL INFO misc/group.cc:148 -> 1
ip-10-7-6-34:26810:26810 [3] enqueue.cc:197 NCCL WARN Cuda failure 'invalid device function'
ip-10-7-6-34:26810:26810 [3] NCCL INFO misc/group.cc:148 -> 1
ip-10-7-6-34:26809:27139 [2] NCCL INFO comm 0x7f2e380028a0 rank 2 nranks 8 cudaDev 2 nvmlDev 2 - Init COMPLETE
ip-10-7-6-34:26811:27118 [4] NCCL INFO comm 0x7fc5f00028a0 rank 4 nranks 8 cudaDev 4 nvmlDev 4 - Init COMPLETE
ip-10-7-6-34:26809:26809 [2] enqueue.cc:197 NCCL WARN Cuda failure 'invalid device function'
ip-10-7-6-34:26809:26809 [2] NCCL INFO misc/group.cc:148 -> 1
ip-10-7-6-34:26814:27100 [7] NCCL INFO comm 0x7f09080028a0 rank 7 nranks 8 cudaDev 7 nvmlDev 7 - Init COMPLETE
ip-10-7-6-34:26807:27073 [0] NCCL INFO comm 0x7f50e40028a0 rank 0 nranks 8 cudaDev 0 nvmlDev 0 - Init COMPLETE
ip-10-7-6-34:26811:26811 [4] enqueue.cc:197 NCCL WARN Cuda failure 'invalid device function'
ip-10-7-6-34:26811:26811 [4] NCCL INFO misc/group.cc:148 -> 1
ip-10-7-6-34:26807:26807 [0] NCCL INFO Launch mode Parallel
ip-10-7-6-34:26807:26807 [0] enqueue.cc:197 NCCL WARN Cuda failure 'invalid device function'
ip-10-7-6-34:26807:26807 [0] NCCL INFO misc/group.cc:148 -> 1
ip-10-7-6-34:26814:26814 [7] enqueue.cc:197 NCCL WARN Cuda failure 'invalid device function'
ip-10-7-6-34:26814:26814 [7] NCCL INFO misc/group.cc:148 -> 1
Traceback (most recent call last):
File "/home/ubuntu/train.py", line 315, in <module>
cli_main()
File "/home/ubuntu/train.py", line 307, in cli_main
nprocs=args.distributed_world_size,
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in tart_processes
while not context.join():
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 119, in join
raise Exception(msg)
Exception:
-- Process 5 terminated with the following error:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/ubuntu/train.py", line 274, in distributed_main
File "/home/ubuntu/train.py", line 36, in main
args.distributed_rank = distributed_utils.distributed_init(args)
File "/home/ubuntu/fairseq/distributed_utils.py", line 85, in distributed_init
dist.all_reduce(torch.rand(1).cuda())
File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 898, in all_reduce
work = _default_pg.allreduce([tensor], opts)
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1587428091666/work/torch/lib/c10d/ProcessGroupNCCL.cpp:32, unhandled cuda error, NCCL version 2.4.8
Environment:
Aws DeepLearning AMI with Ubuntu 18.04
Cuda: 10.1
PyTorch: 1.5.0 with Cuda 10.1
Fairseq: 0.9
NCCL version: 2.4.8
GPU type: Nvidia - A100
Number of GPUs - 8
Number of Nodes - 1
I have referred to the following GitHub issues and tried out most of the suggestions given there, but I’m not able to resolve this issue.
BART-Large: RuntimeError: CUDA error: the launch timed out and was terminated · Issue #2311 · pytorch/fairseq · GitHub 1
Pytorch 1.5.0 (installed from conda) errors with complaints about incompatibility between MKL and libgomp when using Pytorch's multiprocessing · Issue #37377 · pytorch/pytorch · GitHub 1
NCCL backend fails when calling broadcast from different threads · Issue #18300 · pytorch/pytorch · GitHub 3
unhandled cuda error while training using multiple nodes · Issue #973 · pytorch/fairseq · GitHub 3
Crash when initializing distributed training across 2 machines - #4 by naykun 2 |
st177194 | The “unhandled cuda error” is more of a generic error message - the actual error may be indicated by the “Cuda failure ‘invalid device function’”.
These errors are usually caused by some incorrect configurations, and you may try setting the following:
* NCCL_TREE_THRESHOLD=0
* NCCL_SOCKET_IFNAME=
* NCCL_IB_DISABLE=1
* NCCL_P2P_DISABLE=1
A similar error from Horovod (link here: Cuda failure 'invalid device function' · Issue #1171 · horovod/horovod · GitHub 3) was fixed by setting NCCL_IB_DISABLE=1 |
st177195 | This error might be raised because you are using a node with A100s (sm_80) and an old PyTorch 1.5.0 binary with the CUDA 10.1 runtime, which doesn’t support this GPU architecture.
While the CUDA JIT might kick in to compile native PyTorch kernels, other libraries might raise the posted error, so you would have to update PyTorch to the latest release with CUDA 11.0.
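A quick way to check what your environment actually provides (a generic sketch, not part of the original reply):
import torch

print(torch.__version__, torch.version.cuda)      # PyTorch build and the CUDA runtime it ships with
if torch.cuda.is_available():
    print(torch.cuda.get_device_capability(0))    # (8, 0) means sm_80, i.e. an A100-class GPU

If the reported CUDA runtime is older than 11.0 while the capability is (8, 0), the mismatch described above applies. |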
st177196 | Thanks @osalpekar and @ptrblck for your suggestions. I will try with upgraded PyTorch version with CUDA11.0 and see if that is working for me or not. |
st177197 | Hello!
I am using DistributedDataParallel on one node with 4 GPUs. Also using DistributedSampler for training.
self.model = torch.nn.parallel.DistributedDataParallel(
    self.model,
    device_ids=[self.local_rank],
    output_device=self.local_rank,
    find_unused_parameters=True,
)
I run evaluation after every training epoch only on rank_0.
During evaluation I observed (through nvidia-smi) that the other ranks/GPUs (1, 2, 3) continue to be processing something at 100% load.
My questions:
Is it possible that the other ranks continue training the next epoch without waiting for rank_0 to finish evaluation?
In case (1) is true, is it ok to leave it like this (will rank_0 process its part of the next epoch after it finishes evaluation)? Or is it better to set a barrier so that the other ranks wait for rank_0 to finish evaluation?
Thanks! |
st177198 | Solved by mrshenli in post #2
Is it possible that other ranks continuing training next epoch without waiting for rank_0 to finish evaluation?
Yes, it is possible. Because all communication/synchronization happen in the backward pass. So other ranks will proceed with their next forward pass and local backward pass, and then b… |
st177199 | Is it possible that other ranks continuing training next epoch without waiting for rank_0 to finish evaluation?
Yes, it is possible, because all communication/synchronization happens in the backward pass. So the other ranks will proceed with their next forward pass and local backward pass, and then block on the AllReduce operation in the backward pass.
In case (1) is true, is it ok to leave it like this (will rank_0 process its part of the next epoch after it finishes evaluation)?
Yes, it should be OK to leave it this way. The other ranks just block waiting on AllReduce until rank_0 finishes evaluation and runs its subsequent backward pass. It shouldn’t affect correctness.
During evaluation I observed (through nvidia-smi) that the other ranks/GPUs (1, 2, 3) continue to be processing something at 100% load.
Yes, while block-waiting on AllReduce, CUDA shows as busy, although there might be no real computation running. |
st177200 | Thank you for such a great explanation. It really helped me figure out how to do evaluation. |
st177201 | So is it possible to evaluate on all GPUs and combine the results, or will that be very complicated?
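One common pattern (a sketch of my own, not an answer from this thread): let every rank evaluate its own shard of the validation set, then sum the raw counts across ranks, so each rank ends up with the same global metric.
import torch
import torch.distributed as dist

def gather_accuracy(local_correct, local_total, device):
    counts = torch.tensor([local_correct, local_total],
                          dtype=torch.float64, device=device)
    dist.all_reduce(counts, op=dist.ReduceOp.SUM)   # sum counts over all ranks
    return (counts[0] / counts[1]).item()           # identical value on every rank

This needs a DistributedSampler (or equivalent) over the validation set so the shards don't overlap, otherwise some samples are counted more than once. |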
st177202 | To verify my understanding of DDP’s model parameter synchronization, I started with a [tutorial snippet][1]. I instrumented the code to save model snapshots before and after each call to backward().
- Ubuntu 20.04
- Pytorch torch-1.7.1-py3.8
- torch.cuda.nccl.version(): 2708
- 2xNvidia GTX Titan
- Single machine, 2 process, one for each of the GPUs
What I expected was:
Within one GPU, the model parameters after back prop would differ
from their values before the back prop
Observed: they were equal.
Across the two GPUs, model parameters would be the same after
back prop, because DDP synchronizes them.
Observed: they are indeed equal, but given 1., can I believe any sync is occurring?
In a separate experiment I used Wireshark to observe packet level activity between pytorch processes, but none was occurring.
What is the hitch in my thinking? Or the test implementation?
The code below is run via
python src/birdsong/minimal_ddp_launcher.py
Its output is:
python src/birdsong/minimal_ddp_launcher.py
Starting /home/paepcke/EclipseWorkspaces/birds/src/birdsong/minimal_ddp.py[0] of 2
Starting /home/paepcke/EclipseWorkspaces/birds/src/birdsong/minimal_ddp.py[1] of 2
Running basic DDP example on rank 1.
Running basic DDP example on rank 0.
Proc1: saving arrays of before and after models.
Proc0: saving arrays of before and after models.
Rank 1 is done.
Rank 0 is done.
Suspicious: corresponding pre-backward model parms match exactly across all processes
Good: corresponding post-backward model parms match exactly across processes
Suspicious: back prop has no impact on model parms
minimal_ddp.py:
#!/usr/bin/env python
import os
import sys
import copy
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP
class MinimalDDP:
'''Test whether DDP really does something'''
epochs = 2
samples = 3
#------------------------------------
# setup
#-------------------
def setup(self, rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
#------------------------------------
# demo_basic
#-------------------
def demo_basic(self, rank, world_size, model_save_dir='/tmp'):
'''The action: train model; save intermediate states'''
print(f"Running basic DDP example on rank {rank}.")
self.setup(rank, world_size)
# create model and move it to GPU with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
# For saving model copies
# before and after back prop
# for each loop iteration:
before = []
after = []
for _epoch in range(self.epochs):
for _i in range(self.samples):
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10).to(rank))
labels = torch.randn(20, 5).to(rank)
# Copy and save model copies before and
# after back prop:
before.append(copy.deepcopy(ddp_model))
loss_fn(outputs, labels).backward()
after.append(copy.deepcopy(ddp_model))
optimizer.step()
# Clean GPU memory:
outputs.cpu()
labels.cpu()
dist.barrier()
# Save the state_dirs of all before-prop
# and after-prop model copies; each in its
# own file:
self.save_model_arrs(rank, before, after, model_save_dir)
self.cleanup()
if rank == 0:
# Using the saved files,
# verify that model parameters
# change, and are synchronized
# as expected:
self.report_model_diffs()
#------------------------------------
# save_model_arrs
#-------------------
def save_model_arrs(self, rank, before_arr, after_arr, model_save_dir):
'''Save state_dict of modesl in arrays to files'''
print(f"Proc{rank}: saving arrays of before and after models.")
for i, (model_before, model_after) in enumerate(zip(before_arr, after_arr)):
model_before.cpu()
model_after.cpu()
torch.save(model_before.state_dict(),
os.path.join(model_save_dir, f"before_models_r{rank}_{i}.pth"))
torch.save(model_after.state_dict(),
os.path.join(model_save_dir, f"after_models_r{rank}_{i}.pth"))
#------------------------------------
# report_model_diffs
#-------------------
def report_model_diffs(self, model_save_dir='/tmp'):
'''Check that model parms changed or
were synched as expected '''
model_arrs_len = self.epochs * self.samples
# Among GPUs, model parms should differ
# before backprop...
befores_differ_among_GPUs = True # that's the hope
# ... but be synched by DDP after
afters_differ_among_GPUs = False # that's the hope
# Wihin a single GPU, the model should be
# changed by the backprop:
befores_differ_from_afters = True # that's the hope
for i in range(model_arrs_len):
before_path_r0 = os.path.join(model_save_dir, f"before_models_r0_{i}.pth")
before_path_r1 = os.path.join(model_save_dir, f"before_models_r1_{i}.pth")
after_path_r0 = os.path.join(model_save_dir, f"after_models_r0_{i}.pth")
after_path_r1 = os.path.join(model_save_dir, f"after_models_r1_{i}.pth")
before_state0 = torch.load(before_path_r0)
before_state1 = torch.load(before_path_r1)
after_state0 = torch.load(after_path_r0)
after_state1 = torch.load(after_path_r1)
# The between-GPUs test:
for (param_tns0, param_tns1) in zip(before_state0, before_state1):
if before_state0[param_tns0].eq(before_state1[param_tns1]).all():
# Dang!
befores_differ_among_GPUs = False
for (param_tns0, param_tns1) in zip(after_state0, after_state1):
if after_state0[param_tns0].ne(after_state1[param_tns1]).any():
# Dang!
afters_differ_among_GPUs = False
# The within-GPUs test:
for (param_tns_pre, param_tns_post) in zip(before_state0, after_state0):
if before_state0[param_tns_pre].eq(before_state0[param_tns_post]).all():
# Dang!
befores_differ_from_afters = False
if befores_differ_among_GPUs:
print("Good: corresponding pre-backward model parms in processes differ")
else:
print("Suspicious: corresponding pre-backward model parms match exactly across all processes")
if afters_differ_among_GPUs:
print("Bad: backward does not seem to broadcast parms")
else:
print("Good: corresponding post-backward model parms match exactly across processes")
# Within one GPU, model parms before and
# after back prop should be different.
if befores_differ_from_afters:
print("Good: back prop does change model parms")
else:
print("Suspicious: back prop has no impact on model parms")
#------------------------------------
# cleanup
#-------------------
def cleanup(self):
dist.destroy_process_group()
print(f"Rank {rank} is done.")
# ------------------------ Toy Model ----------
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
# ------------------------ Main ------------
if __name__ == '__main__':
rank = int(sys.argv[1])
world_size = 2
model_save_dir = '/tmp'
min_ddp = MinimalDDP()
min_ddp.demo_basic(rank, world_size, model_save_dir)
minimal_ddp_launcher.py:
import subprocess
import os
class MinimalDDPLauncher:
def run_demo(self, demo_script, world_size):
procs = []
for rank in range(world_size):
print(f"Starting {demo_script}[{rank}] of {world_size}")
procs.append(subprocess.Popen([demo_script, str(rank), str(world_size)]))
for proc in procs:
proc.wait()
# ------------------------ Main ------------
if __name__ == '__main__':
curr_dir = os.path.dirname(__file__)
script_path = os.path.join(curr_dir, 'minimal_ddp.py')
launcher = MinimalDDPLauncher()
launcher.run_demo(script_path, 2)
[1] Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.7.1 documentation |
st177203 | paepcke:
# Copy and save model copies before and
# after back prop:
before.append(copy.deepcopy(ddp_model))
loss_fn(outputs, labels).backward()
after.append(copy.deepcopy(ddp_model))
Hi, it seems like this is the essential portion of your code that saves params before/after backward. However, simply calling backward() is not enough to modify the model parameters; you also need to call optimizer.step(), which actually applies the averaged grads to the parameters.
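Concretely, the measurement would need to wrap the step as well, e.g. (reusing the variable names from the quoted snippet above; only the ordering is the point here):
before.append(copy.deepcopy(ddp_model.state_dict()))
loss_fn(outputs, labels).backward()    # DDP all-reduces the gradients here
optimizer.step()                       # the parameters only change here
after.append(copy.deepcopy(ddp_model.state_dict()))

With that ordering, the before/after snapshots should differ on every rank. |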
st177204 | Thank you Rohan! Moving the test to after the optimizer step confirmed that parameters before and after are indeed different as expected. Both with and without DDP.
However, with the code below I still do not see evidence of DDP’s synchronization across two GPUs (single machine, 2 processes). I marked the code of interest; the rest makes the code runnable.
The output is:
python minimal_ddp_launcher.py minimal_across_two_gpus_ddp.py
Starting minimal_across_two_gpus_ddp.py[0] of 2
Starting minimal_across_two_gpus_ddp.py[1] of 2
Running basic DDP on two GPUs same machine: rank 0.
Running basic DDP on two GPUs same machine: rank 1.
Epoch0 batch0: Before states across gpus are equal
Epoch0 batch0: After states across gpus are equal
Epoch0 batch1: Before states across gpus are equal
Epoch0 batch1: After states across gpus are different
Epoch0 batch2: Before states across gpus are different
Epoch0 batch2: After states across gpus are different
Epoch1 batch0: Before states across gpus are different
Epoch1 batch0: After states across gpus are different
Epoch1 batch1: Before states across gpus are different
Epoch1 batch1: After states across gpus are different
Epoch1 batch2: Before states across gpus are different
Epoch1 batch2: After states across gpus are different
Rank 0 is done.
Rank 1 is done.
In minimal_across_two_gpus_ddp.py:
#!/usr/bin/env python
import os
import sys
import copy
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
from torch import randn
from torch.nn.parallel import DistributedDataParallel as DDP
class MinimalDDP:
'''Test whether DDP really does something'''
epochs = 2
batches = 3
#------------------------------------
# setup
#-------------------
def setup(self, rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
#------------------------------------
# demo_basic
#-------------------
def demo_basic(self, rank, world_size):
print(f"Running basic DDP on two GPUs same machine: rank {rank}.")
self.setup(rank, world_size)
# create model and move it to GPU with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
dist.barrier()
for epoch_num in range(self.epochs):
for batch_num in range(self.batches):
optimizer.zero_grad()
outputs = ddp_model(randn(20, 10).to(rank))
labels = randn(20, 5).to(rank)
#********* Begin Portion of Interest ******
before_model = ddp_model.cpu()
before_state = copy.deepcopy(before_model.state_dict())
if rank == 1:
torch.save(before_state, f"/tmp/before_rank1.pth")
ddp_model.to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
after_model = ddp_model.cpu()
after_state = after_model.state_dict()
if rank == 1:
torch.save(after_state, f"/tmp/after_rank1.pth")
ddp_model.to(rank)
dist.barrier()
# Read the other's before/after states:
if rank == 0:
other_before_state = torch.load(f"/tmp/before_rank1.pth")
other_after_state = torch.load(f"/tmp/after_rank1.pth")
# Before states should be different:
states_equal = True
for before_parm, other_before_parm in zip(other_before_state.values(),
before_state.values()):
if before_parm.ne(other_before_parm).any():
states_equal = False
print(f"Epoch{epoch_num} batch{batch_num}: Before states across gpus are {('equal' if states_equal else 'different')}")
# After states should be the same:
states_equal = True
for after_parm_other, after_parm in zip(other_after_state.values(),
after_state.values()):
if after_parm_other.ne(after_parm).any():
states_equal = False
print(f"Epoch{epoch_num} batch{batch_num}: After states across gpus are {('equal' if states_equal else 'different')}")
#********* End Portion of Interest ******
# Clean GPU memory:
outputs.cpu()
labels.cpu()
dist.barrier()
self.cleanup()
#------------------------------------
# cleanup
#-------------------
def cleanup(self):
dist.destroy_process_group()
print(f"Rank {rank} is done.")
# ------------------------ Toy Model ----------
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
# ------------------------ Main ------------
if __name__ == '__main__':
# Started via minimal_ddp_launcher.py,
# which sets the rank:
rank = int(sys.argv[1])
world_size = 2
min_ddp = MinimalDDP().demo_basic(rank, world_size)
And in minimal_ddp_launcher.py:
import subprocess
import os, sys
class MinimalDDPLauncher:
def run_demo(self, demo_script, world_size):
procs = []
for rank in range(world_size):
print(f"Starting {demo_script}[{rank}] of {world_size}")
procs.append(subprocess.Popen([demo_script, str(rank), str(world_size)]))
for proc in procs:
proc.wait()
# ------------------------ Main ------------
if __name__ == '__main__':
if len(sys.argv) < 2:
print("Usage: {minimal_within_two_gpus_ddp.py | minimal_across_two_gpus_ddp.py}")
sys.exit(1)
curr_dir = os.path.dirname(__file__)
script_path = os.path.join(curr_dir, sys.argv[1])
launcher = MinimalDDPLauncher()
launcher.run_demo(script_path, 2) |
st177205 | Further support for the impression that DDP does not average the gradients automatically: if I add
for param in ddp_model.parameters():
    dist.all_reduce(param.grad.data, op=dist.reduce_op.SUM)
    param.grad.data /= world_size
right after the backward() operation, the output is what I expect (other than the use of the deprecated op within pytorch):
Running basic DDP on two GPUs same machine: rank 1.
Running basic DDP on two GPUs same machine: rank 0.
/home/paepcke/anaconda3/envs/birds/lib/python3.9/site-packages/torch-1.7.1-py3.9-linux-x86_64.egg/torch/distributed/distributed_c10d.py:142: UserWarning: torch.distributed.reduce_op is deprecated, please use torch.distributed.ReduceOp instead
warnings.warn("torch.distributed.reduce_op is deprecated, please use "
/home/paepcke/anaconda3/envs/birds/lib/python3.9/site-packages/torch-1.7.1-py3.9-linux-x86_64.egg/torch/distributed/distributed_c10d.py:142: UserWarning: torch.distributed.reduce_op is deprecated, please use torch.distributed.ReduceOp instead
warnings.warn("torch.distributed.reduce_op is deprecated, please use "
Epoch0 batch0: Before states across gpus are equal
Epoch0 batch0: After states across gpus are equal
Epoch0 batch1: Before states across gpus are equal
Epoch0 batch1: After states across gpus are equal
Epoch0 batch2: Before states across gpus are equal
Epoch0 batch2: After states across gpus are equal
Epoch1 batch0: Before states across gpus are equal
Epoch1 batch0: After states across gpus are equal
Epoch1 batch1: Before states across gpus are equal
Epoch1 batch1: After states across gpus are equal
Epoch1 batch2: Before states across gpus are equal
Epoch1 batch2: After states across gpus are equal
Rank 0 is done.
Rank 1 is done.
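(Side note: the deprecation warnings in the log disappear if the manual reduction uses the current enum spelling; the snippet below is otherwise identical to the one at the top of this post.)
for param in ddp_model.parameters():
    dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
    param.grad.data /= world_size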
If I set async_op=True the states are different as they are in the absence of the explicit gradient averaging. This effect seems to indicate that the two processes really do need to run in lockstep.
Simply for cut/paste convenience, I attach the runnable code below. Only the above modification was added to the code in the previous reply.
run like this: python minimal_ddp_launcher.py minimal_across_two_gpus_ddp.py
Two files:
=============== CUT: File minimal_across_two_gpus_ddp.py:
import copy
import os
import sys

import torch
import torch.distributed as dist
import torch.nn as nn
from torch import optim, randn
from torch.nn.parallel import DistributedDataParallel as DDP

class MinimalDDP:
'''Test whether DDP really does something'''
epochs = 2
batches = 3
#------------------------------------
# setup
#-------------------
def setup(self, rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
#------------------------------------
# demo_basic
#-------------------
def demo_basic(self, rank, world_size):
print(f"Running basic DDP on two GPUs same machine: rank {rank}.")
self.setup(rank, world_size)
# create model and move it to GPU with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
dist.barrier()
for epoch_num in range(self.epochs):
for batch_num in range(self.batches):
optimizer.zero_grad()
outputs = ddp_model(randn(20, 10).to(rank))
labels = randn(20, 5).to(rank)
#********* Begin Portion of Interest ******
before_model = ddp_model.cpu()
before_state = copy.deepcopy(before_model.state_dict())
if rank == 1:
torch.save(before_state, f"/tmp/before_rank1.pth")
ddp_model.to(rank)
loss_fn(outputs, labels).backward()
#******
for param in ddp_model.parameters():
dist.all_reduce(param.grad.data,
op=dist.reduce_op.SUM,
async_op=False)
param.grad.data /= world_size
#******
optimizer.step()
after_model = ddp_model.cpu()
after_state = after_model.state_dict()
if rank == 1:
torch.save(after_state, f"/tmp/after_rank1.pth")
ddp_model.to(rank)
dist.barrier()
# Read the other's before/after states:
if rank == 0:
other_before_state = torch.load(f"/tmp/before_rank1.pth")
other_after_state = torch.load(f"/tmp/after_rank1.pth")
# Before states should be different:
states_equal = True
for before_parm, other_before_parm in zip(other_before_state.values(),
before_state.values()):
if before_parm.ne(other_before_parm).any():
states_equal = False
print(f"Epoch{epoch_num} batch{batch_num}: Before states across gpus are {('equal' if states_equal else 'different')}")
# After states should be the same:
states_equal = True
for after_parm_other, after_parm in zip(other_after_state.values(),
after_state.values()):
if after_parm_other.ne(after_parm).any():
states_equal = False
print(f"Epoch{epoch_num} batch{batch_num}: After states across gpus are {('equal' if states_equal else 'different')}")
#********* End Portion of Interest ******
# Clean GPU memory:
outputs.cpu()
labels.cpu()
dist.barrier()
self.cleanup()
#------------------------------------
# cleanup
#-------------------
def cleanup(self):
dist.destroy_process_group()
print(f"Rank {rank} is done.")
# ------------------------ Toy Model ----------
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
# ------------------------ Main ------------
if __name__ == '__main__':
# Started via minimal_ddp_launcher.py,
# which sets the rank:
rank = int(sys.argv[1])
world_size = 2
min_ddp = MinimalDDP().demo_basic(rank, world_size)
========== CUT And file minimal_ddp_launcher.py:
import subprocess
import os, sys
class MinimalDDPLauncher:
def run_demo(self, demo_script, world_size):
procs = []
for rank in range(world_size):
print(f"Starting {demo_script}[{rank}] of {world_size}")
procs.append(subprocess.Popen([demo_script, str(rank), str(world_size)]))
for proc in procs:
proc.wait()
# ------------------------ Main ------------
if __name__ == '__main__':
if len(sys.argv) < 2:
print("Usage: {minimal_within_two_gpus_ddp.py | minimal_across_two_gpus_ddp.py}")
sys.exit(1)
curr_dir = os.path.dirname(__file__)
script_path = os.path.join(curr_dir, sys.argv[1])
launcher = MinimalDDPLauncher()
launcher.run_demo(script_path, 2) |
st177206 | Hi everyone!
Does PyTorch provide a way to do the bitwise-xor of all elements in a tensor and return the output value?
I need to do this for a tensor of tensors, hence need a way to parallelize this computation, instead of using loops. Any kind of help would be great.
Thanks in advance. |
st177207 | Do you mean
import torch
x = torch.rand(2, 3, 4, 5) > 0.5
y = torch.rand(3, 4, 5) > 0.5
print(x)
print(y)
z = x | y
print(z) |
st177208 | Hi @Eta_C
Thanks for replying.
No, actually I have to do something like this:
import torch
# Sample tensor
X = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
lst = []
for i in X:
XorI = 0
for elem in i:
XorI = elem ^ XorI
lst.append(XorI)
But without loops.
P.S. It would be better if there’s way for more than 2 dimensions as well, where I can reduce in 1 specified direction. |
st177209 | Sorry, since pytorch only have logical_xor and bitwise_xor, I have to find some “math expression” for this operator. See here 4, it is too complicated!
I think the best way is to make a pytorch extension 5. |
st177210 | See here 5, it is too complicated!
Yeah, this one seems a bit too complicated.
I think the best way is to make a pytorch extension 4.
I never knew about this feature of PyTorch. Can give it a try!
Thanks, @Eta_C ; though let’s keep the discussion open for any other possible solutions. |
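For completeness, one loop-free possibility (a sketch, not taken from this thread; xor_reduce is just an illustrative helper name) is a pairwise tree reduction with torch.bitwise_xor along the chosen dimension, which needs only O(log n) pairwise calls:
import torch

def xor_reduce(x, dim=-1):
    # XOR-reduce along `dim` using repeated pairwise torch.bitwise_xor calls
    x = x.movedim(dim, -1)
    while x.shape[-1] > 1:
        n = x.shape[-1]
        half = n // 2
        folded = torch.bitwise_xor(x[..., :half], x[..., half:2 * half])
        if n % 2:  # carry the odd leftover element along
            folded = torch.cat([folded, x[..., -1:]], dim=-1)
        x = folded
    return x.squeeze(-1)

X = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
print(xor_reduce(X, dim=1))  # tensor([ 1, 10])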
st177211 | I am trying to use gradient compression (powerSGD: GitHub - epfml/powersgd: Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 1) along with apex amp mixed precision “O2”, but I got the following error:
ray::ImplicitFunc.step() (pid=61519, ip=10.0.1.224)
File "/home/anaconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 248, in run
self._entrypoint()
File "/home/anaconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 316, in entrypoint
self._status_reporter.get_checkpoint())
File "/home/anaconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 575, in _trainable_func
output = fn()
File "/home/resnetTraceDDP.py", line 647, in train
File "/home/resnetTraceDDP.py", line 400, in reduce
RuntimeError: Expected object of scalar type Half but got scalar type Float for argument #0 'result' in call to _th_mm_out
My script (integrated into ray tune) is as follows, basically the error happens after the scaled_loss.backward() and during the all_reduce, in the RankKReducer(Reducer): def reduce function, particularly line 400 refers to where torch.matmul(matrix, q, out=p) is at.
class TensorBuffer():
"""
Packs multiple tensors into one flat buffer for efficient
intra-worker communication.
"""
def __init__(self, tensors):
indices = [0]
for tensor in tensors:
new_end = indices[-1] + tensor.nelement()
indices.append(new_end)
self._start_idx = indices[:-1]
self._end_idx = indices[1:]
self._tensors = tensors
self.buffer = torch.cat([t.view(-1) for t in tensors]) # copies
def __getitem__(self, index):
return self.buffer[self._start_idx[index] : self._end_idx[index]].view(*self._tensors[index].shape)
def __len__(self):
return len(self._tensors)
def pack(self, tensors=None):
# Optional. init already does this.
if tensors is None:
tensors = self._tensors
for tensor, entry in zip(tensors, self):
entry[:] = tensor
def unpack(self, tensors):
for tensor, entry in zip(tensors, self):
tensor[:] = entry
def nelement(self):
return self.buffer.nelement()
def element_size(self):
return self.buffer.element_size()
def bits(self):
return 8 * self.nelement() * self.element_size()
def all_reduce(self, async_op=False):
return torch.distributed.all_reduce(self.buffer, async_op=async_op)
def all_gather(self, async_op=False):
n_workers = torch.distributed.get_world_size() if torch.distributed.is_available() else 1
buffers = [torch.empty_like(self.buffer) for i in range(n_workers)]
handle = all_gather(buffers, self.buffer, async_op=async_op)
if async_op:
return buffers, handle
else:
return buffers
class Reducer:
def __init__(self, random_seed, device):
self.rng = np.random.RandomState(random_seed)
M = 1024 * 1024
self.precalc_numbers = (
torch.from_numpy(self.rng.randn(128 * M)).to(device).type(torch.float32)
)
if torch.distributed.is_available():
self.n_workers = torch.distributed.get_world_size()
self.rank = torch.distributed.get_rank()
else:
self.n_workers = 1
self.rank = 0
self.device = device
def reduce(self, grad_in, grad_out, memory_out):
"""Return communicated bits"""
raise NotImplementedError()
class RankKReducer(Reducer):
def __init__(self, random_seed, device, n_power_iterations=0, reuse_query=False, rank=1):
super().__init__(random_seed, device)
assert n_power_iterations == 0
self.rank = rank
self.p_memory = None
self.q_memory = None
self.reuse_query = reuse_query
def set_random(self, vector):
torch.manual_seed(self.rng.randint(1_000_000_000))
vector.data[:] = torch.randn(*vector.shape, device=self.device)
# orthogonalize(vector)
def reduce(self, grad_in, grad_out, memory_out):
"""
Reduce gradients between the workers in place
:param grad_in: dictionary -- send_buffers
:param grad_out: dictionary -- grads
:param memory_out: dictionary -- memories
"""
bits_communicated = 0
# Split the tensors into rank1-ones that will be reduced un-compressed
# and rank > 1 tensors that are compressed
rank1_tensors = [
(tensor, out, mem)
for tensor, out, mem in zip(grad_in, grad_out, memory_out)
if tensor.ndimension() <= 1
]
high_rank_tensors = [
(tensor, out, mem)
for tensor, out, mem in zip(grad_in, grad_out, memory_out)
if tensor.ndimension() > 1
]
# We are building a rank-1 approximation of every tensor
# that can be interpreted as a matrix. Let the approximation be
# M = p q^T
# We are allocating consequtive memory for the p's and q's
memory_is_uninitialized = self.p_memory is None
p_total_size = 0
q_total_size = 0
for tensor, _, _ in high_rank_tensors:
matrix = tensor.view(tensor.shape[0], -1)
n, m = matrix.shape
rank = min(n, m, self.rank)
p_total_size += n * rank
q_total_size += m * rank
if self.p_memory is None:
self.p_memory = torch.empty(p_total_size, device=self.device)
self.q_memory = torch.empty(q_total_size, device=self.device)
# Find them again and make lists of pointers
ps = []
qs = []
p_idx = 0
q_idx = 0
for tensor, _, _ in high_rank_tensors:
matrix = tensor.view(tensor.shape[0], -1)
n, m = matrix.shape
rank = min(n, m, self.rank)
ps.append(self.p_memory[p_idx : p_idx + n * rank].view(n, rank))
qs.append(self.q_memory[q_idx : q_idx + m * rank].view(m, rank))
p_idx += n * rank
q_idx += m * rank
for (tensor, _, _), q, p in zip(high_rank_tensors, qs, ps):
matrix = tensor.view(tensor.shape[0], -1)
n, m = matrix.shape
if self.reuse_query and not memory_is_uninitialized:
# orthogonalize(q)
pass
else:
# Sample a query vector q
self.set_random(q)
for (tensor, _, _), q, p in zip(high_rank_tensors, qs, ps):
matrix = tensor.view(tensor.shape[0], -1)
torch.matmul(matrix, q, out=p) #?
ts = datetime.now().timestamp()
all_reduce(self.p_memory)
ts = datetime.now().timestamp() - ts
bits_communicated += n_bits(self.p_memory)
# Start communicating rank 1 tensors
# TODO: check all times - all_reduce() time
rank1_tensor_list = TensorBuffer([tensor for (tensor, _, _) in rank1_tensors])
ts_2 = datetime.now().timestamp()
rank1_handle = rank1_tensor_list.all_reduce(async_op=True)
ts_2 = datetime.now().timestamp() - ts_2
bits_communicated += rank1_tensor_list.bits()
for p in ps:
orthogonalize(p)
for p, q, (tensor, _, _) in zip(ps, qs, high_rank_tensors):
matrix = tensor.view(tensor.shape[0], -1)
torch.matmul(matrix.t(), p, out=q)
ts_3 = datetime.now().timestamp()
all_reduce(self.q_memory)
ts_3 = datetime.now().timestamp() - ts_3
bits_communicated += n_bits(self.q_memory)
self.q_memory.data[:] /= self.n_workers
for p, q, (tensor, out, mem) in zip(ps, qs, high_rank_tensors):
# Set the output gradient
torch.matmul(p, q.t(), out=out.data[:])
mem.data[:] = tensor - out
rank1_handle.wait()
rank1_tensor_list.buffer /= self.n_workers
rank1_tensor_list.unpack([out for (_, out, _) in rank1_tensors])
return bits_communicated, ts+ts_2+ts_3
def ExactReducer(grad_in, grad_out, memory_out):
n_workers = float(dist.get_world_size())
for mem in memory_out:
mem.zero_()
list_in = grad_in
list_out = grad_out
if n_workers == 1:
for t_in, t_out in zip(list_in, list_out):
t_out[:] = t_in
return 0
buffer = TensorBuffer(list_in)
ts = datetime.now().timestamp()
buffer.all_reduce()
ts = datetime.now().timestamp() - ts
buffer.buffer /= n_workers
bits_communicated = buffer.bits()
buffer.unpack(list_out)
return bits_communicated, ts
def inf_loop(data_loader):
''' wrapper function for endless data loader. '''
for loader in repeat(data_loader):
yield from loader
def test_accuracy(net, device, testloader, criterion):
net.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(testloader):
inputs, targets = inputs.to(device), targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
return test_loss/(batch_idx+1), 100.*correct/total
def load_data(config, distributed=True, data_dir="./data"):
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_dataset = datasets.CIFAR10(root=data_dir, train=True, transform=transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(32, 4),
transforms.ToTensor(),
normalize,
]), download=True)
if distributed:
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset)
else:
train_sampler = None
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=config["batch_size"], sampler=train_sampler,
num_workers=8, pin_memory=True)
# Guideline of setting the num. of workers:
# https://discuss.pytorch.org/t/guidelines-for-assigning-num-workers-to-dataloader/813/4
val_loader = torch.utils.data.DataLoader(
datasets.CIFAR10(root=data_dir, train=False, transform=transforms.Compose([
transforms.ToTensor(),
normalize,
])),
batch_size=128, shuffle=False,
num_workers=8, pin_memory=True)
return train_loader, val_loader
def all_reduce(*args, **kwargs):
if torch.distributed.is_available() and torch.distributed.get_world_size() > 1:
return torch.distributed.all_reduce(*args, **kwargs)
def set_random(random_seed, vector, device):
rng = np.random.RandomState(random_seed)
torch.manual_seed(rng.randint(1_000_000_000))
vector.data[:] = torch.randn(*vector.shape, device=device)
# orthogonalize(vector)
def orthogonalize(matrix, eps=torch.tensor(1e-8)):
n, m = matrix.shape
for i in range(m):
# Normalize the i'th column
col = matrix[:, i : i + 1]
col /= torch.sqrt(torch.sum(col ** 2)) + eps
# Project it on the rest and remove it
if i + 1 < m:
rest = matrix[:, i + 1 :]
# rest -= torch.matmul(col.t(), rest) * col
rest -= torch.sum(col * rest, dim=0) * col
def n_bits(tensor):
return 8 * tensor.nelement() * tensor.element_size()
def l2norm(x):
return torch.sqrt(torch.sum(x ** 2))
def train(config, checkpoint_dir=None, data_dir=None):
# Read in the model related hyperparameters
activation_function = config["activation"]
pool = config["pool"]
opt_level, rank = config["optrank"]
iters = config["num_iters"]
batch = config["batch_size"]
mul = config["multiplier"]
n = config["n"]
ex_rate = config["rate"]
# os.environ["CUDA_VISIBLE_DEVICES"]="0"
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = resnet(depth=n, activation=activation_function, pool=pool,
inplanes=16, num_classes=10, mul=mul)
# model = rn.ResNet18()
model.to(device)
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), config["lr"],
momentum=config["momentum"],
weight_decay=config["weight_decay"])
if opt_level != "O3":
model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
else:
model, optimizer = amp.initialize(model, optimizer, opt_level="O3", keep_batchnorm_fp32=True)
lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[iters*0.6, iters*0.8], last_epoch=-1)
# Warm up and roll back
lr_rollback = False
warm_up_iters = 0
if n >= 110 and config["lr"] == 0.1:
warm_up_iters = 400
lr_rollback=True
train_loader, val_loader = load_data(data_dir=data_dir, config=config)
model.train()
ts = datetime.now().timestamp()
# Model parameters for gradient compression
# Note: for Rank-R PowerSGD, optimizer_memory = True, reducer_reuse_query = True
state = [parameter for parameter in model.parameters()]
memories = [torch.zeros_like(param) for param in state]
send_buffers = [torch.zeros_like(param) for param in state]
bits = 0
exchange_time = 0 # gradient exchange total time
com_time = 0 # compressed & decompressed time. In rank setting, this equal to tot time - all reduce time
ex_ts_acc = 0
com_ts_acc = 0
iter_cnt = 0
if rank != 0:
reducer = RankKReducer(device=device, random_seed=seed, rank=rank)
# Iteration-based training
for i, (input, target) in enumerate(inf_loop(train_loader)):
if n >= 110 and config["lr"] == 0.1:
if warm_up_iters > 0:
for param_group in optimizer.param_groups:
param_group['lr'] = 0.01
warm_up_iters -= 1
else:
if lr_rollback:
for param_group in optimizer.param_groups:
param_group['lr'] = config['lr']
lr_rollback=False
input = input.cuda(non_blocking=True)
target = target.cuda(non_blocking=True)
input_var = torch.autograd.Variable(input)
target_var = torch.autograd.Variable(target)
# compute output
output = model(input_var)
loss = criterion(output, target_var)
optimizer.zero_grad()
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward() # calculate the gradients
# TODO: Gradient Exchange
# average_gradients(model)
# in what frequency?
if i >= ex_rate-1 and (i+1)%ex_rate == 0:
iter_cnt += 1
if rank != 0:
exchange_time = datetime.now().timestamp()
grads = [param.grad for param in model.parameters()] # different from param.grad.data?
for grad, memory, send_bfr in zip(grads, memories, send_buffers):
send_bfr.data[:] = grad + memory
bits, time = reducer.reduce(send_buffers, grads, memories)
exchange_time = datetime.now().timestamp() - exchange_time
com_time = exchange_time - time # compressed & decompressed time
# accumulated
ex_ts_acc += exchange_time
com_ts_acc += com_time
else:
exchange_time = datetime.now().timestamp()
grads = [param.grad for param in model.parameters()] # different from param.grad.data?
for grad, memory, send_bfr in zip(grads, memories, send_buffers):
send_bfr.data[:] = grad + memory
bits, time = ExactReducer(send_buffers, grads, memories)
exchange_time = datetime.now().timestamp() - exchange_time
com_time = exchange_time - time
# accumulated
ex_ts_acc += exchange_time
com_ts_acc += com_time
if iter_cnt == (150 / ex_rate):
ex = ex_ts_acc / iter_cnt
com = com_ts_acc / iter_cnt
iter_cnt = 0
ex_ts_acc = 0
com_ts_acc = 0
optimizer.step()
lr_scheduler.step()
# Every 150 iterations, report metrics
if i >= 149 and (i+1)%150 == 0:
test_loss, accuracy = test_accuracy(model, device, val_loader, criterion)
tune.report(
time_per_150_iterations = datetime.now().timestamp()-ts,
compression_t_avg = com, # compression & decompression time
grad_exchange_t_avg = ex, # gradient exchange time
test_accuracy = accuracy,
bits_communicated = bits,
opt = opt_level,
rank = rank
)
ts = datetime.now().timestamp()
model.train()
if i+1 == iters:
break
def _iter():
# in total 3
opt = ["O2"]
rank = [7]
for a in opt:
for b in rank:
if (a == "O0" and b != 0) or (a == "O2" and b != 7):
continue
yield a, b
def main(max_num_epochs=200, gpus_per_trial=2):
config = {
"multiplier": tune.grid_search([4]),
"batch_size": tune.grid_search([256]),
"momentum": tune.grid_search([0.9]),
"n": tune.grid_search([110]),
# Fixed
"lr": tune.grid_search([0.1]),
"weight_decay": tune.grid_search([1e-4]),
"activation": tune.grid_search(["relu"]),
"pool": tune.grid_search(["avg"]),
"optrank": tune.grid_search(list(_iter())),
"rate": tune.grid_search([10]),
"num_iters": tune.grid_search([78000])
}
data_dir = os.path.abspath("/home/data")
distributed_train = DistributedTrainableCreator(
partial(train, data_dir=data_dir),
backend="nccl",
num_gpus_per_worker=1,
num_workers=2,
num_cpus_per_worker=10,
num_workers_per_host=1,
timeout_s=300
)
result = tune.run(
distributed_train,
name="Final_train_DDP_Mixed_RESNET_1_21",
config=config,
local_dir=os.path.abspath("/home/result"))
if __name__ == '__main__':
import argparse
# ray.tune.ray_trial_executor.DEFAULT_GET_TIMEOUT = 300000000
parser = argparse.ArgumentParser()
parser.add_argument(
"--smoke-test", action="store_true", help="Finish quickly for testing")
parser.add_argument(
"--ray-address",
help="Address of Ray cluster for seamless distributed execution.")
args, _ = parser.parse_known_args()
if args.smoke_test:
ray.init(local_mode=True)
main(max_num_epochs=10, gpus_per_trial=0)
else:
ray.init(address=args.ray_address)
main(max_num_epochs=100, gpus_per_trial=1)
Does anyone know how to fix this? |
st177212 | We recommend using the native mixed-precision training via torch.cuda.amp.
Have a look at the docs for some examples. |
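For reference, a minimal native-AMP training step looks roughly like the sketch below (it reuses model, optimizer, criterion and train_loader from the script above and is only meant to show where the scaler calls go, not to port the whole PowerSGD loop):
import torch
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()
for input, target in train_loader:
    input, target = input.cuda(non_blocking=True), target.cuda(non_blocking=True)
    optimizer.zero_grad()
    with autocast():                      # forward pass runs in mixed precision
        loss = criterion(model(input), target)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)            # param.grad is now unscaled fp32
    # a custom all_reduce / compression step could operate on param.grad here
    scaler.step(optimizer)
    scaler.update()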
st177213 | Hi, I’m currently working on encoder-decoder transformer model pre-training and trying to train it with apex’s distributed dataparallel settings.
As far as I know, it uses multiprocessing and each process is assigned to one GPU.
From what I have read, this kind of distributed training has the advantage of reducing the memory-imbalance problem compared to other methods.
But in my training, the GPUs' memory allocations differ and GPU utilization is very low, which indicates that each GPU is not fully used.
Is there any reason why each GPU's memory is used differently? Is there something I misunderstood?
Why is GPU utilization so low? Is there anything I missed?
One more thing: before I tried the full data, I tested a smaller set with the same batch sizes. In that setting, training completed with no problems. But with the full data, I had to decrease the batch size quite a lot since it raised a CUDA out-of-memory error. I don't know why this happened, because I think the size of one batch put into each GPU is the same in each iteration.
Here is the training code only with essential parts.
import torch.distributed as dist
from apex.parallel import DistributedDataParallel as DDP
from apex import amp
from torch.utils.data.distributed import DistributedSampler
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = True
torch.cuda.set_device(args.local_rank)
device = torch.device(f'cuda:{args.local_rank}')
dist.init_process_group(backend='nccl', init_method='env://', rank=args.local_rank, world_size=args.nproc_per_node)
data = PretrainDataset(path, data)
ppd = PretrainPadCollate(pad_id)
sampler = DistributedSampler(data)
data_loader = DataLoader(
data,
collate_fn=ppd.pad_collate,
batch_size=args.batch,
shuffle=True if sampler is None else False,
pin_memory=True,
num_workers=4,
sampler=sampler
)
model = ModelClass().to(device)
opt = Adafactor(model.parameters(), lr=args.lr, relative_step=False)
scheduler = lr_scheduler.ExponentialLR(opt, gamma=1/math.sqrt(args.warmup_steps))
model, opt = amp.initialize(model, opt, opt_level='O1')
model = DDP(model, delay_allreduce=True)
# Train iterations
train_step = 1
for batch in data_loader:
opt.zero_grad()
outputs = model(batch)
loss = outputs[0]
with amp.scale_loss(loss, opt) as scaled_loss:
scaled_loss.backward()
opt.step()
if train_step > args.warmup_steps:
scheduler.gamma = 1/math.sqrt(train_step)
scheduler.step()
Thank you. |
st177214 | We recommend using the native DDP implementation instead of apex/DDP, as it'll be further developed and improved. Could you switch and rerun your code?
Also, use torch.cuda.amp instead of apex/amp for the same reason. |
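A rough sketch of that swap, reusing names from the post above (ModelClass, args, device, opt, data_loader); treat it as a starting point rather than a complete recipe:
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend='nccl', init_method='env://',
                        rank=args.local_rank, world_size=args.nproc_per_node)
model = ModelClass().to(device)
model = DDP(model, device_ids=[args.local_rank], output_device=args.local_rank)

scaler = torch.cuda.amp.GradScaler()
for batch in data_loader:
    opt.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)[0]
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()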
st177215 | When I am training a job with multiple GPUs on a single server, I suspect that one of the GPUs is running somehow slower than the other GPU.
For example, I suspect that 3 of 4 GPUs finish the forward and backward in 1s while the rest completes by (1 + X) s. During X, 3 of 4 GPUs are waiting without any GPU utilization. After X, PyTorch starts to synchronize the model weights/parameters among 4 GPUs.
How can I measure the X? Thanks! |
st177216 | You could use e.g. Nsight Systems to create a timeline for the workload and see the workloads of each GPU. |
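A lighter-weight alternative (a sketch, not part of the reply above): time each rank's forward+backward with CUDA events and gather the per-rank timings; the spread between the slowest and fastest rank approximates X. Under DDP the backward pass already overlaps the allreduce, so this measures the full per-rank iteration time (model, criterion, inputs and labels stand for your own model, loss and one batch):
import torch
import torch.distributed as dist

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
loss = criterion(model(inputs), labels)   # forward
loss.backward()                           # backward
end.record()
torch.cuda.synchronize()

elapsed = torch.tensor([start.elapsed_time(end)], device='cuda')  # milliseconds
per_rank = [torch.zeros_like(elapsed) for _ in range(dist.get_world_size())]
dist.all_gather(per_rank, elapsed)
if dist.get_rank() == 0:
    times = torch.cat(per_rank)
    print(f"per-rank ms: {times.tolist()}, slowest-fastest gap: {(times.max() - times.min()).item():.1f} ms")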
st177217 | Hi,
I’m implementing Microsoft’s DeBERTa from HuggingFace 1 in PyTorch.
Can I implement PyTorch’s Model Parallelism 7 with this HuggingFace transformer?
If yes, is there any documentation regarding this ? |
st177218 | Hi, I got this error when using torch.distributed on training a model on a single machine multiple GPUs.
File "run_classifier_bertgcn.py", line 364, in <module>
main()
File "run_classifier_bertgcn.py", line 319, in main
train(train_dataset, model, mlb, G, args.batch_sz, args.num_epochs, criterion, device, optimizer, lr_scheduler)
File "run_classifier_bertgcn.py", line 136, in train
output = model(input_ids, attention_mask)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 511, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/lustre03/project/6001103/xdwang/MeSH_Indexing_RGCN/model.py", line 349, in forward
output, _ = self.bert(src_input_ids, src_attention_mask)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/transformers/modeling_bert.py", line 838, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/transformers/modeling_bert.py", line 197, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 126, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/nn/functional.py", line 1814, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorIndex.cu:403
My model is a simple BERT model (using huggingface transformer) with a sigmoid activation after BERT output. |
st177219 | Could you check, if you are creating tensors inside the forward method using .cuda() or to('cuda')?
If so, make sure these new tensors are created on the currently used device by using the .device attribute of any parameter or the input via .to(x.device). |
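For example (a sketch with a hypothetical mask tensor created inside forward):
import torch
import torch.nn as nn

class Net(nn.Module):
    def forward(self, x):
        # problematic: .cuda() puts the new tensor on the default device,
        # which is not this replica's device on every GPU
        # mask = torch.ones(x.size(0)).cuda()

        # better: create the tensor on the same device as the input
        mask = torch.ones(x.size(0), device=x.device)
        return x * mask.unsqueeze(1)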
st177220 | Hi everyone,
I’m trying to train the FactorizableNet 7 using multi-GPUs. My GPUs are Titan Xp.
This network can be trained with the batch size equal to the number of GPUs. As you can see in the below code snip, the main input of the model is a list of images of diffirent sizes, not a tensor. The dataloader and the collate function just return a list of tuples.
for i, sample in enumerate(loader): # (im_data, im_info, gt_objects, gt_relationships)
# measure the data loading time
batch_size = len(sample['visual'])
# measure data loading time
meters['data_time'].update(time.time() - end, n=batch_size)
input_visual = [item for item in sample['visual']]
target_objects = sample['objects']
target_relations = sample['relations']
image_info = sample['image_info']
# RPN targets
rpn_anchor_targets_obj = [[
np_to_variable(item[0],is_cuda=False, dtype=torch.LongTensor),
np_to_variable(item[1],is_cuda=False),
np_to_variable(item[2],is_cuda=False),
np_to_variable(item[3],is_cuda=False)
] for item in sample['rpn_targets']['object']]
# compute output
try:
raw_losses = model(
im_data=input_visual,
im_info=image_info,
gt_objects=target_objects,
gt_relationships=target_relations,
rpn_anchor_targets_obj=rpn_anchor_targets_obj)
.....
The problem is that the time to complete a batch of size 1 is 1/8 of the time for a batch of size 8, which means the training time is not reduced at all. Memory usage on the GPUs and the volatile GPU utilization of all GPUs are very low.
I have asked the author, but it seems that he does not have time to answer questions.
What can I do now to reduce training time?
Thank you. |
st177221 | Hi Cao,
With low memory utilization on all GPUs and a batch of 1 per GPU, you should try and increase the batch size per GPU. I took a brief look at the underlying code and it looks like it is explicitly hard coded to 1 per GPU (see this commit 7). If this can be modified to allow for >1 then you’re likely to see some speedups.
Good luck. |
st177222 | Thanks for your reply, and sorry for my bad writing as well.
The main point here is that the processing time for batch size 1 (1 GPU) is 0.48 s, while that for batch size 8 (8 GPUs) is 3.7 s. There is no parallel processing at all.
I think feeding a list into the model is the main reason, so I would like to get help from you and other PyTorch experts. |
st177223 | It depends what the underlying implementation does. If it wraps nn.DataParallel, you should see a speedup. If it just processes the examples serially then not. When you run this, do you see GPU utilization on all the GPUs you expect to be participating (e.g. with nvidia-smi)? |
st177224 | I used gpustat to view GPU utilization every 2 seconds.
Most of the running time, GPU utilization is very low. Sometimes it jumps to a high value for a moment and then goes back to idle. |
st177225 | After carefully inspecting the code, I have found that the author didn’t use the nn.DataParallel but their own DataParallel.
Code for the DataParallel is below:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import DataParallel as DataParallel_raw
import numpy as np
class DataParallel(DataParallel_raw):
"""
we do the scatter outside of the DataPrallel.
input: Scattered Inputs without kwargs.
"""
def __init__(self, module):
# Disable all the other parameters
super(DataParallel, self).__init__(module)
def forward(self, *inputs, **kwargs):
assert len(inputs) == 0, "Only support arguments like [variable_name = xxx]"
new_inputs = [{} for _ in self.device_ids]
for key in kwargs:
if key == 'im_data':
for i, device in enumerate(self.device_ids):
new_inputs[i][key] = kwargs[key][i].to(device)
elif key.startswith("rpn_anchor_targets"):
for i, device in enumerate(self.device_ids):
new_inputs[i][key] = [item.to(device) for item in kwargs[key][i]]
else:
assert isinstance(kwargs[key], list)
for i in range(len(self.device_ids)):
new_inputs[i][key] = [kwargs[key][i], ]
nones = [[] for _ in self.device_ids]
replicas = self.replicate(self.module, self.device_ids)
outputs = self.parallel_apply(replicas, nones, new_inputs)
return self.gather(outputs, self.output_device) |
st177226 | You could add some timing information to those paths. For example, if all time is spent in parallel_apply you know there is something inside the model that’s causing this, instead of this custom nn.DataParallel wrapper. Alternatively, wait for the author to have time to debug this with you. |
st177227 | My solution:
class Model(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, inputs, index):
        # each replica slices out its own items from the python list and
        # moves them to its device (taken from index.device)
        inputs = inputs[int(index[0]) : int(index[-1]) + 1]
        inputs = [torch.from_numpy(i).to(index.device) for i in inputs]
        ...

model = nn.DataParallel(Model())
index = torch.tensor(list(range(batch_size))).to(device)
x = [ndarray1, ndarray2, ..., ndarray16]
out = model(x, index) |
st177228 | I know that a torch.cuda.synchronize() waits until all operations on GPUs are completed. Does that also include communication operations?
More specifically, does calling torch.cuda.synchronize() just after an allreduce operation wait until the allreduce operation has completed?
tensor_reduce_op = torch.distributed.all_reduce(tensor=tensor, async_op=True)
torch.cuda.synchronize() |
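For reference, a small sketch of the two ways to wait on the collective; the comments describe the NCCL-backend semantics as I understand them, so treat them as assumptions to verify:
import torch
import torch.distributed as dist

work = dist.all_reduce(tensor, async_op=True)
work.wait()                # NCCL: makes the current CUDA stream wait for the collective
torch.cuda.synchronize()   # blocks the host until all queued work on the current device,
                           # including the NCCL kernel, has finished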
st177229 | Hi,
“Batch sizes” in “train loader” should be changed during the training based on the result of some other function. Maybe collate_fn in “train loader” can be useful. Could you give an example to change Batch_size(for example based on the following simple function):
if Sth%2==0:
Batch_size += 1 |
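Not an official recipe, but one sketch that avoids collate_fn entirely is to pass a custom batch_sampler whose batch size is mutated between epochs (VariableBatchSampler and the epoch % 2 condition below are illustrative stand-ins for the real condition):
import torch
from torch.utils.data import DataLoader, TensorDataset

class VariableBatchSampler:
    # any iterable of index lists can serve as a batch_sampler
    def __init__(self, dataset_len, batch_size):
        self.dataset_len = dataset_len
        self.batch_size = batch_size

    def __iter__(self):
        perm = torch.randperm(self.dataset_len).tolist()
        for i in range(0, self.dataset_len, self.batch_size):
            yield perm[i:i + self.batch_size]

    def __len__(self):
        return (self.dataset_len + self.batch_size - 1) // self.batch_size

dataset = TensorDataset(torch.randn(100, 3))
sampler = VariableBatchSampler(len(dataset), batch_size=4)
train_loader = DataLoader(dataset, batch_sampler=sampler)

for epoch in range(3):
    if epoch % 2 == 0:        # stand-in for "the result of some other function"
        sampler.batch_size += 1
    for (batch,) in train_loader:
        pass                  # training step goes here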
st177230 | Hi all,
I have been using DataParallel so far to train on single-node multiple machines. As i have seen on the forum here that DistributedDataParallel is preferred even for single node and multiple GPUs. So i switched to Distributed training.
My network is kind of large with numerous 3D convolutions so i can only fit a batch size of 1 (stereo image pair) on a single GPU.
I have noticed that the time taken by BackwardPass increases from 0.7 secs to 1.3 secs.
I have set up the distributed training as follows.
Also, GPU utilization is low. Can you kindly suggest what can be done to increase GPU utilization and reduce the backward-pass time?
p.s DataLoading does not seem to be the bottleneck as it currently takes 0.08 secs.
if config.distributed_training.enable:
logging.info(f"spawning multiprocesses with {config.distributed_training.num_gpus} gpus")
multiprocessing.spawn( # type: ignore
_train_model,
nprocs=config.distributed_training.num_gpus,
args=(pretrained_model, config, train_dataset, output_dir),
)
def _train_model(
gpu_index: int, pretrained_model: str, config: CfgNode, train_dataset: Dataset, output_dir: Path
) -> None:
train_sampler = None
world_size = _get_world_size(config)
local_rank = gpu_index
if config.distributed_training.enable:
local_rank = _setup_distributed_process(gpu_index, world_size, config)
train_sampler = torch.utils.data.DistributedSampler(
train_dataset, num_replicas=world_size, rank=local_rank
)
model = MyModel(config)
torch.cuda.set_device(local_rank)
_transfer_model_to_device(model, local_rank, gpu_index, config)
.........
def _setup_distributed_process(gpu_index: int, world_size: int, config: CfgNode) -> int:
logging.info("Setting Distributed DataParallel ....")
num_gpus = config.distributed_training.num_gpus
local_rank = config.distributed_training.ranking_within_nodes * num_gpus + gpu_index
torch.cuda.set_device(local_rank)
_init_process(rank=local_rank, world_size=world_size, backend="nccl")
logging.info(f"Done...")
return local_rank
def _init_process(rank: int, world_size: int, backend="gloo"):
""" Initialize the distributed environment. """
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"
dist.init_process_group( # type:ignore
backend=backend, init_method="env://", world_size=world_size, rank=rank
)
def _transfer_model_to_device(model: nn.Module, local_rank: int, gpu_index: int, config: CfgNode) -> None:
if config.distributed_training.enable:
model.cuda(local_rank)
model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[local_rank], output_device=[local_rank] # type:ignore
)
elif torch.cuda.device_count() > 1:
model = torch.nn.DataParallel(model).cuda()
else:
torch.cuda.set_device(gpu_index)
model = model.cuda(gpu_index)
GPU utilization is shown in the attached screenshot. |
st177231 | tyb_10:
I have noticed that the time taken by BackwardPass increases from 0.7 secs to 1.3 secs.
Have you tried using the NCCL backend? It should be considerably faster than Gloo.
And have you measured the time spent on the entire iteration? Most of DataParallel's overhead (replicating the model, scattering inputs, gathering outputs) is incurred during the forward pass. |
st177232 | @mrshenli thank you for your reply.
I am using the nccl backend actually.
Following line from the code above.
_init_process(rank=local_rank, world_size=world_size, backend="nccl")
Yes, I have measured the time taken over the entire iteration for both Distributed and DataParallel.
The forward pass takes similar time in both or is a bit faster in DistributedDataParallel (0.75 secs vs 0.8secs in DataParallel).
The overall iteration time in DataParallel is 1.75 secs vs 2.4 secs DistributedDataParallel, where similar time is spend in Dataloading (~0.09 secs).
p.s just saw a typo in the first line of my post. My scenario is Single-node multiple GPUs (not machines). |
st177233 | Hey @tyb_10, I tried a toy model locally, but cannot reproduce this behavior. With 2 GPUs, the code below shows DP is about 9X slower than DDP. Can you try this code in your environment, or can you share a min repro of your code that I can try locally?
DP execution time (ms) by CUDA event: 2938.427490234375
DP execution time (s) by Python time: 2.9386751651763916
DDP rank-1 execution time (ms) by CUDA event 326.289306640625
DDP rank-0 execution time (ms) by CUDA event 326.19061279296875
DDP rank-1 execution time (s) by Python time 0.3264338970184326
DDP rank-0 execution time (s) by Python time 0.32636237144470215
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.nn.parallel import DataParallel as DP
import time
X = 100
B = 200
def ddp_example(rank, world_size):
# create default process group
dist.init_process_group("gloo", rank=rank, world_size=world_size)
b = B // world_size
# create local model
model = nn.Linear(X, X).to(rank)
# construct DDP model
ddp_model = DDP(model, device_ids=[rank])
# define loss function and optimizer
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
with torch.cuda.device(rank):
tik = time.time()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for _ in range(20):
# forward pass
outputs = ddp_model(torch.randn(b, X).to(rank))
labels = torch.randn(b, X).to(rank)
# backward pass
loss_fn(outputs, labels).backward()
# update parameters
optimizer.step()
end.record()
print(f"DDP rank-{rank} execution time (ms) by CUDA event {start.elapsed_time(end)}")
torch.cuda.synchronize()
tok = time.time()
print(f"DDP rank-{rank} execution time (s) by Python time {tok - tik} ")
def dp_example():
b = B # don't need to divide by 2 here as DataParallel will scatter inputs
model = nn.Linear(X, X).to(0)
# construct DDP model
dp_model = DP(model, device_ids=[0, 1])
# define loss function and optimizer
loss_fn = nn.MSELoss()
optimizer = optim.SGD(dp_model.parameters(), lr=0.001)
tik = time.time()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for _ in range(20):
# forward pass
outputs = dp_model(torch.randn(b, X).to(0))
labels = torch.randn(b, X).to(0)
# backward pass
loss_fn(outputs, labels).backward()
# update parameters
optimizer.step()
end.record()
print(f"DP execution time (ms) by CUDA event: {start.elapsed_time(end)}")
torch.cuda.synchronize()
tok = time.time()
print(f"DP execution time (s) by Python time: {tok - tik} ")
def main():
dp_example()
world_size = 2
mp.spawn(ddp_example,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__=="__main__":
main() |
st177234 | @mrshenli thanks a lot for the toy example.
I can reproduce your results and DDP is indeed faster than DP in this case.
I will debug my code shortly and post here the outcome if the problem still persists or otherwise the solution that fixed the issue. |
st177235 | @mrshenli so I debugged the code and found one bug: I was not returning the wrapped model to my parent function.
After this change, DDP and DP take similar time, ~1.6 seconds per iteration, but DDP is still not faster than DP in my case.
.....
if config.distributed_training.enable:
logging.info(f"spawning multiprocesses with {config.distributed_training.num_gpus} gpus")
multiprocessing.spawn( # type: ignore
_train_model,
nprocs=config.distributed_training.num_gpus,
args=(pretrained_model, config, train_dataset, output_dir),
)
def _train_model(
gpu_index: int, pretrained_model: str, config: CfgNode, train_dataset: Dataset, output_dir: Path
) -> nn.Module:  # this was returning None before
train_sampler = None
world_size = _get_world_size(config)
local_rank = gpu_index
device = None
if config.distributed_training.enable:
local_rank = _setup_distributed_process(gpu_index, world_size, config)
train_sampler = torch.utils.data.DistributedSampler(
train_dataset, num_replicas=world_size, rank=local_rank
)
device = torch.device(local_rank)
else:
device = torch.device("cuda")
model = MyModel(config)
model = _transfer_model_to_device(model, local_rank, gpu_index, config)  # now I assign the result back to model
dataloader = _initialize_data_loader(train_dataset, config, train_sampler)
_execute_training(config, device, model, train_sampler, dataloader, output_dir)
def _execute_training(
config: CfgNode,
device: torch.device,
model: nn.Module,
train_sampler: Optional[torch.utils.data.DistributedSampler],
dataloader: DataLoader,
output_dir: Path,
) -> None:
network_config = config.network_config
loss_combiner = MultiTaskLoss(num_tasks=2).to(device)
trainable_params_model = list(filter(lambda p: p.requires_grad, model.parameters()))
trainable_params = trainable_params_model + list(loss_combiner.parameters())
optimizer = optim.Adam(
trainable_params, lr=network_config.learning_rate, betas=(0.9, 0.99), weight_decay=0.001
)
logging.info(f"logging to tensorboard at {DEFAULT_TENSORBOARD_LOG_LOCATION}")
with SummaryWriter(DEFAULT_TENSORBOARD_LOG_LOCATION) as summary_writer: # type: ignore
for epoch_idx in range(config.network_config.epochs):
if config.distributed_training.enable and train_sampler is not None:
train_sampler.set_epoch(epoch_idx)
logging.info(f"starting epoch {epoch_idx}")
_adjust_learning_rate(optimizer, epoch_idx, network_config.learning_rate, network_config.lrepochs)
# This is the entry point to the rest of the code, which is the same for both DP and DDP
_train_batches(
epoch_idx, dataloader, model, optimizer, device, config, loss_combiner, summary_writer
)
def _initialize_data_loader(
train_dataset: Dataset, config: CfgNode, train_sampler: Optional[torch.utils.data.DistributedSampler]
) -> DataLoader:
network_config = config.network_config
dataloader = DataLoader(
train_dataset,
network_config.batch_size,
collate_fn=custom_collate,
shuffle=network_config.training_data_shuffle,
num_workers=network_config.data_loader_num_workers,
drop_last=network_config.drop_last_training_sample,
pin_memory=network_config.data_loader_pin_memory,
sampler=train_sampler,
)
return dataloader |
st177236 | Does your model use any buffers? You can check that by running list(model.buffers()). |
st177237 | @mrshenli doesn't look like it, it prints an empty list:
print(list(model.buffers()))
[] |
st177238 | mrshenli:
print(f"DP execution time (ms) by CUDA event: {start.elapsed_time(end)}")
torch.cuda.synchronize()
Shouldn’t it be
torch.cuda.synchronize()
print(f"DP execution time (ms) by CUDA event: {start.elapsed_time(end)}")
Coz I got RuntimeError: CUDA error: device not ready when execute your code as it is. |
st177239 | hey,
actually soon after this I switched to using Pytorch-lightning 28 and using it for all my setups regarding compute and data. This resolved the issue. Could have been due to some issue in manual setup of DDP within Pytorch at my end. I didn’t investigate further. I would highly recommend using lightning and save yourself a lot of time you might spend in various parts of your setup. |
st177240 | OK, thanks. I solved it. I was using my own data-loader. After using torch.utils.data.DataLoader, the problem vanished. I didn’t debug further to understand the issue. |
st177241 | I tried to run the MNIST model on 2 nodes each with 4 GPUs.
I can run it with all 8 GPUs, but when I only use part of the GPUs on each node, it gets stuck.
Here is the code snippet
init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank)
torch.cuda.set_device(local_rank)
rank here is the global rank; it will be
[0,1,2,3] for node 1 and [4,5,6,7] for node 2 when I use all GPUs, and
[0,1] for node 1 and [2,3] for node 2 when I use only 2 GPUs on each node.
local_rank is the same on both nodes, ranging over [0, num_used_GPUs_each_node - 1].
When I use only two GPUs on each node: on node 2, after running init_process_group, something is loaded onto GPUs [2,3], which is weird.
Since I've set the CUDA device, the model and data will be loaded onto GPU[local_rank].
Then the program gets stuck and waits forever.
But when I tried to load the model and data onto GPUs [2,3], everything worked fine.
So my guess is that NCCL loads something onto GPU[rank % available_GPUs] that is crucial for communication, and if it's not placed together with the data and model, the program gets stuck.
I'm not sure if this is a bug, or whether there is another way to ask NCCL to put things on GPU[local_rank]?
Thanks! |
st177242 | Just to try understanding your situation better. Let me know if the following description is correct:
You’re using 2 nodes, each with 4 GPUs. You want to use 2 GPUs on each node, which means your intended world size is 4.
The global rank of processes on node 1 are {0, 1}, and the global ranks of processes on node 2 are {2, 3}.
To achieve this, you can use CUDA_VISIBLE_DEVICES before launching your training script. For example, if you set CUDA_VISIBLE_DEVICES=1,2, the training script will not see the rest of the GPUs. Then you can simply initialize the process groups on each process with the correct rank passed in (and no need to do torch.cuda.set_device). This will ensure the correct GPUs are used for the training processes without any manual configurations. |
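A rough sketch of that launch pattern (the NODE_RANK / LOCAL_RANK environment variables and the gpus_per_node value are illustrative assumptions, not something this thread prescribes):
# each node is launched with e.g. CUDA_VISIBLE_DEVICES=0,1 so only two GPUs are visible
import os
import torch.distributed as dist

gpus_per_node = 2
node_rank = int(os.environ["NODE_RANK"])     # 0 on node 1, 1 on node 2
local_rank = int(os.environ["LOCAL_RANK"])   # 0 or 1 within each node
global_rank = node_rank * gpus_per_node + local_rank

dist.init_process_group(backend="nccl", init_method="env://",
                        world_size=4, rank=global_rank)
# with CUDA_VISIBLE_DEVICES set, "cuda:0"/"cuda:1" already map to the intended
# physical GPUs, so no extra torch.cuda.set_device bookkeeping is needed
model = model.to(f"cuda:{local_rank}")       # model assumed to be defined elsewhere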
st177243 | Thanks for your reply.
Yes, you are right, I can launch the script with CUDA_VISIBLE_DEVICES=0,1.
What I’m trying to do is to set the GPU id manually within the script so that I can distribute the model to more than one GPU with nn.parallel.DistributedDataParallel(model, device_ids=[**more than one**])
For example, on node two I want to set the GPU for [0,1] for processes with global rank [2,3] and allow model parallel on GPU [2,3]. But if I use CUDA_VISIBLE_DEVICES=0,1, both process can only see two GPU instead of four, I can only set device_ids=[0]. When I tried to use torch.cuda.set_device(local_rank), it has the problem like I described above.
Let me know if you need to know anything. |
st177244 | @Jing-Bi Can you check if your problem is resolved in the pytorch nightly release? This might be happening earlier due to a barrier() call in init_process_group which was removed in https://github.com/pytorch/pytorch/pull/49419 10. |
st177245 | When I use ‘DataParallel’, the codes work fine. But when I convert to ‘DistributedDataParallel’, something weird happens:
File “/home/20798/HTCN/lib/model/faster_rcnn/vgg16_HTCN.py”, line 145, in forward
x2_bn = self.bn2(x2_fc)
And the code at line 145 of "/home/20798/HTCN/lib/model/faster_rcnn/vgg16_HTCN.py" (in forward) mentioned above is as follows:
class netD_da(nn.Module): # line 129
def __init__(self, feat_d):# line 130
super(netD_da, self).__init__()# line 131
self.fc1 = nn.Linear(feat_d,100)# line 132
self.bn1 = nn.BatchNorm1d(100)# line 133
self.fc2 = nn.Linear(100,100)# line 134
self.bn2 = nn.BatchNorm1d(100)# line 135
self.fc3 = nn.Linear(100,2)# line 136
def forward(self, x):# line 137
#x1 = F.dropout(F.relu(self.bn1(self.fc1(x))),training=self.training)# line 138
x1_fc = self.fc1(x)# line 139
x1_bn = self.bn1(x1_fc)# line 140
x1_relu = F.relu(x1_bn)# line 141
x1 = F.dropout(x1_relu)# line 142
#x2 = F.dropout(F.relu(self.bn2(self.fc2(x1))),training=self.training)# line 143
x2_fc = self.fc2(x1)# line 144
x2_bn = self.bn2(x2_fc)# line 145
x2_relu = F.relu(x2_bn)# line 146
x2 = F.dropout(x2_relu)# line 147
ret = self.fc3(x2)# line 148
return ret #[256, 2]# line 149 |
st177246 | Hey,
Please have a look at the following example:
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
class MyDataset(Dataset):
def __init__(self):
self.data = np.random.randint(0, 100, (10, 2, 2))
self.max_id = np.ones(len(self.data), dtype=np.int16)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
self.max_id[idx] = np.max(self.data[idx, :, :])
return self.data[idx, :, :]
def main():
np.random.seed(42)
torch.manual_seed(42)
dataset = MyDataset()
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=1)
loader_iter = iter(loader)
print(f'Initial State: {dataset.max_id}')
batch = next(loader_iter)
print(f'After num_worker=1: {dataset.max_id}')
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=0)
loader_iter = iter(loader)
batch = next(loader_iter)
print(f'After num_worker=0: {dataset.max_id}')
if __name__ == '__main__':
main()
In the last case, where DataLoader(..., num_workers=0, ...), I observe the intended behaviour: dataset.max_id is updated.
How can I achieve this with e.g. DataLoader(..., num_workers=1, ...)?
I am using this in the context of large image files, where I want to read information from these images only upon calling them for training/validation. Thank you for any hint/advice!
Cheers |
st177247 | Solved by pritamdamania87 in post #5 |
st177248 | Is the question trivial? Is the explanation insufficient?
I would very much appreciate any hint as to what property of the DataLoader causes this behaviour and how to best circumvent it. |
st177249 | @dsethz When you specify num_workers > 0, multiple child processes are spawned to perform the actual data loading. As a result, the dataset object would be updated in the child processes and not the parent process where you are printing max_id.
One way to get the max_ids from the child processes would be to put it in a multiprocessing queue on the child processes and read from the same queue on the parent proces. |
st177250 | Hey @pritamdamania87,
thank you for your feedback. I am not experienced with multiprocessing, but if I understand you correctly, I need a custom data loader in which I adapt _MultiProcessingDataLoaderIter ?
Cheers |
st177251 | I don’t think you need a custom dataloader and using a multiprocessing queue in the dataset should suffice:
# Initialize a queue.
import multiprocessing as mp
q = mp.Queue()
# Pass the queue to the dataset
class MyDataset(Dataset):
def __init__(self, q):
self.data = np.random.randint(0, 100, (10, 2, 2))
self.max_id = np.ones(len(self.data), dtype=np.int16)
self.q = q
q.put(self.max_id)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
self.max_id[idx] = np.max(self.data[idx, :, :])
q.put(self.max_id)
return self.data[idx, :, :]
# Then in the main process
def main():
np.random.seed(42)
torch.manual_seed(42)
dataset = MyDataset(q)
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=1)
loader_iter = iter(loader)
print(f'Initial State: {dataset.max_id}')
batch = next(loader_iter)
max_id = q.get()
print(f'After num_worker=1: {max_id}') |
st177252 | Hi,
I am working with DDP and I have a doubt about the loss computation per process. Currently, using DDP, I have the possibility to distribute the batch among different processes, in this way I can increase the size of each batch. For instance, if I have a batch of 128 and I use 2 processes, I will end up having an effective batch size of 128*2=256. For simplicity, I will refer to local batch (i.e., the batch seen by a single process, which is 128) and global batch (i.e., the batch seen by the entire DDP, which is 256)
The doubt I have is the following. When I compute the loss for each process, this loss is averaged on the local batch and not on the global batch, thus resulting in gradient computation that depends on the local batch. When I compute the loss.backward(), DDP will raise hooks each time all gradients for a bucket are ready and average them among all processes. Anyway it is not clear whether DDP re-adjust the loss (divide the total loss for the global batch) or it is something I need to take care of.
Thank you |
st177253 | Solved by Seo in post #2 |
st177254 | I arrived at the conclusion that there is no need to re-adjust the loss value since averaging gradients will produce an effect that is the same as using the global batch size.
This can be proven by computing the loss for each local batch. This loss is sum(l)/batch_size, where l is the list containing the losses for each sample in the local batch. The gradient for the i-th parameter wi will be the gradient of sum(l) w.r.t. wi scaled by the local batch size (e.g., 128). When we average all the gradients of the various processes, we divide by n_proc, which results in the scaling factor being multiplied by the number of GPUs (e.g., 2), thus giving a scaling factor equal to the global batch size (e.g., 256). |
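In symbols, with $K$ processes each holding a local batch $\mathcal{B}_k$ of size $B$:
$$\frac{1}{K}\sum_{k=1}^{K} \nabla_w \Big( \frac{1}{B} \sum_{i \in \mathcal{B}_k} l_i \Big) \;=\; \frac{1}{KB} \sum_{i \in \mathcal{B}_1 \cup \dots \cup \mathcal{B}_K} \nabla_w l_i ,$$
which is exactly the gradient of the loss averaged over the global batch of size $KB$ (e.g., $2 \times 128 = 256$), so no extra rescaling of the loss is needed.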
st177255 | So I’m working on a project where I had to modify NCCL a bit to serve my purpose.
Now my question is how would I force pytorch to use my version of NCCL?
To start with, is NCCL dynamically linked, so that PyTorch would automatically link to any available version of NCCL, or is it statically linked, so that I need to recompile PyTorch with my custom NCCL version?
Any pointers or tips are appreciated.
Cheers |
st177256 | Solved by ptrblck in post #7 |
st177257 | @OasisArtisan PyTorch has a specific version of NCCL as a submodule. If you want to use a different version of NCCL, you can rebuild PyTorch with the USE_SYSTEM_NCCL flag.
Here’s a similar forums question: NCC version and Pytorch NCCL version mismatch 33 |
st177258 | I see. And if I set the USE_SYSTEM_NCCL flag, would NCCL then be linked dynamically or statically to PyTorch?
To illustrate my intention, I want to know whether I need to recompile PyTorch every time I change something in my custom NCCL version. If it's linked dynamically, then as long as I keep the same NCCL interface I do not need to recompile PyTorch. |
st177259 | I believe it’s dynamically linked, but it seems that can be toggled with USE_STATIC_NCCL. |
st177260 | If I can have some follow up questions…
First, I implicitly understood that if PyTorch was using its own NCCL submodule then it is linking to it statically. Is my understanding correct?
Second, is there a way to know what NCCL compilation flags were used to produce the PyTorch binaries installed by Conda?
Thanks again. |
st177261 | You can see here that NCCL is statically linked to the binaries, and you can take a look at the repository for more information about the build process. |
st177262 | I am trying to run two CUDA streams in parallel: I initiate the streams and then use them to run computations in the processes. The problem I have is that the processes are not firing, i.e., the code is not executed inside the processes.
Please refer to the code below.
from torch.multiprocessing import Process, set_start_method
import torch
import time
stream1 = torch.cuda.Stream()
stream2 = torch.cuda.Stream()
torch.cuda.synchronize()
def process1():
global stream1, stream2
with torch.cuda.stream(stream1):
print("IM HERE 1\n")
print(time.time(),"time in process 1")
time.sleep(5)
def process2():
global stream1, stream2
with torch.cuda.stream(stream2):
print("IM HERE 2\n")
print(time.time(),"time in process 2")
time.sleep(5)
if __name__ == "__main__":
set_start_method('spawn',force = True)
start = time.time()
p1 = Thread(target = process1)
p2 = Thread(target = process2)
p1.start()
p2.start()
p1.join()
p2.join()
torch.cuda.synchronize()
print("Time for parallel implementation: {}".format(time.time() - start)) |
st177263 | I was able to run your sample code successfully with a few minor tweaks (creating the workers with Process, which is what the code actually imports, instead of the undefined Thread):
from torch.multiprocessing import Process, set_start_method
import torch
import time

stream1 = torch.cuda.Stream()
stream2 = torch.cuda.Stream()
torch.cuda.synchronize()

def process1():
    global stream1, stream2
    with torch.cuda.stream(stream1):
        print("IM HERE 1\n")
        print(time.time(), "time in process 1")
        time.sleep(5)

def process2():
    global stream1, stream2
    with torch.cuda.stream(stream2):
        print("IM HERE 2\n")
        print(time.time(), "time in process 2")
        time.sleep(5)

if __name__ == "__main__":
    set_start_method('spawn', force=True)
    start = time.time()
    p1 = Process(target=process1)
    p2 = Process(target=process2)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    torch.cuda.synchronize()
    print("Time for parallel implementation: {}".format(time.time() - start))
The output shows both processes are firing:
IM HERE 2
1610497442.7076824 time in process 2
IM HERE 1
1610497442.7735846 time in process 1
Time for parallel implementation: 8.420467615127563 |
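One caveat worth noting for the original goal: torch.cuda.Stream objects live inside a single process, and with the spawn start method each worker re-imports the module and therefore creates its own fresh streams. If the aim is simply to overlap kernels on one GPU, a minimal single-process sketch (the tensor sizes below are arbitrary) would be:

import torch

s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
torch.cuda.synchronize()
with torch.cuda.stream(s1):
    c = a @ a  # queued on stream s1
with torch.cuda.stream(s2):
    d = b @ b  # queued on stream s2; may overlap with the work on s1
torch.cuda.synchronize()  # wait for both streams to finish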
st177264 | I am trying to speed up an algorithm responsible for producing 3D skeleton joints from 2D images. The algorithm (GAST-NET) consists of 4 main blocks running sequentially for every frame. I’m trying to parallelize the 4 blocks. I have some questions regarding the process.
Will parallelization help speed up the algorithm? I am trying to parallelize on one GPU only.
What other ways can I look into that can help with speeding up the algorithm?
Slightly related question, Isn’t PyTorch already trying to use maximum GPU resources to produce output as fast as possible? I monitored the GPU utilization and it was between 38 and 50%. Is there a way I can ensure that the GPU is used to the fullest? |
st177265 | Mamdouh_Aljoud:
Will parallelization help speed up the algorithm? I am trying to parallelize on one GPU only.
This really depends on the algorithm.
Mamdouh_Aljoud:
What other ways can I look into that can help with speeding up the algorithm?
Slightly related question, Isn’t PyTorch already trying to use maximum GPU resources to produce output as fast as possible? I monitored the GPU utilization and it was between 38 and 50%. Is there a way I can ensure that the GPU is used to the fullest?
If you could share some sample PyTorch code for the algorithm, I can look into it to see if there are some optimization opportunities. |
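In the meantime, profiling a single frame usually shows where the utilization gap comes from. In the sketch below the model and input are just stand-ins for the GAST-NET blocks and a real frame:

import torch
from torch.autograd import profiler

# stand-ins: replace with the actual GAST-NET blocks and a real input frame
model = torch.nn.Sequential(
    torch.nn.Linear(34, 256), torch.nn.ReLU(), torch.nn.Linear(256, 51)
).cuda()
frame = torch.randn(1, 34, device="cuda")

with profiler.profile(use_cuda=True) as prof:
    with torch.no_grad():
        out = model(frame)
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))

If most of the time shows up in data loading or CPU-side pre/post-processing rather than in CUDA kernels, larger batches, pinned memory, or moving pre-processing off the critical path will typically help more than parallelizing the blocks themselves.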
st177266 | I am training with a large amount of data per batch such that for batch_size = 1, I am observing the following:
I cannot fit it on a single gpu.
I can fit it on a machine with 2 gpus when using DataParallel. This is a little confusing to me since I have read that batch_size=1 cannot be used with DataParallel. Does this mean half of a batch is on each gpu, and therefore “disconnected” when seen by the model?
I cannot fit it using DistributedDataParallel.
I then modified my data so that each batch contains a smaller amount of data – I cannot go any smaller in terms of data per batch. I observe that:
A) I can fit batch_size=2 on a single gpu, but not batch_size=3.
B) I can fit batch_size=4 on a machine with 2 gpus using DataParallel, but not batch_size=5
C) I can fit batch_size=1 using DistributedDataParallel, but not batch_size=2.
I was under the impression that DistributedDataParallel is more efficient than DataParallel from the tutorials I read, and thus somewhat counter to what I am observing. I am wondering if DistributedDataParallel has memory overhead that DataParallel does not have and therefore explains what I am observing, as I have not been able to find any references regarding this situation. It is also a bit odd to me that a single gpu using DistributedDataParallel fits less than a single gpu standalone. Is it correct to assume that there is some type of workload imbalance afflicting DistributedDataParallel? Even with workload imbalance, wouldn’t it still be better than that for DataParallel, which seems to be able to fit more data? |
st177267 | DistributedDataParallel by default uses extra buffers to synchronize gradients, and thus consumes more memory (one extra copy of all gradients) than local training does.
In PyTorch 1.8 we added a new prototype flag, gradient_as_bucket_view; if you set it to True, it saves this extra copy of all gradients and should consume almost the same memory as local training.
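For reference, a minimal sketch of enabling the flag when wrapping the model (this assumes the process group has already been initialized and that local_rank holds this process's GPU index; the tiny Linear model is just a stand-in):

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

local_rank = 0  # stand-in: normally provided by the launcher / environment
model = torch.nn.Linear(10, 10).to(local_rank)
ddp_model = DDP(
    model,
    device_ids=[local_rank],
    gradient_as_bucket_view=True,  # gradients become views into the communication buckets
)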