st176168
Solved by Yanli_Zhao in post #4 Gloo supports send/recv CPU tensors only, NCCL supports send/recv CUDA tensors only
st176169
Based on the results above, I assume reduce has worked on GPU because my tensors are put on CUDA deivces, am I right? However, I noticed that all the ranks have participated in the reduce algo. Meaning that rank 1, 2, 3 tensor values have also changed when performing the reduction. It seems that the reduction algo is quite naive, if I have 4 processes, it would run 3 rounds. 1st round, add rank 3 to rank 2. 2nd round, add rank 2 to rank 1. 3rd round, add rank 1 to rank 0. So when I print the end result, not only rank 0 has the desired reduced sum value, but also rank 1 ~ (world_size -2) has also changed the value. Is this the supposed result of reduction? I thought rank 1 ~ (world_size -2) doesn’t store the imtermediate results. Hi the documentation might need to be updated. I ran the reduce operation using gloo and nccl. gloo reduce runs [75745] After reduction: rank 0 has data tensor([4.], device=‘cuda:0’), backend is gloo [75747] After reduction: rank 2 has data tensor([2.], device=‘cuda:2’), backend is gloo [75746] After reduction: rank 1 has data tensor([3.], device=‘cuda:1’), backend is gloo [75748] After reduction: rank 3 has data tensor([1.], device=‘cuda:3’), backend is gloo [76980] After reduction: rank 2 has data tensor([2.]), backend is gloo [76981] After reduction: rank 3 has data tensor([1.]), backend is gloo [76979] After reduction: rank 1 has data tensor([3.]), backend is gloo [76978] After reduction: rank 0 has data tensor([4.]), backend is gloo nccl reduce runs [75986] After reduction: rank 1 has data tensor([1.], device=‘cuda:1’), backend is nccl [75985] After reduction: rank 0 has data tensor([4.], device=‘cuda:0’), backend is nccl [75987] After reduction: rank 2 has data tensor([1.], device=‘cuda:2’), backend is nccl [75988] After reduction: rank 3 has data tensor([1.], device=‘cuda:3’), backend is nccl The result of the gloo reduce operation is wrong. It should be as described in the tutorial. This is a known bug, and there is an open issue for it ProcessGroupGloo reduce produces wrong result · Issue #21480 · pytorch/pytorch · GitHub 2.
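To make the semantics concrete, here is a minimal, self-contained reduce sketch (not from the thread; the port number and per-rank values are arbitrary). Per the docs, only the dst rank is guaranteed to hold the final result; tensors on the other ranks may be left holding intermediate values, which matches the observation above.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def reduce_demo(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    # each rank contributes a distinct value so the reduced sum is easy to check
    tensor = torch.tensor([float(world_size - rank)])
    # SUM-reduce into rank 0; only rank 0 is guaranteed to hold the final sum
    dist.reduce(tensor, dst=0, op=dist.ReduceOp.SUM)
    print(f"rank {rank} after reduce: {tensor.item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(reduce_demo, args=(4,), nprocs=4)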
st176170
From the doc, Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 1, seems like no backend supports send and recv with CUDA tensors. But I still tried to test it. So, here is my run function: def run1(rank, size): """ run1: Simple P2P synchronously.""" tensor = torch.zeros(1).cuda(rank) if rank == 0: tensor += 1 # Send the tensor to process 1 dist.send(tensor=tensor, dst=1) else: # Receive the tensor from process 0 # tensor += 10 # dist.send(tensor=tensor, dst=1) dist.recv(tensor=tensor, src=0) # dist.recv(tensor=tensor) print("Rank {} has data {}, with addr {}".format(rank, tensor[0], tensor.data_ptr())) With 2 processes, NCCL backend, it seems I can get correct results: root@298562e873aa:/opt/sw_home/pytorch-distributed# python distributed.py -f 1 -b nccl Rank 1 has data 1.0, with addr 139654443565056 Rank 0 has data 1.0, with addr 139731618758656 However, with 2 processes, gloo backend, I get runtime errors: root@298562e873aa:/opt/sw_home/pytorch-distributed# python distributed.py -f 1 -b gloo Process Process-2: Process Process-1: Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/opt/sw_home/pytorch-distributed/distributed.py", line 98, in init_process fn(rank, size) File "/opt/sw_home/pytorch-distributed/distributed.py", line 41, in run1 dist.recv(tensor=tensor, src=0) File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 850, in recv pg.recv([tensor], src, tag).wait() RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [172.17.0.13]:31389 Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/opt/sw_home/pytorch-distributed/distributed.py", line 98, in init_process fn(rank, size) File "/opt/sw_home/pytorch-distributed/distributed.py", line 36, in run1 dist.send(tensor=tensor, dst=1) File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 805, in send default_pg.send([tensor], dst, tag).wait() RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:378] writev [172.17.0.13]:5547: Bad address So, I’m not sure with CUDA tensors, can we use send, recv for NCCL or GLOO? Then actual reason why I do these experiments is that I’m running on a machine without CUDA UVA support, so it only supports cudaMemcpyPeer(Async), not cudaMemcpy(Async). But as I checked in gloo source code, there is no cudaMemcpyPeer usage at all, only cudaMemcpyAsync. Thus I’m not sure if pytorch DDP with gloo with CUDA tensors will work as expected.
st176171
Gloo supports send/recv CPU tensors only, NCCL supports send/recv CUDA tensors only
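In practice that means the point-to-point snippet above has to stage through CPU copies when using gloo; a rough sketch under that assumption (process group already initialized, device indices as in the original script):

import torch
import torch.distributed as dist

def run_gloo_p2p(rank, size):
    # compute on the GPU, but hand gloo CPU tensors for send/recv
    tensor = torch.zeros(1).cuda(rank)
    if rank == 0:
        tensor += 1
        dist.send(tensor=tensor.cpu(), dst=1)   # send a CPU copy
    else:
        buf = torch.zeros(1)                    # CPU receive buffer
        dist.recv(tensor=buf, src=0)
        tensor.copy_(buf)                       # move the received value back to the GPU
    print(f"Rank {rank} has data {tensor[0]}")

# with the nccl backend, send/recv the CUDA tensors directly instead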
st176172
Hello. I am interested in multi-task learning (MTL) and encountered some difficulties when trying the distributed API with inspiration from the TorchVision examples: vision/references/detection at master · pytorch/vision · GitHub 3. The short description is that the distributed forward call fails when one or more task-models (or heads) do not contribute to the total loss of the previous iteration: Traceback (most recent call last): File "src/main.py", line 160, in <module> main(args) File "src/main.py", line 148, in main step = multi_model.train_one_epoch(dataloader_train, optimizer, epoch=epoch, step=step, write_freq=200, writer=writer) File "/workspace/src/models/multi_model.py", line 103, in train_one_epoch losses = self(images, targets) File "/workspace/src/models/multi_model.py", line 87, in __call__ model_out = model(images, targets) File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/user/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 804, in forward if grad_enabled and self.reducer._rebuild_buckets(): RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 2: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 2 (pid: 11861) of binary: /opt/conda/bin/python ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed Each head is only trained with a subset of the training data which is filtered from the batch during training. If the batch do not include an input to one or more heads, those heads do not (or cannot) contribute to the loss. I tried to circumvent this by “faking” a forward call of the task but then the loss will potentially not be “good”. The question is probably two-fold: is such an input-filtering approach correct when doing MTL and how would you still call the forward of unused heads? One solution could be to force a batch to include input for each head but if we would use multiple heads, the batch size might become problematic with the limited resources I have. Another solution would be to calculate the loss in a different way and account for “false” examples correctly but this probably requires a lot of custom code. A similar question has been asked before: Process got stuck when set find_unused_parameters=True in DDP - #3 by oliver_ss 5 and the solutions seems similar to what I have done by faking a forward pass. 
However, it remains to be said if this approach is sound and good. The longer description is that I have tried to use a single pretrained backbone (ResNet) and multiple TorchVision detection heads (Faster-RCNN) to detect different objects. Given that there is no joint dataset for these objects, one naive approach is to use multiple heads. I have managed to get the “combined model” to train without the distributed API but it would be desirable to get it to run on a machine with 4 GPUs that I have available. Each head gives a FasterRcnn loss dict which we reduce with the approach given by: vision/references/detection at master · pytorch/vision · GitHub 3. But as said above in the short description: the next forward pass will encounter a lot of “hanging” parameters when the filtering “fails”. This approach with multiple TorchVision detection heads might not be the best approach, and I would gladly take hints or examples of other approaches as well as I cannot come up with good search terms… However, I feel that this naive approach is a good starting point. For completeness I have included most of the relevant model code below. At this point it is not possible to include everything but it might give some hint of what I am trying to do. Below is the code for setting up the combined model: class MultiModel: """ An attempt on a modular approach where a single backbone supplies features for multiple task-specific networks. The FasterRcnn network calls the backbone networks forward function within its own forward. If multiple FasterRcnns are used the backbone features should in theory be reusable. Linear inference speed in the number of original networks might be reduced to constant backbone + linear task specific network inference speed. """ def __init__(self, backbone, device, distributed_device_ids=None): self.backbone = backbone self.model_list = [] self.device = device self.distributed_device_ids = distributed_device_ids self.model_modules = [] # necessary to keep track of "undistributed models" def add_frcnn_model(self, frcnn_class, num_classes, model_label, frcnn_kwargs): if frcnn_kwargs == None: frcnn_kwargs = {} model = frcnn_class(self.backbone, num_classes, model_label, frcnn_kwargs=frcnn_kwargs) model.to(self.device) if self.distributed_device_ids: print(self.distributed_device_ids) model = torch.nn.parallel.DistributedDataParallel(model, self.distributed_device_ids, find_unused_parameters=False) self.model_modules.append(model.module) self.model_list.append(model) where each head is currently only given by a FasterRCNNModel: from torchvision.models.detection.faster_rcnn import FasterRCNN from torchvision.models.detection.faster_rcnn import FastRCNNPredictor, load_state_dict_from_url, model_urls, overwrite_eps from torchvision.transforms.transforms import Normalize def custom_fasterrcnn_resnet50_fpn(backbone, pretrained=False, progress=True, num_classes=3, **kwargs): """ Custom FasterRcnn model with provided backbone """ # we cannot overwrite num_classes here, we need to load the weights before changing the head model = FasterRCNN(backbone, num_classes=91, **kwargs) if pretrained: state_dict = load_state_dict_from_url(model_urls['fasterrcnn_resnet50_fpn_coco'], progress=progress) model.load_state_dict(state_dict) overwrite_eps(model, 0.0) in_features = model.roi_heads.box_predictor.cls_score.in_features model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) return model class FasterRCNNModel(nn.Module): """ Simplificates the construction of a Faster RCNN model from 
the torchvision detection model module. Seamless handling of input filtering so that only applicable input is processed """ def __init__(self, backbone, num_classes, model_label, frcnn_kwargs={}): super(FasterRCNNModel, self).__init__() self.backbone = backbone self.num_classes = num_classes self.model_label = model_label self.frcnn_kwargs = frcnn_kwargs self.model = custom_fasterrcnn_resnet50_fpn(backbone, pretrained=True, progress=True, num_classes=num_classes, **frcnn_kwargs) def forward(self, images: List, targets: List): """ Returns losses or detections depending on train or eval mode. Filters out the usable image, target pairs based on "model_label" in target """ if self.model.training: mask = self._get_mask(targets) else: mask = np.ones(len(images)) n_inputs = len(images) masked_images = [images[i] for i in range(n_inputs) if mask[i]] masked_targets = [targets[i] for i in range(n_inputs) if mask[i]] n_masked_inputs = len(masked_images) if n_masked_inputs > 0: out = self.model(masked_images, masked_targets) return out else: """ In a distributed setting, a empty loss dict results in a silent crash if we try to reduce the dicts across processes. However, tensors that are not from a computation graph lead to a crash at .backwards() as there are non finished reductions. How to solve? For now, we introduce a dummy pass through our model which we then zero out. All grad_fns should be there. """ device = torch.cuda.current_device() images = [torch.zeros((3, 256, 256), dtype=torch.float32, device=device)] boxes = torch.tensor(np.zeros((0, 4)), dtype=torch.float32, device=device) labels = torch.tensor(np.zeros((0)), dtype=torch.int64, device=device) targets = [{"boxes": boxes, "labels": labels}] out = self.model(images, targets) for k, v in out.items(): v *= 0 return out Thank you for any discussion regarding this. I can also provide more code if needed but I think it will be difficult to provide a minimal working example. It is more of a question if this is the correct approach.
st176173
A similar question has been asked before: Process got stuck when set find_unused_parameters=True in DDP - #3 by oliver_ss 36 and the solutions seems similar to what I have done by faking a forward pass. However, it remains to be said if this approach is sound and good. @zalador, what do you mean by "if this approach is sound and good"?
st176174
I meant to ask whether faking a forward pass is the correct approach and whether it would work. I am not 100% sure that just setting the gradients to 0 leads to correct training. I am also wondering whether there are better approaches; this feels like a band-aid approach with DDP. Without DDP you do not need to do a faked forward pass.
st176175
If your model has unused parameters, you could set find_unused_parameters=True. If not all output tensors are used to calculate the loss, DDP <= PT 1.9 cannot support that case yet. But we have added a fix to support this case that will be released in PT 1.10; you can try this feature in the PT nightly build for now.
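For reference, a minimal sketch of where that flag goes (the model and LOCAL_RANK handling are placeholders, not from the original code):

import os
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

local_rank = int(os.environ.get("LOCAL_RANK", 0))  # set by the launcher
model = nn.Linear(10, 10).cuda(local_rank)

ddp_model = DDP(
    model,
    device_ids=[local_rank],
    # traverse the autograd graph after each forward pass and mark parameters
    # that did not take part as ready, instead of raising the error above
    find_unused_parameters=True,
)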
st176176
Thank you for your input. I tried with the latest nightly build but I get version mismatches between PT and TorchVision which I use for my models. I will definitely look out for this in the future.
st176177
I am updating my training script to use DistributedDataParallel for multi-GPU training. I am done with most of the steps mentioned in the PyTorch guidelines, but I am confused about how to handle metrics calculation and visualization. For example, I need to calculate accuracy and I have 4 samples in total and 2 GPUs. When I run testing I will have predictions and ground truths for 2 samples in each process. Now, if I want to calculate accuracy, do I need to call dist.reduce, or is it not needed and I can directly calculate accuracy in the rank 0 process?
st176178
Solved by gcramer23 in post #2 I need to calculate accuracy and I have 4 samples in total and 2 GPUs. When I run testing I will have predictions and ground truths for 2 2 samples in each process. Now if I want to calculate accuracy do I need to call dist.reduce or it is not needed and I can directly calculate accuracy in rank …
st176179
I need to calculate accuracy and I have 4 samples in total and 2 GPUs. When I run testing I will have predictions and ground truths for 2 samples in each process. Now, if I want to calculate accuracy, do I need to call dist.reduce, or is it not needed and I can directly calculate accuracy in the rank 0 process? Do you need to calculate local metrics or global metrics? To calculate global metrics, you would need to communicate between the processes Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 12.
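As a concrete illustration (not from this thread), one common pattern is to all-reduce the local correct/total counts and derive the global accuracy from them; a sketch assuming the process group is already initialized:

import torch
import torch.distributed as dist

def global_accuracy(preds, targets):
    # local counts for this rank's shard of the test set
    stats = torch.tensor(
        [(preds == targets).sum().item(), targets.numel()],
        dtype=torch.float64,
        device=preds.device,
    )
    # sum the correct/total counts over all ranks; every rank ends up with the
    # global counts, so the accuracy can be computed (or logged) anywhere
    dist.all_reduce(stats, op=dist.ReduceOp.SUM)
    return (stats[0] / stats[1]).item()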
st176180
Thanks a lot for the quick reply, it was really helpful. I have understood how to use the reduce functions to solve the issue, and I now get the same correct metrics as I did on a single GPU. Could you please share whether there is any way to gather dictionaries created on multiple processes onto the rank 0 process? As far as I can tell, all such reduce/gather functions work only for tensors.
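Not covered in this thread, but PyTorch 1.8+ also provides object collectives that work on arbitrary picklable Python objects such as dicts; a hedged sketch (the metric names are placeholders):

import torch.distributed as dist

# assumes the process group is already initialized
local_metrics = {"correct": 12, "total": 16, "rank": dist.get_rank()}

gathered = [None] * dist.get_world_size()
# collects one Python object per rank onto every rank (pickled under the hood)
dist.all_gather_object(gathered, local_metrics)

if dist.get_rank() == 0:
    print(gathered)  # a list of per-rank dicts, indexed by rank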
st176181
Could I just use rank 0 to calculate everything? I know that it will be slower (which is not noticeable in my case), but the accuracy should be the same, right?
st176182
To calculate global accuracy on rank 0, don't switch the test DataLoader's sampler to a distributed (DDP) sampler, and only run the testing loop on rank 0.
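A sketch of that pattern, under the assumption that the test DataLoader keeps its default sampler and the model is DDP-wrapped (names are placeholders):

import torch
import torch.distributed as dist

def evaluate(ddp_model, test_loader, device):
    # every rank sees the full test set (no DistributedSampler), but only
    # rank 0 actually walks through it
    if dist.get_rank() != 0:
        return None
    # evaluate the unwrapped module to avoid cross-rank sync in DDP's forward
    model = ddp_model.module
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in test_loader:
            out = model(x.to(device))
            correct += (out.argmax(dim=1) == y.to(device)).sum().item()
            total += y.numel()
    return correct / total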
st176183
I save a model that is wrapped with DataParallel():

torch.save(model.module.state_dict(), save_folder + '/' + 'model.pt')

Then I load this model and train with SyncBatchNorm + DDP:

# Define Model
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model.cuda(args.gpu)
model = torch.nn.parallel.DistributedDataParallel(net, device_ids=[args.gpu], find_unused_parameters=True)

# Load Model
loc = 'cuda:{}'.format(args.gpu)
net.module.load_state_dict(torch.load(save_folder + '/model.pt', map_location=loc), strict=False)

This fails with:

Error(s) in loading state_dict for DistributedDataParallel: Missing key(s) in state_dict: "module.con1.weight", "module.bn1.weight", ...

How can I load my models trained with DataParallel() after wrapping with SyncBatchNorm + DDP?
st176184
Solved by Yanli_Zhao in post #2 just do like this: Define Model model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model) model.cuda(args.gpu) model_DDP = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu], find_unused_parameters=True) save and load Model torch.save(model_DDP, tmp.name) model_DDP = to…
st176185
Just do it like this:

# Define Model
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model.cuda(args.gpu)
model_DDP = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu], find_unused_parameters=True)

# save and load Model
torch.save(model_DDP, tmp.name)
model_DDP = torch.load(tmp.name)
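Saving the whole wrapped module works, but it ties the checkpoint to the DDP wrapper. An alternative sketch (not from this thread) that keeps the checkpoint as a plain state_dict, reusing the model/args names from the snippets above:

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# save (typically on rank 0 only): unwrap, so keys carry no "module." prefix
torch.save(model_DDP.module.state_dict(), "model.pt")

# load: restore into the plain model *before* wrapping it again
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model.load_state_dict(torch.load("model.pt", map_location=f"cuda:{args.gpu}"))
model.cuda(args.gpu)
model_DDP = DDP(model, device_ids=[args.gpu])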
st176186
Hello, I have seen some questions related to using TensorBoard with DistributedDataParallel (DDP) on the forum, but I haven't found a definitive answer to my question. For instance, I wish to log loss values to TensorBoard. When not considering DDP, my code looks like the following for a loss item:

loss_writer.add_scalar('Overall_loss', overall_loss.item(), total_iter)

where loss_writer = torch.utils.tensorboard.SummaryWriter(loss_dir). Now, when considering DDP, I included, as recommended here 13:

if args.rank == 0:
    loss_writer.add_scalar('Overall_loss', overall_loss.item(), total_iter)

However, when doing so, I obtain something that looks like this, which has also been discussed here 10. Any idea on how to obtain a smooth loss graph as in single-GPU training? Thanks!
st176187
I don’t have an immediate answer. Are you also making sure that SummaryWriter is only instantiated by rank 0 (as suggested by the link you referred to in your question)?
st176188
@cbalioglu Hello, thank you for your answer! Unfortunately, my SummaryWriter is indeed instantiated only by rank 0.
st176189
If I use the DataParrallel utility as such, model = torch.nn.DataParallel(model) my custom batchNorm module does not update it’s buffers, running_avg_mean and running_avg_std. It does update if I run the model without DataParrallel on a single GPU. How can I get the buffers to update in DataParrallel? # Batch Renormalization for convolutional neural nets (2D) implementation based # on https://arxiv.org/abs/1702.03275 from torch.nn import Module import torch class BatchRenormalization2D(Module): '''Batch renorm from https://arxiv.org/pdf/1702.03275.pdf''' def __init__(self, num_features, eps=1e-05, momentum=0.01, r_d_max_inc_step=0.0001): super(BatchRenormalization2D, self).__init__() self.eps = eps self.momentum = momentum self.gamma = torch.nn.Parameter(torch.ones((1, num_features, 1, 1)), requires_grad=True) self.beta = torch.nn.Parameter(torch.zeros((1, num_features, 1, 1)), requires_grad=True) self.register_buffer('running_avg_mean', torch.zeros((1, num_features, 1, 1))) self.register_buffer('running_avg_std', torch.ones((1, num_features, 1, 1))) self.max_r_max = 3.0 self.max_d_max = 5.0 self.r_max_inc_step = r_d_max_inc_step self.d_max_inc_step = r_d_max_inc_step self.r_max = 1.0 self.d_max = 0.0 def forward(self, x): batch_ch_mean = torch.mean(x, dim=(0, 2, 3), keepdim=True) batch_ch_std = torch.clamp(torch.std(x, dim=(0, 2, 3), keepdim=True), self.eps, 1e10) if self.training: r = torch.clamp(batch_ch_std / self.running_avg_std, 1.0 / self.r_max, self.r_max).data d = torch.clamp((batch_ch_mean - self.running_avg_mean) / self.running_avg_std, -self.d_max, self.d_max).data x = ((x - batch_ch_mean) * r )/ batch_ch_std + d x = self.gamma * x + self.beta if self.r_max < self.max_r_max: self.r_max += self.r_max_inc_step * x.shape[0] if self.d_max < self.max_d_max: self.d_max += self.d_max_inc_step * x.shape[0] self.running_avg_mean = self.running_avg_mean + self.momentum * (batch_ch_mean.detach() - self.running_avg_mean) self.running_avg_std = self.running_avg_std + self.momentum * (batch_ch_std.detach() - self.running_avg_std) else: x = (x - self.running_avg_mean) / self.running_avg_std x = self.gamma * x + self.beta return x
st176190
Update: I was able to make it work using the .lerp() function. Why doesn't what I did originally work?
st176191
Kendall_Reid: self.running_avg_mean = self.running_avg_mean + self.momentum * (batch_ch_mean.detach() - self.running_avg_mean) Because in DP, the Python module object is replicated to run on each GPU in a different thread. However, this setattr assigns the updated buffer to the replica, which is discarded right afterwards. In-place updates to the buffer work instead, because the buffers in the replica on the first GPU share memory with the original module's buffers.
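Concretely, a sketch of the in-place version of the two updates quoted above (same attribute names as the custom module; lerp_ computes running + momentum * (batch - running), i.e. the same formula as the original assignments):

# inside BatchRenormalization2D.forward(), replacing the two assignments:
# in-place ops mutate the shared storage, so the updates made by the replica
# on the first GPU remain visible to the original module after the DP pass
self.running_avg_mean.lerp_(batch_ch_mean.detach(), self.momentum)
self.running_avg_std.lerp_(batch_ch_std.detach(), self.momentum)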
st176192
Trying to understand the connection between these two concepts, after reading this page: Rendezvous — PyTorch 1.9.0 documentation 4. Do we need this min/max if I already know the exact number of machines under my control? Let us say I have two dedicated machines that I want to use for training. That means I want world_size=2. Then both min and max should be exactly 2 for the intended allocation, right? In what kind of scenario would one want to set min/max differently to take advantage of this flexibility?
st176193
Solved by cbalioglu in post #3 In traditional HPC and ML systems all nodes that are part of a job are started simultaneously (as you implied in your question) and the job gets dispatched (by a scheduler) once all nodes are ready to execute. The set of nodes that are part of a job are usually called a “gang”. TorchElastic is desi…
st176194
Let us say I have two dedicated machines that I want to use for training. That means I want world_size=2. Then both min and max should be exactly 2 for the intended allocation, right? In what kind of scenario one would want to set min/max differently to take advantage of this flexibility? If you have 2 machines you can use for training, then set the max to 2. If you are willing to start training with 1 machine, then setting min to 1 makes sense. You can set min to a different value than max when you are willing to start a training job with less machines due to faults in the system. cc @cbalioglu
st176195
In traditional HPC and ML systems all nodes that are part of a job are started simultaneously (as you implied in your question) and the job gets dispatched (by a scheduler) once all nodes are ready to execute. The set of nodes that are part of a job is usually called a “gang”. TorchElastic is designed to handle systems that do not form a gang. Those systems are usually meant to run traditional distributed applications where the scheduler has no concept of forming a gang of nodes (e.g. the scheduler simply starts executing the job on a node the moment the node becomes available). Since you have no formal gang, you need some mechanism to simulate one (a “pseudo-gang”) and dispatch the job at some point in time after the user requested it. This is where the minimum and maximum number of nodes come into play. You basically tell TorchElastic that it should wait until at least min nodes become available to execute a job, but that ideally you want it to have max nodes for the execution. Once TorchElastic reaches min nodes, it sets an internal “last call” timer and continues accepting new nodes until the timer expires or the max number of nodes is reached. At that point TorchElastic dispatches the job on all participant nodes. In summary, TorchElastic has a built-in capability to form a gang even when run on systems that have no such concept. In your particular case (and in most cases) setting the minimum and maximum number of nodes to the same value is the way to go. They become relevant in non-traditional execution environments as described above.
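For reference, with the 1.9+ launcher this maps onto the --nnodes=MIN:MAX syntax; a hedged example (hostname, port, process counts, and script name are placeholders):

# fixed-size job: min == max, the job starts only once both nodes have joined
python -m torch.distributed.run --nnodes=2 --nproc_per_node=8 \
    --rdzv_backend=c10d --rdzv_endpoint=node0.example.com:29400 train.py

# elastic job: start as soon as 1 node is available, admit up to 4
python -m torch.distributed.run --nnodes=1:4 --nproc_per_node=8 \
    --rdzv_backend=c10d --rdzv_endpoint=node0.example.com:29400 train.py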
st176196
Hi, I am trying to implement distributed training using the torch.distributed package. In torch.distributed.init_process_group() I am getting an error (the processes appear to deadlock). Here is the error:

0 4
Process Process-4:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "main.py", line 17, in init_process
    dist.init_process_group(backend, rank=rank, world_size=size)
  File "/sdd1/amit/venv/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 407, in init_process_group
    timeout=timeout)
  File "/sdd1/amit/venv/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 475, in _new_process_group_helper
    timeout=timeout)
RuntimeError: Connection reset by peer

Here is the code:

def run(size, rank):
    print(size, rank)

def init_process(rank, size, fn, backend='gloo'):
    """Initialize the distributed environment."""
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    size = 4
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)

    for p in processes:
        p.join()
st176197
Solved by mrshenli in post #2 Hey @Amit_Singh1, can you try if spawn mode works for you? I tried the following, and it works in my dev env. import os import torch.distributed as dist import torch.multiprocessing as mp def run(size, rank): print(size, rank) def init_process(rank, size, fn, backend="gloo"): """Initializ…
st176198
Hey @Amit_Singh1, can you try if spawn mode works for you? I tried the following, and it works in my dev env.

import os
import torch.distributed as dist
import torch.multiprocessing as mp

def run(size, rank):
    print(size, rank)

def init_process(rank, size, fn, backend="gloo"):
    """Initialize the distributed environment."""
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    size = 4
    processes = []
    mp.set_start_method("spawn")
    for rank in range(size):
        p = mp.Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)

    for p in processes:
        p.join()

Here 68 are some best practices for multiprocessing.
st176199
The code in this tutorial 24 is missing the mp.set_start_method("spawn"). Does anyone know how we can propose a change or reference this discussion in the tutorial? I am happy to do it, but I am just starting to get more active and don't know how this works.
st176200
Hey @Juans Does anyone know how we can propose a change or reference top this discussion in the tutorial? I am happy to do it but I am just starting to get more active and don’t know how this works. Thanks for pointing it out. The source file for that tutorial is at https://github.com/pytorch/tutorials/blob/master/intermediate_source/dist_tuto.rst 31 Please feel free to submit a revision PR and ping me for review. Thanks!
st176201
I'm also learning from this tutorial. One question: does the simplest case run on GPU or CPU? I tried to check with nvprof, but no kernels were detected, so it spawns 2 processes on the CPU, am I right? How do I make it run as 2 GPU processes? I tried to create the tensor on a CUDA device like this: tensor = torch.zeros(1).cuda(rank), but nvprof still didn't find any kernels launched. Am I missing something?
st176202
I’m also learning from this tutorial. One question is does the simplest case running on GPU or CPU? I tried to use nvprof to checkout, but no kernels detected, so it spawns 2 processes on CPU, am I right? How to make it to run on 2 GPU processes? I tried to create the tensor on CUDA device like this: tensor = torch.zeros(1).cuda(rank). But still nvprof didn’t find any kernels launched, am I missing something? @BruceDai003 how are you running nvprof? """run.py:""" #!/usr/bin/env python import os import torch import torch.distributed as dist import torch.multiprocessing as mp def run(rank, size): """ Distributed function to be implemented later. """ x = torch.ones(2, 2).cuda(rank) y = x + x print(y) def init_process(rank, size, fn, backend='gloo'): """ Initialize the distributed environment. """ os.environ['MASTER_ADDR'] = '127.0.0.1' os.environ['MASTER_PORT'] = '29500' dist.init_process_group(backend, rank=rank, world_size=size) fn(rank, size) if __name__ == "__main__": size = 1 processes = [] mp.set_start_method("spawn") for rank in range(size): p = mp.Process(target=init_process, args=(rank, size, run)) p.start() processes.append(p) for p in processes: p.join() I run the program with nvprof --profile-child-processes python test.py. ==81172== NVPROF is profiling process 81172, command: /fsx/users/gcramer/conda/envs/pytorch1/bin/python -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=5, pipe_handle=7) --multiprocessing-fork tensor([[2., 2.], [2., 2.]], device='cuda:0') ==81172== Profiling application: /fsx/users/gcramer/conda/envs/pytorch1/bin/python -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=5, pipe_handle=7) --multiprocessing-fork ==81172== Profiling result: Type Time(%) Time Calls Avg Min Max Name GPU activities: 27.17% 22.688us 13 1.7450us 1.6000us 2.5280us [CUDA memcpy DtoH] 7.74% 6.4640us 1 6.4640us 6.4640us 6.4640us void at::native::reduce_kernel<int=512, int=1, at::native::ReduceOp<double, at::native::func_wrapper_t<double, at::native::MinNanFunctor<double>>, unsigned int, double, int=4>>(double) 7.36% 6.1440us 1 6.1440us 6.1440us 6.1440us void at::native::reduce_kernel<int=512, int=1, at::native::ReduceOp<double, at::native::func_wrapper_t<double, at::native::MaxNanFunctor<double>>, unsigned int, double, int=4>>(double) 3.91% 3.2640us 2 1.6320us 1.4080us 1.8560us void at::native::vectorized_elementwise_kernel<int=4, at::native::AbsFunctor<float>, at::detail::Array<char*, int=2>>(int, float, at::native::AbsFunctor<float>) 3.79% 3.1680us 1 3.1680us 3.1680us 3.1680us void at::cuda::detail::cub::DeviceSelectSweepKernel<at::cuda::detail::cub::DispatchSelectIf<at::cuda::detail::cub::CountingInputIterator<long, long>, at::cuda::detail::cub::TransformInputIterator<bool, at::native::_GLOBAL__N__42_tmpxft_00009bab_00000000_7_Nonzero_cpp1_ii_cba1aaa0::NonZeroOp<bool>, bool*, long>, long*, int*, at::cuda::detail::cub::NullType, at::cuda::detail::cub::NullType, int, bool=0>::PtxSelectIfPolicyT, at::cuda::detail::cub::CountingInputIterator<long, long>, at::cuda::detail::cub::TransformInputIterator<bool, at::native::_GLOBAL__N__42_tmpxft_00009bab_00000000_7_Nonzero_cpp1_ii_cba1aaa0::NonZeroOp<bool>, bool*, long>, long*, int*, at::cuda::detail::cub::ScanTileState<int, bool=1>, at::cuda::detail::cub::NullType, at::cuda::detail::cub::NullType, int, bool=0>(long, at::cuda::detail::cub::CountingInputIterator<long, long>, bool, bool, at::native::_GLOBAL__N__42_tmpxft_00009bab_00000000_7_Nonzero_cpp1_ii_cba1aaa0::NonZeroOp<bool>, bool*, long, 
at::cuda::detail::cub::TransformInputIterator<bool, at::native::_GLOBAL__N__42_tmpxft_00009bab_00000000_7_Nonzero_cpp1_ii_cba1aaa0::NonZeroOp<bool>, bool*, long>, int) 3.72% 3.1040us 1 3.1040us 3.1040us 3.1040us _ZN2at6native24index_elementwise_kernelILi128ELi4EZNS0_16gpu_index_kernelIZNS0_17index_kernel_implINS0_10OpaqueTypeILi4EEEEEvRNS_14TensorIteratorEN3c108ArrayRefIlEESA_EUlPcSB_lE_EEvS7_SA_SA_RKT_EUliE_EEviT1_ 3.64% 3.0400us 2 1.5200us 1.3120us 1.7280us void at::native::vectorized_elementwise_kernel<int=4, at::native::BUnaryFunctor<at::native::CompareGTFunctor<double>>, at::detail::Array<char*, int=2>>(int, double, at::native::CompareGTFunctor<double>) 3.56% 2.9760us 1 2.9760us 2.9760us 2.9760us void at::native::vectorized_elementwise_kernel<int=4, at::native::DivFunctor<double>, at::detail::Array<char*, int=3>>(int, double, at::native::DivFunctor<double>) 3.52% 2.9440us 2 1.4720us 1.2480us 1.6960us void at::native::vectorized_elementwise_kernel<int=4, at::native::BUnaryFunctor<at::native::CompareNEFunctor<float>>, at::detail::Array<char*, int=2>>(int, float, at::native::CompareNEFunctor<float>) 3.49% 2.9120us 1 2.9120us 2.9120us 2.9120us void at::cuda::detail::cub::DeviceReduceSingleTileKernel<at::cuda::detail::cub::DeviceReducePolicy<bool, int, int, at::cuda::detail::cub::Sum>::Policy600, at::cuda::detail::cub::TransformInputIterator<bool, at::native::_GLOBAL__N__42_tmpxft_00009bab_00000000_7_Nonzero_cpp1_ii_cba1aaa0::NonZeroOp<bool>, bool*, long>, int*, int, at::cuda::detail::cub::Sum, int>(int, int, at::cuda::detail::cub::Sum, at::cuda::detail::cub::DeviceReducePolicy<bool, int, int, at::cuda::detail::cub::Sum>::Policy600, bool) 3.41% 2.8480us 1 2.8480us 2.8480us 2.8480us _ZN2at6native27unrolled_elementwise_kernelIZZZNS0_21copy_device_to_deviceERNS_14TensorIteratorEbENKUlvE1_clEvENKUlvE4_clEvEUldE_NS_6detail5ArrayIPcLi2EEE23TrivialOffsetCalculatorILi1EjESC_NS0_6memory12LoadWithCastILi1EEENSD_13StoreWithCastEEEviT_T0_T1_T2_T3_T4_ 3.30% 2.7520us 2 1.3760us 1.2480us 1.5040us _ZN2at6native27unrolled_elementwise_kernelIZZZNS0_16ceil_kernel_cudaERNS_18TensorIteratorBaseEENKUlvE_clEvENKUlvE2_clEvEUlfE_NS_6detail5ArrayIPcLi2EEE23TrivialOffsetCalculatorILi1EjESC_NS0_6memory15LoadWithoutCastENSD_16StoreWithoutCastEEEviT_T0_T1_T2_T3_T4_ 3.29% 2.7510us 2 1.3750us 1.2480us 1.5030us void at::native::unrolled_elementwise_kernel<at::native::CompareNEFunctor<float>, at::detail::Array<char*, int=3>, TrivialOffsetCalculator<int=2, unsigned int>, TrivialOffsetCalculator<int=1, unsigned int>, at::native::memory::LoadWithoutCast, at::native::memory::StoreWithoutCast>(int, float, at::native::CompareNEFunctor<float>, char*, int=3, at::detail::Array<char*, int=3>, int=2) 2.68% 2.2400us 1 2.2400us 2.2400us 2.2400us void at::native::vectorized_elementwise_kernel<int=4, at::native::BitwiseAndFunctor<bool>, at::detail::Array<char*, int=3>>(int, bool, at::native::BitwiseAndFunctor<bool>) 2.64% 2.2080us 1 2.2080us 2.2080us 2.2080us [CUDA memcpy HtoD] 2.41% 2.0160us 1 2.0160us 2.0160us 2.0160us void at::native::vectorized_elementwise_kernel<int=4, at::native::MulFunctor<bool>, at::detail::Array<char*, int=3>>(int, bool, at::native::MulFunctor<bool>) 2.30% 1.9200us 1 1.9200us 1.9200us 1.9200us void at::native::vectorized_elementwise_kernel<int=4, at::native::AddFunctor<float>, at::detail::Array<char*, int=3>>(int, float, at::native::AddFunctor<float>) 2.15% 1.7920us 1 1.7920us 1.7920us 1.7920us void at::native::vectorized_elementwise_kernel<int=4, at::native::CompareEqFunctor<float>, at::detail::Array<char*, 
int=3>>(int, float, at::native::CompareEqFunctor<float>) 2.11% 1.7600us 1 1.7600us 1.7600us 1.7600us void at::native::vectorized_elementwise_kernel<int=2, at::native::CompareNEFunctor<float>, at::detail::Array<char*, int=3>>(int, float, at::native::CompareNEFunctor<float>) 2.11% 1.7600us 1 1.7600us 1.7600us 1.7600us void at::native::vectorized_elementwise_kernel<int=4, at::native::CompareNEFunctor<float>, at::detail::Array<char*, int=3>>(int, float, at::native::CompareNEFunctor<float>) 2.07% 1.7280us 1 1.7280us 1.7280us 1.7280us _ZN2at6native29vectorized_elementwise_kernelILi4EZZZNS0_16ceil_kernel_cudaERNS_18TensorIteratorBaseEENKUlvE_clEvENKUlvE2_clEvEUlfE_NS_6detail5ArrayIPcLi2EEEEEviT0_T1_ 2.03% 1.6960us 1 1.6960us 1.6960us 1.6960us _ZN2at6native29vectorized_elementwise_kernelILi2EZZZNS0_16ceil_kernel_cudaERNS_18TensorIteratorBaseEENKUlvE_clEvENKUlvE2_clEvEUlfE_NS_6detail5ArrayIPcLi2EEEEEviT0_T1_ 1.61% 1.3440us 1 1.3440us 1.3440us 1.3440us void at::cuda::detail::cub::DeviceCompactInitKernel<at::cuda::detail::cub::ScanTileState<int, bool=1>, int*>(int, int, bool=1) API calls: 99.71% 4.88775s 1 4.88775s 4.88775s 4.88775s cudaStreamIsCapturing 0.08% 3.8803ms 26 149.24us 12.256us 3.3780ms cudaLaunchKernel 0.08% 3.8186ms 8 477.32us 471.60us 506.55us cudaGetDeviceProperties 0.06% 2.9157ms 4 728.93us 712.43us 769.01us cuDeviceTotalMem 0.05% 2.2443ms 404 5.5550us 753ns 229.15us cuDeviceGetAttribute 0.01% 370.98us 300 1.2360us 952ns 10.083us cudaGetDevice 0.01% 344.41us 14 24.600us 17.998us 32.813us cudaMemcpyAsync 0.01% 331.36us 1 331.36us 331.36us 331.36us cudaMalloc 0.00% 199.46us 4 49.864us 45.765us 61.253us cuDeviceGetName 0.00% 86.061us 14 6.1470us 4.6040us 9.2300us cudaStreamSynchronize 0.00% 53.606us 59 908ns 737ns 1.3350us cudaGetLastError 0.00% 15.076us 1 15.076us 15.076us 15.076us cudaFuncGetAttributes 0.00% 12.104us 4 3.0260us 2.2260us 4.0030us cuDeviceGetPCIBusId 0.00% 10.313us 12 859ns 754ns 1.5130us cuDevicePrimaryCtxGetState 0.00% 9.4360us 5 1.8870us 1.3530us 3.1390us cudaSetDevice 0.00% 7.8430us 8 980ns 791ns 1.3650us cuDeviceGet 0.00% 5.4070us 3 1.8020us 1.1060us 2.9200us cudaDeviceGetAttribute 0.00% 5.0910us 6 848ns 766ns 1.0290us cudaPeekAtLastError 0.00% 4.2730us 4 1.0680us 747ns 1.5770us cudaGetDeviceCount 0.00% 3.7520us 1 3.7520us 3.7520us 3.7520us cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags 0.00% 3.7080us 4 927ns 839ns 1.0140us cuDeviceGetUuid 0.00% 3.6310us 3 1.2100us 784ns 1.6020us cuDeviceGetCount
st176203
W python_anomaly_mode.cpp:104] Warning: Error detected in CudnnBatchNormBackward. Traceback of forward call that caused the error: File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/ml/code/train.py", line 36, in <module> model.optimize_parameters() File "/opt/ml/code/model/cyclegan2_model.py", line 58, in optimize_parameters self.forward() File "/opt/ml/code/model/cyclegan2_model.py", line 78, in forward self.rec_A = self.G_BtoA(self.fake_B) # G_B(G_A(A)) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 918, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 705, in forward output = self.module(*inputs[0], **kwargs[0]) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 918, in _call_impl result = self.forward(*input, **kwargs) File "/opt/ml/code/networks/cyclegan2_networks.py", line 120, in forward return self.model(input) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 918, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/container.py", line 119, in forward input = module(input) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 918, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 140, in forward self.weight, self.bias, bn_training, exponential_average_factor, self.eps) File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 2150, in batch_norm input, weight, bias, running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled (function _print_stack) Traceback (most recent call last): File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/ml/code/train.py", line 36, in <module> model.optimize_parameters() File "/opt/ml/code/model/cyclegan2_model.py", line 63, in optimize_parameters self.backward_G() # calculate gradients for G_A and G_B File "/opt/ml/code/model/cyclegan2_model.py", line 100, in backward_G self.loss_G.backward() File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 245, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 147, in backward allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! I am getting this issue, can any one help me
st176204
Hi, I want to collect tensors in all GPUs for each minibatch and save them. Can someone suggest how to do that?
st176205
If you are using DDP (DistributedDataParallel), you can simply save them like you do without DDP (using torch.save), because every process (i.e. GPU) will run the saving code. Include the GPU/rank index in the filename to prevent multiple processes from saving to the same name.
st176206
I want to collect tensors in all GPUs for each minibatch and save them. Do you want all tensors to be on a single process before saving? You can save a tensor using torch.save — PyTorch 1.9.0 documentation 1.
st176207
Yes, DDP is fine. The tensors can stay on GPUs. Each tensor should be saved in a file, and I want to make sure I save them without repetition or missing a tensor. Can you suggest a sample code?
st176208
To avoid repetition you can use the GPU index when saving files. Something like the following:

def save_tensors(gpu, total_gpus):
    torch.cuda.set_device(gpu)
    torch.distributed.init_process_group(backend='nccl', init_method='env://', world_size=total_gpus, rank=gpu)
    for i in range(100):
        tensor = torch.rand(2, 3)
        torch.save(tensor, f'tensor_{i}_gpu_{gpu}.pt')

def main():
    gpu_count = torch.cuda.device_count()
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '1234'
    torch.multiprocessing.spawn(save_tensors, nprocs=gpu_count, args=(gpu_count,))
st176209
Hello, I face a problem with DataLoader and custom DataSet. Here is my custom DataSet : class AudioDataset(Dataset): def __init__(self, dataset_path: str) -> None: super().__init__() assert isdir(dataset_path) all_magn = [ f for f in tqdm(listdir(dataset_path)) if isfile(join(dataset_path, f)) and f.startswith("magn") ] all_phase = [ f for f in tqdm(listdir(dataset_path)) if isfile(join(dataset_path, f)) and f.startswith("phase") ] assert len(all_magn) == len(all_phase) self.__all_magn = sorted(all_magn) self.__all_phase = sorted(all_phase) self.__dataset_path = dataset_path def __getitem__(self, index: int): magn = th.load(join( self.__dataset_path, self.__all_magn[index] )) phase = th.load(join( self.__dataset_path, self.__all_phase[index] )) return th.stack([magn, phase], dim=0) def __len__(self): return len(self.__all_magn) wich is loaded with : if __name__ == "__main__": audio_dataset = audio.AudioDataset("/path/to/tensor/dir") data_loader = DataLoader( audio_dataset, batch_size=8, shuffle=True, num_workers=10, drop_last=True ) The data is well loaded but in fact the DataLoader hangs when iterating. It seems that the “speed” of loading is not constant (my dataset is +60k tensors of size = (512, 512) ) : it varies from 20min to 1h to make an epoch. I precise that the “speed” of iteration is constant when I specify num_workers = 0. I’ve seen that this issue is quite common, how remediate to those hang ? Python = problem with both 3.6 or 3.8 Pytorch = 1.9.0 CUDA = 11.1 Nvidia driver = 460.84 Ubuntu 20.04 Best regards
st176210
You could profile the DataLoader (with num_workers>0) and check, if you are seeing spikes in the data loading time. If so, it would point towards a data loading bottleneck, which would cause the training loop to wait for the next available batch. This post 39 explains common bottlenecks and proposes some workarounds, in case you are indeed seeing this issue.
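A minimal sketch of that kind of check, timing how long the training loop waits for each batch (the threshold and batch size are arbitrary):

import time
from torch.utils.data import DataLoader

def profile_loader(dataset, num_workers):
    loader = DataLoader(dataset, batch_size=8, shuffle=True,
                        num_workers=num_workers, drop_last=True)
    end = time.perf_counter()
    for i, batch in enumerate(loader):
        wait = time.perf_counter() - end
        # large spikes here mean the workers could not keep up and the
        # training loop stalled waiting for data
        if wait > 0.5:
            print(f"batch {i}: waited {wait:.2f}s for data")
        end = time.perf_counter()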
st176211
How can one optimize only part of a DataParallel model, while preserving the data-parallel behaviour appropriately? For example:

parallel_model = DataParallel(model)
optimizer = torch.optim.SGD(parallel_model.last_conv_layer.parameters(), lr=0.01)

error: torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'last_conv_layer'

And, if one uses parallel_model.module.last_conv_layer.parameters() to obtain the trainable weights, would DataParallel still function as expected? Thanks in advance.
st176212
Solved by ptrblck in post #2 Yes, accessing the underlying layers via the .module attribute will work. You could alternatively create the optimizer before wrapping the model into nn.DataParalllel.
st176213
Yes, accessing the underlying layers via the .module attribute will work. You could alternatively create the optimizer before wrapping the model into nn.DataParallel.
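A sketch of both options (the Net module and its last_conv_layer attribute are placeholders mirroring the question):

import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Conv2d(3, 16, 3)
        self.last_conv_layer = nn.Conv2d(16, 16, 3)

    def forward(self, x):
        return self.last_conv_layer(self.features(x))

model = Net().cuda()

# Option 1: build the optimizer before wrapping
optimizer = torch.optim.SGD(model.last_conv_layer.parameters(), lr=0.01)
parallel_model = nn.DataParallel(model)

# Option 2: wrap first, then reach through .module
parallel_model = nn.DataParallel(model)
optimizer = torch.optim.SGD(parallel_model.module.last_conv_layer.parameters(), lr=0.01)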
st176214
hi I implemented tutorial codes in distributed session. I used node 0 that consisted of two rtx 6000 and node 1 that have a 2080 super. is it occurred by a mismatch between both nodes? there are error logs at below. How can I fixed this problem? node 0 (master) python -m torch.distributed.launch --nnode=2 --node_rank=0 --nproc_per_node=1 multi_gpu_pratice/getting_start.py --local_world_size=1 [281611] Initializing process group with: {‘MASTER_ADDR’: ‘163.247.44.175’, ‘MASTER_PORT’: ‘7899’, ‘RANK’: ‘0’, ‘WORLD_SIZE’: ‘2’} Precision-7920-Tower:281611:281611 [0] NCCL INFO Bootstrap : Using [0]lo:127.0.0.1<0> Precision-7920-Tower:281611:281611 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation Precision-7920-Tower:281611:281611 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] Precision-7920-Tower:281611:281611 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> Precision-7920-Tower:281611:281611 [0] NCCL INFO Using network Socket NCCL version 2.7.8+cuda11.0 node 1 python -m torch.distributed.launch --nnode=2 --node_rank=1 --nproc_per_node=1 multi_gpu_pratice/getting_start.py --local_world_size=1 [8658] world_size = 2, rank = 1, backend=nccl [8658] rank = 1, world_size = 2, n = 1, device_ids = [0] MS-7B23:8658:8658 [0] NCCL INFO Bootstrap : Using [0]lo:127.0.0.1<0> MS-7B23:8658:8658 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). MS-7B23:8658:8658 [0] NCCL INFO NET/IB : No device found. MS-7B23:8658:8658 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> MS-7B23:8658:8664 [0] NCCL INFO Setting affinity for GPU 0 to 3f MS-7B23:8658:8664 [0] NCCL INFO Call to connect returned Connection refused, retrying MS-7B23:8658:8664 [0] NCCL INFO Call to connect returned Connection refused, retrying
st176215
In your command line I don't see the --master_addr option which is required for multi-node training. Do you mind retrying your job with the following?

# Node 0
python -m torch.distributed.launch --nnode=2 --node_rank=0 --nproc_per_node=1 --master_addr="<hostname_of_rank_0>" multi_gpu_practice/getting_start.py --local_world_size=1

# Node 1
python -m torch.distributed.launch --nnode=2 --node_rank=1 --nproc_per_node=1 --master_addr="<hostname_of_rank_0>" multi_gpu_practice/getting_start.py --local_world_size=1
st176216
antae: python -m torch.distributed.launch --nnode=2 --node_rank=1 --nproc_per_node=1 multi_gpu_pratice/getting_start.py --local_world_size=1 Thanks for your reply, but --master_addr is set in the code directly. I set NCCL_SOCKET_IFNAME="^eno1,enp0s31f6", but it still only uses the lo socket.
st176217
Precision-7920-Tower:281611:281611 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] Precision-7920-Tower:281611:281611 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> Precision-7920-Tower:281611:281611 [0] NCCL INFO Using network Socket NCCL version 2.7.8+cuda11.0 MS-7B23:8658:8658 [0] NCCL INFO Bootstrap : Using [0]lo:127.0.0.1<0> MS-7B23:8658:8658 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so). MS-7B23:8658:8658 [0] NCCL INFO NET/IB : No device found. MS-7B23:8658:8658 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> MS-7B23:8658:8664 [0] NCCL INFO Setting affinity for GPU 0 to 3f MS-7B23:8658:8664 [0] NCCL INFO Call to connect returned Connection refused, retrying MS-7B23:8658:8664 [0] NCCL INFO Call to connect returned Connection refused, retrying It looks like NCCL is having a problem establishing a connection. Can you verify that your interfaces are correct. The Common environment variables section provides some information Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 48.
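The interface selection mentioned there is driven by environment variables; a hedged sketch (the interface name is a placeholder, pick the one that actually routes between the two nodes, not lo):

import os

# set these before init_process_group / NCCL initialization, on every node
os.environ["NCCL_SOCKET_IFNAME"] = "eth0"   # placeholder: your inter-node interface
os.environ["GLOO_SOCKET_IFNAME"] = "eth0"   # same for the gloo/c10d bootstrap
os.environ["NCCL_DEBUG"] = "INFO"           # verbose logs to verify which NIC NCCL picked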
st176218
parser.add_argument('--dist_url', default="env://", type=str) parser.add_argument('--rank', type=int) parser.add_argument('--gpu_to_work_on', type=int) params = parser.parse_args() def example(): from torch.nn.parallel import DistributedDataParallel as DDP init_dist(args) model = nn.Linear(10, 10).cuda(params.gpu_to_work_on) ddp_model = DDP(model, device_ids=[params.gpu_to_work_on]) loss_fn = nn.MSELoss() optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) outputs = ddp_model(torch.randn(20, 10).cuda(params.gpu_to_work_on)) labels = torch.randn(20, 10).cuda(params.gpu_to_work_on) loss_fn(outputs, labels).backward() optimizer.step() def init_dist(params): params.rank = int(os.environ["RANK"]) params.world_size = int(os.environ["WORLD_SIZE"]) dist.init_process_group( backend="nccl", init_method=params.dist_url, world_size=params.world_size, rank=params.rank, ) params.gpu_to_work_on = params.rank % torch.cuda.device_count() print('rank:', params.rank) print('gpu_to_work_on:', params.gpu_to_work_on) print('n_gpus:', torch.cuda.device_count()) torch.cuda.set_device(params.gpu_to_work_on) return if __name__ == '__main__': example() python -m torch.distributed.launch main.py Am I correct using DistributedDataParallel?
st176219
Hi oasjd7. It looks like you have done what’s required to use DDP. You have initialized the process group and created a DDP model. You are using the torch.distributed.launch to run your example. Are you having problems with your example code?
st176220
@gcramer23 Thanks for your reply. There was no error; I just wondered whether my code is working as I intended. I use a batch size of only 32 and I don't need to increase the batch size, I just want more speed. In this situation, can DistributedDataParallel help me speed up? (I have 4 GPUs with 12,000 MiB per GPU, and my code needs at least 40,000 MiB.)
st176221
I use a batch size of only 32 and I don't need to increase the batch size, I just want more speed. In this situation, can DistributedDataParallel help me speed up? (I have 4 GPUs with 12,000 MiB per GPU, and my code needs at least 40,000 MiB.) Yes, DDP can improve training speed, but it needs to be configured correctly. You can benchmark your configurations to see what works best for your use case. This paper provides helpful information on this topic https://research.fb.com/wp-content/uploads/2020/08/PyTorch-Distributed-Experiences-on-Accelerating-Data-Parallel-Training.pdf 3.
st176222
Hey y’all, I had a quick question about making a distributed pytorch application I had built more efficient. Basically, I am implementing a full-torch version of Ape-X (https://arxiv.org/pdf/1803.00933.pdf 2). In it, the authors implement a shared replay buffer in shared memory with a tensorflow key-value store (Appendix F). My issue occurs that, since I can’t really specify how much compute each actor uses (I’m assuming one device per each item in world_size), my CPUs just get destroyed with handling all these RPCs, and my hope was to build this system to scale well (independent of the number of actors). My question–are there ops such as tensorflow’s lookup 1 module for this stuff? Can I put tensors in shared memory? I understand the TCPStore only takes strings in its set() args. Is there a torch recommendation for handling the case where I have centralized data that needs to get read by a learner and added to by a bunch of actors generating data, without requests overload? Thanks for any time y’all.
st176223
Hi theoryofjake, Can I put tensors in shared memory? You can move tensors to shared memory torch.Tensor.share_memory_ — PyTorch 1.9.0 documentation 4. A possible solution is to use a python dictionary and move the values to shared memory. Will this work for your case?
st176224
Could you give an example? My thought was something similar to using torch.distributed's TCPStore, something like:

store = torch.distributed.TCPStore(..)
rand_tensor = torch.rand((64, 10))
store.set('rand_tensor1', rand_tensor)

but this errors because set() only takes str as the second arg.
st176225
example using tensor.share_memory_() import torch import torch.multiprocessing as mp def run(rank, python_dict): python_dict[rank] += torch.randn(2) def run_example(share_memory): print(f"share_memory={share_memory}") nproc=4 python_dict = {} for i in range(nproc): python_dict[i] = torch.zeros(2) if share_memory: python_dict[i].share_memory_() print(f"before={python_dict}") processes = [] for rank in range(nproc): p = mp.Process(target=run, args=(rank, python_dict,)) processes.append(p) for proc in processes: proc.start() proc.join() print(f"after={python_dict}\n") if __name__ == "__main__": run_example(share_memory=False) run_example(share_memory=True)
st176226
For some reason it is hard (and ugly) to start my own module using torch.distributed.launch. It is inside an application therefore I would need to concatenate an execution cmd line string to do it. Is there a way that I can call torch.distributed.launch using an API way? (i.e. I then expose my module as a simple function and feed my function as a callback to torch’s distributed module) Thanks!
st176227
As part of torch 1.9.0 we are introducing torch.distributed.run to replace torch.distributed.launch definition is here (pytorch/run.py at master · pytorch/pytorch · GitHub 16). This eventually calls into a function called elastic_launch (pytorch/api.py at master · pytorch/pytorch · GitHub 7) which seems to be what you are looking for. For example, you can import it and use it like elastic_launch(LaunchConfig, your_module_function)(your_module_args). Does this satisfy your use case?
st176228
Thanks Howard. Compiled and installed from source, and trying out the elastic_launch API. One question is, is there a definitive list of every thing needed in environments for this elastic_launch? when using the script, there are bunch of things such as RANK, master_addr to make everything ready. However the elastic_launch’s config only has a subset of these variables. Haven’t fully traced the code yet, just would like to double check first instead of trail and error finding them. right now, I am able to get things kicked off but apparently some configurations is missing. (pid=601762) "message": "EtcdException: Could not get the list of servers, maybe you provided the wrong host(s) to connect to?", (pid=601762) "extraInfo": { (pid=601762) "py_callstack": "Traceback (most recent call last):\n File \"/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/connection.py\", line 170, in _new_conn\n (self._dns_host, self.port), self.timeout, **extra_kw\n File \"/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/util/connection.py\", line 96, in create_connection\n and Traceback (most recent call last): File "train_ray_local.py", line 169, in <module> ray.get([client.train.remote(), client2.train.remote()]) File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 47, in wrapper return func(*args, **kwargs) File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/ray/worker.py", line 1481, in get raise value.as_instanceof_cause() ray.exceptions.RayTaskError(EtcdException): ray::Network.train() (pid=601762, ip=10.231.13.71) File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/util/connection.py", line 96, in create_connection raise err File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/util/connection.py", line 86, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: ray::Network.train() (pid=601762, ip=10.231.13.71) File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/connectionpool.py", line 706, in urlopen chunked=chunked, File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/connectionpool.py", line 394, in _make_request conn.request(method, url, **httplib_request_kw) File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/connection.py", line 234, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/home/centos/anaconda3/envs/dev/lib/python3.7/http/client.py", line 1277, in request self._send_request(method, url, body, headers, encode_chunked) File "/home/centos/anaconda3/envs/dev/lib/python3.7/http/client.py", line 1323, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/home/centos/anaconda3/envs/dev/lib/python3.7/http/client.py", line 1272, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/home/centos/anaconda3/envs/dev/lib/python3.7/http/client.py", line 1032, in _send_output self.send(msg) File "/home/centos/anaconda3/envs/dev/lib/python3.7/http/client.py", line 972, in send self.connect() File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/connection.py", line 200, in connect conn = self._new_conn() File "/home/centos/anaconda3/envs/dev/lib/python3.7/site-packages/urllib3/connection.py", line 182, in _new_conn self, "Failed to establish a new connection: %s" % e 
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f6a609edb50>: Failed to establish a new connection: [Errno 111] Connection refused
st176229
Hi, I should have provided more resources in my previous response, my apologies. Here is the reading about torch.distributed.run and the arguments it supports (Elastic Launch — PyTorch master documentation 2). It is a superset of the arguments of torch.distributed.launch but also includes fault tolerance provided by TorchElastic Torch Distributed Elastic — PyTorch master documentation 1. Regarding LaunchConfig and elastic_launch, they are not yet public APIs and are subject to change. By default it uses an etcd backend which is a third-party distributed key-value store which is why I believe you are hitting your errors. However, if you don't want third party dependencies, you can use an internally built backend for rendezvous. Here is an example for you:

test.py

from torch.distributed.launcher.api import LaunchConfig, elastic_launch
import os


def my_process(args):
    env_variables = [
        "RANK",
        "WORLD_SIZE",
        "MASTER_ADDR",
        "MASTER_PORT",
        "TORCHELASTIC_MAX_RESTARTS",
        # etc...
    ]
    for env_var in env_variables:
        print(f"{env_var}: {os.environ.get(env_var)}")
    print(f"here are my own args: {args}")


if __name__ == "__main__":
    config = LaunchConfig(min_nodes=1, max_nodes=1, nproc_per_node=2, rdzv_endpoint="localhost:0", rdzv_backend="c10d")
    outputs = elastic_launch(config, my_process)("my args")

Then just run python test.py

The output looks something like (along with some torchelastic logging and warnings):

RANK: 1
WORLD_SIZE: 2
MASTER_ADDR: <my machine>
MASTER_PORT: 51217
TORCHELASTIC_MAX_RESTARTS: 3
here are my own args: my args
RANK: 0
WORLD_SIZE: 2
MASTER_ADDR: <my machine>
MASTER_PORT: 51217
TORCHELASTIC_MAX_RESTARTS: 3
here are my own args: my args
st176230
Awesome example! Clear and simple. Thanks for it, Howard! One follow-up question: what is the best way to double check that the distributed training actually happened? I just want to make sure I am not being fooled by a run that only succeeded because of a lucky misconfiguration. I read this page (Distributed communication package - torch.distributed — PyTorch 1.8.1 documentation 7), but I am not sure what the definitive way to tell would be.
st176231
One way might be to print out the forward output (or the loss) on all ranks every N iterations and check that the values differ and reside on different GPUs. You can also check nvidia-smi to see whether the processes indeed keep multiple GPUs busy.
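For illustration, a minimal sketch of such a check inside the training loop of each DDP process (model, train_loader, loss_fn, optimizer and args.local_rank are placeholders for whatever your script already uses):

import torch.distributed as dist

for step, (images, labels) in enumerate(train_loader):
    images = images.cuda(args.local_rank, non_blocking=True)
    labels = labels.cuda(args.local_rank, non_blocking=True)
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        # Each rank sees a different shard of the data, so the printed losses should
        # differ and live on different devices, while nvidia-smi should show all GPUs busy.
        print(f"rank {dist.get_rank()} device {images.device} step {step} loss {loss.item():.4f}")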
st176232
I tried this example and it works great, even on multiple machines. But I am trying to understand how these two parameters work and what the mechanism is: rdzv_endpoint="localhost:0", rdzv_backend="c10d". Let's say I have two machines and pass these two parameters on both of them. How does that work? localhost:0 seems really atypical. Under the hood, how is etcd involved in this process? Thanks a lot!
st176233
Would you please point me to the exact place where the all-reduce happens? Maybe I can just print out a message there to confirm that the all-reduce happened?
st176234
For a distributed setting you should provide the hostname of one of your machines:

rdzv_endpoint="node0.example.com"
rdzv_backend="c10d"

The example that Howard gave was meant to be run on a single machine with two worker processes. localhost:0 simply means: use a random port on the local machine for the coordination of the nodes. Since you only have one node (that runs two workers), specifying localhost as an endpoint worked in that example. However, if you want to run your training on more than one machine (a.k.a. node), the endpoint should be a hostname/FQDN that is reachable by all machines.
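For example, a launch sketch that both machines could run (it mirrors the single-node example above; the hostname is purely illustrative and, as noted earlier, LaunchConfig/elastic_launch are still non-public APIs):

from torch.distributed.launcher.api import LaunchConfig, elastic_launch


def worker(args):
    # training entry point; the launcher sets RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT, ...
    ...


if __name__ == "__main__":
    config = LaunchConfig(
        min_nodes=2,
        max_nodes=2,
        nproc_per_node=1,
        rdzv_endpoint="node0.example.com",  # must be reachable from every node
        rdzv_backend="c10d",
    )
    elastic_launch(config, worker)("my args")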
st176235
Thanks Can! OK, it looks like it works now. To run on multiple machines, I assume we don't need to start an etcd instance first; just set rdzv_endpoint="IP_ADDRESS" and rdzv_backend="c10d", and it works. It looks like the c10d backend takes care of everything for us.

Looking at the output, I am not sure the parallel part actually happened:

(pid=15988, ip=10.231.21.63) {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "95509630884468593764425877023036505025", **"global_rank": 0**, "group_rank": 0, "worker_id": "16109", "role": "default_role", "hostname": "n231-021-063.novalocal", "state": "SUCCEEDED", "total_run_time": 75, "rdzv_backend": "c10d", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"sage_main_routine\", \"local_rank\": [0], \"role_rank\": [0], \"role_world_size\": [1]}", "agent_restarts": 0}}
(pid=952664) {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "176607332379713185119918774283739049986", **"global_rank": 0**, "group_rank": 0, "worker_id": "952994", "role": "default_role", "hostname": "n231-013-071.novalocal", "state": "SUCCEEDED", "total_run_time": 76, "rdzv_backend": "c10d", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"sage_main_routine\", \"local_rank\": [0], \"role_rank\": [0], \"role_world_size\": [1]}", "agent_restarts": 0}}

After it finishes, I get the output above. Both say they used group_rank 0. My setup for the run is like this, with each node having 0 or 1 as its rank respectively:

os.environ["MASTER_ADDR"] = "10.231.13.71"
os.environ["MASTER_PORT"] = "2345"
os.environ["WORLD_SIZE"] = "2"
os.environ["RANK"] = str(rank)

Maybe I am still missing something in the environment variable setup? Are these four enough?
st176236
Thanks Can!

You are welcome!

OK, it looks like it works now. To run on multiple machines, I assume we don't need to start an etcd instance first; just set rdzv_endpoint="IP_ADDRESS" and rdzv_backend="c10d", and it works. It looks like the c10d backend takes care of everything for us.

That is right. The new c10d rendezvous backend does not depend on any 3rd party software. You don't need to start (or even install) etcd.

From the output it looks like your machines run in silos: each has a world size of 1 and rank 0. Since you are using our new launcher API, you should not set any environment variables in your training script; our launcher sets all four variables you listed before executing it. I suggest checking out our "Distributed Elastic" docs for the v1.9 RC here 1, which also list all the environment variables set for you by the launcher API. Let me know if you still have trouble after reading the docs. Cheers.
st176237
I finished reading the doc. Now I am a bit confused: this "Distributed Elastic" page describes how to use a script to start the training, but in my case I was looking for an API version of this functionality. Based on the earlier response from Howard, it is ready: just apply the function elastic_launch() to my entry-point function. Are you suggesting that elastic_launch() is actually not ready for use? Based on what I have tried so far, the function-based way works pretty well at bringing up the processes on multiple machines (thank you guys!), but I still need a good example to learn how to set things up properly. My current configuration:

sage_main_routine():

    os.environ["MASTER_ADDR"] = "10.231.131.7"
    os.environ["MASTER_PORT"] = "2346"
    os.environ["WORLD_SIZE"] = "2"
    os.environ["RANK"] = str(rank)
    os.environ["GROUP_RANK"] = str(rank)

In the main script:

config = LaunchConfig(
    min_nodes=1,
    max_nodes=2,
    nproc_per_node=1,
    rdzv_endpoint="10.231.131.7",
    rdzv_backend="c10d",
)
outputs = elastic_launch(config, sage.sage_main_routine)(
    Path(FOLDER), args, self.rank
)

UPDATE: the two nodes running in silos was caused by me accidentally turning off the parallel module. Now I am getting errors related to TCP connection timeouts. Either way, I think bringing up the processes works great; I just need some help on how to set the correct set of variables.
st176238
As Howard mentioned, the elastic_launch function is not part of our public API (yet); therefore, we do not have docs for it. However, reading and understanding how the script works and what environment variables it sets should be helpful, since the script is a thin wrapper on top of elastic_launch. Your sage_main_routine should actually be "reversed", like this:

import os

from torch.distributed import TCPStore


def sage_main_routine(foo):
    # These environment variables are set by `elastic_launch`.
    master_addr = os.environ["MASTER_ADDR"]
    master_port = int(os.environ["MASTER_PORT"])
    world_size = int(os.environ["WORLD_SIZE"])
    rank = int(os.environ["RANK"])

    store = TCPStore(master_addr, master_port, world_size, is_master=rank == 0)

    # Rest of the script...

In the main script:

config = LaunchConfig(
    min_nodes=1,
    max_nodes=2,
    nproc_per_node=1,
    rdzv_endpoint="10.231.131.7",
    rdzv_backend="c10d",
)

foo = "my dummy value"

outputs = elastic_launch(config, sage.sage_main_routine)(
    foo,  # Your custom script arguments
)
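As a related sketch (my own assumption, not part of the answer above): if all the worker needs is the default process group, it can let init_process_group consume the same launcher-set environment variables through the env:// init method instead of building a TCPStore by hand:

import torch
import torch.distributed as dist


def sage_main_routine(foo):
    # MASTER_ADDR, MASTER_PORT, RANK and WORLD_SIZE are set by elastic_launch,
    # and init_method="env://" reads them directly.
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend=backend, init_method="env://")
    print(f"initialized rank {dist.get_rank()} / world size {dist.get_world_size()}")
    # ... build the model, wrap it in DistributedDataParallel, train ...
    dist.destroy_process_group()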
st176239
Gotcha! Thanks for the tips! After reading more into how things are implemented, I think I now know what is going on. We are integrating PyTorch into a system where we spawn multiple processes, and the training happens within one of the sub-processes on a single machine. The PyTorch package imports leak into multiple processes, including the parent, so every time "Using backend: pytorch" is shown there is another c10d/rendezvous setup/registration happening automatically. As a consequence the distributed modules get confused, which leads to the connection issues. What is the easy way to tell a process, when we spawn it, "OK, you are (or are not) the one to initialize the distributed modules"? I guess I could hack into PyTorch for now, but I am not sure about the side effects of doing so. If there is already a mechanism for this, that would be awesome. Now I guess I get the reason why this module was originally provided only as a script. Thank you so much everyone!
st176240
Spawning/forking without an exec is always tricky in general and can introduce subtle bugs (like the one you had in this case). We do not have a mechanism to detect double initializations, and it is unlikely that we can come up with a solution that works for everyone, considering the size of our user base. My only suggestion would be to rethink how you import and initialize your Python modules. Ideally only a single subprocess should import the relevant packages from PyTorch, while the rest perform other auxiliary tasks.

Now I guess I get the reason why this module was originally provided only as a script.

That is correct. The internals of how the launcher script works are still under development, so I strongly suggest using the script if possible. We do not guarantee any backwards compatibility for our API since it is not public yet.

Thank you so much everyone!

You are welcome! Good luck with your experiments and let us know if you need any other help!
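One way to express that constraint in application code (just a sketch of the idea, not a PyTorch feature; my_project.my_training_fn is a placeholder) is to make the PyTorch imports and the launcher call happen only inside the one child process that is designated as the trainer:

import multiprocessing as mp


def trainer_entry():
    # Only this child imports PyTorch and starts the elastic launcher.
    from torch.distributed.launcher.api import LaunchConfig, elastic_launch
    from my_project import my_training_fn  # placeholder for your entry point

    config = LaunchConfig(min_nodes=1, max_nodes=1, nproc_per_node=2,
                          rdzv_endpoint="localhost:0", rdzv_backend="c10d")
    elastic_launch(config, my_training_fn)("my args")


def auxiliary_entry():
    # Other children never import torch.distributed, so no extra rendezvous is created.
    pass


if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # a fresh interpreter avoids inheriting parent state via fork
    ctx.Process(target=trainer_entry).start()
    ctx.Process(target=auxiliary_entry).start()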
st176241
Thanks Can! I really appreciate all the information! My last question: I want to make sure the "parallel training" is indeed happening. Would you please share a few code pointers to where the processes share their parameters with each other? I'd like to print out the parameters (maybe before and after as well) to make sure I have things rolling correctly.
st176242
My last question: I want to make sure the "parallel training" is indeed happening. Would you please share a few code pointers to where the processes share their parameters with each other?

For clarification: is the architecture multiple nodes with multiple processes within each node, and one process doing the training? Also, are you using a DDP model?
st176243
I am using two machines, and each machine calls the elastic_launch() routine once. And yes, inside my model definition I wrap my model in DDP. The two machines run the exact same training routine.
st176244
My last question: I want to make sure the "parallel training" is indeed happening. Would you please share a few code pointers to where the processes share their parameters with each other?

This is covered in the DDP tutorial (Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.9.0+cu102 documentation 3): "DDP uses collective communications in the torch.distributed package to synchronize gradients and buffers. More specifically, DDP registers an autograd hook for each parameter given by model.parameters() and the hook will fire when the corresponding gradient is computed in the backward pass. Then DDP uses that signal to trigger gradient synchronization across processes. Please refer to DDP design note for more details."

You can check the gradients before and after .backward().
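A minimal way to observe this (a sketch; ddp_model, inputs, targets and loss_fn stand in for your own objects):

import torch.distributed as dist

loss = loss_fn(ddp_model(inputs), targets)
loss.backward()  # DDP all-reduces the gradients inside this call

# After backward, the gradient of the same parameter should be identical on every rank,
# even though each rank computed a different local loss.
param = next(ddp_model.parameters())
print(f"rank {dist.get_rank()} local loss {loss.item():.4f} "
      f"grad checksum {param.grad.sum().item():.6f}")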
st176245
import torch
import torchvision
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
from torch.distributions import Normal
import torchvision.transforms as transforms

torch.set_printoptions(linewidth=120)
torch.set_grad_enabled(True)
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

# Use standard FashionMNIST dataset
train_set = torchvision.datasets.FashionMNIST(
    root = './data/FashionMNIST',
    train = True,
    download = True,
    transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor()
    ])
)

test_set = torchvision.datasets.FashionMNIST(
    root = './data/FashionMNIST',
    train = False,
    download = True,
    transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor()
    ])
)


class MLPLayer(nn.Module):
    """ Hidden Layer of our BNN """

    def __init__(self, input_dim, output_dim, rho_prior, rho0=-6., lambda0=0.99):
        # initialize layers
        super().__init__()
        # set input and output dimensions
        self.input_dim = input_dim
        self.output_dim = output_dim

        # initialize mu, rho and theta parameters for layer's weights
        self.w_mu = nn.Parameter(torch.Tensor(input_dim, output_dim).uniform_(-0.6, 0.6))
        self.w_rho = nn.Parameter(torch.Tensor(input_dim, output_dim).uniform_(rho0, rho0))
        self.theta = nn.Parameter(logit(torch.Tensor(output_dim).uniform_(lambda0, lambda0)))

        # initialize mu, rho and theta parameters for layer's biases, theta = logit(phi)
        self.b_mu = nn.Parameter(torch.Tensor(output_dim).uniform_(-0.6, 0.6))
        self.b_rho = nn.Parameter(torch.Tensor(output_dim).uniform_(rho0, rho0))

        self.rho_prior = rho_prior
        # self.device = device

        # initialize weight samples (these will be calculated whenever the layer makes a prediction)
        self.gamma = None
        self.w = None
        self.b = None

        # initialize log pdf of prior and vb distributions
        self.kl = 0

    def forward(self, X, temp, phi_prior):
        """
        For one Monte Carlo sample
        :param X: [batch_size, input_dim]
        :return: output for one MC sample, size = [batch_size, output_dim]
        """
        # sample weights and biases
        sigma_w = torch.log(1 + torch.exp(self.w_rho))
        sigma_b = torch.log(1 + torch.exp(self.b_rho))
        sigma_prior = torch.log(1 + torch.exp(self.rho_prior))

        # self.register_buffer('u', torch.rand(self.theta.shape))
        u = self.u  # to(self.device)
        self.gamma = gumbel_softmax(self.theta, u, temp, hard=True)
        self.gamma_w = self.gamma.expand(self.input_dim, self.output_dim)
        self.gamma_b = self.gamma

        # epsilon_w = Normal(0, 1).sample(self.w_mu.shape)
        # epsilon_b = Normal(0, 1).sample(self.b_mu.shape)
        self.register_buffer('epsilon_w', Normal(0, 1).sample(self.w_mu.shape))
        self.register_buffer('epsilon_b', Normal(0, 1).sample(self.b_mu.shape))
        epsilon_w = self.epsilon_w  # to(self.device)
        epsilon_b = self.epsilon_b  # to(self.device)

        self.w = self.gamma_w * (self.w_mu + sigma_w * epsilon_w)
        self.b = self.gamma_b * (self.b_mu + sigma_b * epsilon_b)
        output = torch.mm(X, self.w) + self.b.expand(X.size()[0], self.output_dim)

        # record KL at sampled weight and bias
        phi = sigmoid(self.theta)
        w_phi = phi.expand(self.input_dim, self.output_dim)
        b_phi = phi
        kl_phi = phi * (torch.log(phi) - torch.log(phi_prior)) + \
            (1 - phi) * (torch.log(1 - phi) - torch.log(1 - phi_prior))
        kl_w = w_phi * (torch.log(sigma_prior) - torch.log(sigma_w) + 0.5 * (sigma_w ** 2 + self.w_mu ** 2) / sigma_prior ** 2 - 0.5)
        kl_b = b_phi * (torch.log(sigma_prior) - torch.log(sigma_b) + 0.5 * (sigma_b ** 2 + self.b_mu ** 2) / sigma_prior ** 2 - 0.5)
        self.kl = torch.sum(kl_w) + torch.sum(kl_b) + torch.sum(kl_phi)

        return output


class SFunc(nn.Module):
    def __init__(self, data_dim, hidden_dim1, hidden_dim2, target_dim, temp, phi_prior1, phi_prior2, builder, sigma_noise=1):
        # initialize the network using the MLP layer
        super().__init__()
        # self.rho_prior = torch.Tensor([np.log(np.exp(1.3) - 1)]) #.to(device)
        self.register_buffer('rho_prior', torch.Tensor([np.log(np.exp(1.3) - 1)]))
        # self.device = device
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=1, padding=2)
        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=2)
        self.l1 = MLPLayer(data_dim, hidden_dim1, self.rho_prior)  # , self.device)
        self.l2 = MLPLayer(hidden_dim1, hidden_dim2, self.rho_prior)  # , self.device)
        self.l4 = OutLayer(hidden_dim2, target_dim, self.rho_prior)  # , self.device)
        self.target_dim = target_dim
        # self.log_sigma_noise = torch.log(torch.Tensor([sigma_noise])) #.to(device)
        self.register_buffer('log_sigma_noise', torch.log(torch.Tensor([sigma_noise])))
        self.temp = temp
        self.phi_prior1 = phi_prior1
        self.phi_prior2 = phi_prior2
        self.train_len = torch.tensor(len(builder))

    def forward(self, X, y, temp, phi_prior1, phi_prior2):
        """
        output of the BNN for one Monte Carlo sample
        :param X: [batch_size, data_dim]
        :return: [batch_size, target_dim]
        """
        print("\tIn Model: input size", X.size())
        output = F.relu(F.max_pool2d(self.conv1(X), 2))
        output = F.relu(F.max_pool2d(self.conv2(output), 2))
        output = F.relu(self.l1(output.reshape(-1, 64*8*8), temp, phi_prior1))
        output = F.relu(self.l2(output, temp, phi_prior2))
        output = self.l4(output)
        # loss function here
        return output.squeeze()


data_size = 60000
data_dim = 64*8*8
hidden_dim1 = 64*1*1
hidden_dim2 = 64*1*1
target_dim = 10
L = 2
temp = torch.tensor(0.5)
phi_prior1 = torch.tensor(0.0001)
phi_prior2 = torch.tensor(0.0001)
lr = .001
batch_size = 1024
epochs = 1

torch.set_default_tensor_type('torch.cuda.FloatTensor')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=0)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size, shuffle=False, num_workers=0)

net = SFunc(data_dim, hidden_dim1, hidden_dim2, target_dim, temp, phi_prior1, phi_prior2, train_loader)
if torch.cuda.device_count() > 1:
    print("There are", torch.cuda.device_count(), "GPUs!")
    net = torch.nn.DataParallel(net, device_ids=[0, 1])
net.to(device)

optimizer = torch.optim.Adam(net.parameters(), lr=lr)

for epoch in range(epochs):
    for batch in train_loader:
        images, labels = batch[0].to(device), batch[1].to(device)
        print("Outside: input size", images.size())
        preds = net(images, labels, temp, phi_prior1, phi_prior2)
        optimizer.zero_grad()
        # loss.backward()
        optimizer.step()
    print("\n")

total = 0
correct = 0
with torch.no_grad():
    for batch in test_loader:
        images, labels = batch[0].to(device), batch[1].to(device)
        labels_list.append(labels)
        print("Outside: input size", images.size())
        outputs = net(images, labels, temp, phi_prior1, phi_prior2)
        preds2 = torch.max(outputs, 1)[1]
        predictions_list.append(preds2)
        correct += (preds2 == labels).sum()
        total += labels.size(0)
test_accuracy = correct/total

In this code I am getting an error.
Traceback (most recent call last):
  File "SSIG_Fashion-MNIST-HPCC_New.py", line 488, in <module>
    preds = net(images, labels,temp.to(device),phi_prior1.to(device),phi_prior2.to(device))
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 157, in forward
    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 174, in scatter
    return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 44, in scatter_kwargs
    inputs = scatter(inputs, target_gpus, dim) if inputs else []
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 36, in scatter
    res = scatter_map(inputs)
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 23, in scatter_map
    return list(zip(*map(scatter_map, obj)))
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 19, in scatter_map
    return Scatter.apply(target_gpus, None, dim, obj)
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/parallel/_functions.py", line 93, in forward
    outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
  File "/mnt/home/jantresa/anaconda3/envs/test1/lib/python3.8/site-packages/torch/nn/parallel/comm.py", line 189, in scatter
    return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: chunk expects at least a 1-dimensional tensor

I am not able to understand why, when I pass simple tensor arguments to the wrapped model, I get the error mentioned here.
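A possible lead (an assumption based only on the traceback, not a confirmed diagnosis of the full script above): nn.DataParallel scatters every positional tensor argument along dim 0, and 0-dimensional tensors such as temp, phi_prior1 and phi_prior2 cannot be chunked, which produces exactly this RuntimeError. A toy sketch of that failure mode and one possible workaround (passing plain Python scalars, which are replicated instead of scattered, and turning them back into tensors inside forward) could look like this; it assumes two visible GPUs:

import torch
import torch.nn as nn


class Toy(nn.Module):
    def forward(self, x, temp):
        # Convert the replicated Python scalar into a tensor on this replica's device.
        temp = torch.as_tensor(temp, device=x.device)
        return x * temp


net = nn.DataParallel(Toy().cuda(), device_ids=[0, 1])
x = torch.randn(8, 3, device="cuda:0")

# net(x, torch.tensor(0.5))  # 0-dim tensor -> "chunk expects at least a 1-dimensional tensor"
out = net(x, 0.5)            # a plain float is replicated to every GPU, so this works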
st176246
Hi, in our project we train a ResNet50 model on multiple GPUs with PyTorch and DistributedDataParallel, and I encountered a problem. Here is the GitHub link for our project: github.com aime-team/pytorch-benchmarks 7 (a benchmark framework for PyTorch). Looking at the comparison of the validation accuracy progress after each epoch between a single GPU and multiple GPUs, it looks like the GPUs don't share their training results with each other and it's actually just one GPU training the model. Here is a comparison between a single Nvidia RTX 3090 and 4 Nvidia RTX 3090s using the same training parameters, trained on the ImageNet dataset. If the epoch axis of the 4x3090 graph is divided by the number of GPUs (green curve), the graph has a similar shape to the one of the single GPU (blue curve). So basically training with 4 GPUS needs 4 epochs to get the same results like a single GPU achieves in only 1 epoch. Since the dataset is distributed over all GPUs, each GPU only uses 1/4 of the whole dataset, which explains the worse training result for each GPU after each epoch. Playing around with training parameters such as different batch sizes and learning rates gives me the same result. I have another comparison with different training parameters following in the next post because of the limit on embedded media for new users. No matter what batch size or learning rate is used, training with a single GPU is always much more efficient than training with multiple GPUs using DistributedDataParallel. I also ran different tests with smaller datasets and different training parameters with the same effect. Using DataParallel instead of DistributedDataParallel works as expected: the training on multiple GPUs with a batch size divided by the number of GPUs gives similar results as the training on a single GPU with the full batch size. But since DataParallel is much slower than DistributedDataParallel, I would be very grateful if someone could give me a hint about what I am doing wrong. Thank you in advance.
st176247
Very interesting project! So basically training with 4 GPUS needs 4 epochs to get the same results like a single GPU achieves in only 1 epoch. This is not true if you consider the sync among 4 GPUs per epoch. It should be equivalent to running 4 epochs on a single GPU. Can you confirm if there is any communication between different processes (by printing the gradient values of different ranks after backward)? Gradients of different ranks should be the same after backward. Additionally, your repo has an arg average_gradients. If you turn on this as a duplicate gradient averaging step, will it achieve the same accuracy as a single GPU?
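For reference, a manual gradient-averaging step typically looks roughly like the sketch below; I have not checked how the repo's average_gradients flag is actually implemented, so treat this only as an illustration of the operation that DDP already performs for you during backward:

import torch.distributed as dist


def average_gradients(model):
    world_size = float(dist.get_world_size())
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size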
st176248
Thank you for your replies!

wayi: Can you confirm if there is any communication between different processes (by printing the gradient values of different ranks after backward)? Gradients of different ranks should be the same after backward.

I checked the gradient values of different ranks after the backward propagation and can confirm they are equal across all ranks, so the communication/sync between the GPUs seems to work. But I still don't understand why the progression of the validation accuracy is still that bad in our case. I wasn't able to find a training setting where multiple GPUs using DistributedDataParallel give a benefit over a single GPU. The epochs in a training with 4 GPUs are 4x shorter (because each GPU works with only 1/4 of the dataset) but we need 4x the amount of epochs to get the same result.

wayi: Additionally, your repo has an arg average_gradients. If you turn on this as a duplicate gradient averaging step, will it achieve the same accuracy as a single GPU?

I added the average_gradients option as an attempt to solve this issue. But since the gradients of all ranks are already synced, it has no effect and the gradients remain unchanged.

TinfoilHat0: What about using DataParallel?

With DataParallel this issue doesn't occur. That means training with multiple GPUs using a global batch size divided by the number of GPUs (to have the same local batch size) gives similar results as a single GPU with the full batch size. But since DataParallel is significantly slower than DistributedDataParallel and we are trying to make multi-GPU training as performant as possible, it would be awesome to solve this with DistributedDataParallel.
st176249
Have you tried DDP with both 1) and 2)?
1. the same local batch size, which is equal to the global batch size divided by the number of GPUs,
2. the original learning rate multiplied by the number of GPUs.
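In code that amounts to something like the following sketch (base_lr and global_batch_size being whatever worked for the single-GPU run, and ddp_model the wrapped model):

import torch
import torch.distributed as dist

world_size = dist.get_world_size()
per_gpu_batch_size = global_batch_size // world_size   # 1) same local batch size per GPU
lr = base_lr * world_size                              # 2) linearly scaled learning rate
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=lr, momentum=0.9)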
st176250
I'm having the exact same issue, only monitoring training accuracy/loss. Also, training is slower than DataParallel and even than a single GPU. Wondering if/how you solved this. On a related note, I would really appreciate a pointer to a detailed discussion/documentation of DistributedDataParallel… so far, I'm learning by making mistakes since the tutorials don't seem to cover it comprehensively (or I haven't found them yet).
st176251
The epochs in a training with 4 GPUs are 4x shorter (because each GPU works with only 1/4 of the dataset) but we need 4x the amount of epochs to get the same result.

With DataParallel this issue doesn't occur.

Putting these two together, it looks like the loss function might play a role here. With DDP, the gradient synchronization only occurs during the backward pass after the loss computation, which means that each process/GPU independently computes the loss using its local input split. In contrast, DataParallel does not have this problem, as the forward output is first gathered and then the loss is computed over all input data in that iteration. Will this make a difference in your application? Another thing is that when switching from single-GPU to DDP-based multi-GPU training, you might need to tune configs like the learning rate to get the best result. See the discussion in this post: Should we split batch_size according to ngpu_per_node when DistributedDataparallel 6
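If it is useful to log a loss over the full global batch (closer to what DataParallel reports), the local losses can be averaged explicitly for logging purposes; a sketch, with criterion, ddp_model and the local batch as placeholders:

import torch
import torch.distributed as dist

loss = criterion(ddp_model(local_inputs), local_targets)
loss.backward()  # the usual DDP gradient synchronization still happens here

with torch.no_grad():
    global_loss = loss.detach().clone()
    dist.all_reduce(global_loss, op=dist.ReduceOp.SUM)
    global_loss /= dist.get_world_size()
if dist.get_rank() == 0:
    print(f"global mean loss: {global_loss.item():.4f}")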
st176252
Over the past weeks I ran a few more tests, but I still wasn't able to solve this issue. Here are some training results with different batch sizes and learning rates: I also tried the ratios of learning rate and batch size between single-GPU and multi-GPU training suggested by @wayi. The single-GPU curve is still always ahead of the multi-GPU curve. Am I still missing something, or is DistributedDataParallel just not working properly? Can someone please confirm that it is actually possible to achieve a similar progression of the validation accuracy in multi-GPU training using DistributedDataParallel compared to a single-GPU training?
st176253
I tried to train MNIST using torch.distributed.launch with the NCCL backend. The launch command:

export NCCL_DEBUG=INFO
export NCCL_IB_DISABLE=true # use or not does not change the results
echo "NCCL_IB_DISABLE=$NCCL_IB_DISABLE"
export NCCL_SOCKET_IFNAME=eno1,eth0 # use or not does not change the results
python3 -m torch.distributed.launch --nproc_per_node 2 \
    --nnodes 1 \
    --node_rank 0 \
    --master_addr="0.0.0.0" \
    --master_port=2333 \
    main.py \
    --epochs 3 \
    --lr 1e-3 \
    --batch_size 150

The gloo backend works just fine, but NCCL gets stuck. I have tried suggestions from the forum, but none of them worked. Debug info:

sh start-dist-train.sh
NCCL_IB_DISABLE=true
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
nccl
nccl
torch-research-2gpu-0:48249:48249 [0] NCCL INFO Bootstrap : Using [0]eth0:10.244.26.37<0>
torch-research-2gpu-0:48249:48249 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
torch-research-2gpu-0:48249:48249 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
torch-research-2gpu-0:48249:48249 [0] NCCL INFO NET/Socket : Using [0]eth0:10.244.26.37<0>
torch-research-2gpu-0:48249:48249 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.2
torch-research-2gpu-0:48250:48250 [1] NCCL INFO Bootstrap : Using [0]eth0:10.244.26.37<0>
torch-research-2gpu-0:48250:48250 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
torch-research-2gpu-0:48250:48250 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
torch-research-2gpu-0:48250:48250 [1] NCCL INFO NET/Socket : Using [0]eth0:10.244.26.37<0>
torch-research-2gpu-0:48250:48250 [1] NCCL INFO Using network Socket
st176254
Hey Chenchao, Couple questions: Which PyTorch version are you using? What is your OS/GPU setup? Is it possible to share your script? Is this behavior reproducible at every run?
st176255
Hey Can,
PyTorch version: 1.8.1-cu102
The instance is a Kubeflow notebook server; the container image is ubuntu:20.04
The behavior is reproducible

I fixed the issue by setting the master IP to localhost (export NCCL_SOCKET_IFNAME=lo). I haven't tried multi-node training with several GPUs per node yet.
st176256
Cool, glad that you could fix the problem. Let us know if you experience any issues with a multi-node setup.
st176257
Hello guys, thanks to DDP I can split the batch data across different GPUs on different nodes, and I can also split the model across different GPUs within one node. But now I need to split the model across GPUs on different nodes, say two nodes with one GPU per node. Could someone help me with this? I think the main difficulty is that the "local_rank" is 0 on each node, so how do I send different parts of the model to GPUs on different nodes? Thanks a lot!
st176258
Unfortunately cross-host model sharding is not supported yet, but we have plans to introduce it in a future version of PyTorch.
st176259
case1:

input = ...
model1 = ...
model2 = ...
s1 = torch.cuda.Stream()
s2 = torch.cuda.Stream()
with torch.cuda.stream(s1):
    output1 = model1(input)
    optimizer1.zero_grad()
    loss1(output1, label).backward()
    optimizer1.step()
with torch.cuda.stream(s2):
    output2 = model2(input)
    optimizer2.zero_grad()
    loss2(output2, label).backward()
    optimizer2.step()

case2:

input = ...
model1 = ...
model2 = ...
s1 = torch.cuda.Stream()
s2 = torch.cuda.Stream()
with torch.cuda.stream(s1):
    output1 = model1(input)
    optimizer1.zero_grad()
    loss1(output1, label).backward()
    optimizer1.step()
output2 = model2(input)
optimizer2.zero_grad()
loss2(output2, label).backward()
optimizer2.step()

I expected the first case to take about half as long as the second, but I found that both cases took about the same amount of time. I don't know why. How do I implement parallel execution of multiple models in multiple CUDA streams?
st176260
Depending on the model and thus the workload, the CPU might not be able to run ahead and schedule the kernel launches fast enough. You could profile it using e.g. Nsight Systems and check if the kernels are overlapping or if they are so short that they are executed "sequentially" on these two devices. As a quick check you could replace the models with huge matrix multiplications and profile these.
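A possible version of that quick check (a sketch; the matrix sizes are arbitrary and may need adjusting to your GPU memory):

import torch

s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
a = torch.randn(8192, 8192, device="cuda")
b = torch.randn(8192, 8192, device="cuda")

torch.cuda.synchronize()
with torch.cuda.stream(s1):
    for _ in range(20):
        c1 = a @ b
with torch.cuda.stream(s2):
    for _ in range(20):
        c2 = a @ b
torch.cuda.synchronize()
# Run this under Nsight Systems (e.g. "nsys profile python script.py") and check in the
# timeline whether the matmul kernels on the two streams overlap. If these large kernels
# overlap but your model's kernels do not, the model's kernels are likely too small or
# launched too slowly by the CPU to benefit from multiple streams.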
st176261
Hi @ptrblck and @ronda, I have been trying to do something similar with my model. I have created three different copies of the same model and I would like to run them concurrently. Right now, I am running these models in a Jupyter notebook. The structure of the code looks like this:

model0 = GRUCell(output_dim, hidden_dim, batch_size-1, output_dim, num_layers).double().cuda()
model1 = GRUCell(output_dim, hidden_dim, batch_size-1, output_dim, num_layers).double().cuda()
model2 = GRUCell(output_dim, hidden_dim, batch_size-1, output_dim, num_layers).double().cuda()

h0 = model0.init_hidden()
h1 = model1.init_hidden()
h2 = model2.init_hidden()

optimizer0 = torch.optim.Adam(model0.parameters(), lr=learning_rate, weight_decay=0.00000)
optimizer1 = torch.optim.Adam(model1.parameters(), lr=learning_rate, weight_decay=0.00000)
optimizer2 = torch.optim.Adam(model2.parameters(), lr=learning_rate, weight_decay=0.00000)

s1 = torch.cuda.Stream()
s2 = torch.cuda.Stream()
s3 = torch.cuda.Stream()

loss_fn0 = torch.nn.MSELoss()
loss_fn1 = torch.nn.MSELoss()
loss_fn2 = torch.nn.MSELoss()

# Intermediate code not relevant to the thread hence skipped

for epoch in epochs:
    for k in range(sequence_len):
        # Creating multiple copies of the same data
        x_batch_train0, y_batch_train0, l2, l1 = batch_creator_4((np.array(idxs[0])-k).tolist(), total_len, sequence_len, predict_len, batch_size-1, shift = 2)
        x_batch_train1, y_batch_train1, _, _ = batch_creator_4((np.array(idxs[0])-k).tolist(), total_len, sequence_len, predict_len, batch_size-1, shift = 0)
        x_batch_train2, y_batch_train2, _, _ = batch_creator_4((np.array(idxs[0])-k).tolist(), total_len, sequence_len, predict_len, batch_size-1, shift = 1)

        tic = time.time()
        with torch.cuda.stream(s1):
            for i in range(l2):
                for t in range(sequence_len):
                    x_batch_train_sub0 = torch.reshape(x_batch_train0[i,:, t,:], (batch_size-1, 3)).to(device, non_blocking=True)
                    output0, h0, _ = model0((x_batch_train_sub0, h0))
                    h0 = h0.detach()
                    y_batch_train_sub0 = torch.reshape(y_batch_train0[i, :,0], (batch_size-1, 1)).to(device, non_blocking=True)
                    loss_tot0 = loss_fn0(output0, y_batch_train_sub0)
                    optimizer0.zero_grad()
                    loss_tot0.backward()
                    optimizer0.step()

        with torch.cuda.stream(s2):
            for i1 in range(l2):
                for t1 in range(sequence_len):
                    x_batch_train_sub1 = torch.reshape(x_batch_train1[i1,:, t1,:], (batch_size-1, 3)).to(device, non_blocking=True)
                    output1, h1, _ = model1((x_batch_train_sub1, h1))
                    h1 = h1.detach()
                    y_batch_train_sub1 = torch.reshape(y_batch_train1[i1, :,0], (batch_size-1, 1)).to(device, non_blocking=True)
                    loss_tot1 = loss_fn1(output1, y_batch_train_sub1)
                    optimizer1.zero_grad()
                    loss_tot1.backward()
                    optimizer1.step()

        with torch.cuda.stream(s3):
            for i in range(l2):
                for t in range(sequence_len):
                    x_batch_train_sub2 = torch.reshape(x_batch_train2[i,:, t,:], (batch_size-1, 3)).to(device, non_blocking=True)
                    output2, h2, _ = model2((x_batch_train_sub2, h2))
                    h2 = h2.detach()
                    y_batch_train_sub2 = torch.reshape(y_batch_train2[i, :,0], (batch_size-1, 1)).to(device, non_blocking=True)
                    loss_tot2 = loss_fn2(output2, y_batch_train_sub2)
                    # loss_arr2.append((loss_tot2**2).cpu().detach().numpy())
                    optimizer2.zero_grad()
                    loss_tot2.backward()
                    optimizer2.step()

        torch.cuda.synchronize()
        toc = time.time()
        print(toc-tic)

For hidden_dim (feature size) = 1000, the time taken by one iteration inside the outermost loop (toc-tic) = 0.76s
For hidden_dim (feature size) = 5000, the time taken by one iteration inside the outermost loop (toc-tic) ~ 9s
For hidden_dim (feature size) = 6000, the time taken by one iteration inside the outermost loop (toc-tic) = 13s

Doing the same calculation with serial code as shown below:

# Same model, optimizer and loss creation code as above
# Intermediate code irrelevant to this thread, hence removed.

for epoch in epochs:
    for k in range(sequence_len):
        # pdb.set_trace()
        x_batch_train0, y_batch_train0, l2, l1 = batch_creator_4((np.array(idxs[0])-k).tolist(), total_len, sequence_len, predict_len, batch_size-1, shift = 2)
        x_batch_train1, y_batch_train1, _, _ = batch_creator_4((np.array(idxs[0])-k).tolist(), total_len, sequence_len, predict_len, batch_size-1, shift = 0)
        x_batch_train2, y_batch_train2, _, _ = batch_creator_4((np.array(idxs[0])-k).tolist(), total_len, sequence_len, predict_len, batch_size-1, shift = 1)

        tic = time.time()
        for i in range(l2):
            for t in range(sequence_len):
                x_batch_train_sub0 = torch.reshape(x_batch_train0[i,:, t,:], (batch_size-1, 3)).to(device, non_blocking=True)
                x_batch_train_sub1 = torch.reshape(x_batch_train1[i,:, t,:], (batch_size-1, 3)).to(device, non_blocking=True)
                x_batch_train_sub2 = torch.reshape(x_batch_train2[i,:, t,:], (batch_size-1, 3)).to(device, non_blocking=True)
                output0, h0, _ = model0((x_batch_train_sub0, h0))
                output1, h1, _ = model1((x_batch_train_sub1, h1))
                output2, h2, _ = model2((x_batch_train_sub2, h2))
                h0 = h0.detach()
                h1 = h1.detach()
                h2 = h2.detach()
                y_batch_train_sub0 = torch.reshape(y_batch_train0[i, :,0], (batch_size-1, 1)).to(device, non_blocking=True)
                y_batch_train_sub1 = torch.reshape(y_batch_train1[i, :,0], (batch_size-1, 1)).to(device, non_blocking=True)
                y_batch_train_sub2 = torch.reshape(y_batch_train2[i, :,0], (batch_size-1, 1)).to(device, non_blocking=True)
                loss_tot0 = loss_fn(output0, y_batch_train_sub0)
                loss_tot1 = loss_fn(output1, y_batch_train_sub1)
                loss_tot2 = loss_fn(output2, y_batch_train_sub2)
                optimizer0.zero_grad()
                optimizer1.zero_grad()
                optimizer2.zero_grad()
                loss_tot0.backward()
                loss_tot1.backward()
                loss_tot2.backward()
                optimizer0.step()
                optimizer1.step()
                optimizer2.step()
        toc = time.time()
        print(toc-tic)

For hidden_dim = 1000, I get an average execution time of toc-tic = 0.76s
For hidden_dim = 5000, I get an average execution time of toc-tic = 13s
For hidden_dim = 6000, I get an average execution time of toc-tic = 19s

This means that the execution is not happening completely in parallel and that the benefits (if any) become visible only for a large network. I have also looked at several threads (1, 2, 3 1) on this forum and these issues (a, b) on GitHub. I understand there may be plenty of redundant commands that I have used, but I wanted to create an example that is simple to explain. I don't have any experience with PyTorch's CUDA interface. Any suggestions are welcome.
st176262
[screenshot of a UserWarning about lr_scheduler.step() being called before optimizer.step()]

When using DistributedDataParallel:

scaler = GradScaler()
for epoch in range(args.epochs):
    model.train()
    if num_distrib() > 1:
        train_loader.sampler.set_epoch(epoch)
    for i, (input, target) in enumerate(train_loader):
        with autocast():  # mixed precision
            output = model(input)
            loss = loss_fn(output, target)  # note - loss also in fp16
        model.zero_grad()
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        reduced_loss = reduce_tensor(loss, args.gpus)
        losses.update(reduced_loss.item(), input.size(0))
        scheduler.step()

scheduler.step() comes after scaler.step(optimizer), so why does the UserWarning appear, and what can I do about it?
st176263
The warning is raised because the GradScaler might use an initial scaling factor that is too large for the first batches, and will thus reduce it as well as skip the optimizer.step(). Due to this, scheduler.step() would be executed before the first optimizer.step(), which raises the warning. You could ignore or disable it, or, on the other hand, check whether the scaler decreased the scale factor and skip the scheduler.step() in that case as well.
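A sketch of that last option, based on the loop from the question (model, loss_fn, optimizer, scheduler and train_loader as in your script):

import torch

scaler = torch.cuda.amp.GradScaler()
for input, target in train_loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(input), target)
    scaler.scale(loss).backward()
    scale_before = scaler.get_scale()
    scaler.step(optimizer)   # internally skipped if inf/NaN gradients were found
    scaler.update()
    # update() lowers the scale exactly when the optimizer step was skipped,
    # so only advance the scheduler when the scale did not decrease.
    if scaler.get_scale() >= scale_before:
        scheduler.step()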
st176264
When I train a model using DDP on 4 GPUs and evaluate it on one GPU (the one with args.local_rank == 0), I want to broadcast the top1 result to the other GPUs, but I get a deadlock. The GPUs with local_rank = 1, 2, 3 just move on to the next command without blocking to receive the broadcast result. The code is shown below; it is executed after finishing training for one epoch.

if args.local_rank == 0:
    top1, top5 = test(net, testloader, criterion, False)
    torch.distributed.broadcast(torch.tensor(top1).cuda(args.local_rank), src=0, async_op=False)
    print("local rank:{}, top1:{}".format(args.local_rank, top1))

The result is shown below; the process hung after printing the following information:

[screenshot of the output omitted]

Has anyone run into the same problem?
st176265
Can you try it this way?

if args.local_rank == 0:
    top1, top5 = test(net, testloader, criterion, False)
    top1 = torch.tensor(top1).cuda(args.local_rank)
else:
    top1 = torch.tensor(0.).cuda(args.local_rank)
torch.distributed.broadcast(top1, src=0, async_op=False)
print("local rank:{}, top1:{}".format(args.local_rank, top1.item()))
st176266
cindybrain: if args.local_rank == 0: top1, top5 = test(net, testloader, criterion, False) torch.distributed.broadcast(torch.tensor(top1).cuda(args.local_rank),src=0, async_op=False) print("local rank:{}, top1:{}".format(args.local_rank, top1)) Thanks for your reply! But it still doesn’t work
st176267
Hey @cindybrain, that’s weird. Could you please share a self-contained repro? Thanks!