st177068
I don’t quite understand the “each element of the list (which is a tensor) has to be shared across multiple GPUs (in the DataParallel case)”. nn.DataParallel splits the input tensor along dim0 and sends each chunk to a GPU. The elements are not duplicated, which is what the “sharing” amounts to. Since the list input is apparently not working, you could stack the list into a single tensor and make sure that the split dimension is dim0.
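For example, a minimal sketch of that suggestion (assuming the list elements are same-shaped tensors; the sizes below are made up):

```python
import torch
import torch.nn as nn

tensor_list = [torch.randn(16) for _ in range(8)]   # the list input that DataParallel rejects

batch = torch.stack(tensor_list, dim=0)             # shape [8, 16]; dim0 is what DP splits
model = nn.DataParallel(nn.Linear(16, 4)).cuda()
out = model(batch.cuda())                           # each GPU receives a chunk of dim0
```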
st177069
Hello, I use the DDP module to train ImageNet. To collect training metrics from different GPUs, I use distributed.all_reduce. Here is the relevant code:

    local_rank = args.local_rank
    torch.cuda.set_device(local_rank)
    device = torch.device("cuda", local_rank)

    for epoch in range(args.num_epoch + args.warmup_epoch):
        start = time.time()
        train_loss, train_acc = utils.train_one_epoch(net, train_loader, criterion, optimizer, scheduler)
        val_loss, val_acc = utils.test_one_epoch(net, val_loader, criterion)
        # train_loss, train_acc, val_loss, val_acc are floating-point numbers
        reduce_tensor = torch.tensor([train_loss, train_acc, val_loss, val_acc]).to(device)
        torch.distributed.all_reduce(reduce_tensor)
        reduce_tensor /= args.num_gpus  # args.num_gpus = 8
        time_used = (time.time() - start) / 60.
        if local_rank == 0:
            print('Epoch %d train loss %.3f acc: %.3f%%; val loss: %.3f acc %.3f%%; use %.3f mins.' %
                  (epoch, reduce_tensor[0], reduce_tensor[1], reduce_tensor[2], reduce_tensor[3], time_used))

I only get wrong results in the last epoch. Here are some logs.

log1:

    Epoch 97 train loss 0.892 acc: 77.805%; val loss: 0.930 acc 77.010%; use 8.296 mins.
    Epoch 98 train loss 0.887 acc: 77.922%; val loss: 0.931 acc 77.024%; use 8.305 mins.
    Epoch 99 train loss 0.422 acc: 38.989%; val loss: 0.459 acc 38.506%; use 8.300 mins.

All metrics are 4/8 of the expected values. It seems that the results from 4 GPUs are 0.

log2:

    Epoch 96 train loss 0.973 acc: 75.933%; val loss: 0.967 acc 76.188%; use 9.449 mins.
    Epoch 97 train loss 0.969 acc: 76.003%; val loss: 0.967 acc 76.148%; use 9.459 mins.
    Epoch 98 train loss 0.969 acc: 76.029%; val loss: 0.967 acc 76.228%; use 9.445 mins.
    Epoch 99 train loss 1.333 acc: 104.523%; val loss: 1.326 acc 104.876%; use 9.452 mins.

All metrics are 11/8 of the expected values: 1.333 / (11/8) = 0.969. It seems that the results from 3 GPUs are counted twice in the all_reduce. The strange results only happen in the final epoch. What could be the possible reasons? Thanks!
st177070
Your program seems to be correct. Some questions: what backend are you using? And could you please add these tests: (1) change the device to "cpu" and see whether the error is the same; (2) print rank, train_loss, train_acc, val_loss, val_acc in each of your processes, before the all_reduce. Theoretically this problem should not happen: the default gloo backend supports GPU all_reduce and broadcast, and you cannot repeat or leave out a process, since all_reduce is blocking.
st177071
Thanks for responding! I use NCCL as the backend: torch.distributed.init_process_group(backend="nccl"). If I change the device to "cpu", there is an error: "Tensors must be CUDA and dense". I will try to print them before all_reduce and see what happens.
st177072
Hey @KaiHoo, can you print the reduce_tensor before you pass it to all_reduce, so that we can narrow down whether it is the all_reduce or the DDP training/testing that is misbehaving?
st177073
@iffiX @mrshenli Hello, sorry for the late reply. Here is part of the output:

    Epoch 95 train loss 1.056 acc: 74.176%; val loss: 0.954 acc 75.958%; use 12.457 mins.
    Epoch 96 train loss 1.048 acc: 74.339%; val loss: 0.949 acc 75.998%; use 12.459 mins.
    Epoch 97 train loss 1.028 acc: 74.815%; val loss: 0.946 acc 76.232%; use 12.455 mins.
    Epoch 98 train loss 1.027 acc: 74.855%; val loss: 0.946 acc 76.236%; use 12.475 mins.
    Rank 5 train loss 1.026 acc: 74.890%; val loss: 0.972 acc 75.600%
    Rank 6 train loss 1.025 acc: 74.815%; val loss: 0.924 acc 76.352%
    Rank 3 train loss 1.025 acc: 74.889%; val loss: 0.957 acc 75.632%
    Rank 1 train loss 1.032 acc: 74.757%; val loss: 0.929 acc 76.960%
    Rank 7 train loss 1.023 acc: 75.038%; val loss: 0.930 acc 76.512%
    Rank 2 train loss 1.019 acc: 75.013%; val loss: 0.958 acc 76.144%
    Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7ffb5070eea0>
    Traceback (most recent call last):
      File "/usr/lib/python3.5/weakref.py", line 117, in remove
    TypeError: 'NoneType' object is not callable
    Rank 0 train loss 1.022 acc: 74.841%; val loss: 0.951 acc 76.304%
    Epoch 99 train loss 0.513 acc: 37.451%; val loss: 0.470 acc 38.097%; use 12.467 mins.
    Rank 4 train loss 1.023 acc: 74.984%; val loss: 0.947 acc 76.304%

Since the strange results only happen in the final epoch, I only print the metrics for the last epoch. The order of the log lines is exactly what I got, though the "Rank 4" line should have been printed before the "Epoch 99 train" line:

    for epoch in range(args.num_epoch + args.warmup_epoch):
        start = time.time()
        train_loss, train_acc = utils.train_one_epoch(net, train_loader, criterion, optimizer, mean_and_std, scheduler, args)
        val_loss, val_acc = utils.test_one_epoch(net, val_loader, criterion, mean_and_std)
        reduce_tensor = torch.tensor([train_loss, train_acc, val_loss, val_acc]).to(device)
        if epoch == args.num_epoch + args.warmup_epoch - 1:
            print('Rank %d train loss %.3f acc: %.3f%%; val loss: %.3f acc %.3f%%' %
                  (local_rank, reduce_tensor[0], reduce_tensor[1], reduce_tensor[2], reduce_tensor[3]))
        torch.distributed.all_reduce(reduce_tensor)
        reduce_tensor /= args.num_gpus
        time_used = (time.time() - start) / 60.
        if local_rank == 0:
            print('Epoch %d train loss %.3f acc: %.3f%%; val loss: %.3f acc %.3f%%; use %.3f mins.' %
                  (epoch, reduce_tensor[0], reduce_tensor[1], reduce_tensor[2], reduce_tensor[3], time_used))
st177074
KaiHoo:

    Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7ffb5070eea0>
    Traceback (most recent call last):
      File "/usr/lib/python3.5/weakref.py", line 117, in remove
    TypeError: 'NoneType' object is not callable

Is the above error expected? How did you handle this? If it is handled by skipping/redoing that iteration, it might cause an allreduce mismatch.
st177075
KaiHoo: File "/usr/lib/python3.5/weakref.py", line 117, in remove

I have no idea about this error, though nothing else seemed to happen. This bug has been reported to PyTorch, but it seems to be a Python bug. See these two issues on github.com/pytorch/pytorch:

"[minor] Random ignored exception on exit of pytorch scripts" (opened Jul 28, 2017, closed Aug 3, 2017): "This happends randomly and I have no idea under what condition can this problem be reproduced. Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at…"

"torch.nn.parallel.data_parallel crashes machine and has a weakref bug" (opened Aug 3, 2019): "For some reason, this happened: Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7f4b13e53158> Traceback (most recent call last): File "/usr/lib/python3.5/weakref.py", line 117, in…"
st177076
I’m a bit curious about this bug; did anyone get to the bottom of it? @KaiHoo, just in case, are you spreading the GPUs across different nodes?
st177077
I want to train a large network using model parallelism on multiple machines (multiple GPUs per machine). For that I am following this article: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#combine-ddp-with-model-parallelism. This article doesn't set up any multi-machine cluster, so how will it train on multiple machines? Also, I am not able to understand the following terms in my scenario: world size, rank, spawn processes, process group. I have already installed NCCL on all nodes. How can I make it work?
st177078
Solved by wayi in post #3 If you want to explore model parallelism in a distributed environment, you need to use Distributed RPC framework. The tutorial page of DDP + RPC can be found here: https://pytorch.org/tutorials/advanced/rpc_ddp_tutorial.html#
st177079
This is a good place to start: the ImageNet example (master/imagenet) at github.com/pytorch/examples, a set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc. The example script and README show how to set up multi-node training for ImageNet. You may also want to try out PyTorch Lightning, which has a simple API for multi-node training: https://pytorch-lightning.readthedocs.io/en/stable/multi_gpu.html
st177080
If you want to explore model parallelism in a distributed environment, you need to use the Distributed RPC framework. The tutorial page of DDP + RPC can be found here: https://pytorch.org/tutorials/advanced/rpc_ddp_tutorial.html#
st177081
That example shows how to use DDP on multiple nodes, but model parallelism requires RPC in PyTorch.
st177082
In conclusion: single-machine model parallelism can be done as shown in the article I listed in my question; multi-node training without model parallelism (with DDP) is shown in the example listed by @conrad; and multi-node training with model parallelism can only be implemented using PyTorch RPC. Is that right, @wayi?
st177083
You are totally right! RPC is the only way to support model parallelism in PyTorch distributed training. There may be some higher level APIs in the future, but they are all RPCs under the hood.
st177084
Is the collate function executed by every DataLoader worker, or only by the main process? So basically, does each worker only call __getitem__ and then put the fetched samples into a queue for the main process to apply the collate function and make a batch, or does each worker process call the collate function, create batches, and put entire batches into a queue? @vincentqb
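One way to check this empirically (not an authoritative statement of the DataLoader internals) is to call torch.utils.data.get_worker_info() inside a custom collate_fn: it returns None in the main process and worker metadata inside a worker process. This is a small sketch with a made-up toy dataset:

```python
import torch
from torch.utils.data import DataLoader, Dataset, get_worker_info

class ToyDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.tensor([idx])

def my_collate(samples):
    info = get_worker_info()  # None in the main process, worker metadata inside a worker
    where = "main process" if info is None else f"worker {info.id}"
    print(f"collate_fn called in {where} for {len(samples)} samples")
    return torch.stack(samples, dim=0)

if __name__ == "__main__":
    loader = DataLoader(ToyDataset(), batch_size=4, num_workers=2, collate_fn=my_collate)
    for _ in loader:
        pass
```

With num_workers > 0 the prints should come from the workers, i.e. batches are collated inside the worker processes before being queued.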
st177085
Hi, I created a rpc network on localhost:29500. When I execute rpc.shutdown(), I can see that port 29500 is still listening. Is there a way to shut down the port completely? Thank you.
st177086
Solved by lcw in post #3 I believe that port is the one used by the TCP store, for rendezvous. It’s not the first time I hear this, but I never understood if the store outlives the RPC agent on purpose or if it’s due to some sort of leak…
st177087
Are you using the TensorPipe RPC Backend? From what I understand calling shutdown should mean that process stops listening on the port - is this correct @lcw @mrshenli ?
st177088
I believe that port is the one used by the TCP store, for rendezvous. It’s not the first time I hear this, but I never understood if the store outlives the RPC agent on purpose or if it’s due to some sort of leak…
st177089
Hi. As per my understanding, torch.distributed.barrier will put the first process on hold until all the other processes have reached the same point. However, it does not seem to be working on the cluster. I am using the nccl backend. I have added some print statements just to debug. My code:

    def init_process(rank, size, backend='nccl'):
        """ Initialize the distributed environment. """
        os.environ['MASTER_ADDR'] = '127.0.0.1'
        os.environ['MASTER_PORT'] = '29500'
        dist.init_process_group(backend, rank=rank, world_size=size)

    def train(rank, world_size):
        print("inside train method")
        init_process(rank, world_size, "nccl")
        print(f"Rank {rank + 1}/{world_size} process initialized.")
        if rank == 0:
            get_dataloader(rank, world_size)
            model.gru()
        dist.barrier()
        print(f"Rank {rank + 1}/{world_size} training process passed data download barrier.")

    def main():
        mp.spawn(train, args=(WORLD_SIZE,), nprocs=WORLD_SIZE, join=True)

Actual output:

    inside train method
    Rank 2/2 process initialized
    Rank 1/2 training process passed data download barrier
    starting epochs
    Rank 1/2 process initialized
    Rank 0/2 training process passed data download barrier
    starting epochs

I expected the output to be like:

    inside train method
    Rank 1/2 process initialized.
    Rank 2/2 process initialized.
    Rank 1/2 training process passed data download barrier.
    Rank 2/2 training process passed data download barrier.
    starting epochs
    starting epochs

From this it is clear that the first process did not wait for the second process. Am I missing something?
st177090
I'm a bit confused by the output logs. How does Rank 1 log "process initialized" after logging "training process passed data download barrier / starting epochs"? Also, why are there 3 ranks with a world size of 2? Lastly, "inside train method" should be logged twice if you are spawning 2 processes. I reproduced your script and swapped out the get_dataloader() and model.gru() calls with a print statement that prints "Just before barrier". I got the expected output:

    spawning
    inside train method
    inside train method
    Rank 1/2 process initialized.
    Rank 2/2 process initialized.
    Rank 1/2 Just before barrier
    Rank 1/2 training process passed data download barrier.
    Rank 2/2 training process passed data download barrier.

barrier() should indeed block until all the nodes reach the barrier. It would be worth debugging whether something odd is occurring in the get_dataloader() and model.gru() calls. For a lighter-weight barrier implementation, you can try calling dist.all_reduce on a very small (or empty) torch tensor, and this should mimic the barrier behavior (see the sketch below).
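A minimal sketch of that lighter-weight barrier, assuming the process group is already initialized and each process has set its own CUDA device:

```python
import torch
import torch.distributed as dist

def light_barrier(device):
    # all_reduce on a tiny tensor cannot complete until every rank joins,
    # so it behaves like a barrier for the participating processes.
    flag = torch.zeros(1, device=device)
    dist.all_reduce(flag)
    if device.type == "cuda":
        # NCCL collectives are asynchronous with respect to the host; synchronize
        # if the CPU itself must wait before moving on.
        torch.cuda.synchronize(device)
```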
st177091
Sorry, my bad. My output is:

    inside train method
    Rank 2/2 process initialized
    Rank 2/2 training process passed data download barrier
    starting epochs
    Rank 1/2 process initialized
    Rank 1/2 training process passed data download barrier
    starting epochs

But I expected the output to be like yours.
st177092
Hi! I am using a nn.parallel.DistributedDataParallel model for both training and inference on multiple gpu. To achieve that I use mp.spawn(evaluate, nprocs=n_gpu, args=(args, eval_dataset)) To evaluate I actually need to first run the dev dataset examples through a model and then to aggregate the results. Therefore I need to be able to return my predictions to the main process (possibly in a dict, but some other data structure should work as well). I’ve tried providing an extra dict argument mp.spawn(evaluate, nprocs=n_gpu, args=(args, eval_dataset, out_dict)) and modifying it in the function but apparently spawn copies it, so the dict in the main process is not modified. I guess, I could write the results to the file and then read in the main process but it doesn’t seem like the most elegant solution. Is there a better way to return values from spawned functions? Thanks!
st177093
Solved by mrshenli in post #2 Is there a better way to return values from spawned functions? If you want to pass the result from spawned processes back to the parent process, you can let the parent process create multiprocessing queues, pass it to children processes, and let children processes send result back through the que…
st177094
Is there a better way to return values from spawned functions?

If you want to pass the result from spawned processes back to the parent process, you can let the parent process create multiprocessing queues, pass them to the children processes, and let the children send results back through the queue. See the following code from github.com pytorch/pytorch/blob/cb26661fe4faf26386703180a9045e6ac6d157df/test/test_multiprocessing.py#L577-L600:

    @unittest.skipIf(NO_MULTIPROCESSING_SPAWN, "Disabled for environments that \
    don't support multiprocessing with spawn start method")
    @unittest.skipIf(not TEST_CUDA_IPC, 'CUDA IPC not available')
    def test_event_multiprocess(self):
        event = torch.cuda.Event(enable_timing=False, interprocess=True)
        self.assertTrue(event.query())

        ctx = mp.get_context('spawn')
        p2c = ctx.SimpleQueue()
        c2p = ctx.SimpleQueue()
        p = ctx.Process(
            target=TestMultiprocessing._test_event_multiprocess_child,
            args=(event, p2c, c2p))
        p.start()

        c2p.get()  # wait until child process is ready
        torch.cuda._sleep(50000000)  # spin for about 50 ms
        event.record()
        p2c.put(0)  # notify child event is recorded

If the result does not have to go back to the parent process, you can use gather or allgather to communicate the result across the children processes.
st177095
Hi, for me writing to files actually worked out quite alright. I’m just using pickle.dump in the spawned processes and pickle.load in the main one
st177096
Actually I met this problem once again recently and decided to make use of mp.Queue as @mrshenli suggested, so I used

    result_queue = mp.Queue()
    for rank in range(args.n_gpu):
        mp.Process(target=get_features, args=(rank, args, eval_dataset, result_queue)).start()

to start the processes instead of mp.spawn, and just added the results to my queue with result_queue.put((preds, labels, ranks)) in those processes. To later collect the results I did the following:

    for _ in range(args.n_gpu):
        temp_result = result_queue.get()
        preds.append(temp_result[0])
        labels.append(temp_result[1])
        ranks.append(temp_result[2])
        del temp_result
st177097
I mean, suppose batch_size=4 and num_workers=2. Which of the following matches the runtime behavior?

    #1
    worker1: load sample1, sample2
    worker2: load sample3, sample4
    batch1 contains sample1, sample2, sample3, sample4

    #2
    worker1: load batch1
    worker2: load batch2
st177098
I have read the mentioned post, but how can I prove it? Or is there any PyTorch source code that shows this?
st177099
The source code for the DataLoader logic is too complex for me, but I see two reasons why #2 would be the approach used in practice: (1) as @apaszke said, creating one batch from two workers would require waiting for both of them to finish their task (getting data) before returning the batch, which is inefficient; (2) depending on the architecture, data structure, storage, etc., it may be useless compared to a single process loading sample1, sample2, sample3 and sample4 and then returning the batch. What is your use case?
st177100
For your first point, @apaszke only showed the result, but I want to know the reason. For the second, multiple workers handling one batch may not be slower than a single worker.
st177101
In my test case, I use 200 samples and set the batch size to 50. The result shows that only 4 subprocesses work even if I set num_workers=16 (I have 16 CPUs). So I think alex.veuthey is right.
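A small sketch (made-up dataset and sizes) that tags each sample with the id of the worker that loaded it; with the default sampler, the worker id should be constant within a batch, which is consistent with #2 above:

```python
import torch
from torch.utils.data import DataLoader, Dataset, get_worker_info

class RangeDataset(Dataset):
    def __len__(self):
        return 200

    def __getitem__(self, idx):
        worker = get_worker_info()
        worker_id = -1 if worker is None else worker.id  # -1 means "main process"
        return idx, worker_id

if __name__ == "__main__":
    loader = DataLoader(RangeDataset(), batch_size=50, num_workers=4)
    for indices, worker_ids in loader:
        # One worker per batch: worker_ids should contain a single unique value.
        print(indices[0].item(), "->", worker_ids.unique().tolist())
```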
st177102
I have a very big file list, which is organized as:

    [
      [filename, label],
      [filename, label],
      ...
      [filename, label]
    ]

And I create a dataset that reads this file list into memory. Since my training code is run with DistributedDataParallel and I have 8 GPUs, the dataset will be created 8 times:

    python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" train.py

And they will cost a lot of memory, nearly 30*8=240 GB in total. Is there a way to let those processes share a single dataset? Thanks for your help.
st177103
Can you use the PyTorch DataLoader? If you implement the __getitem__ function, the batches will be lazily read into memory. Each DDP replica will then have one DataLoader, and each DataLoader will load the data lazily, so there shouldn't be as much memory pressure. Relevant forums post: How to use dataset larger than memory?
st177104
Thanks for your reply. I do use the PyTorch DataLoader, with a dataset like the one below:

    class MyTrainImageFolder(torch.utils.data.Dataset):  # Class inheritance
        def __init__(self, root, file_dir, label_dir, transform=MYTFS):
            img_list = np.load(file_dir)
            labels_list = np.load(label_dir)
            self.imgs = img_list
            self.labels = labels_list
            self.root = root
            self.transform = transform
            self.lens = len(img_list)
            print('total img number:', len(self.imgs))

        def __getitem__(self, index):
            label = torch.tensor(int(self.labels[index]))
            img = Image.open(os.path.join(self.root, self.imgs[index]))
            img = self.transform(img)
            return img, label

        def __len__(self):
            return len(self.imgs)

The problem is that the file in "file_dir" is about 30 GB; I have more than 200 million images. Can I build the dataset only once when using DistributedDataParallel?
st177105
Maybe I should split the big list file into smaller patches and use a for loop in my training process? That is the best way I could find.
st177106
One thing I can think of is to split the file into smaller patches, and instead of loading the files in the __init__ function, you can load these smaller files in the __getitem__ function itself (using the index and the number of examples per file to fetch the correct file). This way you avoid loading the massive file all at once from all the ranks. I haven't profiled this performance-wise, though you will be doing 2 disk reads instead of one in __getitem__: one for the images and one for the list/labels file. However, you might benefit from some caching with the latter depending on the file size, batch size, and how you sample from the dataset. A rough sketch is below. I'm not sure if there is some shared-memory based approach where we can load these files into memory that is shared by all the processes, but I can try to dig more into that approach if the above one does not work.
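A rough sketch of that idea. The shard file naming, shard size, and transform below are assumptions (not part of the original post); each rank only ever holds one small shard of the [filename, label] list in memory at a time:

```python
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class ChunkedImageList(Dataset):
    """Assumes the big [filename, label] list was pre-split into files
    part_0.npy, part_1.npy, ... each holding `shard_size` entries."""
    def __init__(self, root, shard_dir, shard_size, num_samples, transform):
        self.root = root
        self.shard_dir = shard_dir
        self.shard_size = shard_size
        self.num_samples = num_samples
        self.transform = transform
        self._cache = (None, None)  # (shard index, loaded shard array)

    def __len__(self):
        return self.num_samples

    def _get_shard(self, shard_idx):
        # Keep the most recently used shard so sequential indices reuse it.
        if self._cache[0] != shard_idx:
            shard = np.load(os.path.join(self.shard_dir, f"part_{shard_idx}.npy"),
                            allow_pickle=True)
            self._cache = (shard_idx, shard)
        return self._cache[1]

    def __getitem__(self, index):
        shard = self._get_shard(index // self.shard_size)
        filename, label = shard[index % self.shard_size]
        img = self.transform(Image.open(os.path.join(self.root, filename)))
        return img, torch.tensor(int(label))
```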
st177107
I have a problem running multi-GPU training on one node with 2 GPUs. Currently the program gets stuck on these lines:

    imgs = imgs.to(self.device)
    true_masks = true_masks.to(self.device)

Can anyone help me?
st177108
Do you mind providing a script to reproduce your issue, the environment (HW setup, torch version, etc.) that you ran this in, and the logs you are seeing? It would help debugging the issue!
st177109
Hello, I am implementing a custom network which first computes a latent encoding and then does some forward passes. I am wondering whether this kind of network can work with nn.DataParallel. I assume nn.DataParallel only parallelizes self.forward. Does it also handle the custom functions as well?

    class MyNetwork(nn.Module):
        def preprocess(self, a):
            self.encoding = encode(a)

        def forward(self, b):
            return self.encoding + self.fc(b)

    model = nn.DataParallel(MyNetwork())

    # Pre-encoding
    model.module.preprocess(a)

    # Parallel data pass
    output1 = model(b1)
    output2 = model(b2)
st177110
nn.DataParallel will not use the passed GPUs in model.module.preprocess, since the forward method is what is used to create the model clones, split the data, etc., as you've already assumed. You could call other functions inside the forward method if you want to use all devices; a sketch follows.
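For example, a minimal sketch (the encoding step here is just a stand-in for the real one): moving the preprocessing into forward means every replica runs it on its own chunk of the scattered inputs.

```python
import torch
import torch.nn as nn

class MyNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)

    def encode(self, a):
        # stand-in for the real encoding computation
        return a.mean(dim=1, keepdim=True)

    def forward(self, a, b):
        # called on every DataParallel replica with its own shard of a and b
        encoding = self.encode(a)
        return encoding + self.fc(b)

model = nn.DataParallel(MyNetwork()).cuda()
out = model(torch.randn(4, 8).cuda(), torch.randn(4, 8).cuda())
```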
st177111
Hey, I am using DDP for model training. In the script, I add the line

    dist.init_process_group(backend="nccl")

besides using

    model = DDP(model)

Then I ran the script from the command line as:

    python -m torch.distributed.launch --nproc_per_node=8 --nnode=1 --node_rank=0 paraphrase_simpletransformers.py

But then it reports OOM once it is running. I checked nvidia-smi and found that every PID, in addition to its own GPU, also occupies the 0th GPU. That is why the OOM happens. I am not sure why this happens; may you give some help here? Thanks in advance.

    | Processes:                                                      GPU Memory |
    |  GPU       PID   Type   Process name                            Usage      |
    |============================================================================|
    |    0     82610     C   ...e/anaconda3/envs/pytorch_p36/bin/python  3071MiB |
    |    0     82611     C   ...e/anaconda3/envs/pytorch_p36/bin/python   409MiB |
    |    0     82612     C   ...e/anaconda3/envs/pytorch_p36/bin/python   409MiB |
    |    0     82614     C   ...e/anaconda3/envs/pytorch_p36/bin/python   403MiB |
    |    0     82615     C   ...e/anaconda3/envs/pytorch_p36/bin/python   409MiB |
    |    0     82616     C   ...e/anaconda3/envs/pytorch_p36/bin/python   411MiB |
    |    0     82617     C   ...e/anaconda3/envs/pytorch_p36/bin/python   405MiB |
    |    0     82618     C   ...e/anaconda3/envs/pytorch_p36/bin/python   409MiB |
    |    1     82611     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1517MiB |
    |    2     82612     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1517MiB |
    |    3     82614     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1497MiB |
    |    4     82615     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1497MiB |
    |    5     82616     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1537MiB |
    |    6     82617     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1537MiB |
    |    7     82618     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1537MiB |
st177112
I also tried changing --nproc_per_node to 4, as follows:

    python -m torch.distributed.launch --nproc_per_node=4 --nnode=1 --node_rank=0 paraphrase_simpletransformers.py

Then nvidia-smi shows:

    | Processes:                                                      GPU Memory |
    |  GPU       PID   Type   Process name                            Usage      |
    |============================================================================|
    |    0     85910     C   ...e/anaconda3/envs/pytorch_p36/bin/python  4925MiB |
    |    0     85911     C   ...e/anaconda3/envs/pytorch_p36/bin/python  3175MiB |
    |    0     85912     C   ...e/anaconda3/envs/pytorch_p36/bin/python  3175MiB |
    |    0     85913     C   ...e/anaconda3/envs/pytorch_p36/bin/python  3175MiB |
    |    1     85910     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1421MiB |
    |    1     85911     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1393MiB |
    |    2     85910     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1431MiB |
    |    2     85912     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1465MiB |
    |    3     85910     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1433MiB |
    |    3     85913     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1465MiB |
    |    4     85910     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1431MiB |
    |    5     85910     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1433MiB |
    |    6     85910     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1433MiB |
    |    7     85910     C   ...e/anaconda3/envs/pytorch_p36/bin/python  1433MiB |
st177113
Can you set the env var CUDA_VISIBLE_DEVICES and specify which GPUs to use before launching training? This will at least clarify whether this is a problem with processes binding to GPUs or something else. Here’s a good explanation of the env var: cuda - How do I select which GPU to run a job on? - Stack Overflow
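For reference, a sketch of how a launched script is usually pinned to one GPU per process; this is not necessarily the fix for the reported OOM, and the env-var fallback is an assumption (torch.distributed.launch passes --local_rank as an argument unless --use_env is given):

```python
import argparse
import os

import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=int(os.environ.get("LOCAL_RANK", 0)))
args = parser.parse_args()

# Bind this process to its own GPU before any CUDA work happens, so nothing
# from the other ranks silently lands on GPU 0.
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl")
device = torch.device("cuda", args.local_rank)  # use this everywhere instead of a bare "cuda"
```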
st177114
I am trying to train a GNN using DDP across multi-node GPUs. I am using PyTorch 1.7 and gloo as the backend. I get an error like the one below on machine 2:

    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
    ......
      File "/home/karthi/venvv/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 532, in _ddp_init_helper
        self.gradient_as_bucket_view)
    RuntimeError: replicas[0][0] in this process with sizes [64, 5] appears not to match sizes of the same param in process 0.

But the same code works with single-machine multi-GPU DDP. Please help me.
st177115
Solved by mrshenli in post #2 Hey @karthi0804, could you please share code of a repro? This error is from here, indicating that parameter sizes/order do not match across processes. Could you please verify if that is the case by printing sizes of params in model.parameters()? cc @Yanli_Zhao
st177116
Hey @karthi0804, could you please share code for a repro? This error is from here, indicating that parameter sizes/order do not match across processes. Could you please verify whether that is the case by printing the sizes of the params in model.parameters() (see the sketch below)? cc @Yanli_Zhao
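For example, a small helper (names assumed) that can be run on every rank before wrapping the model in DDP; diffing the printed output across machines makes a size or ordering mismatch obvious:

```python
import torch.distributed as dist

def dump_param_shapes(model):
    rank = dist.get_rank()
    for name, p in model.named_parameters():
        # Any difference in this list across ranks (order, name, or shape)
        # will trigger the "replicas[0][0] ... appears not to match" error.
        print(f"rank {rank}: {name} {tuple(p.shape)}")
```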
st177117
My bad. Extremely sorry! silly mistake. Wrong model architecture in another machine!
st177118
Hi, I am also getting this replicas error. I am using Windows 10, torch 1.7.1, pytorch-lightning 1.1.7 with 3 GPUs. The model training was working well with ddp and 2 GPUs; on another machine (Win10, torch 1.7.1 and pl 1.1.7) the code crashed after printing the following error message:

    self.reducer = dist.Reducer(
    RuntimeError: replicas[0][0] in this process with sizes [12, 6] appears not to match sizes of the same param in process 0.

Please help!
st177119
My issue was that I failed to update the model on the second machine to the same version as on the first machine. Please check it by printing and comparing the model params.
st177120
Thanks for checking back @karthi0804. I copied the exact model and all supporting code over from the previous machine, because this new machine was just built from scratch. I did a lot of other tests; the new machine also failed with the same replicas error when tested with two GPUs (by physically disconnecting the 3rd one). However, after tinkering around and setting pl.Trainer with accelerator='ddp_spawn' instead of 'ddp', it works, even though the PyTorch Lightning docs explicitly warn against using the former. I don't exactly know what limitations they warn about with 'ddp_spawn'; so far I cannot observe any with the naked eye. Hopefully we can fix the error with 'ddp' at some point and switch back.
st177121
I train my model on two nodes (4 GPUs) with DDP. [screenshots of the launch commands and logs] When I log in to the first node, it seems to function well; when I run ps aux | grep python, there are two tasks running [screenshot]. But when I log in to the second node, there are no tasks running [screenshot]. So how does DDP find the second node?
st177122
Solved by osalpekar in post #2 If I understand correctly, you are trying to train with 4 GPUs, 2 on one machine and 2 on another machine? If this is the case, then you will need to launch your training script separately on each machine. The node_rank for launch script on the first machine should be 0 and node_rank passed to the l…
st177123
If I understand correctly, you are trying to train with 4 GPUs, 2 on one machine and 2 on another machine? If this is the case, then you will need to launch your training script separately on each machine. The node_rank for launch script on the first machine should be 0 and node_rank passed to the launch script on the second machine should be 1. It seems here like you are passing 2 separate node_ranks for processes launched on the same machine. See the multi-node multi-process distributed launch example here: Distributed communication package - torch.distributed — PyTorch 1.7.0 documentation
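As a sketch of what the two launch commands end up doing per process (the address and env-var names here are placeholders, not taken from the thread): the global rank is derived from node_rank, so both machines must use the same master address and world size but different node ranks.

```python
import os

import torch.distributed as dist

nproc_per_node = 2
node_rank = int(os.environ["NODE_RANK"])    # 0 on the first machine, 1 on the second
local_rank = int(os.environ["LOCAL_RANK"])  # 0 or 1 within each machine

global_rank = node_rank * nproc_per_node + local_rank

dist.init_process_group(
    backend="nccl",
    init_method="tcp://192.0.2.1:29500",  # placeholder master address, identical on both machines
    world_size=4,                         # 2 nodes x 2 GPUs
    rank=global_rank,
)
```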
st177124
Thanks! I got it. By the way, is there any way to do that automatically? Manually launching the task on each node is not convenient.
st177125
We don’t provide a way of doing this natively in PyTorch. Writing a simple bash script to do this should be doable though (take a list of hostnames, ssh into each one, copy over pytorch program, and run). You could also use slurm/mpirun if those are available in your environment. If you decide to use torchelastic for distributed training, then there are some plugins available to simplify training in the cloud: elastic/kubernetes at master · pytorch/elastic · GitHub
st177126
Hello. I hope you are doing well. I am finalizing my experiment with PyTorch; when I finish my paper, I hope I can share it here. Anyway, is there any detailed documentation about DataParallel (DP) and DistributedDataParallel (DDP)? In my experiment, DP and DDP show a big accuracy difference with the same dataset, network, learning rate, and loss function. I hope I can put these experiment results in my paper, but my professor asks for a detailed explanation of why it happens. My dataset is a very unusual image dataset, not a common object dataset such as ImageNet or Cityscapes, so the result can differ from the usual computer-science papers. For this reason, I looked around and read some articles: https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html and https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/. However, I am still confused about these two different multi-GPU training strategies. (1) What does "reduce" mean? Is "reduce" the weight update or the loss reduction? (2) What is the major difference between DP and DDP in the weight update strategy? I think this is important. (3) Does DDP affect batch normalization (BN), or does DDP still need synchronized BN? Thank you for reading my question.
st177127
Solved by mrshenli in post #6 correct. The input data goes through the network, and loss calculate based on output and ground truth. During this loss calculation, DP or DDP work differently. correct. Each loss in the GPU has the different loss result. DP used mean value because DP send every output result to main GPU an…
st177128
There is some comparison between DP and DDP here: https://pytorch.org/tutorials/beginner/dist_overview.html

"What does 'reduce' mean? Is 'reduce' the weight update or the loss reduction?"

What's the context here? If you mean all_reduce, it is a collective communication operation. DDP uses it to synchronize gradients; see https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/collectives.html#allreduce

"What is the major difference between DP and DDP in the weight update strategy? I think this is important."

Weight update is done by the optimizer, so if you are using the same optimizer the weight update strategy should be the same. The difference between DP and DDP is how they handle gradients: DP accumulates gradients into the same .grad field, while DDP first uses all_reduce to calculate the gradient sum across all processes and divides that by world_size to compute the mean. More details can be found in this paper. The above difference has an impact on how lr should be configured; see this discussion: Should we split batch_size according to ngpu_per_node when DistributedDataparallel

"Does DDP affect batch normalization (BN), or does DDP still need synchronized BN?"

By default, DDP will broadcast buffers from rank 0 to all other ranks, so yes, it does affect BN.

BTW, for distributed-training-related questions, could you please add a "distributed" tag to the post? There is an oncall team monitoring that tag.
st177129
mrshenli: "Weight update is done by the optimizer, so if you are using the same optimizer the weight update strategy should be the same."

Well, many people talk about "reduce", but in this context "reduce" does not seem to be the literal "reduce". That is why I asked the question: the term keeps coming up, but no one defines it before using it. (1) What is the difference between reducing gradients and updating weights? Do you mean that DP and DDP update exactly the same weights in each layer? That is also confusing to me. (2) Do you mean batch size or LR? You linked a discussion about batch size. (3) I find that there is no improvement when I use DDP with synchronized BN; that is why I am asking the third question.
st177130
"What is the difference between reducing gradients and updating weights?"

There are many weight-update algorithms, e.g., Adam, SGD, Adagrad, etc. (see more here), and they are all independent of DP or DDP. So even if the gradient is the same, different optimizers can update the weight to a different value. Reducing gradients in DDP basically means communicating gradients across processes.

"Do you mean that DP and DDP update exactly the same weights in each layer?"

Neither DP nor DDP touches the model weights. In the following code, it is opt.step() that updates the model weights; what DP and DDP do is prepare the .grad field of all parameters:

    output = model(input)
    output.sum().backward()
    # DP and DDP are not involved below this point.
    opt.step()

"Do you mean batch size or LR? You linked a discussion about batch size."

Quoting some discussion from that link (if you search for "lr", you will find that almost all comments in that thread discuss how to configure LR and batch size):

"Another question is if we do not divide batch-size by 8, the total images processed in one epoch will be the same as usual or eight times? As for learning rate, if we have 8-gpus in total, there wiil be 8 DDP instances. If the batch-size in each DDP distances is 64 (has been divides manually), then one iteration will process 64×4=256 images per node. Taking all gpu into account (2 nodes, 4gpus per node), then one iteration will process 64×8=512 images. Assuming in one-gpu-one-node scenario, we…"

"I agree with all your analysis on the magnitude of the gradients, and I agree that it depends on the loss function. But even with MSE loss fn, it can lead to different conclusions: If the fw-bw has processed 8X data, we should set lr to 8X, meaning that the model should take a larger step if it has processed more data as the gradient is more accurate. (IIUC, this is what you advocate for) If the gradient is of the same magnitude, we should use 1X lr, especially when approaching convergence. Ot…"

"I find that there is no improvement when I use DDP with synchronized BN."

Right, SyncBatchNorm has its own way of communicating, which is outside of DDP's control; using DDP won't change how SyncBatchNorm behaves. See torch/nn/modules/_functions.py:

    torch.distributed.all_reduce(
        combined, torch.distributed.ReduceOp.SUM, process_group, async_op=False)
    sum_dy, sum_dy_xmu = torch.split(combined, num_channels)
st177131
Thank you for the detailed explanation. The DDP and LR relationship is also interesting; I used to find the LR by trial and error. I now understand a bit more about reduce and DDP. Please check my understanding: (1) DP and DDP do not directly change the weights, but they are different ways to calculate the gradient in multi-GPU conditions. If this is incorrect, please let me know. (2) The input data goes through the network, and the loss is calculated based on the output and the ground truth; during this loss calculation, DP and DDP work differently. However, I thought the gradient is basically calculated from the loss. (3) Each GPU has a different loss result. DP uses the mean value, because DP sends every output result to the main GPU and calculates the loss there. If my understanding is incorrect, please point it out. However, DDP works differently, and I still do not get that part; in the paper they also use the average value. What is the difference between the mean calculation and the synchronized calculation? (4) To update the weights in the network, the optimizer uses the gradient value. The update part belongs to the optimizer, not to DP or DDP. So the performance difference might come from an LR difference, because the batch size becomes different: weight = previous_weight - (gradient * learning_rate). Really, thank you for helping me.
st177132
henry_Kang: "So basically DP and DDP do not directly change the weights, but they are different ways to calculate the gradient in multi-GPU conditions."

Correct.

"The input data goes through the network, and the loss is calculated based on output and ground truth. During this loss calculation, DP or DDP work differently."

Correct.

"Each GPU has a different loss result. DP uses the mean value because DP sends every output result to the main GPU and calculates the loss."

This is incorrect. DP's forward pass (1) creates a model replica on every GPU, (2) scatters the input across the GPUs, (3) feeds each input shard to a different model replica, (4) uses one thread per model replica to create the output on each GPU, and (5) gathers all outputs from the different GPUs onto one GPU and returns. The loss with DP is calculated based on that gathered output, and hence there is only one loss with DP. See github.com pytorch/pytorch/blob/d06f1818ada6405a30943f58548af958c2b83ff6/torch/nn/parallel/data_parallel.py#L147-L162:

    def forward(self, *inputs, **kwargs):
        if not self.device_ids:
            return self.module(*inputs, **kwargs)

        for t in chain(self.module.parameters(), self.module.buffers()):
            if t.device != self.src_device_obj:
                raise RuntimeError("module must have its parameters and buffers "
                                   "on device {} (device_ids[0]) but found one of "
                                   "them on device: {}".format(self.src_device_obj, t.device))

        inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
        if len(self.device_ids) == 1:
            return self.module(*inputs[0], **kwargs[0])
        replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
        outputs = self.parallel_apply(replicas, inputs, kwargs)
        return self.gather(outputs, self.output_device)

DDP is multi-process parallel, and hence it can scale across multiple machines. In this case, every process has its own loss, so there are multiple different losses. Gradients are synchronized during the backward pass using autograd hooks and allreduce. For more details, I recommend reading the paper I linked above.

"What is the difference between the mean calculation and the synchronized calculation?"

Because DP is single-process multi-thread, the scatter, parallel_apply, and gather ops used in the forward pass are automatically recorded in the autograd graph. So during the backward pass, the gradients are accumulated into the .grad field; there is no explicit grad synchronization in DP, because the autograd engine does all the grad accumulation already. DDP, on the other hand, is multi-process and every process has its own autograd engine, so we need additional code to synchronize the grads.

"So the performance difference might come from an LR difference?"

Yes, that's one possible source. It also relates to what loss function you are using. If the loss function cannot guarantee f([x, y]) == (f(x) + f(y)) / 2, then the result can also be different, as it is not compatible with the gradient averaging used in DDP.
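As a small, hedged illustration of that last point (toy data, mean-reduction MSE): for such a loss, averaging the per-shard gradients reproduces the full-batch gradient, which is the property DDP's gradient averaging relies on.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
w = torch.randn(4, requires_grad=True)
x, y = torch.randn(2, 4), torch.randn(2, 4)   # two "per-process" input shards
tx, ty = torch.randn(2), torch.randn(2)

def loss(batch, target):
    return F.mse_loss(batch @ w, target)      # mean reduction

g_full = torch.autograd.grad(loss(torch.cat([x, y]), torch.cat([tx, ty])), w)[0]
g_x = torch.autograd.grad(loss(x, tx), w)[0]
g_y = torch.autograd.grad(loss(y, ty), w)[0]

# True here: the averaged shard gradients equal the full-batch gradient.
print(torch.allclose(g_full, (g_x + g_y) / 2))
```

A loss that does not satisfy f([x, y]) == (f(x) + f(y)) / 2, for example one normalized by something other than the batch size, would make the two sides differ.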
st177133
Well, what I am facing is that DDP gives a better result than DP, whereas many questions here actually say that DP gives better results than DDP. Second, our task is class-imbalanced, binary semantic segmentation on real-world images with a very complex background. In this case, DP gives us 82% mIoU and DDP achieves 88% with the same loss function and the same learning rate; the loss function is the IoU loss. What grad synchronization and accumulation are is another new question for me. I will read your paper first and then ask again. Thank you. It is really difficult, but I hope I can make it.
st177134
henry_Kang: "However, many questions here actually say that DP gives better results than DDP."

No, this is not guaranteed. The only conclusion we can draw is that DP should be able to produce the same resulting model as non-parallel training, and DDP cannot guarantee this. But which one is better needs to be measured quantitatively, as it is affected by a lot of factors, e.g. batch size, lr, loss function, etc.
st177135
mrshenli: "it needs to be measured quantitatively, as it is affected by a lot of factors, e.g. batch size, lr, loss function, etc."

OK, I see. So it can be really dangerous to say that DDP is better or DP is better. I will keep that result to myself and not put it into the paper. Anyway, I will cite your paper since I am using DDP.
st177136
Hey @mrshenli, about the loss function and LR: which loss functions are affected by the LR? Are optimizers also important?
st177137
Hi Shen, I’m also encountering performance drop with DDP. Could you please elaborate on what f([x, y]) == (f(x) + f(y)) /2 means? I don’t quite understand the notation here. Thanks!
st177138
Hi, I am new to PyTorch's DistributedDataParallel module. Now I want to convert my GAN model to DDP training, but I'm not very confident about what I should modify. My original toy script looks like:

    # Initialization
    G = Generator()
    D = Discriminator()
    G.cuda()
    D.cuda()
    opt_G = optim.SGD(G.parameters(), lr=0.001)
    opt_D = optim.SGD(D.parameters(), lr=0.001)
    G_train = GeneratorOperation(G, D)      # a PyTorch module to calculate all training losses for G
    D_train = DiscriminatorOperation(G, D)  # a PyTorch module to calculate all training losses for D

    # Training
    for i in range(10000):
        loss_D = D_train()
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()

        loss_G = G_train()
        opt_G.zero_grad()
        loss_G.backward()
        opt_G.step()

My question is, when I add the DDP module to the above script, should I modify it as in

    torch.cuda.set_device(local_rank)
    G = Generator()
    D = Discriminator()
    G.cuda()
    D.cuda()
    G_ddp = DDP(G, device_ids=[local_rank], output_device=local_rank)
    D_ddp = DDP(D, device_ids=[local_rank], output_device=local_rank)
    opt_G = optim.SGD(G_ddp.parameters(), lr=0.001)
    opt_D = optim.SGD(D_ddp.parameters(), lr=0.001)
    G_train = GeneratorOperation(G_ddp, D_ddp)
    D_train = DiscriminatorOperation(G_ddp, D_ddp)

or as in

    torch.cuda.set_device(local_rank)
    G = Generator()
    D = Discriminator()
    G.cuda()
    D.cuda()
    opt_G = optim.SGD(G.parameters(), lr=0.001)
    opt_D = optim.SGD(D.parameters(), lr=0.001)
    G_train = DDP(GeneratorOperation(G, D), device_ids=[local_rank], output_device=local_rank)
    D_train = DDP(DiscriminatorOperation(G, D), device_ids=[local_rank], output_device=local_rank)

Should I prefer one of the above over the other, or are they the same? I'd appreciate it if you could also explain in detail. Thanks!
st177139
Per the example here: examples/main.py at master · pytorch/examples · GitHub, the DDP model is created first and then its parameters are passed as an arg when creating the optimizer (like the first option shown). A more specific answer would depend on what GeneratorOperation and DiscriminatorOperation do. If you want the functionality in these to be replicated across ranks and the corresponding gradients to be synchronized during the backward pass, then they should be passed to DDP.
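A sketch of the first option with one training step, reusing the names from the question (Generator, GeneratorOperation, local_rank, etc. are the poster's own and are assumed here); whether wrapping only G and D is enough still depends on what the two Operation modules do, as noted above.

```python
import torch
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP

torch.cuda.set_device(local_rank)

G_ddp = DDP(Generator().cuda(), device_ids=[local_rank], output_device=local_rank)
D_ddp = DDP(Discriminator().cuda(), device_ids=[local_rank], output_device=local_rank)

opt_G = optim.SGD(G_ddp.parameters(), lr=0.001)  # optimizers built from the DDP parameters
opt_D = optim.SGD(D_ddp.parameters(), lr=0.001)

G_train = GeneratorOperation(G_ddp, D_ddp)
D_train = DiscriminatorOperation(G_ddp, D_ddp)

# One step: the forward passes go through the DDP wrappers, so gradients are
# all-reduced across ranks during these backward() calls.
loss_D = D_train()
opt_D.zero_grad()
loss_D.backward()
opt_D.step()
```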
st177140
In the ImageNet example, the code looks like:

    for epoch in range(all_epochs):
        train_sampler.set_epoch(epoch)
        train_one_epoch_with_train_loader()

How is the sampler.epoch inside train_loader set by the outside call train_sampler.set_epoch(epoch)? It seems that the train_loader is not explicitly re-constructed every epoch. I didn't find many hints in torch/utils/data. Thanks!
st177141
Solved by osalpekar in post #2 The sampler is passed as an argument when initializing the DataLoader, so the train loader will have access to the sampler object. Neither the loader not the sampler need to be re-constructed every epoch.
st177142
The sampler is passed as an argument when initializing the DataLoader, so the train loader will have access to the sampler object. Neither the loader nor the sampler needs to be re-constructed every epoch.
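A runnable sketch of that relationship (num_replicas/rank are passed explicitly here only so it runs without an initialized process group; under DDP you would omit them): the loader keeps a reference to the very sampler object you call set_epoch() on.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.arange(20).float())
train_sampler = DistributedSampler(dataset, num_replicas=2, rank=0, shuffle=True)
train_loader = DataLoader(dataset, batch_size=5, sampler=train_sampler)

for epoch in range(2):
    train_sampler.set_epoch(epoch)  # changes the shuffle seed the loader sees this epoch
    print([batch[0].tolist() for batch in train_loader])
```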
st177143
I am working with a CPU cluster, where I request a number of nodes to perform model parallelism. I must request jobs and run scripts via Slurm. I am unsure how to specify variables such as MASTER_ADDR and MASTER_PORT, and am not sure whether I should (or how to) use the host names of nodes as worker IDs (I think I can get these names from the sinfo command). Also, the worker nodes I'm working with are not connected to the internet, so I'm unsure if I can even specify a MASTER_ADDR/IP address (I will confirm with the staff). If anyone has implemented RPC on a cluster with Slurm: how did you specify the address/port of the master node and the worker IDs of the other nodes? I surely can't be the first in this situation. I found a previous post that simply recommended asking the service provider, but it would be nice to know if anyone has handled this issue.
st177144
Hey @ajayp1, I am not familiar with Slurm, but RPC's MASTER_ADDR and MASTER_PORT should be similar to the ones used by collective communications and DistributedDataParallel (DDP) in init_process_group. Have you previously used DDP with Slurm? I found an example here: Multi-node-training on slurm with PyTorch · GitHub
st177145
Hi @mrshenli, thanks for that example; I was looking all over online for something like that but couldn't find one. If anyone else is curious, I used a similar Slurm variable to get the worker IDs and wrote them to a file in my shell script:

    touch /dev/null > $WORKDIR/mynodes.txt
    scontrol show hostname $SLURM_JOB_NODELIST > $WORKDIR/mynodes.txt

Then I simply read from this file in my PyTorch script. To get the rank 0 IP address, I did

    os.environ['MASTER_ADDR'] = os.system(nodes_list[0] + ' -I awk {print$1}')

after reading the rank 0 node ID from my written file. For my specific cluster, the port number didn't matter; I just picked 8080. For data parallelism, I'm planning on simply using the distributed library, which requires the same MASTER_ADDR, so I'm not expecting a problem there.
st177146
Actually, after reviewing the output file for the RPC implementation, it looks like each node acted as the master, despite the if/else statement to check for rank 0 as per the RPC model parallelism tutorial. My code is here: GitHub - ajayp1/Distributed-Torch-testing: test scripts for trying out PyTorch's distributed libraries. Separate but related issue: when I run the MNIST example from this tutorial, I'm not able to broadcast the MASTER_ADDR and PORT variables to the worker nodes. I managed to at least set the variables on the master node by putting the assignment under if __name__ == "__main__":, but it still doesn't broadcast to other nodes. Am I supposed to use a certain launcher so that e.g. MPI can communicate between nodes? My code for this example using just torch.distributed is in the same repo; the files have MNIST in the title.
st177147
@mrshenli Unfortunately we don't have specific instructions for launching under Slurm. Having said that, TorchElastic is scheduler-agnostic: it uses a rendezvous id to form membership. So if you follow the instructions here: Quickstart — PyTorch/Elastic master documentation, it should work regardless of the scheduler.
st177148
To see how much time is spent on interprocess communication, allreduce, etc., as well as static memory usage and the amount of data transferred during interprocess communication.
st177149
nvprof from NVIDIA is probably the best tool available: CUDA Pro Tip: nvprof is Your Handy Universal GPU Profiler | NVIDIA Developer Blog. Note that DDP currently does not work with the autograd profiler.
st177150
Hi! I am new to PyTorch and distributed learning. I am currently training an RL model using 4 GPUs on a server for an embodied navigation task. I process the visual information using the pretrained MobileNet v2 model from torchvision. I wrap the visual network in DataParallel, and the visual network is included in the final actor-critic model. However, when I export the final model to ONNX, it gives a NoneType error when reading. I read from other posts and understand that if I wrapped the final model in DataParallel, I could call the module attribute. However, I don't know what exactly I should do when only part of the model is wrapped in DataParallel, and I couldn't find similar issues online.
st177151
Kevin57: "I wrap the visual network in DataParallel, and the visual network is included in the final actor-critic model. However, when I export the final model to ONNX, it gives a NoneType error when reading."

Hey @Kevin57, could you please create an issue on the PyTorch GitHub with the error log? And please also add an onnx label to that issue. Thanks!

"I read from other posts and understand that if I wrapped the final model in DataParallel, I could call the module attribute. However, I don't know what exactly I should do when only part of the model is wrapped in DataParallel, and I couldn't find similar issues online."

Could you please show a code snippet? Is it something like:

    class MyModule(nn.Module):
        def __init__(self):
            super().__init__()
            self.l1 = nn.Linear(20, 20)
            self.l2 = DataParallel(nn.Linear(20, 20))

If this is the case, does the following work?

    net = MyModule()
    net.l2.module  # this should return the non-DataParallel linear module

cc @VitalyFedyunin
st177152
Hi! Thank you for your response! I have submitted an issue on the PyTorch GitHub, but I am not able to add a label; the title is "DataParallel to ONNX". For the code snippet, I think that is a similar example to mine! I tried to use something like net.l2.module from your code snippet, but it does not work and gives me the following error message:

    File "/home/qi/mlagents_gesthor/ml-agents/mlagents/trainers/torch/model_serialization.py", line 119, in export_policy_model
        self.policy.actor_critic.module,
    File "/home/qi/gesthor/lib/python3.8/site-packages/torch/nn/modules/module.py", line 778, in __getattr__
        raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
    torch.nn.modules.module.ModuleAttributeError: 'SeparateActorCritic' object has no attribute 'module'

where SeparateActorCritic is similar to net in your code snippet.
st177153
Are forward and backward still synchronization points in DDP even if they are inside a no_sync() context? My understanding is that no_sync prevents gradient averaging, but I was wondering if it disables syncing completely.
st177154
Hey @lkp411, no_sync() disables the following code, which should completely disable communication across processes (unless you are using DDP.join()). See github.com pytorch/pytorch/blob/b1907f5ebcaeb7405aa3d9d8025a7a5eeb0ee590/torch/nn/parallel/distributed.py#L710-L720:

    if torch.is_grad_enabled() and self.require_backward_grad_sync:
        self.require_forward_param_sync = True
        # We'll return the output object verbatim since it is a freeform
        # object. We need to find any tensors in this object, though,
        # because we need to figure out which parameters were used during
        # this forward pass, to ensure we short circuit reduction for any
        # unused parameters. Only if `find_unused_parameters` is set.
        if self.find_unused_parameters:
            self.reducer.prepare_for_backward(list(_find_tensors(output)))
        else:
            self.reducer.prepare_for_backward([])
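For completeness, the typical way no_sync() is used, sketched with assumed names (ddp_model, loss_fn, optimizer, data, accum_steps): local gradients accumulate without any inter-process traffic, and only the final backward of each group triggers the all-reduce.

```python
for step, (inp, target) in enumerate(data):
    if (step + 1) % accum_steps != 0:
        with ddp_model.no_sync():
            # backward() only accumulates into the local .grad fields, no communication
            loss_fn(ddp_model(inp), target).backward()
    else:
        # this backward() all-reduces the accumulated gradients across ranks
        loss_fn(ddp_model(inp), target).backward()
        optimizer.step()
        optimizer.zero_grad()
```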
st177155
Hey @mrshenli, thanks so much for your reply. What are the other sync points in DDP? I remember reading somewhere here that copying data from the GPU to the CPU forces a sync. Is this true? And does it still happen in a no_sync context? Once again, thanks for your time.
st177156
lkp411: I remember reading somewhere here that copying data from the gpu to the cpu forces a sync. Is this true? And does it still happen in a no_sync context? Yes, this is true, since you would need the calculated tensor in order to push it to the CPU (thus the CPU has to synchronize on the GPU workload). This should also be the case in a DDP setup with no_sync().
st177157
Hey @lkp411, in the context of DDP, there are two different types of synchronizations, intra-process (CUDA stream) and inter-process (collective comm). The gradient averaging (AllReduce) is inter-process sync and the CPU-to-GPU copy is a intra-process sync. Which ones are you referring to in the following question? What are the other sync points in DDP?
st177158
I was referring to interprocess syncs. Are there any other sync points besides the ddp constructor, forward, backward and collective communication calls? And does calling torch.cuda.synchronize(rank) in a specific process after issuing an async collective communication call (for example an all_reduce) block till the result of the collective communication call is available? Also since we’re on the topic, are there plans to add sparse all_reduce capabilities to the NCCL backend?
st177159
I trained my model on two nodes, and it hangs during initialization. [screenshots of the launch command and the hang] I then set NCCL_DEBUG=INFO and NCCL_DEBUG_SUBSYS=ALL to see what's going on, but there is no output file.
st177160
Was there any error message? Does it behave differently if you replace gpu10 with its IP address? then I add NCCL_DEBUG=INFO and NCCL_DEBUG_SUBSYS=all to see what’s going on, but there is no output file. It’s possible that init_process_group failed at rendezvous stage (process ip/port discovery using the master as a leader), so that it has not reached NCCL code yet.
st177161
No difference, and it just reports a connect() timeout. But when I train on a single node with 2 GPUs using CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 alphachem_main.py, it works well.
st177162
wuchiz: "But when I trained it on a single node with 2 gpus using CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 alphachem_main.py, it works well."

This probably means the two machines cannot talk to each other using the given configuration. Have you tried setting NCCL_SOCKET_IFNAME to point to the correct NIC?
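For example (set before init_process_group on every node; the interface name is whatever ip addr shows for the NIC that actually connects the machines, which turns out to be eno1 in the follow-up below):

```python
import os

os.environ["NCCL_SOCKET_IFNAME"] = "eno1"   # the NIC connecting the two nodes
os.environ["GLOO_SOCKET_IFNAME"] = "eno1"   # same idea if gloo/rendezvous is also involved
os.environ["NCCL_DEBUG"] = "INFO"           # NCCL prints the interface it picked to stdout
```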
st177163
It still hangs at the initialization step after setting this environment variable to eth0. [screenshot]
st177164
I figured out what was wrong: the name of the network interface is eno1 instead of eth0. However, it still hangs at this step; the error is shown in the attached screenshot.
st177165
I run single-machine, 2-GPU ResNet-based training with one process for each GPU. Each process prints a message at the start of each epoch. After a few minutes, one process draws ahead of the other. The difference ends up being multiple epochs, even though both processes do finish. How can this speed difference occur, given that DDP synchronizes the two processes during each call to backward()? Based on a note on a web site about different input sizes for different DDP processes, I tried

    with model.join():
        <training loop>

but observed no different behavior. How does DDP sneak past that backprop synchronization point? I unfortunately could not reproduce the problem in a simple case. But maybe I am missing some logic?

- Ubuntu 20.04
- PyTorch torch-1.7.1-py3.8
- torch.cuda.nccl.version(): 2708
- 2x Nvidia GTX Titan
- Single machine, 2 processes, one for each of the GPUs
st177166
Hi, unless an explicit synchronization (such as with a device to host copy or torch.cuda.synchronize) is triggered, this sort of skew is possible due to the different GPUs, but such a significant skew (many epochs) seems unlikely. Are there any other jobs/processes running on your GPU that may slow this down and is the problem consistently reproducible? As far as model.join() that API is to support different processes having different no. of inputs, is that your case here? If not, you would not need model.join().
st177167
Thank you for both answers, Rohan! Before responding I was trying to verify that moving the model save to after the optimizer step solves the issue, and I haven't had a chance to do that yet. The oddity with the out-of-sync processes is that they run in lockstep for quite a while, meaning several minutes, and then one of them slows way down. Often, once the faster process finishes, the slow one sits at 100% of both GPU and CPU (according to top). So something is seriously wrong. What confuses me is that I thought DDP forces synchronization as part of the backward() call to ensure the same gradients are used on all processes going forward. So how could two processes get out of sync at all, even beyond a single iteration of the training loop? Nobody else is running on the machine other than these two processes. Inputs should be of the same lengths; the model.join() attempt was just out of desperation. I run with drop_last in the DataLoader, so even the final batch is fully populated. If anything, the two processes are too decoupled, rather than one waiting for the other due to unequal inputs. The one difference from all the tutorial setups is that I am using k-fold cross-validation with a distributed sampler, so epochs are put together by train/validate over multiple splits (i.e. rotating the validation folds). But to the DDP mechanism that shouldn't make a difference, I would think. For now both processes run on the same machine to exclude an NCCL version mismatch, though my goal is cross-machine work. I did try updating NCCL and CUDA to (I think) version 11, but this did not fix the issue. Does any of the above point to a misunderstanding on my part? Of course, it could simply be a bug in the code somewhere. Andreas