st176268
Hi, @mrshenli I created a mini-repo: GitHub - SHu0421/Question-Repo. You can run it directly with bash train.sh. My torch version is 1.8.1 and the CUDA version is 10.2. I run the code on four Tesla V100 GPUs (one node). It hangs as before, with all GPUs at 100% usage.
st176269
Hello everyone, I’m new to PyTorch. I’m currently working on untrimmed video classification. The network I’m implementing does online learning, which means a batch size of 1. In addition, the loss is calculated on all frames of a single video (the loss is equal to the mean of the cross entropy across all frames). The tutorial DataParallel — PyTorch 1.8.1 documentation was not of great help since my batch size is 1, so the network runs on only one GPU.
st176270
Hey @takieddine_soualhi, I am trying to understand the use case. “In addition, the loss is calculated on all frames of a single video (the loss is equal to the mean of cross entropy across all frames).” Is it possible to treat each frame as a sample, so that the number of frames in the video becomes the batch size? If that’s possible, then you can let each GPU take care of a subset of the frames.
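A minimal sketch of that idea (illustrative only: the per-frame model, shapes, and frame count are placeholders, assuming the video is already decoded into a frames tensor):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical per-frame classifier; the real network would replace this.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10)).cuda()
model = nn.DataParallel(model)  # splits dim 0 (the frame dimension) across GPUs

frames = torch.randn(120, 3, 224, 224).cuda()  # one video with 120 frames
labels = torch.randint(0, 10, (120,)).cuda()   # one label per frame

logits = model(frames)                   # each GPU processes a chunk of frames
loss = F.cross_entropy(logits, labels)   # mean cross entropy over all frames
loss.backward()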
st176271
What’s up everyone, I currently have a distributed reinforcement learning framework built using PyTorch. Upon profiling my code, a major damper on my throughput (network updates per unit time) is getting the parameters (state_dicts) of my networks from the OrderedDict class torch uses to JSON format for sending over a network using gRPC. A code example of what I’m currently doing:

# convert torch state dict to json
for entry in actor_params:
    actor_params[entry] = actor_params[entry].cpu().data.numpy().tolist()
actor_params = json.dumps(actor_params)

where actor_params is just a model state_dict. To summarize, I just need a quick way to get from torch CPU state_dict → JSON (speed this block up). I do this for six networks sequentially, so that’s where my speed issue is. Any help or ideas are greatly appreciated. Cheers!
st176272
Solved by mrshenli in post #2 This is actually one of the reasons why we built PyTorch RPC, i.e., you don’t need to serialize tensors into json/string before passing it through communication. The native PyTorch RPC will serialize actor_params into a binary payload + a list of tensors, so that tensor storage/contents are kept as-…
st176273
This is actually one of the reasons why we built PyTorch RPC, i.e., you don’t need to serialize tensors into json/string before passing them through communication. The native PyTorch RPC will serialize actor_params into a binary payload + a list of tensors, so that tensor storage/contents are kept as-is. Then the TensorPipe backend can directly send those tensors to the destination.
RPC API: Distributed RPC Framework — PyTorch master documentation
Toy RL tutorial: Getting Started with Distributed RPC Framework — PyTorch Tutorials 1.8.0 documentation
More tutorials: PyTorch Distributed Overview — PyTorch Tutorials 1.8.0 documentation
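A rough sketch of what this can look like with RPC instead of gRPC+JSON (illustrative only: the worker names "learner"/"actor", the port, and receive_params are placeholders, and nn.Linear stands in for the real actor network):

import os
import torch.nn as nn
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp

def receive_params(actor_params):
    # Runs on the actor: tensors arrive as tensors, no JSON round-trip needed.
    return {name: p.shape for name, p in actor_params.items()}

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29501"
    name = "learner" if rank == 0 else "actor"
    rpc.init_rpc(name, rank=rank, world_size=world_size)
    if rank == 0:
        model = nn.Linear(4, 2)  # stand-in for one of the six networks
        shapes = rpc.rpc_sync("actor", receive_params, args=(model.state_dict(),))
        print(shapes)
    rpc.shutdown()

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2, join=True)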
st176274
Is there any update on [RFC] Add Windows support to torch.distributed package · Issue #42095 · pytorch/pytorch · GitHub, i.e. Windows support for this package? torch.distributed.rpc.is_available() returns False for me on my Windows machine.
st176275
Hey @theoryofjake, RPC is not available on Windows yet. Regarding that issue, the MSFT team helped a lot with enabling DDP on Windows, which is now available as a prototype feature in the latest release. cc @pbelevich
st176276
Thanks so much for your reply. I have one more question for you, as I’m now working on a Linux machine: I have read through your tutorials for the parameter server and the DDP example. I have an application that just needs to share model parameters from a GPU process to a different CPU process. Which makes the most sense: a parameter-server type application using the remote calls, DDP, or sending tensors with recv/send? Thanks again y’all.
st176277
theoryofjake: “I have read through your tutorials for the parameter server and the DDP example. I have an application that just needs to share model parameters from a GPU process to a different CPU process. Which makes the most sense: a parameter-server type application using the remote calls, DDP, or sending tensors with recv/send?”

Not PS: since the goal is just to pass parameters across two processes, a parameter server (PS) might be overkill, as a PS usually serves multiple parallel trainers.
Not DDP: since you need to synchronize parameters, DDP might not be a good fit either, as DDP synchronizes model gradients.
send/recv vs RPC: this depends on how the program is written. In general, send/recv is a better fit for single-program multi-data (SPMD) applications, while with RPC there is usually one driver/master that coordinates all computations in a cluster.
send/recv: the main difference between send/recv and RPC is that with send/recv both processes need to proceed at the same pace, i.e., when one process calls send, the other one must call recv. If this is how your program is designed, then send/recv should be sufficient (though you still need to convert your model parameters into one tensor and then call send/recv, or call one send/recv per parameter); see the sketch below.
RPC: when using RPC, you can program the entire logic on the master, and all other processes just block on the rpc.shutdown() call. You don’t need to worry about things like serializing a model or coordinating multiple processes.
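As a rough illustration of the send/recv option (a sketch only, not your exact setup: it uses the gloo backend, a toy nn.Linear model, and torch.nn.utils to flatten all parameters into one tensor so a single send/recv call suffices):

import os
import torch
import torch.nn as nn
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29502"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = nn.Linear(4, 2)  # both processes build the same architecture
    if rank == 0:
        # "GPU trainer" side: move parameters to CPU and send them as one flat tensor
        flat = parameters_to_vector(model.parameters()).detach().cpu()
        dist.send(flat, dst=1)
    else:
        # "CPU consumer" side: receive the flat tensor and unpack it into the local copy
        flat = torch.empty(sum(p.numel() for p in model.parameters()))
        dist.recv(flat, src=0)
        vector_to_parameters(flat, model.parameters())
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2, join=True)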
st176278
Hi there, reading the official doc totally confuses me. What is the definition of ‘world_size’ and ‘rank’ in torch.distributed.init_process_group()? Regarding the argument ‘world_size’, is it the total device count across machines, or just the total machine count? Regarding the argument ‘rank’, is it an index for each machine, or an index for each device? For instance, if I have 2 machines and there are 4 GPUs per machine, what values of ‘world_size’ and ‘rank’ do I have to set when I call torch.distributed.init_process_group()?
st176279
Solved by mrshenli in post #2 The concepts of world_size and rank are defined on processes (hence the name process_group). If you would like to create 8 processes, then the world_size should be 8, and the ranks for them should range from 0 to 7. It is up to the application to determine how to place processes to machines. In the …
st176280
The concepts of world_size and rank are defined on processes (hence the name process_group). If you would like to create 8 processes, then the world_size should be 8, and the ranks for them should range from 0 to 7. It is up to the application to determine how to place processes to machines. In the above cluster (2 machines, and 4 GPUs each), the best setup would be creating 4 processes on each machine, with each exclusively working on a different GPU.
st176281
“… the best setup would be creating 4 processes on each machine, …” In this case, world_size should be 4, each process on every machine should have a rank from 0 to 4? Then with this setup, how to define the rank of these two machines?
st176282
“… the best setup would be creating 4 processes on each machine, …” Hey @HuangLED, in this case, the world_size should be 8, and the ranks should range from 0-3 on the first machine and 4-7 on the second machine. This example might help explain: github.com/pytorch/examples (master/distributed/ddp).
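A minimal sketch of how those ranks might be computed in the 2-machine x 4-GPU setup (illustrative: node_id and local_rank would come from your launcher or environment variables, and the init_method address is a placeholder):

import torch
import torch.distributed as dist

def init_worker(node_id: int, local_rank: int):
    gpus_per_node = 4
    world_size = 2 * gpus_per_node               # 8 processes in total
    rank = node_id * gpus_per_node + local_rank  # 0-3 on machine 0, 4-7 on machine 1
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://node0-hostname:23456",  # placeholder address of rank 0's machine
        world_size=world_size,
        rank=rank,
    )
    torch.cuda.set_device(local_rank)  # each process drives exactly one GPU on its node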
st176283
I am running into a very frustrating issue involving Detectron2 and a multi-gpu setup on docker. It works fine without docker, but in docker I get the follow error after loading the coco data and then calling the trainer. Is there some NCCL setting that I’m not seeing that I have to set? Traceback (most recent call last): File "perception/isaac_kitti.py", line 367, in <module> args=(args,), File "/home/scenesearch/src/detectron2/detectron2/engine/launch.py", line 59, in launch daemon=False, File "/home/scenesearch/miniconda3/envs/scenesearch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/home/scenesearch/miniconda3/envs/scenesearch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes while not context.join(): File "/home/scenesearch/miniconda3/envs/scenesearch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 150, in join raise ProcessRaisedException(msg, error_index, failed_process.pid) torch.multiprocessing.spawn.ProcessRaisedException: -- Process 0 terminated with the following error: Traceback (most recent call last): File "/home/scenesearch/miniconda3/envs/scenesearch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap fn(i, *args) File "/home/scenesearch/src/detectron2/detectron2/engine/launch.py", line 94, in _distributed_worker main_func(*args) File "/home/scenesearch/perception/isaac_kitti.py", line 160, in train trainer = IsaacKittiTrainer(cfg) File "/home/scenesearch/src/detectron2/detectron2/engine/defaults.py", line 284, in __init__ data_loader = self.build_train_loader(cfg) File "/home/scenesearch/src/detectron2/detectron2/engine/defaults.py", line 473, in build_train_loader return build_detection_train_loader(cfg) File "/home/scenesearch/src/detectron2/detectron2/config/config.py", line 201, in wrapped explicit_args = _get_args_from_config(from_config, *args, **kwargs) File "/home/scenesearch/src/detectron2/detectron2/config/config.py", line 238, in _get_args_from_config ret = from_config_func(*args, **kwargs) File "/home/scenesearch/src/detectron2/detectron2/data/build.py", line 327, in _train_loader_from_config sampler = TrainingSampler(len(dataset)) File "/home/scenesearch/src/detectron2/detectron2/data/samplers/distributed_sampler.py", line 37, in __init__ seed = comm.shared_random_seed() File "/home/scenesearch/src/detectron2/detectron2/utils/comm.py", line 230, in shared_random_seed all_ints = all_gather(ints) File "/home/scenesearch/src/detectron2/detectron2/utils/comm.py", line 154, in all_gather group = _get_global_gloo_group() File "/home/scenesearch/src/detectron2/detectron2/utils/comm.py", line 89, in _get_global_gloo_group return dist.new_group(backend="gloo") File "/home/scenesearch/miniconda3/envs/scenesearch/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 2508, in new_group timeout=timeout) File "/home/scenesearch/miniconda3/envs/scenesearch/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 592, in _new_process_group_helper timeout=timeout) RuntimeError: [enforce fail at /opt/conda/conda-bld/pytorch_1616554800319/work/third_party/gloo/gloo/transport/tcp/device.cc:208] ifa != nullptr. 
Unable to find interface for: [0.31.32.145] I’ve tried a lot of configurations for NCCL, with the current version having the following set: export NCCL_SOCKET_IFNAME=eth0; export NCCL_IB_DISABLE=1; export NCCL_DEBUG=info; export NCCL_P2P_DISABLE=1 Below is the NCCL_DEBUG output, but I don’t see anything that would be suggestive of the actual error. There appears to be only one issue on the Detectron2 github page about this where they say this is a DDP problem, not a detectron concern. I wonder if this is actually a docker issue. 2039953:3182:3182 [0] NCCL INFO Bootstrap : Using [0]eth0:100.104.55.225<0> 2039953:3182:3182 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation 2039953:3182:3182 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1. 2039953:3182:3182 [0] NCCL INFO NET/Socket : Using [0]eth0:100.104.55.225<0> 2039953:3182:3182 [0] NCCL INFO Using network Socket NCCL version 2.7.8+cuda11.1 2039953:3183:3183 [1] NCCL INFO Bootstrap : Using [0]eth0:100.104.55.225<0> 2039953:3183:3183 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation 2039953:3183:3183 [1] NCCL INFO NCCL_IB_DISABLE set by environment to 1. 2039953:3183:3183 [1] NCCL INFO NET/Socket : Using [0]eth0:100.104.55.225<0> 2039953:3183:3183 [1] NCCL INFO Using network Socket 2039953:3183:3351 [1] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC 2039953:3182:3350 [0] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC 2039953:3182:3350 [0] NCCL INFO Channel 00/02 : 0 1 2039953:3182:3350 [0] NCCL INFO Channel 01/02 : 0 1 2039953:3183:3351 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64 2039953:3183:3351 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1 2039953:3182:3350 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64 2039953:3182:3350 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1 2039953:3182:3350 [0] NCCL INFO Channel 00 : 0[60] -> 1[70] via direct shared memory 2039953:3183:3351 [1] NCCL INFO Channel 00 : 1[70] -> 0[60] via direct shared memory 2039953:3182:3350 [0] NCCL INFO Channel 01 : 0[60] -> 1[70] via direct shared memory 2039953:3183:3351 [1] NCCL INFO Channel 01 : 1[70] -> 0[60] via direct shared memory 2039953:3182:3350 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer 2039953:3182:3350 [0] NCCL INFO comm 0x7f419c002dd0 rank 0 nranks 2 cudaDev 0 busId 60 - Init COMPLETE 2039953:3183:3351 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer 2039953:3182:3182 [0] NCCL INFO Launch mode Parallel 2039953:3183:3351 [1] NCCL INFO comm 0x7f818c002dd0 rank 1 nranks 2 cudaDev 1 busId 70 - Init COMPLETE Here is my Dockerfile: FROM nvcr.io/nvidia/pytorch:20.11-py3 USER root RUN useradd -ms /bin/bash scenesearch RUN apt-get update ARG DEBIAN_FRONTEND=noninteractive ENV TZ=America/New_York RUN apt-get install libgl1-mesa-glx -y RUN apt-get install ffmpeg libsm6 libxext6 -y RUN apt-get install -y software-properties-common &&\ apt-add-repository universe &&\ apt-get update &&\ apt-get install -y python3-pip RUN apt-get install -y libpng16-16 libtiff5 libjpeg-turbo8 wget && rm -rf /var/lib/apt/lists/* WORKDIR /home/scenesearch COPY . 
/home/scenesearch RUN chmod -R 777 ./ USER scenesearch RUN wget \ https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \ # && mkdir ./.conda \ && bash Miniconda3-latest-Linux-x86_64.sh -b \ && rm -f Miniconda3-latest-Linux-x86_64.sh ENV PATH="./miniconda3/bin:${PATH}" ARG PATH="./miniconda3/bin:${PATH}" RUN conda create -n scenesearch python=3.7.9 SHELL ["conda", "run", "-n", "scenesearch", "/bin/bash", "-c"] RUN conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia RUN pip install --upgrade pip RUN pip install nuscenes-devkit RUN pip install pygame networkx RUN pip install --no-cache-dir -r requirements.txt ENV NVIDIA_DRIVER_CAPABILITIES=all
st176284
Solved by mrshenli in post #2 Hey @crnyu, the above error seems to suggest the code is using gloo instead of NCCL. Have you tried configuring GLOO_SOCKET_IFNAME instead?
st176285
crnyu: RuntimeError: [enforce fail at /opt/conda/conda-bld/pytorch_1616554800319/work/third_party/gloo/gloo/transport/tcp/device.cc:208] ifa != nullptr. Unable to find interface for: [0.31.32.145]
Hey @crnyu, the above error seems to suggest the code is using gloo instead of NCCL. Have you tried configuring GLOO_SOCKET_IFNAME instead?
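For example, something along these lines before launching the trainer (a sketch; eth0 is whatever interface ip addr reports inside the container):

import os

# Tell both backends which network interface to bind to inside the container.
os.environ["GLOO_SOCKET_IFNAME"] = "eth0"   # used by the gloo group Detectron2 creates
os.environ["NCCL_SOCKET_IFNAME"] = "eth0"   # used by the NCCL default group
# ... then call launch()/init_process_group as before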
st176286
Hi everyone, I’m trying to use nn.DataParallel for multi-GPU training, but I encountered the following error. I had a look at the various threads, but I wasn’t able to fix the issue: RuntimeError: Expected tensor for argument #1 ‘input’ to have the same device as tensor for argument #2 ‘weight’; but device 1 does not equal 0 (while checking arguments for cudnn_convolution) I’ve used the following lines before importing torch to limit the visible GPUs (as I do for training on a single GPU):

os.environ["CUDA_VISIBLE_DEVICES"] = "0, 2, 3"
os.getcwd()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

The model is used in this way (with weights from another training):

model = ResNet()
model.load_state_dict(torch.load('rot_weights.pt'))
model = nn.DataParallel(model, device_ids=[0, 2, 3]).to(device)
optimizer = optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

The model is a ResNet-18 from which I want just the feature extraction part, modified for my regression problem:

class ResNet(nn.Module):
    def __init__(self):
        super(ResNet, self).__init__()
        self.model = pretrainedmodels.__dict__['resnet18'](pretrained='imagenet')
        self.regression_layer = nn.Sequential(nn.Linear(512, 6))

    def forward(self, x):
        batch_size, _, _, _ = x.shape  # taking out batch_size from input image
        x = self.model.features(x)
        x = torch.nn.functional.adaptive_avg_pool2d(x, 1).reshape(batch_size, -1)  # then reshaping the batch_size
        x = self.regression_layer(x)
        x = compute_rotation_matrix_from_ortho6d(x.view(batch_size, -1))
        return x

    def compute_rotation_matrix_l2_loss(self, gt_rotation_matrix, predict_rotation_matrix):
        loss_function = nn.MSELoss()
        loss = loss_function(predict_rotation_matrix, gt_rotation_matrix)
        return loss

    def compute_rotation_matrix_geodesic_loss(self, gt_rotation_matrix, predict_rotation_matrix):
        theta = compute_geodesic_distance_from_two_matrices(gt_rotation_matrix, predict_rotation_matrix)
        error = theta.mean()
        return error

Any suggestion would be really appreciated!
st176287
If you mask the GPUs via CUDA_VISIBLE_DEVICES, the device ids inside the script will be remapped to [0, nb_gpus-1], which means you should use 0, 1, 2 in the script. Could you change it and see if that solves the issue?
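Concretely, something like this (a sketch of the remapping; nn.Linear stands in for the real model):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2,3"   # set before torch initializes CUDA

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(10, 10)  # stand-in for the ResNet
# device_ids refer to the *remapped* ids, i.e. physical GPUs 0, 2, 3 become 0, 1, 2
model = nn.DataParallel(model, device_ids=[0, 1, 2]).to(device)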
st176288
Hey @ptrblck, thanks for the prompt reply. Unfortunately, I still get the same error: RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution) Full error below: File "/home/chiara/my_workspace/RegressionCNN_rot/main.py", line 162, in main train_loss_epoch, train_error_epoch = training(model, train_loader) File "/home/chiara/my_workspace/RegressionCNN_rot/main.py", line 94, in training out_rot_mat = model(image_batch) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 1 on device 1. Original Traceback (most recent call last): File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/chiara/my_workspace/RegressionCNN_rot/model.py", line 19, in forward x = self.model.features(x) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/pretrainedmodels/models/torchvision_models.py", line 322, in features x = self.conv1(input) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 399, in forward return self._conv_forward(input, self.weight, self.bias) File "/home/chiara/anaconda3/envs/python3.7/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 396, in _conv_forward self.padding, self.dilation, self.groups) RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution) I have also tried without specifying the devide_ids, but nothing changed.
st176289
Thanks for the update. I’ve rechecked your initial code and remembered that I’ve seen a similar issue before in pretrainedmodels and guess you might also be hitting this issue 7. It seems the repository is breaking nn.DataParallel, so you could either use another repo (e.g. torchvision.models) or use DistributedDataParallel instead (I haven’t verified that it’s working with pretrainedmodels, but it might).
st176290
Thanks for the suggestions @ptrblck. I’m trying to use torchvision.models but I think I need to modify my model class. When I replace the line self.model = pretrainedmodels.__dict__['resnet18'](pretrained='imagenet') with self.model = models.resnet18(pretrained=True) I get the following error: AttributeError: ‘ResNet’ object has no attribute ‘features’
st176291
Yes, you are right that some modifications would be needed, in case you depend on the (missing) .features attribute. The torchvision implementation can be found here and you’ll see that the layers (or blocks) are called directly instead of using a features/classifier split. You could create a custom model by reusing torchvision.models.resnet18 and overriding the forward method. Here is an example how to do it:

class MyResNet18(nn.Module):
    def __init__(self, resnet):
        super().__init__()
        # create features branch using https://github.com/pytorch/vision/blob/2a52c2dca73513d0d0c3e2a505aed05e5b9aa792/torchvision/models/resnet.py#L230-L246
        self.features = nn.Sequential(
            resnet.conv1,
            resnet.bn1,
            resnet.relu,
            resnet.maxpool,
            resnet.layer1,
            resnet.layer2,
            resnet.layer3,
            resnet.layer4
        )
        self.avgpool = resnet.avgpool
        self.fc = resnet.fc

    def _forward_impl(self, x: torch.Tensor) -> torch.Tensor:
        # See note [TorchScript super()]
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self._forward_impl(x)

# create standard model and reuse in custom one
model = models.resnet18()
print(model)
custom_model = MyResNet18(model)

# check outputs
x = torch.randn(2, 3, 224, 224)
out = model(x)
custom_out = custom_model(x)

# compare outputs to make sure the model works as intended
print((out - custom_out).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
print(custom_model.features)
st176292
Hi @ptrblck. It seems that following your suggestion the error was solved, thanks a lot!
st176293
Hi, I noticed that each async torch.distributed request object holds a pointer to the sent tensor, therefore the buffer memory is not freed in the sender process until we explicitly call wait() or is_completed(). I am looking for a way to overcome this. any suggestions?
st176294
Looks like only the work objects from the Gloo and MPI ProcessGroups hold those tensors. Does it work if you do not hold that async work/request object in the application? Gloo and MPI ProcessGroup both have a queue and a runLoop that keep those work objects alive until processed. Curious: if you don’t call wait(), how do you know when the communication is finished and it is safe to consume the output?
st176295
If we destroy async work objects before completion we get “Attempted destruction of AsyncWork before work has completed, terminating the program.” Therefore I must keep all the sent request objects in the application. (Answering your question: I don’t consume the output of these (isend) messages; of course, the receiver calls wait() before consuming.) I’m using MPI (CUDA-aware OpenMPI) with async p2p messages. The goal is simple: send tons of isend messages, have the memory freed automatically on each completion, and finally wait() on all the isends at the end of the program.
st176296
I see. Only the send/recv/recvAnysource APIs return AsyncWork, and the AsyncWork is not stored in the queue. Other collectives use a different WorkEntry data structure, which is stored in the queue. Looks to me like we should consolidate these APIs to implement the same behavior. @teng-li @pietern Any reason for having different async work data structures for the MPI ProcessGroup?
st176297
Created an issue to track this. In the meantime, does it work for you if you put the async work into a queue and launch a separate thread to wait and dequeue? The wait API does release the GIL, so it shouldn’t block the main thread. This will lead to a similar behavior as if we consolidated send/recv with the collective async work. It won’t be perfect, as it does not guarantee that tensors are freed immediately when their comm is done if an earlier send/recv finishes later. A better solution would need to install callbacks into the MPI thread, which requires a larger revamp, and I am not sure if MPI supports that.
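A rough sketch of that cleaner-thread idea (illustrative only; the work objects come from dist.isend and are drained in FIFO order):

import queue
import threading
import torch.distributed as dist

pending = queue.Queue()

def cleaner():
    while True:
        work = pending.get()
        if work is None:   # sentinel to stop the thread at shutdown
            break
        work.wait()        # releases the GIL; dropping the reference lets the buffer be freed

threading.Thread(target=cleaner, daemon=True).start()

# in the sending loop:
#   work = dist.isend(tensor, dst=peer)
#   pending.put(work)
# at shutdown:
#   pending.put(None)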
st176298
Only send/recv/recvAnysource APIs returns AsyncWork and the AsyncWork is not stored in the queue You mean isends too right?
st176299
seliad: You mean isends too right? Oh, sorry, I meant the C++ send/recv/recvAnysource API. The isend API is Python only. Both send and isend call into the same C++ send API, the only difference is whether it waits on the work.
st176300
@mrshenli I tried the solution with the cleaner thread and it doesn’t work: it seems like the wait() in the cleaner thread stops the whole process. I think this is where it happens in the code. I could only make it work with

while not r.is_completed():
    pass

but performance suffered a lot (~2x slowdown) compared to my previous solution.
st176301
Hi, I want to follow up on this thread. What is the status of this feature? What is the best practice to free the requests now? @seliad I wonder what you chose to implement eventually?
st176302
I see the issue @mrshenli opened is still open. I ended up adding some python code to handle the freeing of buffers. Just keep the requests and occasionally check for completion and clean.
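Something along these lines (a sketch of that bookkeeping; the helper names are made up):

import torch.distributed as dist

pending = []  # (work, tensor) pairs we still own

def isend_tracked(tensor, dst):
    work = dist.isend(tensor, dst=dst)
    pending.append((work, tensor))

def reap_completed():
    # Drop references for finished sends so their buffers can be freed.
    pending[:] = [(w, t) for (w, t) in pending if not w.is_completed()]

def drain_all():
    for work, _ in pending:
        work.wait()
    pending.clear()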
st176303
I am running RPC with 3 nodes. In my code, master node is successfully able to call worker1’s and worker2’s forward functions and get the results back. After that, loss backprop step is executed on the master node, which takes quite some time, due to that I am getting below error on master node, dist_autograd.backward(context_id, [losses]) RuntimeError: Error on Node 0: ETIMEDOUT: connection timed out On the worker nodes I am getting following output, Failed to respond to 'Shutdown Proceed' in time, got error RPCErr:1:RPC ran for more than set timeout (5000 ms) and will now be marked with an error. [W tensorpipe_agent.cpp:687] RPC agent for worker2 encountered error when sending outgoing request #92 to master: ETIMEDOUT: connection timed out <above line many times> Process Process-1: [W tensorpipe_agent.cpp:545] RPC agent for worker2 encountered error when reading incoming request from master: ECONNRESET: connection reset by peer (this is expected to happen during shutdown) EDIT: StackTrace: Process Process-1: Traceback (most recent call last): File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap self.run() File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "main.py", line 70, in workers_init_rpc rpc.shutdown() File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/site-packages/torch/distributed/rpc/api.py", line 78, in wrapper return func(*args, **kwargs) File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/site-packages/torch/distributed/rpc/api.py", line 284, in shutdown _get_current_rpc_agent().join() RuntimeError: [/opt/conda/conda-bld/pytorch_1607370156314/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [192.168.13.205]:28380 Init RPC code: def workers_init_rpc(rank, world_size, options): # options = rpc.ProcessGroupRpcBackendOptions(num_send_recv_threads=128, # rpc_timeout=0, # init_method="tcp://192.168.13.46:2222" ) print(f'Rank {rank}: Proceed to init rpc') rpc.init_rpc( f"worker{rank}", rank=rank, world_size=world_size, rpc_backend_options=options ) print(f'Rank: {rank}, rpc init done') if rank == 0: print('Proceed to run_master') run_master() # block until all rpcs finish rpc.shutdown() if __name__=="__main__": world_size = 3 processes = [] options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=128, rpc_timeout = 10*60) rank = int(sys.argv[1]) p = mp.Process(target=workers_init_rpc, args=(rank, world_size, options)) p.start() processes.append(p) # mp.spawn(run_worker, args=(world_size,), nprocs=world_size, join=True) for p in processes: p.join() I have tried to increase rpc_timeout parameter in TensorPipeRpcBackendOptions. But it’s not working. How should I keep the connection ON for longer time durations?
st176304
By default, after how long did the backward time out? And when you increased rpc_timeout to a very large value, how long did it take for the backward to time out?
st176305
I set rpc_timeout = 3600 (1 hr) and it ran for around 2 mins 11 seconds (after rpc_init) then timed out. I also got following (After rpc_init) on the workers that I forgot to mention in my question, Failed to respond to 'Shutdown Proceed' in time, got error RPCErr:1:RPC ran for more than set timeout (5000 ms) and will now be marked with an error. This gets printed before the response (for workers) I have shown in my question.
st176306
To be specific like you asked, dist_autograd timed out in around 58 seconds. I put a timestamp before dist_autograd.backward(context_id, [losses]) then calculated the duration when it threw ETIMEDOUT error.
st176307
@Yanli_Zhao I have made a small edit (in stacktrace of workers) in the question please take a look.
st176308
Thanks for reporting this @matrix! If possible, could you paste a small repro (i.e. that shows run_master) that results in this error? Would be great to post this to Issues · pytorch/pytorch · GitHub so we can determine if this is an actual bug.
st176309
Code to reproduce the error: import sys import torch.distributed.rpc as rpc import torch import time import torch.multiprocessing as mp import torch.distributed.autograd as dist_autograd import torch.nn as nn from torch.distributed.rpc import RRef from torch.distributed.optim import DistributedOptimizer from torch import optim def _call_method(method, rref, *args, **kwargs): r""" a helper function to call a method on the given RRef """ return method(rref.local_value(), *args, **kwargs) def _remote_method(method, rref, *args, **kwargs): r""" a helper function to run method on the owner of rref and fetch back the result using RPC """ return rpc.rpc_sync( rref.owner(), _call_method, args=[method, rref] + list(args), kwargs=kwargs ) class Net1(nn.Module): def __init__(self): super(Net1, self).__init__() self.layer = nn.Linear(10, 20) def parameter_rrefs(self): return [RRef(p) for p in self.parameters() if p.requires_grad] def forward(self, x): return self.layer(x) class Net2(nn.Module): def __init__(self): super(Net2, self).__init__() self.layer = nn.Linear(20, 1) def parameter_rrefs(self): return [RRef(p) for p in self.parameters() if p.requires_grad] def forward(self, x): return self.layer(x) class Net(nn.Module): def __init__(self, *args, **kwargs): super(Net, self).__init__() self.encoder_rref = rpc.remote( "worker1", Net1, args = args, kwargs = kwargs ) self.decoder_rref = rpc.remote( "worker2", Net2, args = args, kwargs = kwargs ) def parameter_rrefs(self): remote_params = [] remote_params.extend(self.encoder_rref.remote().parameter_rrefs().to_here()) remote_params.extend(self.decoder_rref.remote().parameter_rrefs().to_here()) return remote_params def forward(self, x): x = _remote_method(Net1.forward, self.encoder_rref, x) x = _remote_method(Net2.forward, self.decoder_rref, x) return x def run_master(): model = Net() opt = DistributedOptimizer( optim.SGD, model.parameter_rrefs(), lr=0.05, ) for i in range(10): with dist_autograd.context() as context_id: x = torch.randn(32, 10) loss = model(x) loss = loss.sum() print('Before dist_autograd') dist_autograd.backward(context_id, [loss]) opt.step(context_id) def workers_init_rpc(rank, world_size, options): print(f'Rank {rank}: Proceed to init rpc') rpc.init_rpc( f"worker{rank}", rank=rank, world_size=world_size, rpc_backend_options=options ) print(f'Rank: {rank}, rpc init done') if rank == 0: print('Proceed to run_master') run_master() rpc.shutdown() if __name__=="__main__": world_size = 3 processes = [] options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=16, rpc_timeout = 60*60) rank = int(sys.argv[1]) p = mp.Process(target=workers_init_rpc, args=(rank, world_size, options)) p.start() processes.append(p) for p in processes: p.join() Save this in a .py file. How to run: Env. variable exports for all of these 3 nodes: export MASTER_ADDR=<Node 0 IP> export MASTER_PORT=<Node 0 port> export GLOO_SOCKET_IFNAME=network interface export TP_SOCKET_IFNAME=network interface On Node 0: python <filename> 0 On Node 1: python <filename> 1 On Node 2: python <filename> 2 PyTorch Version: 1.7.1
st176310
Hey @matrix, I made some minor edits to the source code and it works for me locally. See the code below. The only problem I noticed with the original code was that TensorPipeRpcBackendOptions is not picklable, so you cannot pass it as multiprocess args. I moved that to workers_init_rpc. import sys import torch.distributed.rpc as rpc import torch import time import torch.multiprocessing as mp import torch.distributed.autograd as dist_autograd import torch.nn as nn from torch.distributed.rpc import RRef from torch.distributed.optim import DistributedOptimizer from torch import optim import os def _call_method(method, rref, *args, **kwargs): r""" a helper function to call a method on the given RRef """ return method(rref.local_value(), *args, **kwargs) def _remote_method(method, rref, *args, **kwargs): r""" a helper function to run method on the owner of rref and fetch back the result using RPC """ return rpc.rpc_sync( rref.owner(), _call_method, args=[method, rref] + list(args), kwargs=kwargs ) class Net1(nn.Module): def __init__(self): super(Net1, self).__init__() self.layer = nn.Linear(10, 20) def parameter_rrefs(self): return [RRef(p) for p in self.parameters() if p.requires_grad] def forward(self, x): return self.layer(x) class Net2(nn.Module): def __init__(self): super(Net2, self).__init__() self.layer = nn.Linear(20, 1) def parameter_rrefs(self): return [RRef(p) for p in self.parameters() if p.requires_grad] def forward(self, x): return self.layer(x) class Net(nn.Module): def __init__(self, *args, **kwargs): super(Net, self).__init__() self.encoder_rref = rpc.remote( "worker1", Net1, args = args, kwargs = kwargs ) self.decoder_rref = rpc.remote( "worker2", Net2, args = args, kwargs = kwargs ) def parameter_rrefs(self): remote_params = [] remote_params.extend(self.encoder_rref.remote().parameter_rrefs().to_here()) remote_params.extend(self.decoder_rref.remote().parameter_rrefs().to_here()) return remote_params def forward(self, x): x = _remote_method(Net1.forward, self.encoder_rref, x) x = _remote_method(Net2.forward, self.decoder_rref, x) return x def run_master(): model = Net() opt = DistributedOptimizer( optim.SGD, model.parameter_rrefs(), lr=0.05, ) for i in range(10): with dist_autograd.context() as context_id: x = torch.randn(32, 10) loss = model(x) loss = loss.sum() print('Before dist_autograd') dist_autograd.backward(context_id, [loss]) opt.step(context_id) print("finished training") def workers_init_rpc(rank, world_size): os.environ['MASTER_ADDR'] = 'localhost' os.environ['MASTER_PORT'] = '29500' print(f'Rank {rank}: Proceed to init rpc') options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=16, rpc_timeout = 60*60) rpc.init_rpc( f"worker{rank}", rank=rank, world_size=world_size, rpc_backend_options=options ) print(f'Rank: {rank}, rpc init done') if rank == 0: print('Proceed to run_master') run_master() rpc.shutdown() if __name__=="__main__": world_size = 3 processes = [] #options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=16, rpc_timeout = 60*60) #rank = int(sys.argv[1]) #p = mp.Process(target=workers_init_rpc, args=(rank, world_size, options)) mp.spawn(workers_init_rpc, args=(world_size, ), nprocs=3, join=True) #p.start() #processes.append(p) #for p in processes: # p.join() How to run python <filename> Output Rank 2: Proceed to init rpc Rank 0: Proceed to init rpc Rank 1: Proceed to init rpc Rank: 0, rpc init done Proceed to run_master Rank: 2, rpc init done Rank: 1, rpc init done Before dist_autograd Before dist_autograd Before dist_autograd Before 
dist_autograd Before dist_autograd Before dist_autograd Before dist_autograd Before dist_autograd Before dist_autograd Before dist_autograd finished training
st176311
@mrshenli I’m wondering if we can somehow report better errors when passing unpicklable objects in torch.multiprocessing? Ideally it seems like this error would’ve been caught earlier instead of manifesting in this confusing way.
st176312
@rvarm1 the printed error in my local test is indeed cannot pickle 'TensorPipeRpcBackendOptions' object. In the original code, pickling TensorPipeRpcBackendOptions happens before initializing RPC or gloo. So I suspect @matrix was hitting a different error, but I cannot reproduce that error locally. Traceback (most recent call last): File "tmp1.py", line 129, in <module> mp.spawn(workers_init_rpc, args=(world_size, options), nprocs=3, join=True) File "/raid/shenli/pytorch/torch/multiprocessing/spawn.py", line 230, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/raid/shenli/pytorch/torch/multiprocessing/spawn.py", line 179, in start_processes process.start() File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/private/home/shenli/local/miniconda/envs/torchdev/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) TypeError: cannot pickle 'TensorPipeRpcBackendOptions' object
st176313
@mrshenli Above code works with mp.spawn (locally) but not with mp.Process (distributed way) way (Even after moving options init to workers_init_rpc function). Reason I am not using mp.Spawn is, like I said in my previous post I have 3 nodes which has variable number of GPUs (1, 2 & 4). My model is divided into 2 parts. First part reside on a node which has 4 GPUs, and modules of this part are divided (s.t. each GPU holds nearly equal number of parameters) onto these 4 GPUs. Second part reside on the machine which has 2 GPUs, and it’s modules are divided onto 2 GPUs (just like the previous one). Unfortunately, I can’t fit the whole model on a GPU, all of these GPUs have 8 GB VRAM which is not enough for my case. That is why I am using mp.Process methodology to do the training. Furthermore, node with 1 GPU acts as a master, it encompasses these 2 parts of the model into one. Calls them (using rpc_sync) sequentially to run a full forward pass. Also, this dividing work (copying modules to GPUs) is being done in __init__ (think of Net1 and Net2 as 2 parts of the model) methods, could this be a problem when using mp.spawn? When using mp.Process master node is successfully able to fetch the results from workers by making rpc_sync calls. Problem comes when executing dist_autograd.backward(context_id, [losses]) on master node, it hangs on this line and due to that ETIMEDOUT error is generated (After moving options line to workers_init_rpc method). This is true for my case (with GPUs) & the code I posted here (to repro. this error). Note: PyTorch Version: 1.7.1 (In all 3 of them) Forgot to mention a observation, matrix: StackTrace: Process Process-1: Traceback (most recent call last): File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap self.run() File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "main.py", line 70, in workers_init_rpc rpc.shutdown() File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/site-packages/torch/distributed/rpc/api.py", line 78, in wrapper return func(*args, **kwargs) File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/site-packages/torch/distributed/rpc/api.py", line 284, in shutdown _get_current_rpc_agent().join() RuntimeError: [/opt/conda/conda-bld/pytorch_1607370156314/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [192.168.13.205]:28380 Every time I execute this (on workers), a different port is used (its not constant). Below, port 28380 is used on a particular worker. matrix: RuntimeError: [/opt/conda/conda-bld/pytorch_1607370156314/work/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [192.168.13.205]:28380 Is this okay?
st176314
@mrshenli Try to run the code like this, matrix: How to run: Env. variable exports for all of these 3 nodes: export MASTER_ADDR=<Node 0 IP> export MASTER_PORT=<Node 0 port> export GLOO_SOCKET_IFNAME=network interface export TP_SOCKET_IFNAME=network interface On Node 0: python <filename> 0 On Node 1: python <filename> 1 On Node 2: python <filename> 2 You might not be able to regenerate this error locally using mp.spawn. Thank you, all for helping me out.
st176315
matrix: “Furthermore, node with 1 GPU acts as a master, it encompasses these 2 parts of the model into one. Calls them (using rpc_sync) sequentially to run a full forward pass. Also, this dividing work (copying modules to GPUs) is being done in __init__ (think of Net1 and Net2 as 2 parts of the model) methods, could this be a problem when using mp.spawn?”

Sorry about the delay. If rpc_sync succeeded in your environment, then it means at least the comm layer is working. So if the backward hangs, I would assume it’s something wrong with the backward instead of mp.spawn. But let me try Process instead of spawn anyway.
st176316
I tried the following two implementations. The first one uses mp.Process to spawn processes, and with the second one, I ran python test.py 0/1/2 in three different terminal tabs. Both work for me.

if __name__=="__main__":
    world_size = 3
    processes = []
    for rank in range(3):
        p = mp.Process(target=workers_init_rpc, args=(rank, world_size))
        p.start()
        processes.append(p)
    [p.join() for p in processes]

if __name__=="__main__":
    world_size = 3
    rank = int(sys.argv[1])
    workers_init_rpc(rank, world_size)

Is the code you shared above exactly the same as where you hit the hang problem? There is a known gap in distributed autograd. We currently only support fast-mode distributed autograd, which means all RPC comm operations (rpc_sync, rpc_async, remote) must participate in the backward, otherwise the backward would hang. See more details in the doc below. https://pytorch.org/docs/stable/rpc/distributed_autograd.html
st176317
@mrshenli I made some changes like you said, now my code works. However there’s a problem. On master node I get following output as I have put prints in my code: Batch forward complete Epoch: 0, Batch Id: 0,Train Loss: 16.620447158813477 Before dist_autograd (Before executing dist_autograd.backward()) Step done (After optimizer.step() is executed, of course its an instance of DistributedOptimizer) Batch forward complete (Completion of the forward pass of a batch) Epoch: 0, Batch Id: 1,Train Loss: 12.148786544799805 Before dist_autograd It gets stuck here. There’s no error messages on either of the nodes. Here rpc_timeout is set to 1 hr. Then I changed rpc_timeout to 1 minute, and got the following on master node Batch forward complete Epoch: 0, Batch Id: 0,Train Loss: 58.77790832519531 Before dist_autograd Step done Batch forward complete Epoch: 0, Batch Id: 1,Train Loss: 40.98801803588867 Before dist_autograd Process Process-1: Traceback (most recent call last): File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap self.run() File "/home/user/anaconda3/envs/pytorch2/lib/python3.7/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "main.py", line 65, in workers_init_rpc run_master() File "main.py", line 49, in run_master train_one_epoch(model, opt, data_loader, epoch, print_freq=10) File "/home/user/Documents/Incremental_Learning/demo/helpers/engine.py", line 45, in train_one_epoch dist_autograd.backward(context_id, [losses]) RuntimeError: Error on Node 0: RPCErr:1:RPC ran for more than set timeout (60000 ms) and will now be marked with an error and below on the workers, [W tensorpipe_agent.cpp:545] RPC agent for worker1 encountered error when reading incoming request from worker0: ECONNRESET: connection reset by peer (this is expected to happen during shutdown) It does timeout after 1 minute. There’s no problem when processing first batch, it’s smooth doesn’t take time. But when processing second batch dist_autograd hangs, as you can see in above output. This is strange behaviour.
st176318
Hey @matrix, sorry about the delay again. We really need to work on our oncall procedure to cover pending discussions, not just new discussions. For the timeout error, the first thing I would check is whether the network is indeed working. To do that, one way is to call rpc_sync (not remote or rpc_async) between all pairs of nodes. If that works, it means the network indeed works. From the timeout message, I cannot tell why the distributed autograd does not work. If you could share your latest code with me, I can grab three machines on AWS and try it.
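For example, a quick connectivity check could look like this (a sketch; it assumes RPC is already initialized with workers named worker0/worker1/worker2):

import torch.distributed.rpc as rpc

def ping(src_name):
    return f"pong, triggered by {src_name}"

def check_all_pairs(my_name, world_size):
    for rank in range(world_size):
        peer = f"worker{rank}"
        if peer == my_name:
            continue
        # rpc_sync blocks until the peer answers, so a hang here points at the network setup
        reply = rpc.rpc_sync(peer, ping, args=(my_name,), timeout=30)
        print(f"{my_name} -> {peer}: {reply}")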
st176319
I have 3 nodes, each with 2 GPUs. How can I distribute my model training? Does torch.nn.parallel.DistributedDataParallel (or a similar torch library) distribute training across multiple nodes and multiple GPUs? If not, what is the best alternative?
st176320
Solved by eqy in post #2 Yes, this is the purpose of DistributedDataParallel — PyTorch master documentation
st176321
Yes, this is the purpose of DistributedDataParallel — PyTorch master documentation 5
st176322
@eqy I also heard about Horovod; does it do the same thing? What is the best choice for the above scenario? Thank you very much for your response!!
st176323
“I also heard about Horovod, it does the same thing? What is the best choice for above scenario?”

Hey @bkuriach. It depends. If you would like something that works across frameworks (PyTorch/TensorFlow), the Horovod distributed package might be a better fit. But if you are already using PyTorch, PyTorch DDP might be a better fit. Quoting my own response from another post: one difference between PyTorch DDP and Horovod+PyTorch is that DDP overlaps backward computation with communication. In contrast, according to the following example, Horovod synchronizes models in the optimizer step(), which won’t be able to overlap with backward computations. So, in theory, DDP should be faster. Horovod with PyTorch — Horovod documentation
st176324
I have a machine that has 8 V100s, and I have a small job that only requires 4 V100 to run, so I am trying to run 2 4-V100 distributed training runs on the same machine at the same time, however, this gets me RuntimeError: Address already in use all the time for the second run I launch. It appears as if one distributed training job running on a machine will block any other distributed training runs. Is it possible to work around this so that I can launch 2 distributed jobs on one machine?
st176325
Solved by eqy in post #4 It should be close to where you specify the address (e.g., MASTER_PORT here Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.8.1+cu102 documentation).
st176326
I’ve never tried this setup before so apologies if you have already considered it, but what happens when you change the port used for the 2nd distributed job?
st176327
That sounds like a reasonable solution! Dumb question here though: how does one change the port used by a distributed job?
st176328
It should be close to where you specify the address (e.g., MASTER_PORT here Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.8.1+cu102 documentation 1).
st176329
Thanks for @eqy’s suggestion! I find that if one uses the PyTorch distributed launch utility script, there is a --master_port argument (see here) one can use to set the port, and once different distributed jobs are configured to use different ports, the “Address already in use” problem goes away.
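For example, something like this (a sketch; the port numbers and script name are arbitrary, any two free ports work):

# First 4-GPU job:
#   CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port=29500 train.py
# Second 4-GPU job on the same machine:
#   CUDA_VISIBLE_DEVICES=4,5,6,7 python -m torch.distributed.launch --nproc_per_node=4 --master_port=29600 train.py

# Or, when calling init_process_group manually, give each job its own port:
import os
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"   # job A; job B would use e.g. "29600"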
st176330
I’m reading the code for DataParallel but I don’t quite understand why the module replication happens during the call to forward. I thought we should replicate the module once and execute forward multiple times? Or is this class designed for the training phase, because the backward prop would require parameter averaging (synchronization) after each forward call? Code: pytorch/data_parallel.py at 5b4c3a9da11120f60d732af505bc65f79df14637 · pytorch/pytorch · GitHub
st176331
The scatter and gather operations are used, as a “simple” data parallel implementation via nn.DataParallel. This blog post 2 explains the overall workflow in more details. This overhead is also why we recommend to use DistributedDataParallel with a single process per GPU, which would avoid these copies, and would thus yield the best performance.
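A minimal sketch of that recommended setup, one process per GPU with DistributedDataParallel (illustrative only; a toy nn.Linear model launched via mp.spawn):

import os
import torch
import torch.nn as nn
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29503"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    # The model is replicated once at construction, not on every forward call.
    model = DDP(nn.Linear(10, 10).cuda(rank), device_ids=[rank])
    out = model(torch.randn(8, 10, device=rank))
    out.sum().backward()   # gradient all-reduce overlaps with the backward pass
    dist.destroy_process_group()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus, join=True)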
st176332
I am currently trying to using apex with SWA like so: model, optimizer = amp.initialize(model, optimizer, opt_level='O2') model = DDP(model) swa_model = torch.optim.swa_utils.AveragedModel(model) scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20) swa_start = 6 swa_scheduler = SWALR(optimizer, swa_lr=0.05 * world_size) At this line (swa_model = torch.optim.swa_utils.AveragedModel(model)) I am getting the following error: Traceback (most recent call last): File "/home/jupyter/.local/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap fn(i, *args) File "/home/jupyter/Flood_Comp/starter.py", line 249, in train swa_model = torch.optim.swa_utils.AveragedModel(model) File "/home/jupyter/.local/lib/python3.7/site-packages/torch/optim/swa_utils.py", line 89, in __init__ self.module = deepcopy(model) File "/opt/conda/lib/python3.7/copy.py", line 169, in deepcopy rv = reductor(4) File "/opt/conda/lib/python3.7/site-packages/apex/parallel/distributed.py", line 271, in __getstate__ del attrs['self.bucket_streams'] KeyError: 'self.bucket_streams' Any pointers on mitigating this would be helpful.
st176333
apex.amp is deprecated in favor of the native torch.cuda.amp implementation and we recommend switching to the latter. More details are given in this post.
st176334
Okay. Strangely enough, I am unable to get the apex.amp benefits with torch.cuda.amp. But I’ll look into the post you suggested and see if there’s anything I am missing out on.
st176335
After following @ptrblck’s suggestions here’s how my train() function is looking like (consider this to be the launcher expected by torch.multiprocessing.spawn(). def train(rank, num_epochs, world_size): init_process(rank, world_size) torch.manual_seed(0) model = create_model() torch.cuda.set_device(rank) model.cuda(rank) model = DistributedDataParallel(model, device_ids=[rank]) swa_model = torch.optim.swa_utils.AveragedModel(model) learning_rate = 1e-3 optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate * world_size) scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20) swa_start = 10 swa_scheduler = torch.optim.swa_utils.SWALR(optimizer, swa_lr=0.05) criteria = nn.CrossEntropyLoss() scaler = torch.cuda.amp.GradScaler(enabled=True) train_loader, val_loader = get_dataloader(rank, world_size) for epoch in range(num_epochs): model.train() for batch in train_loader: with torch.cuda.amp.autocast(enabled=True): image = batch['image'].cuda(rank, non_blocking=True) mask = batch['mask'].cuda(rank, non_blocking=True) pred = model(image) loss = criteria(pred, mask.unsqueeze(1)) scaler.scale(loss).backward() scaler.step(optimizer) scaler.update() optimizer.zero_grad(set_to_none=True) if epoch > swa_start: swa_model.update_parameters(model) swa_scheduler.step() else: scheduler.step() Things are now working as expected. @ptrblck if I could do anything better to further optimize the performance please let me know. Majority of the SWA code comes from the official docs 1.
st176336
@ptrblck I am currently running into another problem that is closely related. After training with SWA, we need to update the batch norm statistics (reference 1). Since the structure of my dataset is different from what torch.optim.swa_utils.update_bn() expects, I am doing the following inside train() (recall that train() is the launcher I provide to mp.spawn()): if rank == 0: for batch in train_loader: image = batch['image'].cuda(rank, non_blocking=True) prediction = swa_model(image) This leads to the following error: Traceback (most recent call last): File "starter.py", line 343, in <module> nprocs=WORLD_SIZE, join=True File "/home/jupyter/.local/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/home/jupyter/.local/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes while not context.join(): File "/home/jupyter/.local/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 118, in join raise Exception(msg) Exception: -- Process 0 terminated with the following error: Traceback (most recent call last): File "/home/jupyter/.local/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap fn(i, *args) File "/home/jupyter/Flood_Comp/starter.py", line 334, in train prediction = swa_model(image) File "/home/jupyter/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/jupyter/.local/lib/python3.7/site-packages/torch/optim/swa_utils.py", line 101, in forward return self.module(*args, **kwargs) File "/home/jupyter/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/jupyter/.local/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 610, in forward self._sync_params() File "/home/jupyter/.local/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 1048, in _sync_params authoritative_rank, File "/home/jupyter/.local/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 979, in _distributed_broadcast_coalesced self.process_group, tensors, buffer_size, authoritative_rank RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [10.138.0.33]:26791 Anything I am missing out on?
st176337
Are you seeing this error using mp.spawn and swa in isolation or only in combination with torch.cuda.amp?
st176338
Could you try to isolate it further, which would help to debug it more? I.e. in particular it would be interesting to see, if your custom mp approach would work with amp or swa in isolation, as this is often causing trouble, if you are not careful.
st176339
I’m not familiar with the internals of swa so you could check the multiprocessing best-practices as well as the docs about sharing CUDA tensors 2.
st176340
Hi, I’m using allennlp to do distributed bert training. In their code, model has some customized functions, e.g., get_metrics, and get_regularization_penalty. After wrapping it with ddp, there is a comment says # Using `DistributedDataParallel`(ddp) brings in a quirk wrt AllenNLP's `Model` interface and its # usage. A `Model` object is wrapped by `ddp`, but assigning the wrapped model to `self.model` # will break the usages such as `Model.get_regularization_penalty`, `Model.get_metrics`, etc. # # Hence a reference to Pytorch's object is maintained in the case of distributed training and in the # normal case, reference to `Model` is retained. This reference is only used in # these places: `model.__call__`, `model.train` and `model.eval`. github.com allenai/allennlp/blob/c5bff8ba0d835eb03931f10f4f427ffe936cf796/allennlp/training/gradient_descent_trainer.py#L302 2 self._num_gradient_accumulation_steps = num_gradient_accumulation_steps # Enable automatic mixed precision training. self._scaler: Optional[amp.GradScaler] = None self._use_amp = use_amp if self._use_amp: if self.cuda_device == torch.device("cpu"): raise ValueError("Using AMP requires a cuda device") self._scaler = amp.GradScaler() # Using `DistributedDataParallel`(ddp) brings in a quirk wrt AllenNLP's `Model` interface and its # usage. A `Model` object is wrapped by `ddp`, but assigning the wrapped model to `self.model` # will break the usages such as `Model.get_regularization_penalty`, `Model.get_metrics`, etc. # # Hence a reference to Pytorch's object is maintained in the case of distributed training and in the # normal case, reference to `Model` is retained. This reference is only used in # these places: `model.__call__`, `model.train` and `model.eval`. if self._distributed: self._pytorch_model = DistributedDataParallel( self.model, device_ids=None if self.cuda_device == torch.device("cpu") else [self.cuda_device], My question is what is the relationship between self.model and its wrapped version self._pytorch_model? Do they share parameters and runtime state?
st176341
Solved by tom in post #2 You have one object for each of the three classes m = ThePyTorchModel (without DDP) ddp_m = DistributedDataParallel(ThePyTorchModel) anlp_m = Model(ThePyTorchModel) (AllenNLP’s model class) ddp_m and anlp_m wrap (i.e. contain a reference to) the (same) instance m as .module and .model usually. …
st176342
valiantljk: My question is what is the relationship between self.model and its wrapped version self._pytorch_model? You have one object for each of the three classes m = ThePyTorchModel (without DDP) ddp_m = DistributedDataParallel(ThePyTorchModel) anlp_m = Model(ThePyTorchModel) (AllenNLP’s model class) ddp_m and anlp_m wrap (i.e. contain a reference to) the (same) instance m as .module and .model usually. Now AllenNLP doesn’t want to special case and write .model.module if isinstance(.model, DDP) else .model all the time, so it leaves .model to be the regular model m but stores ddp_m as the PyTorch ._pytorch_model. So you should have anlp_m._pytorch_model.module is anlp_m.model return True, they are indeed the very same object. valiantljk: Do they share parameters and runtime state? Yes in the above sense (that you have an additional hierarchy level when going through DDP). Best regards Thomas
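P.S. A quick way to convince yourself of the sharing, as a standalone sketch (it assumes a process group has already been initialized in the current process):

import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

m = nn.Linear(4, 4).cuda()
ddp_m = DDP(m, device_ids=[torch.cuda.current_device()])

print(ddp_m.module is m)                                      # True: DDP keeps a reference, not a copy
print(ddp_m.module.weight.data_ptr() == m.weight.data_ptr())  # True: same parameter storage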
st176343
Thanks @tom. Very clear now. A follow-up question: when the DDP-wrapped model is copied onto a CUDA device, does the original model still hold the reference to their common ancestor?
st176344
I’m trying to improve the performance of BERT training on multiple GPUs with torch DDP, as described in Section 5.4 of the paper “PyTorch Distributed: Experiences on Accelerating Data Parallel Training”. But I hit an error when I set the number of process group instances > 1 on 2 servers with 16 GPUs. I’m trying to use round_robin_process_groups instead of the default process group initialized with torch.distributed.init_process_group. Please correct me if I’m using round_robin_process_groups the wrong way. Thank you! Here is my code:
if args.num_process_groups > 1:
    store = c10d._get_default_store()
    rr_pg = torch.distributed._round_robin_process_groups(
        [
            c10d.ProcessGroupNCCL(c10d.PrefixStore(str(i), store), args.local_rank, args.n_gpu)
            for i in range(args.num_process_groups)
        ]
    )
    model = DDP(model, device_ids=[args.local_rank], output_device=args.local_rank, bucket_cap_mb=25, process_group=rr_pg)
and I got the following error (the same traceback is raised on several ranks):
    args, final_loss, train_time_raw = main()
  File "/workspace/bert/run_pretraining.py", line 942, in main
    model, optimizer, lr_scheduler, checkpoint, global_step = prepare_model_and_optimizer(args, device)
  File "/workspace/bert/run_pretraining.py", line 770, in prepare_model_and_optimizer
    bucket_cap_mb=25, gradient_as_bucket_view= False, process_group=rr_pg)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
    dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
st176345
Using round_robin_process_group with NCCL is not currently recommended. Check out the warning under: Distributed communication package - torch.distributed — PyTorch master documentation 15 : Using multiple process groups with the NCCL backend concurrently is not safe and the user should perform explicit synchronization in their application to ensure only one process group is used at a time. This means collectives from one process group should have completed execution on the device (not just enqueued since CUDA execution is async) before collectives from another process group are enqueued. See Using multiple NCCL communicators concurrently 5 for more details.
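For reference, the “explicit synchronization” mentioned in that warning looks roughly like the sketch below, where pg_a, pg_b, tensor_a and tensor_b are placeholders for two already-created NCCL groups and their tensors:

import torch
import torch.distributed as dist

# Enqueue a collective on the first group and make sure it has finished executing
# on the device before any collective from the second group is enqueued.
dist.all_reduce(tensor_a, group=pg_a)
torch.cuda.synchronize()
dist.all_reduce(tensor_b, group=pg_b)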
st176346
Hi, in my context I’m forwarding twice to obtain two results, and I use one of them to guide the other.
suncet/losses.py at master · facebookresearch/suncet · GitHub 1
target_supports, anchor_supports = encoder(simgs, return_before_head=True)
target_views, anchor_views = encoder(uimgs, return_before_head=True)
# then I use anchor_supports and anchor_views to calculate the loss on https://github.com/facebookresearch/suncet/blob/master/src/losses.py#L65
I actually don’t need the gradients of anchor_supports, so I added torch.no_grad() like this:
with torch.no_grad():
    target_supports, anchor_supports = encoder(simgs, return_before_head=True)
target_views, anchor_views = encoder(uimgs, return_before_head=True)
After that, DDP throws a RuntimeError and asks me to set find_unused_parameters=True. I tried setting it to True and it works, but I don’t understand why that is necessary. I also tried this, and it worked with find_unused_parameters=False:
target_supports, anchor_supports = encoder(simgs, return_before_head=True)
target_views, anchor_views = encoder(uimgs, return_before_head=True)
anchor_supports = anchor_supports.detach()
target_supports = target_supports.detach()
Is there a way to both use torch.no_grad() to save memory and use find_unused_parameters=False to speed up?
st176347
There is not a way to get both: operations run under torch.no_grad() are not tracked by autograd, so any parameter that only participates in that pass will not receive a gradient in the backward pass, and DDP with find_unused_parameters=False expects every parameter to receive one. See the find_unused_parameters argument for DDP: DistributedDataParallel — PyTorch master documentation 6
When you detach anchor_supports and target_supports instead, the forward itself still runs with grad enabled and you only cut the gradient flow at those outputs, which is why that version works with find_unused_parameters=False.
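One pattern that may give you both is to run the no-grad pass through the underlying module, so DDP’s reducer never sees it, and keep the grad-enabled pass on the wrapper. This is only a sketch, not a guarantee: it assumes encoder is the DDP-wrapped model and that every parameter still receives a gradient from the second forward.

# encoder.module is assumed to be the raw model inside the DDP wrapper.
with torch.no_grad():
    target_supports, anchor_supports = encoder.module(simgs, return_before_head=True)

# The grad-enabled pass still goes through the DDP wrapper as usual.
target_views, anchor_views = encoder(uimgs, return_before_head=True)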
st176348
Hello everybody, I need help setting up DistributedDataParallel. I would like to run 8 processes in parallel on 8 Tesla V100s on a single machine.
seed = args.seed + dist.get_rank()
torch.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
torch.cuda.set_device(dist.get_rank())
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])
torch.distributed.init_process_group(backend='gloo') # I tried many variations
args.device="cuda:{}".format(args.local_rank)
model.to(args.device)
To launch it I am using:
python3 -m torch.distributed.launch --nproc_per_node=8 train.py (other args...)
When I run it with ‘nccl’ as the backend it freezes in torch.nn.parallel.DistributedDataParallel. When I use ‘gloo’ instead it claims I don’t have enough memory:
RuntimeError: CUDA out of memory. Tried to allocate 224.00 MiB (GPU 0; 15.78 GiB total capacity; 724.41 MiB already allocated; 191.25 MiB free; 794.00 MiB reserved in total by PyTorch)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])
Which doesn’t make any sense to me, because according to the error message itself I should have enough memory. Thank you very much
st176349
I haven’t found any useful tutorials online either. If you have a step-by-step one, it would be very nice, thank you.
st176350
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 1156, in _distributed_broadcast_coalesced
    authoritative_rank)
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1614378098133/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, invalid usage, NCCL version 2.7.8
ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc).
st176351
I am not hitting either of those error messages when using NCCL or GLOO for torch=1.8.0, both are working fine. Here is the script I am using which is based off of yours (with a dummy model): import torch import torch.distributed as dist import torch.nn as nn import numpy as np import random import argparse import os os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = "29500" parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('--local_rank', metavar='N', type=int, help='rank') parser.add_argument('--seed', metavar='N', type=int, help='seed') args = parser.parse_args() args.device="cuda:{}".format(args.local_rank) model = nn.Linear(1, 1).to(args.device) torch.distributed.init_process_group(backend='nccl') seed = args.seed + dist.get_rank() torch.manual_seed(seed) np.random.seed(seed) random.seed(seed) torch.cuda.set_device(dist.get_rank()) model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank]) model.to(args.device) and I am launching with python3 -m torch.distributed.launch --nproc_per_node=2 test.py --seed=42 Can you provide more detail about the model you are using and your torch version?
st176352
Hi, I am using DistributedDataParallel with nccl. I have two losses which are averaged before calling backward. But backward doesn’t work. Here is the part of the code that is problematic: self.inputs['qa_in'][i] = Variable (self.inputs['qa_in'][i].data, requires_grad=True) self.outputs['qa_outputs'][i] = self.qa_outputs(self.inputs['qa_in'][i]) start_logits, end_logits = self.outputs['qa_outputs'][i].split(1, dim=-1) start_logits = start_logits.squeeze(-1) end_logits = end_logits.squeeze(-1) ignored_index = start_logits.size(1) start_positions_ubatches[i].clamp_(0, ignored_index) end_positions_ubatches[i].clamp_(0, ignored_index) loss_fct = CrossEntropyLoss(ignore_index=ignored_index) start_loss = loss_fct(start_logits, start_positions_ubatches[i]) end_loss = loss_fct(end_logits, end_positions_ubatches[i]) self.outputs['loss_out'][i] = (start_loss + end_loss) / 2 self.outputs['loss_out'][i].backward( retain_graph=True) and I get the following error: File “/home/suncast/venv3/lib/python3.6/site-packages/torch/autograd/init.py”, line 99, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Expected to mark a variable ready only once. This error is caused by use of a module parameter outside the forward function. The return value of the forward function is inspected by the distributed data parallel wrapper to figure out if any of the module’s parameters went unused. If this is the case, it knows they won’t receive gradients in a backward pass. If any of those parameters are then used outside forward, this error condition is triggered. You can disable unused parameter detection by passing the keyword argument find_unused_parameters=False to torch.nn.parallel.DistributedDataParallel. (mark_variable_ready at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:342) I think the problem is the self.qa_outputs parameters are used twice in backward but I don’t know how to solve this. I don’t have any problem without distributed.
st176353
Have you tried disabling unused parameter detection by passing find_unused_parameters = False to torch.nn.parallel.DistributedDataParallel?
st176354
Yes. I get the following error when I set it to False:
File “/home/suncast/venv3/lib/python3.6/site-packages/torch/autograd/__init__.py”, line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: has_marked_unused_parameters_ INTERNAL ASSERT FAILED at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:290, please report a bug to PyTorch.
st176355
Hey @maralm From your post, it is unclear which part is the DDP model. My assumption is that: self.inputs['qa_in'][i]: this is input to DDP forward self.qa_outputs: this is your DDP model self.outputs['qa_outputs'][i]: this is your DDP outputs I think the problem is the self.qa_outputs parameters are used twice in backward but I don’t know how to solve this. I don’t have any problem without distributed. This should be fine, the autograd engine should be able to manage backward inputs and dependencies from start_loss and end_loss properly. Two questions: Does it work if you directly call self.outputs['qa_outputs'][i].sum().backward() after line 3? Does any of the model parameters or outputs participates in other forward/backward passes? It will be very helpful for us to debug if you could share a minimum repro example. As we don’t know what happens outside of the posted code snippet, we can only make assumptions.
st176356
Hi @mrshenli, Thanks for your reply. Your assumption is correct and self.qa_outputs is just a linear layer. Regarding your questions: No, it doesn’t work with that. And no, I am trying to just run a forward layer and compute backward with autograd.backward on that layer instead of running loss.backward(). Basically, I have a large model, and when I run it in the conventional way for forward and backward (loss.backward()), it works fine. But I have a new implementation which runs backward layer by layer using autograd.backward. With that, the algorithm works fine on a single GPU, but I face this error in distributed mode. I tried it on a different model which doesn’t have multiple losses and it is fine; once I add multiple losses, the error appears.
st176357
maralm: No, I am trying to just run a forward layer and compute backward with autograd.backward on that layer instead of running loss.backward().
I see. DDP does not work for this case yet. Currently, all outputs you get from DistributedDataParallel.forward() must participate in the same backward pass; otherwise it would mess up DDP’s internal communication state. Hope this can help explain that: https://pytorch.org/docs/master/notes/ddp.html#internal-design 89
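For reference, a minimal sketch of the supported pattern, loosely following the names from the snippet above: combine the two losses and run a single backward over the outputs of one DDP forward, so that all DDP outputs participate in the same backward pass.

out = ddp_model(qa_in)                          # one DDP forward
start_logits, end_logits = out.split(1, dim=-1)
start_loss = loss_fct(start_logits.squeeze(-1), start_positions)
end_loss = loss_fct(end_logits.squeeze(-1), end_positions)
loss = (start_loss + end_loss) / 2
loss.backward()                                 # one backward through the DDP graph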
st176358
I tried it on a different model which doesn’t have multiple losses and it is fine. In this case that I add multiple losses, the error comes. I might have misunderstand the use case. Adding up multiple losses should work, and this is different from running layer-by-layer backward, right? Would I be correct if I assume the code snippet you shared above is adding two losses together instead of doing layer-by-layer backward? It would be helpful if you could share a minimum repro for this error. Thanks!
st176359
No, this is the same issue. To simplify, let’s assume I want to find the gradients for the last layer of the network only, which includes a linear classifier and the loss (using autograd.backward()). If I use a linear layer with a single loss, DDP works with autograd.backward(), but when I add two losses, it gives that error.
st176360
When I launch the following script with the torch.distributed.launch utilility on a 2 GPUs machine, I get a much slower (10x) training than when I launch it on a single GPU. I realized that it seems to come from the big fully connected layer at the end of the network (130000x1024), and I suppose it is because the gradients that need to be synchronized at each iteration represent a big amount of memory. I profiled the code with Nvidia Nsight Systems and saw that there is a call to ncclAllReduceRingLLKernel_sum_f32 that takes approximately 500 ms each iteration. Is this expected behaviour with this kind of network? Or am I doing something wrong? import torch import torch.nn as nn import argparse from torch.nn.parallel import DistributedDataParallel as DPP import torch.nn.functional as F from tqdm import tqdm class Discriminator(nn.Module): def __init__(self): super(Discriminator, self).__init__() in_channels = 3 out_channels = 64 depth = 7 m_features = [ nn.Conv2d(in_channels, out_channels, 3, padding=1), ] for i in range(depth): in_channels = out_channels if i % 2 == 1: stride = 1 out_channels *= 2 else: stride = 2 m_features.append(nn.Conv2d( in_channels, out_channels, 3, padding=1, stride=stride, )) self.features = nn.Sequential(*m_features) patch_size = 256 // (2 ** ((depth + 1) // 2)) m_classifier = [ nn.Linear(out_channels * patch_size ** 2, 1024), nn.LeakyReLU(negative_slope=0.2, inplace=False), nn.Linear(1024, 1) ] self.classifier = nn.Sequential(*m_classifier) def forward(self, f0): features = self.features(f0) output = self.classifier(features.view(features.size(0), -1)) return output torch.backends.cudnn.enabled = True # make sure to use cudnn for computational performance parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int, default=0) args = parser.parse_args() def train(rank, world_size): if world_size > 1: torch.distributed.init_process_group("nccl", init_method="env://", rank=rank, world_size=world_size) torch.cuda.set_device(rank) discriminator = Discriminator() discriminator.to(rank) optimizer = torch.optim.Adam( discriminator.parameters(), lr=1e-5 ) # -- Initialize model for distributed training -- if torch.cuda.device_count() > 1: discriminator = DPP(discriminator, device_ids=[rank]) frame = torch.rand((1, 3, 256, 256), device=f"cuda:{rank}") d_01 = discriminator(frame) label_01 = torch.zeros_like(d_01) for i in tqdm(range(30)): # - Compute loss - d_01 = discriminator(frame) loss = F.binary_cross_entropy_with_logits(d_01, label_01) optimizer.zero_grad() loss.backward() optimizer.step() def main(): world_size = torch.cuda.device_count() with torch.autograd.profiler.emit_nvtx(): train(args.local_rank, world_size) if __name__ == '__main__': main()
st176361
So looking at your code, it looks like you didn’t create the process groups in different processes and ended up just using one process; or are you running the script on multiple hosts? If not, did you try following the DDP tutorial, launching it in multiple processes, and seeing if that improves the performance?
st176362
Thank you for your answer. I did follow the tutorial: I use the torch.distributed.launch 1 utility, which takes care of creating one process per GPU, and I am running it on one machine with two GPUs. It is true that I put my two models in the same (default) process group, but I also tried using different process groups for the two models and it did not change anything.
st176363
Martin_Castin: ncclAllReduceRingLLKernel_sum_f32
If the cost is dominated by allreduce communication, can you try the no_sync context manager 1 to reduce the sync frequency? Moreover, you can probably register an FP16 compression communication hook 6 to compress the gradients before the allreduce; it’s a one-line code change. You can try even more advanced gradient compression if interested. I am really surprised at such a high allreduce cost.
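For reference, minimal sketches of both suggestions. They assume ddp_model is the DDP-wrapped discriminator, loader/labels stand in for your real data, and accum is however many local steps you want to accumulate before syncing:

import torch.nn.functional as F
from contextlib import nullcontext
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

# FP16 compression: gradients are cast to half precision before the allreduce,
# roughly halving the communication volume.
ddp_model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)

# no_sync: accumulate gradients locally and only allreduce every `accum` iterations.
for step, frame in enumerate(loader):
    ctx = nullcontext() if (step + 1) % accum == 0 else ddp_model.no_sync()
    with ctx:
        loss = F.binary_cross_entropy_with_logits(ddp_model(frame), labels)
        loss.backward()
    if (step + 1) % accum == 0:
        optimizer.step()
        optimizer.zero_grad()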
st176364
Thank you for your answer, I am sure your suggestion would have helped! I finally understood the source of the problem. This network contains this layer:
Linear(in_features=131072, out_features=1024, bias=True)
which requires a huge amount of gradient data to be synchronized (roughly 500 MB per iteration in fp32). As I don’t have any NVLink, I think it makes sense that this synchronisation takes 500 ms. So I ended up redesigning the network to avoid having such a huge fully connected layer.
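A quick back-of-the-envelope check of the gradient volume for that layer:

params = 131072 * 1024 + 1024      # weight + bias of the Linear layer
bytes_fp32 = params * 4            # fp32 gradients
print(bytes_fp32 / 2**20)          # ~512 MiB all-reduced on every iteration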
st176365
Hi, what are possible reasons for gradients to differ across GPUs after a backward() call when using DistributedDataParallel (DDP)? If I understood correctly, wrapping the model with DDP in the main worker should take care of averaging the gradients and synchronizing them across GPUs? Thank you
st176366
If you enable the no_sync context manager, you will turn off the communication that averages gradients. See: DistributedDataParallel — PyTorch 1.8.1 documentation 3
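To check whether the gradients actually diverge, here is a small diagnostic sketch; some_layer is a placeholder for any layer of your model, and it assumes the default process group is initialized and backward has just finished:

import torch
import torch.distributed as dist

grad = ddp_model.module.some_layer.weight.grad.detach().clone()
gathered = [torch.zeros_like(grad) for _ in range(dist.get_world_size())]
dist.all_gather(gathered, grad)
if dist.get_rank() == 0:
    # With DDP (and no no_sync() in effect), every rank should hold the same averaged gradient.
    for r, g in enumerate(gathered):
        print(r, torch.allclose(g, gathered[0]))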
st176367
Sorry, maybe I was unclear. My problem is that the gradients are not being averaged across GPUs, and I am searching for possible reasons why this is happening.