id | text |
---|---|
st175168 | Thank you for your reply @wanchaol , the bug has been solved.
The reason is that my model does not use the postnet to compute the loss in some cases (there is an if condition in the code), so the back-propagation fails. |
st175169 | Hello,
I am trying to implement the DistributedDataParallel class in my training code.
The training code is a block inside a larger block that I run to do the training and logging. Because the larger block runs twice when the multiprocessing start method is set to ‘spawn’, and rewriting my ‘main’ function would be too much work, I looked into forking the subprocesses, so only the training block is run in parallel.
The way I initialize the subprocesses now (which I took from here) is:
import torch.multiprocessing as mp

mp.set_start_method('fork', force=True)
processes = []  # collects the worker processes
for rank in range(self.world_size):
    p = mp.Process(target=self.train, args=(rank,))
    p.start()
    processes.append(p)
for p in processes:
    p.join()
Now the train function crashes on the “net.to(device)” line in the code sample below, with the error message:
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the ‘spawn’ start method
self.net.to(rank)
self.net = DistributedDataParallel(self.net, device_ids=[rank])
So far I’ve read that this can happen when something is already initialised on CUDA before the multiprocessing starts. But I can’t find where that would be in my code (I checked and removed all the .to(device) operations).
Other possible causes that I found could be related to the dataloader (num_workers set to 0 and pin_memory), but placing the dataloader code after the network initialisation code still gives the same error.
Is it possible to use the DistributedDataParallel class this way?
Can CUDA be cleared before initialising the processes?
Thanks in advance |
st175170 | Pascal_Niville:
To use CUDA with multiprocessing, you must use the ‘spawn’ start method
Thanks for posting @Pascal_Niville, this is a known issue with the CUDA runtime; you can see a related issue here: Cannot re-initialize CUDA in forked subprocess · Issue #40403 · pytorch/pytorch · GitHub. The workaround is to use “spawn” instead of “fork”, as suggested in the error. |
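For reference, a minimal sketch of a spawn-based launch (editor's illustration; the train function body and world_size are placeholders, not the poster's actual code):
import torch.multiprocessing as mp

def train(rank, world_size):
    # set up the process group, build the model, run the training loop, ...
    pass

if __name__ == "__main__":
    world_size = 4  # assumption: one process per GPU
    # "spawn" starts fresh interpreters, so CUDA state is not inherited from the parent;
    # guarding the entry point prevents module-level code from re-running in the children.
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)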
st175171 | Hello!!
Summary
My build of PyTorch v1.10.0 from source seems to have issues with the gloo and nccl backends, but works fine with mpi. The errors alternate between:
Connection refused
Connection reset by peer
Socket Timeout
Even when the port is free and available.
Using the pre-built wheel from upstream (torch-1.10.0+cu113-cp39-cp39-linux_x86_64.whl), the issue cannot be reproduced. In other words, it works as expected.
Building GLOO with the same configuration (CUDA and NCCL) and running the test suite: all tests pass.
Running NVIDIA NCCL examples works fine.
Test snippet
import torch.nn.parallel
import torch.distributed as dist
import os
os.environ['MASTER_ADDR'] = str(os.environ.get('HOST', '127.0.0.1'))
os.environ['MASTER_PORT'] = str(os.environ.get('PORT', 29500))
os.environ['RANK'] = str(os.environ.get('SLURM_LOCALID', 0))
os.environ['WORLD_SIZE'] = str(os.environ.get('SLURM_NTASKS', 2))
backend = os.environ.get('BACKEND', 'mpi')
print('Using backend:', backend)
dist.init_process_group(backend)
# dist.init_process_group(backend, init_method=f"tcp://{master_addr}:{master_port}", rank=rank, world_size=size)
my_rank = dist.get_rank()
my_size = dist.get_world_size()
print("my rank = %d my size = %d" % (my_rank, my_size))
dist.destroy_process_group()
Ran on an exclusive single node (SLURM) with two tasks:
srun python ddp_torch.py
Details
Configuration
>>> print(torch.__config__.show())
PyTorch built with:
- GCC 9.3
- C++ Version: 201402
- Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.4
- NVCC architecture flags: -gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80
- CuDNN 8.2 (built against CUDA 11.3)
- Magma 2.6.1
- Build settings:
BLAS_INFO=flexi
BUILD_TYPE=Release
CUDA_VERSION=11.4
CUDNN_VERSION=8.2.0
CXX_COMPILER=/cvmfs/soft.computecanada.ca/easybuild/software/2020/Core/gcccore/9.3.0/bin/c++
CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow
LAPACK_INFO=flexi
PERF_WITH_AVX=1
PERF_WITH_AVX2=1
PERF_WITH_AVX512=1
TORCH_VERSION=1.10.0
USE_CUDA=ON
USE_CUDNN=ON
USE_EXCEPTION_PTR=1
USE_GFLAGS=OFF
USE_GLOG=OFF
USE_MKLDNN=ON
USE_MPI=ON,
USE_NCCL=ON,
USE_NNPACK=ON,
USE_OPENMP=ON,
Missing info from the above: NCCL v2.11.4
Build summary
-- USE_DISTRIBUTED : ON
-- USE_MPI : ON
-- USE_GLOO : ON
-- USE_GLOO_WITH_OPENSSL : OFF
-- USE_TENSORPIPE : ON
Issues
Tests suite
Pytorch test suite with my build from source:
distributed/test_nccl.py passes
distributed/test_c10d_gloo.py fails with connection errors
distributed/test_c10d_nccl.py fails with connection errors
Pytorch test suite with upstream wheel (torch-1.10.0+cu113-cp39-cp39-linux_x86_64.whl):
distributed/test_nccl.py passes
distributed/test_c10d_gloo.py passes
distributed/test_c10d_nccl.py passes
Manually running the test code
GLOO
(2149) ~ $ export BACKEND=gloo
(2149) ~ $ export PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
(2149) ~ $ srun python ddp_torch.py
terminate called after throwing an instance of 'std::system_error'
what(): Connection refused
NCCL
(2149) ~ $ export NCCL_DEBUG=INFO
(2149) ~ $ export BACKEND=nccl
(2149) ~ $ export PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
(2149) ~ $ srun python ddp_torch.py
Using backend: nccl
my rank = 0 my size = 2
Using backend: nccl
my rank = 1 my size = 2
terminate called after throwing an instance of 'std::system_error'
what(): Connection reset by peer
Or sometimes:
what(): Connection refused
Or sometimes:
what(): Socket Timeout
Or in rare cases, it works as expected.
Expected output
NCCL
(32450) ~ $ export NCCL_DEBUG=INFO
(32450) ~ $ export BACKEND=nccl
(32450) ~ $ export PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
(32450) ~ $ srun python ddp_torch.py
Using backend: nccl
my rank = 0 my size = 2
Using backend: nccl
my rank = 1 my size = 2
GLOO
(32450) ~ $ export BACKEND=gloo
(32450) ~ $ export PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
(32450) ~ $ srun python ddp_torch.py
Using backend: gloo
my rank = 0 my size = 2
Using backend: gloo
my rank = 1 my size = 2
Epilog
Any clues or hints on what might be the issue with the build from source?
Next is to build with debug enabled and see if TORCH_DISTRIBUTED_DEBUG=DETAIL can help.
Related questions:
When using the NCCL backend with the environment variable NCCL_DEBUG=INFO, no NCCL output is produced. How come?
Where can I find the build configuration from the CI build (the equivalent of the CMake summary)? I looked in the GitHub Actions log, but did not find it.
Thank you very much! |
st175172 | Hi, can you try building against the latest master branch and see if the issues persist, and paste the error logs? A useful PR, Revise the socket implementation of c10d by cbalioglu · Pull Request #68226 · pytorch/pytorch · GitHub, has just landed that significantly improves the implementation and error logging of the c10d store, so the logs should provide a lot more detail in case of errors.
In addition, since this seems like a reproducible bug, could you file an issue over at Issues · pytorch/pytorch · GitHub with reproduction instructions so we can take a deeper look? |
st175173 | Thanks! Building from HEAD to include the PR, I got:
(31286) ~ $ export TORCH_DISTRIBUTED_DETAIL=DEBUG
(31286) ~ $ export PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
(31286) ~ $ echo $PORT
32967
(31286) ~ $ export BACKEND=gloo
(31286) ~ $ srun python ddp_torch.py
[W socket.cpp:634] The server socket on [localhost]:32967 is not yet listening (generic error: 111 - Connection refused).
terminate called after throwing an instance of 'std::system_error'
what(): Connection reset by peer
Using backend: gloo
my rank = 1 my size = 2
It worked partially, as it is missing rank 0.
With NCCL backend, no information is printed, only the error.
I’ll create an issue then.
Issue: GLOO/NCCL connection issues [build from source] · Issue #69003 · pytorch/pytorch · GitHub |
st175174 | Hello,
I’m trying to move a single GPU model to a machine with 4 GPUs, only I’m on a timeline to use this machine.
I’m getting the following error:
RuntimeError Traceback (most recent call last)
<ipython-input-28-4b69b40dcdef> in <module>
18 break
19
---> 20 y_pred = combined_model(image, numerical_data, categorical_data)
21 single_loss = criterion(y_pred, label)
22
~/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
150 return self.module(*inputs[0], **kwargs[0])
151 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
--> 152 outputs = self.parallel_apply(replicas, inputs, kwargs)
153 return self.gather(outputs, self.output_device)
154
~/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs)
160
161 def parallel_apply(self, replicas, inputs, kwargs):
--> 162 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
163
164 def gather(self, outputs, output_device):
~/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices)
83 output = results[i]
84 if isinstance(output, ExceptionWrapper):
---> 85 output.reraise()
86 outputs.append(output)
87 return outputs
~/miniconda3/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
392 # (https://bugs.python.org/issue2651), so we work around it.
393 msg = KeyErrorMessage(msg)
--> 394 raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/scott.farmers/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/scott.farmers/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-25-86287e73cc1f>", line 34, in forward
x = torch.cat(embeddings, 1)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1579022060824/work/aten/src/THC/THCGeneral.cpp:313
Here is my model:
class Image_Embedd(nn.Module):
def __init__(self, embedding_size):
'''
Args
---------------------------
embedding_size: Contains the embedding size for the categorical columns
num_numerical_cols: Stores the total number of numerical columns
output_size: The size of the output layer or the number of possible outputs.
layers: List which contains number of neurons for all the layers.
p: Dropout with the default value of 0.5
'''
super().__init__()
self.all_embeddings = nn.ModuleList([nn.Embedding(ni, nf) for ni, nf in embedding_size])
self.embedding_dropout = nn.Dropout(p = .04)
self.cnn = models.resnet50(pretrained=False).cuda()
self.cnn.fc = nn.Linear(self.cnn.fc.in_features, 1000)
self.fc1 = nn.Linear(1000, 1077)
self.fc2 = nn.Linear(1077, 128)
self.fc3 = nn.Linear(128, 2)
#define the foward method
def forward(self, image, x_numerical, x_categorical):
embeddings = []
for i, e in enumerate(self.all_embeddings):
embeddings.append(e(x_categorical[:,i]))
x = torch.cat(embeddings, 1)
x = self.embedding_dropout(x)
x1 = self.cnn(image)
x2 = x_numerical
x3 = torch.cat((x1, x2), dim = 1)
x4 = torch.cat((x, x3), dim = 1)
x4 = F.relu(self.fc2(x4))
x4 = self.fc3(x4)
x4 = F.log_softmax(x4)
return x4
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.manual_seed(101)
combined_model = Image_Embedd(embedding_size=train_categorical_embedding_sizes)
criterion = torch.nn.NLLLoss()
optimizer = torch.optim.Adam(combined_model.parameters(), lr=0.001)
scheduler = ReduceLROnPlateau(optimizer, 'min', patience = 4, verbose = True, min_lr = .00000001)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
combined_model = nn.DataParallel(combined_model)
combined_model.to(device)
epochs = 5000
aggregated_losses = []
max_trn_batch = 11053
for i in range(epochs):
for b, (image, label, policy, numerical_data, categorical_data) in enumerate(train_loader):
image = image.to(device)
label = label.to(device)
numerical_data = numerical_data.to(device)
categorical_data = categorical_data.to(device)
#count batches
b += 1
#throttle teh batches
if b == max_trn_batch:
break
y_pred = combined_model(image, numerical_data, categorical_data)
single_loss = criterion(y_pred, label)
# statistics
print(f'epoch: {i:3}, batch: {b:3}, loss: {single_loss.item():10.8f}')
optimizer.zero_grad()
single_loss.backward()
optimizer.step()
aggregated_losses.append(single_loss.cpu().data.numpy())
scheduler.step(single_loss)
print(f'epoch: {i:3} loss: {single_loss.item():10.10f}')
I’m not sure what I’m doing wrong. I followed, or tried to follow, the tutorial here:
https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html |
st175175 | Solved by mrshenli in post #2
Looks like some layers of the model live on the GPU and others live on the CPU. Is this intentional? DataParallel does not support mixed CPU-GPU models; all layers of the same model need to live on the same GPU.
If you have multi-GPU model, e.g., some layers live on cuda:0 and others live on cuda:1, you ca… |
st175176 | Looks like some layers of the model live on the GPU and others live on the CPU. Is this intentional? DataParallel does not support mixed CPU-GPU models; all layers of the same model need to live on the same GPU.
If you have a multi-GPU model, e.g., some layers live on cuda:0 and others live on cuda:1, you can try DistributedDataParallel. Check out this link. |
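As an editorial illustration of the single-device requirement (a toy model, not the Image_Embedd module from the question):
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Toy stand-in: no submodule calls .cuda() on its own inside __init__,
# so nothing ends up on a different device than the rest of the model.
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
model.to(device)                # every layer now lives on the same device
model = nn.DataParallel(model)  # replicas are created from this single-device copy
out = model(torch.randn(8, 10).to(device))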
st175177 | Hi, I’m trying to run a simple distributed PyTorch job using GPU/NCCL across 2 g4dn.xlarge nodes. The process group seems to initialize fine, but when trying to wrap the model in DDP there is an NCCL connection error.
Failure point:
model = DistributedDataParallel(model, device_ids=[rank], output_device=rank)
Environment:
Torch: 1.9.0+cu111
NCCL: 2.7.8
Logs with NCCL_DEBUG=INFO:
Rank 0:
ip-172-31-0-137:14605:14673 [0] NCCL INFO Bootstrap : Using [0]ens5:172.31.0.137<0> [1]veth75fa783:fe80::14ad:64ff:fe64:b31e%veth75fa783<0>
ip-172-31-0-137:14605:14673 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation

ip-172-31-0-137:14605:14673 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
ip-172-31-0-137:14605:14673 [0] NCCL INFO NET/Socket : Using [0]ens5:172.31.0.137<0> [1]veth75fa783:fe80::14ad:64ff:fe64:b31e%veth75fa783<0>
ip-172-31-0-137:14605:14673 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda11.1
ip-172-31-0-137:14605:14688 [0] NCCL INFO Channel 00/04 : 0 1
ip-172-31-0-137:14605:14688 [0] NCCL INFO Channel 01/04 : 0 1
ip-172-31-0-137:14605:14688 [0] NCCL INFO Channel 02/04 : 0 1
ip-172-31-0-137:14605:14688 [0] NCCL INFO Channel 03/04 : 0 1
ip-172-31-0-137:14605:14688 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
ip-172-31-0-137:14605:14688 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1 [2] -1/-1/-1->0->1|1->0->-1/-1/-1 [3] -1/-1/-1->0->1|1->0->-1/-1/-1
ip-172-31-0-137:14605:14688 [0] NCCL INFO Channel 00 : 1[1e0] -> 0[1e0] [receive] via NET/Socket/0
ip-172-31-0-137:14605:14688 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread
ip-172-31-0-137:14605:14688 [0] NCCL INFO Channel 00 : 0[1e0] -> 1[1e0] [send] via NET/Socket/0
Rank 1:
ip-172-31-86-69:1538:1569 [0] NCCL INFO Bootstrap : Using [0]ens5:172.31.86.69<0> [1]veth0bd9843:fe80::6c37:83ff:fe11:cbd6%veth0bd9843<0>
ip-172-31-86-69:1538:1569 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation

ip-172-31-86-69:1538:1569 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
ip-172-31-86-69:1538:1569 [0] NCCL INFO NET/Socket : Using [0]ens5:172.31.86.69<0> [1]veth0bd9843:fe80::6c37:83ff:fe11:cbd6%veth0bd9843<0>
ip-172-31-86-69:1538:1569 [0] NCCL INFO Using network Socket
ip-172-31-86-69:1538:1572 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
ip-172-31-86-69:1538:1572 [0] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1 [2] 0/-1/-1->1->-1|-1->1->0/-1/-1 [3] 0/-1/-1->1->-1|-1->1->0/-1/-1
ip-172-31-86-69:1538:1572 [0] NCCL INFO Channel 00 : 0[1e0] -> 1[1e0] [receive] via NET/Socket/0
ip-172-31-86-69:1538:1572 [0] NCCL INFO NET/Socket: Using 2 threads and 8 sockets per thread
ip-172-31-86-69:1538:1572 [0] NCCL INFO Channel 00 : 1[1e0] -> 0[1e0] [send] via NET/Socket/0
ip-172-31-86-69:1538:1572 [0] NCCL INFO Channel 01 : 0[1e0] -> 1[1e0] [receive] via NET/Socket/1
ip-172-31-86-69:1538:1572 [0] NCCL INFO Channel 01 : 1[1e0] -> 0[1e0] [send] via NET/Socket/1

ip-172-31-86-69:1538:1572 [0] include/socket.h:403 NCCL WARN Connect to fe80::14ad:64ff:fe64:b31e%7<54043> failed : Network is unreachable
ip-172-31-86-69:1538:1572 [0] NCCL INFO transport/net_socket.cc:313 -> 2
ip-172-31-86-69:1538:1572 [0] NCCL INFO include/net.h:21 -> 2
ip-172-31-86-69:1538:1572 [0] NCCL INFO transport/net.cc:161 -> 2
ip-172-31-86-69:1538:1572 [0] NCCL INFO transport.cc:68 -> 2
ip-172-31-86-69:1538:1572 [0] NCCL INFO init.cc:766 -> 2
ip-172-31-86-69:1538:1572 [0] NCCL INFO init.cc:840 -> 2
ip-172-31-86-69:1538:1572 [0] NCCL INFO group.cc:73 -> 2 [Async thread]
Rank 0:
ip-172-31-0-137:14605:14688 [0] NCCL INFO Channel 01 : 1[1e0] -> 0[1e0] [receive] via NET/Socket/1
ip-172-31-0-137:14605:14688 [0] NCCL INFO Channel 01 : 0[1e0] -> 1[1e0] [send] via NET/Socket/1
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying
ip-172-31-0-137:14605:14688 [0] NCCL INFO Call to connect returned Connection refused, retrying

ip-172-31-0-137:14605:14688 [0] include/socket.h:403 NCCL WARN Connect to 172.31.86.69<37817> failed : Connection refused
ip-172-31-0-137:14605:14688 [0] NCCL INFO bootstrap.cc:95 -> 2
ip-172-31-0-137:14605:14688 [0] NCCL INFO bootstrap.cc:363 -> 2
ip-172-31-0-137:14605:14688 [0] NCCL INFO transport.cc:59 -> 2
ip-172-31-0-137:14605:14688 [0] NCCL INFO init.cc:766 -> 2
ip-172-31-0-137:14605:14688 [0] NCCL INFO init.cc:840 -> 2
ip-172-31-0-137:14605:14688 [0] NCCL INFO group.cc:73 -> 2 [Async thread]
Final Output:
File "/home/ray/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
A few other things worth mentioning:
The script runs fine with Gloo.
The script runs when downgrading to Torch 1.6.0 with CUDA 10.2 and NCCL 2.4.8.
Does anyone have any ideas or suggestions on how to debug this? Thanks in advance! |
st175178 | Hi, this might be a bug if it is working fine with an older version of torch/cuda.
Could you file an issue at Issues · pytorch/pytorch · GitHub with a detailed repro so that we can investigate? Thank you! |
st175179 | I was able to get past this issue by setting os.environ["NCCL_SOCKET_IFNAME"]="ens5". However, it’s still not clear to me why this is needed, since this was working on an older version, so I created an issue here: NCCL Network is unreachable / Connection refused when initializing DDP · Issue #68893 · pytorch/pytorch · GitHub. |
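A minimal sketch of that workaround (editor's illustration; "ens5" is the interface from the logs above, and the usual MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE variables are assumed to be set already):
import os
import torch.distributed as dist

# Pick the interface that carries inter-node traffic (e.g. check `ip addr`);
# it must be set before the process group is created.
os.environ["NCCL_SOCKET_IFNAME"] = "ens5"
dist.init_process_group(backend="nccl", init_method="env://")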
st175180 | Hello, I’m working on a Triplet model for semantic search, and I experience huge problems with training it using the DistributedDataParallel module. Essentially the model looks like this (the actual model is much more complicated, as it contains e.g. a Transformer inside, but for the sake of simplicity below is a simplified scheme):
class TripletModel(nn.Module):
    def __init__(self, encoder):
        super(TripletModel, self).__init__()
        self.encoder = encoder

    def forward(self, x1, x2, x3):
        anchor_emb = self.encoder(x1)
        positive_pair_emb = self.encoder(x2)
        negative_pair_emb = self.encoder(x3)
        return anchor_emb, positive_pair_emb, negative_pair_emb
Representations for the triplets are created by the same encoder and compared with TripletMarginLoss using cosine distance as the distance metric. I wanted to distribute the training over two GPUs (on the same node) using DistributedDataParallel, but during training I receive the following error:
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons:
1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.
2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 99 with name backbone.encoder.encoder.layer.5.output.LayerNorm.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
Has anyone worked on a similar problem? I’m training the model using the HuggingFace Trainer API, with gradient checkpointing enabled and the find_unused_parameters parameter of DistributedDataParallel set to False (I also have a test inside the training loop that checks for unused parameters, so there shouldn’t be any). Most of the discussions I’ve found online (e.g., does Gradient checkpointing support multi-gpu ? · Issue #63 · allenai/longformer · GitHub) point to this particular parameter (find_unused_parameters), but it is already set to False, and there are definitely no additional unused parameters.
Additionally, if I turn off gradient_checkpointing, I receive the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [4, 447]] is at version 3; expected version 2 instead.
Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
The backtrace points to embedding layer of token_type_embeddings layer in Roberta model of Huggingface Transformers (modeling_roberta.py module) :
[...]lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 132, in forward
token_type_embeddings = self.token_type_embeddings(token_type_ids)
[...]
[...]lib/python3.6/site-packages/torch/nn/functional.py", line 2043, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
During training on a single GPU I receive no errors, and the training progresses normally. |
st175181 | Thanks for your post! Are you using weight sharing in your module? Looking at does Gradient checkpointing support multi-gpu ? · Issue #63 · allenai/longformer · GitHub, it mentions that DDP does not work with gradient checkpointing + weight sharing in some cases, but we would need a more detailed reproduction to confirm the issue.
If you can get a repro of the issue, it would be great to file an issue at Issues · pytorch/pytorch · GitHub so we can look into it. |
st175182 | Thanks for the response! I’ve run an additional test in which I replaced the Transformer model with a single linear-layer encoder, and everything ran without any problems, so it must be related only to the Transformer. I’m not sure about the weight sharing - I’ve looked at the Roberta model implementation in huggingface (I’m training distilroberta as an encoder): transformers/modeling_roberta.py at master · huggingface/transformers · GitHub, and I don’t see any weight sharing there. Additionally, with gradient_checkpointing enabled, the backtrace points to the LayerNorm layer of the RobertaOutput class (5th layer of the model). I’ll try to provide a reproducible example. |
st175183 | Here is a reproducible example of the second error (inplace operation), assuming you have two GPUs available (interestingly, it now doesn’t throw the first error related to ‘ready only once’, even though gradient checkpointing is still turned on). Some parts of the code were simplified just to provide a working example (i.e. the random data creation and sampling through it - I know it could’ve been done more carefully, but I wanted to provide a working example as quickly as possible):
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.multiprocessing as mp
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from transformers import AutoConfig, AutoModel
def run_training(rank):
torch.autograd.set_detect_anomaly(True)
torch.distributed.init_process_group(backend="nccl", rank=rank)
device = torch.device('cuda', rank)
length_inp = 256
data = [{'sent1': {'input_ids': torch.randint(0, 200, (16, length_inp)), 'attention_mask': torch.ones(16, length_inp)},
'sent2': {'input_ids': torch.randint(0, 200, (16, length_inp)), 'attention_mask': torch.ones(16, length_inp)},
'sent3': {'input_ids': torch.randint(0, 200, (16, length_inp)), 'attention_mask': torch.ones(16, length_inp)}
}]*8
train_sampler = DistributedSampler(
data,
num_replicas=torch.cuda.device_count(),
rank=rank,
seed=44,
)
dll = DataLoader(
data,
batch_size=1,
sampler=train_sampler,
drop_last=False,
num_workers=2,
pin_memory=True,
)
config = AutoConfig.from_pretrained('distilroberta-base')
model = AutoModel.from_pretrained('distilroberta-base', config=config, add_pooling_layer=False)
model.to(device)
model.gradient_checkpointing_enable()
model = nn.parallel.DistributedDataParallel(model, device_ids=[rank], output_device=rank, find_unused_parameters=False)
optimizer = torch.optim.Adam(model.parameters())
metric = lambda x,y: 1.0 - F.cosine_similarity(x, y)
criterion = nn.TripletMarginWithDistanceLoss(distance_function=metric, margin=0.2, reduction='none')
for n, b in enumerate(dll):
print(n)
model.zero_grad()
sent1 = {k:v.squeeze(0) for k,v in b['sent1'].items()}
sent2 = {k:v.squeeze(0) for k,v in b['sent2'].items()}
sent3 = {k:v.squeeze(0) for k,v in b['sent3'].items()}
emb1 = model(**sent1)[0][:, 0, :]
emb2 = model(**sent2)[0][:, 0, :]
emb3 = model(**sent3)[0][:, 0, :]
losses = criterion(emb1, emb2, emb3)
loss = losses.mean()
loss.backward()
optimizer.step()
print('Model device: {}, loss device: {}, loss: {}'.format(model.device, loss.device, loss))
def main():
world_size = torch.cuda.device_count()
os.environ["MASTER_PORT"] = '1234'
os.environ["MASTER_ADDR"] = '127.0.0.1'
os.environ["WORLD_SIZE"] = str(world_size)
mp.spawn(run_training,
nprocs=world_size,
join=True)
if __name__ == "__main__":
main() |
st175184 | I hope this is enough. I also wasn’t sure whether this problem is suitable as an issue on GitHub; if it is, please let me know |
st175185 | Hi,
I’m training a model that doesn’t use the full GPU, but I need to train on multiple GPUs. Is there any way to use torch.distributed to accomplish this?
To illustrate, I have the following example. It runs on a node with 8 GPUs, but I would like to run 12 processes. I launch this using
torchrun --nproc_per_node=12 --standalone gpu_reuse_test.py
where the file gpu_reuse_test.py is as follows:
import os
from torchvision.models import resnet34
import torch.distributed as dist
import torch
import torch.nn as nn
def main2():
    global_rank = dist.get_rank()
    local_rank = global_rank % 8
    torch.cuda.set_device(local_rank)

    def init_weights(m):
        if isinstance(m, nn.Linear):
            torch.nn.init.xavier_uniform_(m.weight)
            m.bias.data.fill_(0.01 * global_rank)

    a = resnet34().cuda()
    a.apply(init_weights)
    sync_model(a)
    print(f"Local_rank: {local_rank}")

def sync_model(model):
    for p in model.parameters():
        dist.all_reduce(p)

if __name__ == "__main__":
    local_rank = int(os.environ['LOCAL_RANK']) % 8
    dist.init_process_group('nccl', rank=local_rank, world_size=int(os.environ['WORLD_SIZE']))
    main2()
The code fails with a RuntimeError: Address already in use error (raised from return TCPStore(...)).
If I replace the init_process_group call with a plain dist.init_process_group('nccl'), the all_reduce call fails with
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1634272068694/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:957, unhandled system error, NCCL version 21.0.3
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
What should I do to fix this error? Can someone help me with this? |
st175186 | Hi, the distributed package assumes that the user will use one process per GPU, so you’d want to use 8 processes in this case. Is there any reason you’re trying to use 12 processes for 8 GPUs? The underlying root cause is that libraries like NCCL won’t work well if multiple processes try to use the same GPU, resulting in deadlocks, hangs etc. |
st175187 | I’m trying to benchmark some algorithms which need more than 8 workers, but I have only 8 GPUs for myself. |
st175188 | As far as I understand, the DistributedDataParallel module performs gradient synchronization between different nodes automatically; one thing I don’t understand clearly is when exactly this synchronization is done.
For example, the below snippet is from the GETTING STARTED WITH DISTRIBUTED DATA PARALLEL PyTorch documentation, with a small change:
def demo_basic(rank, world_size):
    setup(rank, world_size)
    # setup devices for this process, rank 1 uses GPUs [0, 1, 2, 3] and
    # rank 2 uses GPUs [4, 5, 6, 7].
    n = torch.cuda.device_count() // world_size
    device_ids = list(range(rank * n, (rank + 1) * n))
    # create model and move it to device_ids[0]
    model = ToyModel().to(device_ids[0])
    # output_device defaults to device_ids[0]
    ddp_model = DDP(model, device_ids=device_ids)
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10))
    labels = torch.randn(20, 5).to(device_ids[0])
    loss = loss_fn(outputs, labels)
    loss.backward()
    # loss_fn(outputs, labels).backward()
    optimizer.step()
    cleanup()
In the above example, is the computed loss synchronized among all nodes? I.e., does the loss value represent only each node's loss, or is it averaged among all nodes? |
st175189 | Solved by albanD in post #2
The DDP() wrapper takes care of all the synchronizations and offers an nn.Module-like API so that you can use it transparently. |
st175190 | The DDP() wrapper takes care of all the synchronizations and offers an nn.Module-like API so that you can use it transparently. |
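One nuance worth adding as an editorial note: DDP averages the gradients during backward, but the loss tensor on each rank stays local. A minimal hedged sketch of averaging it manually for logging (loss is assumed to be the per-rank scalar from the snippet above, and the process group is already initialized):
import torch.distributed as dist

loss_for_logging = loss.detach().clone()
dist.all_reduce(loss_for_logging, op=dist.ReduceOp.SUM)  # sum across ranks
loss_for_logging /= dist.get_world_size()                # then average for reporting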
st175191 | Hi, do you know where the code that performs gradient synchronization during the backward pass lives in the PyTorch source? |
st175192 | @meilu_zhu The DDP wrapper creates a c10d.Reducer which is responsible for concatenating multiple gradients into larger buckets and reducing them. You can find the source code at torch/csrc/distributed/c10d/reducer.cpp. |
st175193 | Hi @pietern, thanks for your answer. DistributedDataParallel automatically averages the gradients when calling loss.backward(), but I couldn’t find the code showing how calling loss.backward() triggers torch/csrc/distributed/c10d/reducer.cpp to concatenate multiple gradients. Could you tell me where it is, please? |
st175194 | In https://pytorch.org/tutorials/intermediate/ddp_tutorial.html, the code demo_checkpoint really confused me. There are two main questions:
(1) In demo_checkpoint, all processes need to be synchronized by loading the same checkpoints saved by process 0. Is this necessary when we train a model with DistributedDataParallel?
(2) If it is necessary to synchronize the model across multiple nodes and GPUs by loading the same checkpoint, how can node-B load checkpoints saved on node-A?
I don’t know if I understand demo_checkpoint in the right way. Could you please help answer these questions? Thanks! |
st175195 | @hhxx
(1) In demo_checkpoint, all processes need to be synchronized by loading the same checkpoints saved by process 0. Is this necessary when we train a model with DistributedDataParallel?
No, this is not necessary. This is only useful when your training job takes very long and can crash in the middle. You can then use the checkpoint to recover instead of starting over from scratch.
(2) If it is necessary to synchronize the model across multiple nodes and GPUs by loading the same checkpoint, how can node-B load checkpoints saved on node-A?
I don’t know if I understand demo_checkpoint in the right way. Could you please help answer these questions? Thanks!
The recovery scheme should be application-dependent. That tutorial demonstrates single-machine multi-GPU DDP with checkpointing, so all DDP processes can read from the same file. If you need checkpointing and your training spans multiple machines, you can load the checkpoint on rank 0 and then broadcast it to the other ranks using torch.distributed.broadcast.
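A minimal hedged sketch of that recovery pattern (editorial illustration, not code from the tutorial; "ckpt.pt" and model are placeholders, and the process group is assumed to be initialized already):
import torch
import torch.distributed as dist

if dist.get_rank() == 0:
    objects = [torch.load("ckpt.pt", map_location="cpu")]  # only rank 0 reads the file
else:
    objects = [None]
dist.broadcast_object_list(objects, src=0)  # ship the state dict to every rank
model.load_state_dict(objects[0])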
BTW, we probably should call that tutorial “intermediate”, and use this one as a starting example. |
st175196 | Thanks very much! It is very clear.
Another question: when we train a model with DistributedDataParallel, DistributedSampler is suggested to be used together with it, and we need to add sampler.set_epoch(epoch) before each epoch starts, according to set_epoch for DistributedSampler. |
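A minimal sketch of that pattern (editorial illustration; dataset and num_epochs are placeholders):
import torch
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # gives each epoch a different shuffle, consistent across ranks
    for batch in loader:
        ...  # forward / backward / step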
st175197 | I have a parameter, and it’s initialized during the first forward pass by:
def forward(self, x):
    if not self.initialized:
        a = nn.Parameter(torch.mean(x))
        self.initialized = True
Note that x is the input data.
So the problem is:
when using DDP, the batch data is different on each device, so parameter a is initialized differently on each device. This cannot be solved by setting a random seed, I think; it’s more of a sync problem.
How can I initialize it the same way on all devices? |
st175198 | Solved by rvarm1 in post #6
Distributed collectives such as broadcast/allreduce can help you initialize them in the same way across different workers.
For example, you can use broadcast to make the value the same as rank 0’s across all ranks, or allreduce to take the mean value across all ranks. The different collectives are … |
st175199 | Thanks for your help!
I added more information. I think it cannot be solved by setting a random seed. |
st175200 | Sorry, I misinterpreted your case. I thought a was referring to a module parameter, but x refers to a batch of your dataset.
I don’t see any straightforward solution, as the batch of inputs will inevitably vary per device. However, you could get a rough estimate by doing the following:
register a as a parameter when instantiating your model instead of defining it in the forward method:
class Model(torch.nn.Module):
    def __init__(self, a_init):
        super().__init__()
        self.a = torch.nn.Parameter(a_init)
instantiate a while initializing your model through the following :
a_init = torch.mean(x) / n_batch
model = Model(a_init)
where n_batch denotes your number of batches and x is your whole dataset. This way you will instantiate a as an average of your dataset, rescaled by the number of batches. |
st175201 | Distributed collectives such as broadcast/allreduce can help you initialize them in the same way across different workers.
For example, you can use broadcast to make the value the same as rank 0’s across all ranks, or allreduce to take the mean value across all ranks. The different collectives are documented here: Distributed communication package - torch.distributed — PyTorch 1.10.0 documentation |
st175202 | Thanks, this solved my problem.
def forward(self, x):
    if not self.initialized:
        mean_x = torch.mean(x)
        torch.distributed.all_reduce(mean_x, torch.distributed.ReduceOp.SUM)
        a = nn.Parameter(mean_x / torch.distributed.get_world_size())
        self.initialized = True |
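For completeness, an editorial sketch of the broadcast-based alternative mentioned in the reply above (same imports and context as the snippet; the parameter is registered on the module as self.a):
def forward(self, x):
    if not self.initialized:
        mean_x = torch.mean(x)
        torch.distributed.broadcast(mean_x, src=0)  # every rank takes rank 0's value
        self.a = nn.Parameter(mean_x)
        self.initialized = True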
st175203 | Hi everyone,
Fortunately, it is becoming easier and easier to access NVSwitch’ed GPUs in commodity cloud environments, even with some pretty attractive spot pricing. My general understanding is that NVSwitch allows uniform memory access to all GPUs from all GPUs, so you can think of it as “one giant GPU” with a continuous memory space. CUDA unified memory allows for this, using CUDAMallocManaged (though the other “purpose” of CUDA unified memory is to seamlessly integrate with much slower, host memory). My question is this - is there a way to get PyTorch to, when on an NVSwitch system, treat all the memory as one big block? I know that of course its better to do manual data parallelism, and/or shard your model across GPUs if it’s too big, use DeepSpeed, etc. etc. but, for ease of use… wouldn’t it be nice if you could just access more and more memory going across multiple devices? The ease of use would definitely more than make up for a performance penalty (NVSwitch bandwidth still is not the same as on-device bandwidth but it is like relatively close…)… there are a lot of times where it would be nice to port single GPU code over to a bigger system only for the purpose of making the model a little bit bigger than one GPU, making the batch size a little bigger (without dealing with distributed data parallel), etc.
I have tried to recompile PyTorch replacing cudaMalloc with cudaMallocManaged, as in this paper:
“Efficient Use of GPU Memory for Large-Scale Deep Learning Model Training” (MDPI)
But PyTorch still pins the memory to one device (and if it grows beyond that, it spills into CPU memory not onto the other GPUs).
So anyway, I figured I would ask folks here who have more intimate knowledge of PyTorch’s allocation code to see if this might be a quick fix.
Thanks everyone for your time! |
st175204 | mohotmoz:
My general understanding is that NVSwitch allows uniform memory access to all GPUs from all GPUs, so you can think of it as “one giant GPU” with a continuous memory space.
No, NVSwitch uses a more sophisticated connectivity between the devices as described here. It doesn’t “automatically” create a single device.
mohotmoz:
I have tried to recompile pytorch replacing CUDAMalloc with CUDAMallocManaged as in this paper
[…]
But PyTorch still pins the memory to one device (and if it grows beyond that, it spills into CPU memory not onto the other GPUs).
That would be expected, as Unified memory is used. |
st175205 | thank you for the fast reply -
reading the link you sent - emphasis added:
It supports full all-to-all communication with direct GPU peer-to-peer memory addressing. These 16 GPUs can be used as a single high-performance accelerator with unified memory space and up to 10 petaFLOPS of deep learning compute power.
I guess I’m just wondering if that is possible for DL workloads with PyTorch…? Sounds like no? thank you! |
st175206 | I don’t think the “single accelerator” means it’s usable as a single device, but as an accelerated system due to the better interconnectivity.
mohotmoz:
I guess I’m just wondering if that is possible for DL workloads with PyTorch…? Sounds like no?
Yes, you can use NVLink/NVSwitch through DistributedDataParallel or any other parallel approach. |
st175207 | Hi!
I am working in image retrieval and I would like to compute the loss on the entire dataset.
In order to do that I have to first compute the features for the whole dataset, and then compute the loss in a batch-wise manner and do gradient accumulation (due to memory constraints).
I wanted to use DistributedDataParallel in order to speed up my training, but did not manage to do it.
The training would be on a single node with 3 to 4 gpu’s.
This is an example of what I am basically trying to do :
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor, Compose, Resize, Normalize, CenterCrop
from torchvision.datasets import CIFAR10
from torchvision.models import resnet18
from tqdm import tqdm
transform = Compose(
(Resize((256,256)),
CenterCrop(224),
ToTensor(),
Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
)
)
dts_train = CIFAR10("/users/r/ramzie/datasets/", train=True, transform=transform, download=True)
def get_loader(dts, sampler=None):
return DataLoader(
dts,
batch_size=128,
shuffle=False,
drop_last=False,
pin_memory=True,
num_workers=10,
sampler=sampler,
)
class L2Norm(nn.Module):
def forward(self, X):
return F.normalize(X, dim=-1)
def criterion(di, lb, features, labels):
scores = torch.mm(di, features.t())
gt = lb.view(-1, 1) == labels.unsqueeze(0)
return F.relu(-scores[gt]).mean() + F.relu(scores[~gt]).mean()
net = resnet18(pretrained=True)
net.fc = L2Norm()
_ = net.cuda()
opt = torch.optim.SGD(net.parameters(), 0.1)
scaler = torch.cuda.amp.GradScaler()
for e in range(2):
loader = get_loader(dts_train)
features = []
labels = []
# We first compute the features and labels for the whole dataset
# This could a first distributed loop
for (x, y) in tqdm(loader, 'computing features'):
with torch.cuda.amp.autocast():
with torch.no_grad():
feat = net(x.cuda())
features.append(feat)
labels.append(y)
features = torch.cat(features)
labels = torch.cat(labels).cuda()
####################################################
####################################################
loader = get_loader(dts_train)
# This is the bottleneck, would could also be distributed
for (x, y) in tqdm(loader, 'accumulating gradient'):
with torch.cuda.amp.autocast():
di = net(x.cuda())
lb = y.cuda()
loss = criterion(di, lb, features, labels) / len(features)
# gradient accumulation for the entire dataset
scaler.scale(loss).backward()
####################################################
####################################################
# only at the end perform optimization (full batch)
scaler.step(opt)
scaler.update()
Thank you if you have any time to help! |
st175208 | Hi, what exactly is the issue that you run into when using DDP? Is the training not sped up as you'd expect, or does it run into memory issues (since you mentioned a memory constraint)? |
st175209 | Hi,
I had several issues, one of them indeed being that it does not speed up training as I would expect.
I have the following issues:
How to make sure that I have the correct features and labels variables on all GPUs (when using DistributedSampler I do not have the exact number of samples, 50000 vs 50001)
Then correctly get all the batches to do the gradient accumulation (same issue as above, and I am also not sure this is the proper way to do gradient accumulation in a distributed setting)
My distributed version of the code is not really more efficient (block of code in this reply):
With a single GPU: 128 s for 2 epochs, 315 s for 5 epochs
With 3 GPUs: 101 s for 2 epochs, 235 s for 5 epochs
which is roughly a 1.3x speedup; is that expected?
Is there something to know about the use of mixed precision in a distributed setting?
import os
import logging
from time import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor, Compose, Resize, Normalize, CenterCrop
from torchvision.datasets import CIFAR10
from torchvision.models import resnet18
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.utils.data.distributed import DistributedSampler
from torch.nn.parallel import DistributedDataParallel
class L2Norm(nn.Module):
def forward(self, X):
return F.normalize(X, dim=-1)
def criterion(di, lb, features, labels):
scores = torch.mm(di, features.t())
gt = lb.view(-1, 1) == labels.unsqueeze(0)
loss = F.relu(-scores[gt]).mean() + F.relu(scores[~gt]).mean()
return loss
def main_worker(rank, world_size, dts_train):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
dist.init_process_group("nccl", rank=rank, world_size=world_size)
torch.distributed.barrier()
net = resnet18(pretrained=True)
net.fc = L2Norm()
net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net)
net.to(rank, non_blocking=True)
net = DistributedDataParallel(net, device_ids=[rank], output_device=rank)
opt = torch.optim.SGD(net.parameters(), 0.1)
scaler = torch.cuda.amp.GradScaler()
for e in range(2):
features_sampler = DistributedSampler(dts_train)
features_loader = DataLoader(
dts_train,
batch_size=128,
shuffle=False,
drop_last=False,
pin_memory=True,
num_workers=10,
sampler=features_sampler,
)
features_sampler.set_epoch(e)
features = []
labels = []
# We first compute the features and labels for the whole dataset
for (x, y) in features_loader:
with torch.cuda.amp.autocast():
with torch.no_grad():
feat = net(x.to(rank, non_blocking=True))
features.append(feat)
labels.append(y)
features = torch.cat(features)
labels = torch.cat(labels).to(rank, non_blocking=True)
new_features = [torch.empty_like(features) for i in range(world_size)]
new_labels = [torch.empty_like(labels) for i in range(world_size)]
torch.distributed.barrier()
dist.all_gather(new_features, features)
dist.all_gather(new_labels, labels)
new_features = torch.cat(new_features).to(rank, non_blocking=True)
new_labels = torch.cat(new_labels).to(rank, non_blocking=True)
del features, labels
accumulation_sampler = DistributedSampler(dts_train)
accumulation_loader = DataLoader(
dts_train,
batch_size=128,
shuffle=False,
drop_last=False,
pin_memory=True,
num_workers=10,
sampler=accumulation_sampler,
)
accumulation_sampler.set_epoch(e)
for (x, y) in accumulation_loader:
with torch.cuda.amp.autocast():
di = net(x.to(rank, non_blocking=True))
lb = y.to(rank, non_blocking=True)
loss = criterion(di, lb, new_features, new_labels) / len(new_features)
# gradient accumulation for the entire dataset
scaler.scale(loss).backward()
torch.distributed.barrier()
scaler.step(opt)
scaler.update()
print("finished")
dist.destroy_process_group()
if __name__ == '__main__':
logging.basicConfig(
format='%(asctime)s - %(levelname)s - %(message)s',
datefmt='%m/%d/%Y %I:%M:%S',
level=logging.INFO,
)
start = time()
transform = Compose(
(
Resize((256, 256)),
CenterCrop(224),
ToTensor(),
Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
)
)
dts_train = CIFAR10("/users/r/ramzie/datasets/", train=True, transform=transform, download=False)
def get_loader(dts, sampler=None):
return DataLoader(
dts,
batch_size=128,
shuffle=False,
drop_last=False,
pin_memory=True,
num_workers=10,
sampler=sampler,
)
mp.spawn(
main_worker,
nprocs=3,
args=(3, dts_train),
)
end = time()
print(f"took: {end-start}")
Thanks ! |
st175210 | Hi, I am using DDP on a single node with NCCL backend. After a couple of training epochs I got the following warning:
[E ProcessGroupNCCL.cpp:587] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1803308 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=1800000) ran for 1803187 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1803385 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1803386 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1803385 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1802504 milliseconds before timing out.
and then I got the following traceback on each of the GPUs:
File "/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/venv/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 878, in forward
self._sync_params()
File "/venv/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1379, in _sync_params
self._distributed_broadcast_coalesced(
File "/venv/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1334, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(
RuntimeError: NCCL communicator was aborted on rank 1. Original reason for failure was: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1803385 milliseconds before timing out.
I am using torch version 1.10.0+cu102 with python3.9. Any idea? |
st175211 | This probably indicates that there might have been some sort of CUDA/NCCL deadlock causing these timeouts. There are a few ways to debug this:
Set the environment variable NCCL_DEBUG=INFO; this will print NCCL debugging information.
Set the environment variable TORCH_DISTRIBUTED_DEBUG=DETAIL; this will add significant additional overhead but will give you an exact error if there are mismatched collectives.
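As a small hedged sketch, both flags can be set from Python before torch.distributed is initialized (or exported in the shell/job script that launches the run):
import os

os.environ["NCCL_DEBUG"] = "INFO"
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"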
st175212 | You can also try passing broadcast_buffers=False to DDP, although note that this will disable buffer synchronization which might affect model quality if you wanted to ensure all buffers are synchronized. |
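A minimal sketch of that option (model and rank are placeholders from the surrounding discussion):
import torch.nn.parallel as parallel

ddp_model = parallel.DistributedDataParallel(
    model,
    device_ids=[rank],
    broadcast_buffers=False,  # skip the buffer broadcast at the start of forward
)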
st175213 | Thank you @pritamdamania87 @rvarm1 for your replies.
@pritamdamania87 I am running with the env variables you mentioned to capture the debugging log, since the error appears randomly at some epochs during training.
@rvarm1 is it a standard solution to avoid such problems? |
st175214 | I’m trying to ensure consistency when creating a DataLoader instance between two cases:
I pass a DistributedSampler to the sampler argument of the DataLoader:
dataloader = DataLoader(..., sampler=DistributedSampler(...))
I directly instantiate the DataLoader:
dataloader = DataLoader(..., sampler=None)
The consistency I’m referring to is when my dataset size is not divisible by the batch size. The documentation specifies that I can handle that through the drop_last argument. That way, using either:
dataloader = DataLoader(..., sampler=DistributedSampler(...,shuffle=False, drop_last=True))
dataloader = DataLoader(..., shuffle=False, drop_last=True, sampler=None)
should ensure iterating identically (i.e. same ordering) over the batches with an equal size per batch.
I have several questions linked to my case:
Regarding the way I handle the drop_last argument, is my conclusion correct?
Both DataLoader and DistributedSampler possess shuffle and drop_last arguments. When used in conjunction, which arguments have priority?
Both the documentations of DataLoader and DistributedSampler state that:
drop_last (bool, optional) – if True, then the sampler will drop the tail of the data to make it evenly divisible across the number of replicas. If False, the sampler will add extra indices to make the data evenly divisible across the replicas. Default: False.
I wonder how the extra indices are added (I assume through uniform sample with replacement over the whole set of indices?).
Is there any way to ensure consistency between my two cases if I don’t want to drop the last batch with uneven size (i.e. use drop_last=False)? I guess I would need to control how the extra indices are added; is it possible?
If some of these points have already been addressed, I apologize and please redirect me towards the specific topics. Thanks in advance. |
st175215 | Assume we have two nodes, node-A and node-B, each with 4 GPUs (i.e. ngpu_per_node=4). We set args.batch_size = 256 on each node, meaning that we want each node to process 256 images in each forward pass.
(1) If we use DistributedDataParallel in the one-GPU-per-process mode, should we manually divide the batch size by ngpu_per_node in torch.utils.data.DataLoader, i.e. torch.utils.data.DataLoader(batch_size=args.batch_size / 4) (the way it is done in the official PyTorch ImageNet example)? My original assumption was that DistributedSampler can handle this, because we have passed world_size and rank to DistributedSampler. If I am wrong, please point it out, thanks!
(2) If dividing the batch size by ngpu_per_node is the correct way, I wonder what happens if we do not do that.
Does it mean that on each node, 4*batch_size images are processed per forward pass?
Will 4*len(dataset) images be processed in one epoch, or will the number of forward passes be four times smaller than usual (i.e. the total number of images processed per epoch stays the same)?
st175216 | Solved by mrshenli in post #6
I agree with all your analysis on the magnitude of the gradients, and I agree that it depends on the loss function. But even with MSE loss fn, it can lead to different conclusions:
If the fw-bw has processed 8X data, we should set lr to 8X, meaning that the model should take a larger step if it ha… |
st175217 | hhxx:
(2) If dividing the batch size by ngpu_per_node is the correct way, I wonder what will happen if we do not do that.
Does it mean that in each node, 4*batch_size images are processed per forward pass?
Will 4*len(dataset) images be processed in one epoch, or will the number of forward passes be four times smaller than usual (i.e., the total number of images processed per epoch stays the same)?
You are correct. Each DataLoader instance pairs with a DDP instance. If you do not divide the batch-size=256 by 4, then each DDP instance will process 256 images. As your env has 8 GPUs in total, there will be 8 DDP instances. So one iteration will process 256 * 8 images in total.
However, DDP does divide the gradients by the world_size by default (see the code). So, when configuring the learning rate, you only need to consider the batch_size of a single DDP instance. |
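To make this concrete, a small sketch for the setup above (2 nodes × 4 GPUs, args.batch_size = 256 per node); dataset and rank are placeholders:
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

world_size = 8                  # 2 nodes x 4 GPUs, one DDP process per GPU
per_process_batch = 256 // 4    # per-node batch divided by ngpu_per_node
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=True)
loader = DataLoader(dataset, batch_size=per_process_batch, sampler=sampler)
# each of the 8 processes now sees 64 images per iteration, i.e. 512 images globally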
st175218 | Another question: if we do not divide the batch size by 8, will the total number of images processed in one epoch be the same as usual, or eight times larger?
As for the learning rate: if we have 8 GPUs in total, there will be 8 DDP instances. If the batch size in each DDP instance is 64 (divided manually), then one iteration will process 64×4=256 images per node. Taking all GPUs into account (2 nodes, 4 GPUs per node), one iteration will process 64×8=512 images. Assume that in the one-GPU-one-node scenario we set 1×lr when batch-size=64, 4×lr when batch-size=256, and 8×lr when batch-size=512 (the common strategy of increasing the learning rate linearly with batch size). Back in the DDP scenario (2 nodes, 4 GPUs per node), what learning rate should we use? 1×lr, 4×lr, or 8×lr? |
st175219 | hhxx:
Another question: if we do not divide the batch size by 8, will the total number of images processed in one epoch be the same as usual, or eight times larger?
The total number of images processed will be 8 times larger, because each DDP instance/process will process batch_size images.
Back in the DDP scenario (2 nodes, 4 GPUs per node), what learning rate should we use? 1×lr, 4×lr, or 8×lr?
It should be 1× lr, because DDP calculates the average instead of the sum of all local gradients. Let's use some numbers to explain this. Assume every image leads to a torch.ones_like(param) gradient for each parameter.
For local training without DDP, if you set batch_size = 64, the gradient for each parameter will then be torch.ones_like(param) * 64.
For 8-process DDP training, if you set batch_size = 64, the local gradient for each parameter will also be torch.ones_like(param) * 64. Then DDP uses collective communication to calculate the sum of the gradients across all DDP instances, which will be torch.ones_like(param) * 64 * 8, and then DDP divides that value by 8. So the final gradient in the param.grad field will still be torch.ones_like(param) * 64 (the code actually divides first and then does the global sum). So, when setting the lr, you only need to consider the local batch_size. |
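The same arithmetic written out as a tiny sketch; per_sample_grad stands in for the torch.ones_like(param) gradient above:
world_size = 8
local_batch = 64
per_sample_grad = 1.0                        # stand-in for torch.ones_like(param)
local_grad = per_sample_grad * local_batch   # what each DDP process computes locally
averaged = sum(local_grad for _ in range(world_size)) / world_size
assert averaged == local_grad                # DDP's averaging keeps param.grad at the local scale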
st175220 | According to the discussion in "Is average the correct way for the gradient in DistributedDataParallel", I think we should set 8×lr. I will state my reason for the 1-node, 8-GPU, local-batch=64 (images processed by one GPU each iteration) scenario:
(1) Let us consider a batch of images (batch-size=512). In the DataParallel scenario, a complete forward-backward pipeline is:
the input data are split into 8 slices (each containing 64 images), and each slice is fed to the net to compute an output
outputs are concatenated on the master GPU (usually GPU 0) to form a [512, C] output
compute the loss with the ground truth (same dimension: [512, C]): loss = \frac{1}{512} \sum_{i=1}^{512} mse(output[i], groundtruth[i]) (using MSE loss as illustration)
use loss.backward() to compute gradients.
So the final [512, C] outputs are the same as if computed on one GPU, and the learning rate here should be set to 8×lr to match the 512 batch-size one-GPU-one-node scenario.
(2) Secondly, when DistributedDataParallel is used, the pipeline is:
the input data are also split into 8 slices
outputs are computed on each GPU to form [64, C] outputs
on each GPU, compute the loss loss = \frac{1}{64} \sum_{i=1}^{64} mse(output[i], groundtruth[i]) and compute the gradients grad_k (k is the GPU number, k = 1, 2, ..., 8); (this is different from DataParallel, which needs to collect all outputs on the master GPU)
average the gradients across all GPUs: avg_grad = \frac{1}{8} \sum_{k=1}^{8} grad_k
This way, the averaged gradients are also the same as the gradients computed in the one-GPU-one-node scenario. So I think the learning rate here needs to be set to 8×lr to match the 512 batch-size one-GPU-one-node scenario.
The main difference between you and me is that when the local batch is set to 64, I think the local gradients will be averaged over the local samples, resulting in torch.ones_like(param)*64/64, but you think the local gradients will be summed over the local samples, resulting in torch.ones_like(param) * 64. I think the local gradients will be averaged mainly because loss functions in PyTorch, like mse(), compute the average loss over all input samples, so the gradients computed from such a loss are also averaged over all input samples.
I do not know if I understand DistributedDataParallel correctly. Please let me know if anything is wrong. |
st175221 | I agree with all your analysis on the magnitude of the gradients, and I agree that it depends on the loss function. But even with MSE loss fn, it can lead to different conclusions:
If the fw-bw has processed 8X data, we should set lr to 8X, meaning that the model should take a larger step if it has processed more data as the gradient is more accurate. (IIUC, this is what you advocate for)
If the gradient is of the same magnitude, we should use 1X lr, especially when approaching convergence. Otherwise, if we use 8X lr, it is more likely to overshoot and hurt converged model accuracy.
After reading your analysis, I realized that, with MSE loss fn, the discussion is mostly irrelevant to DDP. The question would then be, if I increase batch size by k, how should I adjust the learning rate, which is an open question. |
st175222 | Is it correct that when the local batch size is 64 (i.e., torch.utils.data.DataLoader(batch_size=64) and torch.utils.data.distributed.DistributedSampler() are used), and there are N processes in total in DDP (distributed over one node or more than one node), the forward-backward process is similar to the forward-backward process on 1-GPU-1-node using 64×N batch-size inputs? |
st175223 | For SGD, this paper (https://arxiv.org/abs/1706.02677) suggests: when the minibatch size is multiplied by k, multiply the learning rate by k. |
st175224 | For my better understanding, @mrshenli - can you please answer?
Suppose we have 32*8 images in the dataset, and the batch size is 32. We want to train the model for 1 epoch only. Now consider following 3 scenarios. Note that we use same LR and optimizer function in all three cases below.
(1) Single Node - Single GPU: In this case, one epoch will require 8 steps to execute i.e. in each step, 32 images will be processed. The gradient will be calculated and applied 8 times. The model parameters will get updated 8 times.
(2) Single Node - Multiple GPU using DataParallel: Suppose we use 8 GPUs. In this case, one epoch will still require 8 steps to execute i.e. in each step, 32 images will be processed (albeit each GPU will process only 4 images and then the gradients will be summed over). The gradient will be calculated and applied 8 times. The model parameters will get updated 8 times.
(3) Single Node - Multiple GPU using DistributedDataParallel: Suppose we use 8 GPUs. In this case, one epoch will require just 1 step to execute as in each step 32*8 images will be processed. However, this also means that the gradient will be calculated and applied only 1 time. And consequently, the model parameters will get updated just 1 time. So, in order to get similar results as previous two points (#1 and #2 above), we will have to execute 8 epochs instead of 1, as gradients are averaged in the backward function and hence effective weight updates are almost similar in scenario 3 and those in scenarios 1 and 2 above.
Is this understanding correct? |
st175225 | I think that points 1) and 2) are correct. And if you want to get the same behaviour in the DDP experiment (3), you shouldn’t do 8 epochs with batch size 32.
You just have to pass batch_size/num_GPUs as your real batch size, i.e. 32/8=4. Then each of the 8 GPUs will process 4 images, i.e. 4 images * 8 GPUs = 32 images per step (just like (1) and (2)). |
st175226 | @ekurtic Thanks for quick response.
I am trying to find out how much training time is gained (without affecting accuracy too much) by using DistributedDataParallel. As you suggest, if I use a batch size of 4, then yes, the training will be faster. However, in that case I will be using the LR of batch size 32 for a batch size of 4. Is this okay? And if I divide the LR by 8, then I am effectively slowing down the training, and thus not gaining on training time. Am I missing something here? |
st175227 | From my understanding of how DDP works:
There are a few gains that result in better performance (faster training) with DDP. One of them is that in a DDP experiment, your model and optimizer are “replicated” on each GPU (before training starts; notice this happens only once). After that, each GPU will optimize your model independently and exchange gradients with other GPUs to ensure that optimization steps are the same at each GPU.
On the other side, DataParallel has to scatter input data from the main GPU, replicate your model on each GPU, do a forward pass, gather back all outputs to the main GPU to compute loss, then send back the results to each GPU so they can compute gradients of the model, and in the end, collect all those gradients on the main GPU to compute the optimization step. And all of this happens for each batch of input data. As you can see, in contrast to the DDP, there is a lot of communication, copying, synchronizing and so on.
Regarding the LR question:
If you use batch_size/num_GPUs = 32/8 = 4 as your batch size in DDP, then you don’t have to change the LR. It should be the same as the one in DataParallel with batch_size = 32, because the effective batch size that your model is working with is the same: 32. It’s just handled in a different way with DDP. |
st175228 | @ekurtic I agree with your description of advantages of DDP in terms of communication, copy, sync etc.
On the LR part, however, I am not sure. As per the discussion in this thread and a few others, each GPU will individually process 4 images, the loss will be calculated, and then during the backward pass the gradients will be calculated individually first. Then DDP calculates the average (instead of the sum) of all local gradients. This means that the gradient magnitude (the average value) is effectively in the range of that for a batch size of 4, and not 32. So I am not sure whether the LR of batch size 4 should be applied or that of 32. |
st175229 | Loosely speaking this is why I think we don’t need to update LR if we ensure that the effective batch sizes are the same:
(screenshot attachment) |
st175230 | Thanks for this explanation, I wonder about the following sentence in the context of DDP:
We would have to multiply loss value with the number of GPUs only if the loss function had “reduction=sum” and not “reduction=mean” (to cancel out the DDP gradient averaging)
If reduction=sum is used for computing the loss, it shouldn’t matter whether:
the loss is multiplied by the number of GPUs prior to calling backward(),
the gradient is multiplied by the number of GPUs after calling backward().
Do you agree? |
st175231 | Yes, I think I agree with that. As long as your “main loss” is a sum of some terms (either via reduction=sum or it’s just composed out of a few different losses), you would have to multiply it somewhere to cancel out the 1/num_GPUs averaging that comes with DDP. From what I have seen so far, the most common thing is to just multiply loss prior to calling the backward() (your first suggestion). However, multiplying after should be fine too but you have to be careful and multiply before the optimizer.step() is called. I guess this second approach can then be achieved in two different ways: either by multiplying gradients (which could be slow) or by multiplying your learning_rate so that the final weight update has a proper scale. |
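A sketch of the first variant (scaling the loss before backward()); ddp_model, inputs, targets and optimizer are placeholders here, and the scaling is only needed because reduction='sum' is used:
import torch
import torch.distributed as dist

criterion = torch.nn.MSELoss(reduction='sum')
world_size = dist.get_world_size()

optimizer.zero_grad()
output = ddp_model(inputs)
loss = criterion(output, targets) * world_size  # cancels DDP's 1/world_size gradient averaging
loss.backward()
optimizer.step()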
st175232 | ekurtic:
However, multiplying after should be fine too but you have to be careful and multiply before the optimizer.step() is called. I guess this second approach can then be achieved in two different ways: either by multiplying gradients (which could be slow) or by multiplying your learning_rate so that the final weight update has a proper scale.
Good call, I will have to remember that! Thanks for your quick answer. |
st175233 | Hi,
I'm now a research assistant and I want to replicate an experiment which used distributed data parallel. I have also heard good things about it, so I kind of want to try it as well. However, I noticed that the documentation says I have to ensure that the program has exclusive access to the GPU when using the NCCL backend (the source code of the project also used barrier(), which is only supported by NCCL without extra steps like compiling). Does this exclusiveness include other, non-distributed processes running on the GPU? For example, can I have a small model (probably using non-distributed DataParallel or simple single-GPU training) on the same GPU? I just want to make sure before I try, because the GPU is shared and I'm afraid I might crash others' programs. I have looked online but I can only see people reporting problems when they try to run two distributed instances on one GPU. |
st175234 | Yes, you could run into deadlocks or hard to debug hangs if your program does not have exclusive access to the GPU when using the NCCL backend. This is because synchronization done by the application can result in things such as waiting for all operations in a stream to complete, which could result in unpredictability if it ends up waiting on ops from other applications. |
st175235 | But according to what you say, as long as I'm the only one using the NCCL backend, the other programs shouldn't be locked, since they wouldn't need to wait for anything to finish? And as long as they finish, my program should continue to run. Is that how it works? |
st175236 | I am running a Distributed Data Parallel (DDP) job on a single node. The node has 10 GPUs, but I’m using only 4 of them. The cluster is powered by slurm, and I use sbatch exp.sh to submit the job, where the exp.sh is as follows:
#!/bin/bash
#SBATCH --partition=$MY_PARTITION
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
#SBATCH --cpus-per-task=2
srun python main.py
In the main.py (and other .py files it calls), I have set necessary environmental variables and created communication between workers:
os.environ['LOCAL_RANK'] = os.environ['SLURM_LOCALID']
os.environ['RANK'] = os.environ['SLURM_PROCID']
os.environ['WORLD_SIZE'] = os.environ['SLURM_NTASKS']
os.environ['MASTER_ADDR'] = my_host_name
os.environ['MASTER_PORT'] = '32767'
torch.distributed.init_process_group(backend='nccl')
The job seems to run smoothly. Then I log in to that node and type nvidia-smi. I get:
(nvidia-smi screenshot)
Please ignore GPUs 4-9, as the job is not using them at all. However, on GPUs 0-3 there are 16 processes, 4 for each GPU, and 3 out of these 4 show 0 usage! Instead, I think there should be 1 process for each GPU, so a total of 4 processes.
I also tried on other nodes, some with better GPUs and network connections. The above observation is not reproducible there; sometimes I do get 1 process per GPU instead.
Can anyone give some insight into what's happening? |
st175237 | I am trying to run simple regression example with pytorch multiprocessing. I am following the example here: Multiprocessing best practices — PyTorch 1.10.0 documentation 1
However, a few things are unclear to me. In the example page it is written:
for data, labels in data_loader:
optimizer.zero_grad()
loss_fn(model(data), labels).backward()
optimizer.step() # This will update the shared parameters
What is meant by update the shared parameters?
Do they update on individual processes? Or, since the model is shared (model.share_memory() achieves that, right?), does it update the shared copy of the parameters?
By default model parameters must be shared right? as it says in the note If torch.Tensor.grad is not None, it is also shared.
If it updates the shared copy, then is the backwards call calculating loss over single thread or sum over losses from all threads? If it is total loss then how can I print it?
If the model contain a layers which has conditional branches, how will optimizer update parameters?
Each process saves its own backward tree? If yes then how are parameters updated?
Sorry for the question dump, but I couldn't find proper answers as tutorials and documentation on this are rather scarce. |
st175238 | Regarding 1 & 2, yes the shared parameters will be updated. With the call to model.share_memory parameter tensors are put into SHM, so each process is actually operating on the same model.
The wording is probably not the best, but that part means the grad tensor is also shared as long as grad is not none. If you’ve called model.shared_memory as example does, model params will always be shared.
There aren’t any application-level threads here, loss is computed per-process so it can be different across each process. So it is not total loss.
If the model has conditional branches, autograd automatically handles this by constructing the backward graph on the fly during forward pass.
Backward is run locally on each process, there is no inter-process coordination. The .grad field is updated in a shared way across each process, this can lead to some processes stepping on each other’s updates, but this is a result of hogwild training. |
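A minimal Hogwild-style sketch of what is described above; MyModel and dataset are placeholders:
import torch
import torch.multiprocessing as mp
import torch.nn.functional as F

def worker(model, dataset):
    # the parameter tensors live in shared memory, so optimizer.step() here
    # updates the same weights every other worker is using
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for data, labels in dataset:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(data), labels)
        loss.backward()
        optimizer.step()

if __name__ == '__main__':
    model = MyModel()        # placeholder model
    model.share_memory()     # move parameters (and grads, once created) into shared memory
    processes = []
    for _ in range(4):
        p = mp.Process(target=worker, args=(model, dataset))  # dataset is a placeholder
        p.start()
        processes.append(p)
    for p in processes:
        p.join()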
st175239 | Hi, I don’t have any practical experience parallelizing but I was hoping to parallelize the training of my model over multiple CPU cores in a specific way. The part I want to parallelize over is a for loop that contains a cumulative sum of the error. I thought that this was the most effective way to parallelize my code because my model is supposed to predict various high-dimensional quantities from high-dimensional inputs with the inputs and outputs having variable shapes. The code for the error gradient calculation of a single optimizer step would look something like
error = 0
for training_sample in training_samples:
    predicted_a, predicted_b, predicted_c = model.forward(training_sample.input)
    error += (torch.mean((predicted_a - training_sample.expected_a).pow(2)) +
              torch.mean((predicted_b - training_sample.expected_b).pow(2)) + ...)
error.backward()
For large sets of training samples, I got the impression that it was possible to parallelize this by having each CPU core work on calculating the errors due to a fraction of the training samples before adding them all up to calculate the gradient. Is this possible and advisable as a way to parallelize my problem, and how should I start to go about it? (I put this under the category of “distributed” as it seemed related but I couldn’t find a neat example that covers what I want to achieve)
Thank you! |
st175240 | By my estimation, about 16 cores from a cluster I have access to are sufficient for the memory requirements. However, I'd rather not request 16 cores just for the memory; I might as well parallelize the training to make the most of the cores, hence the question. Otherwise, I guess I could just train in mini-batches, one mini-batch at a time without parallelism, on 2 cores to avoid wasting resources. |
st175241 | This use case doesn’t exactly need distributed training, but you can try torch.jit.fork which is the main way to do task-level parallelism in PyTorch: torch.jit.fork — PyTorch 1.10.0 documentation. |
st175242 | I’m trying to reuse the servers in my university for Data Parallel Training (they’re hadoop nodes, no GPU, but the CPUs and memory are capable). I referred to PyTorch Distributed Overview — PyTorch Tutorials 1.10.0+cu102 documentation which seems to be super high level, can barely get a thing. Should I use DDP/RPC? Any ideas on how/where to get started?
I went through the example in examples/README.md at master · pytorch/examples · GitHub; it is still not very clear!
I understand that I should use gloo as the backend, but how do I configure the slave nodes?
I understand that process with rank 0 is the master, but where do I provide the slave IPs? How to get the slave(s) to listen to the master node for new jobs? How do I start the slave processes?
All of them are CentOS 7 servers, 12 cores/64GB per node, got about 12 of these.
Thanks in advance! |
st175243 | Solved by pritamdamania87 in post #4
You can just do this:
model = ToyModel()
ddp_model = DDP(model) |
st175244 | If you want to do Data Parallel Training, you should use DDP and this tutorial should help you: Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.10.0+cu102 documentation 3.
I understand that I should use gloo as the backend, but how do I configure the slave nodes?
I understand that process with rank 0 is the master, but where do I provide the slave IPs? How to get the slave(s) to listen to the master node for new jobs? How do I start the slave processes?
I think you probably need to familiarize yourself with ProcessGroups and our ProcessGroup API before looking into DDP. This tutorial would give a good overview: Writing Distributed Applications with PyTorch — PyTorch Tutorials 1.10.0+cu102 documentation 2
Essentially to run DDP, first you need spawn N processes and these processes have ranks [0, N). Now for all of these processes to talk to each other, first you need to initialize a ProcessGroup (think of this as a communication channel you are initializing across all processes) using init_process_group.
Regarding initialization you can refer to this section of our docs (TCP initialization): Distributed communication package - torch.distributed — PyTorch 1.10.0 documentation. You only need to specify the IP of the master on all the ranks/processes. All processes which are not rank 0 will try to connect to that IP and discover all of the other peers.
So essentially you start N processes and specify a master addr on all, then all processes discover each other via that master address. After that point these processes form a ProcessGroup and you can run collective operations as you wish across the entire process group. |
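For reference, a minimal sketch of what each process would run; the master IP, port, backend and rank values are placeholders:
import torch.distributed as dist

def run(rank, world_size):
    # every process points at the master's address; rank 0 listens,
    # the other ranks connect and discover their peers
    dist.init_process_group(
        backend='gloo',                        # CPU-only machines
        init_method='tcp://10.1.1.20:23456',   # placeholder master IP and port
        rank=rank,
        world_size=world_size)
    # ... build the model, wrap it with DistributedDataParallel, train ...
    dist.destroy_process_group()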
st175245 | Thank you for your inputs, that was really helpful!
Got the nodes talking to each other.
As I’ve mentioned, I don’t have GPUs installed on these machines. In the ToyModel example, how do I alter these two lines to move the processes to the CPUs of the respective nodes?
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# create model and move it to GPU (need to move to CPU) with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
It keeps throwing an error that the NVIDIA driver is missing, but I do not intend to use CUDA. |
st175246 | Hi,
I am running DDP for my training task. I observe that occasionally a python exception is raised:
Traceback (most recent call last):
File ".../python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
And I observe that this always occurs at the end of a for loop (e.g., the training for loop or the evaluation for loop). Moreover, this exception does not cause the process to terminate.
May I know what is the root cause of the issue? Does it affect my training? |
st175247 | Could you share the complete traceback? It's not clear what the error might be from the traceback you have provided. |
st175248 | It’s literally the complete traceback:
2021-07-02 01:58:40,520 - some log
Traceback (most recent call last):
File ".../lib/python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
2021-07-02 01:58:40,796 - another log
As you can see, the process does not terminate.
However, sometimes I see the “full” traceback as follows:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/usr/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/usr/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
This does not cause the process to terminate either. |
st175249 | Thanks for providing additional information, although I’m not sure what could be the reason for this traceback. The weird thing is the traceback doesn’t have any application code/ DDP code indicating where this is coming from.
Is it possible to share a minimal repro script so I can see if I can reproduce it on my end? |
st175250 | Hi @fermat97 ,
No, I didn’t. Since it doesn’t affect my task, I didn’t bother trying to fix it. |
st175251 | @hnt4499 I got exactly the same error as you. Can someone help with this issue? |
st175252 | Hi, I am using the nn.DataParallel module to use multiple GPUs for my Pegasus model.
The problem I am facing is that when DataParallel divides a batch across multiple GPUs, the paraphrases produced by PegasusForConditionalGeneration on each GPU do not have equal lengths, since the output length depends on the input.
I cannot force the model to produce fixed-length output (by truncating longer ones and padding shorter ones), as I don't want to truncate longer paraphrases or unnecessarily pad paraphrases to a large length.
GPU:0 produces outputs of length 48 while GPU:1 produces length 26. Is there a way to solve this problem?
/torch/nn/parallel/comm.py", line 235, in gather
return torch._C._gather(tensors, dim, destination)
RuntimeError: Input tensor at index 1 has invalid shape [15, 48], but expected [15, 26] |
st175253 | Hi, this seems like it might be a possible bug with DataParallel, but a repro with some example model and training code would be needed to confirm. If you’re able to get that could you please post an issue to Issues · pytorch/pytorch · GitHub? |
st175254 | I created a pytest fixture using a decorator to create multiple processes (using torch multiprocessing) for running model-parallel distributed unit tests with torch.distributed. I randomly encountered the CUDA initialization error below all of a sudden (while I was trying to fix some unit-test logic). Since then, all my unit tests have been failing, and I traced the failure back to my pytest fixture which calls torch.distributed.init_process_group(…).
Error traceback:
$ python3 -m pytest test/test_distributed.py::test_dummy
Process Process-1:
Traceback (most recent call last):
File "/usr/lib64/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/lib64/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/fsx-dev/FSxLustre20201016T182138Z/prraman/home/workspace/ws_M5_meg/src/M5ModelParallelism/test_script/commons_debug.py", line 34, in dist_init
torch.distributed.init_process_group(backend, rank=rank, world_size=world_size, init_method=init_method)
File "/usr/local/lib64/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 480, in init_process_group
barrier()
File "/usr/local/lib64/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 2186, in barrier
work = _default_pg.barrier()
RuntimeError: CUDA error: initialization error
Below is the pytest fixture I created:
# file: test_distributed.py
import os
import time
import torch
import torch.distributed as dist
from torch.multiprocessing import Process, set_start_method
import pytest
# Worker timeout *after* the first worker has completed.
WORKER_TIMEOUT = 120
def distributed_test_debug(world_size=2, backend='nccl'):
"""A decorator for executing a function (e.g., a unit test) in a distributed manner.
This decorator manages the spawning and joining of processes, initialization of
torch.distributed, and catching of errors.
Usage example:
@distributed_test_debug(worker_size=[2,3])
def my_test():
rank = dist.get_rank()
world_size = dist.get_world_size()
assert(rank < world_size)
Arguments:
world_size (int or list): number of ranks to spawn. Can be a list to spawn
multiple tests.
"""
def dist_wrap(run_func):
"""Second-level decorator for dist_test. This actually wraps the function. """
def dist_init(local_rank,
num_procs,
*func_args, **func_kwargs):
"""Initialize torch.distributed and execute the user function. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29503'
os.environ['LOCAL_RANK'] = str(local_rank)
# NOTE: unit tests don't support multi-node so local_rank == global rank
os.environ['RANK'] = str(local_rank)
os.environ['WORLD_SIZE'] = str(num_procs)
master_addr = os.environ['MASTER_ADDR']
master_port = os.environ['MASTER_PORT']
rank = local_rank
# Initializes the default distributed process group, and this will also initialize the distributed package.
init_method = "tcp://"
init_method += master_addr + ":" + master_port
print('inside dist_init, world_size: ', world_size)
torch.distributed.init_process_group(backend, rank=rank, world_size=world_size, init_method=init_method)
print("rank={} init complete".format(rank))
#torch.distributed.destroy_process_group()
# print("rank={} destroy complete".format(rank))
if torch.distributed.get_rank() == 0:
print('> testing initialize_model_parallel with size {} ...'.format(
2))
if torch.cuda.is_available():
torch.cuda.set_device(local_rank)
run_func(*func_args, **func_kwargs)
def dist_launcher(num_procs,
*func_args, **func_kwargs):
"""Launch processes and gracefully handle failures. """
# Spawn all workers on subprocesses.
#set_start_method('spawn')
processes = []
for local_rank in range(num_procs):
p = Process(target=dist_init,
args=(local_rank,
num_procs,
*func_args),
kwargs=func_kwargs)
p.start()
processes.append(p)
# Now loop and wait for a test to complete. The spin-wait here isn't a big
# deal because the number of processes will be O(#GPUs) << O(#CPUs).
any_done = False
while not any_done:
for p in processes:
if not p.is_alive():
any_done = True
break
# Wait for all other processes to complete
for p in processes:
p.join(WORKER_TIMEOUT)
failed = [(rank, p) for rank, p in enumerate(processes) if p.exitcode != 0]
for rank, p in failed:
# If it still hasn't terminated, kill it because it hung.
if p.exitcode is None:
p.terminate()
pytest.fail(f'Worker {rank} hung.', pytrace=False)
if p.exitcode < 0:
pytest.fail(f'Worker {rank} killed by signal {-p.exitcode}',
pytrace=False)
if p.exitcode > 0:
pytest.fail(f'Worker {rank} exited with code {p.exitcode}',
pytrace=False)
def run_func_decorator(*func_args, **func_kwargs):
"""Entry point for @distributed_test(). """
if isinstance(world_size, int):
dist_launcher(world_size, *func_args, **func_kwargs)
elif isinstance(world_size, list):
for procs in world_size:
dist_launcher(procs, *func_args, **func_kwargs)
time.sleep(0.5)
else:
raise TypeError(f'world_size must be an integer or a list of integers.')
return run_func_decorator
return dist_wrap
Below is how I call the pytest fixture:
@distributed_test_debug(world_size=2)
def test_dummy():
assert 1 == 1
I have seen some issues raised in the past about torch multiprocessing and CUDA not working well together; I'm not sure if this is related to that. Perhaps there is a different way I should be creating my processes to avoid this problem? Any help is appreciated.
I am using pytorch version: 1.8.0a0+ae5c2fe |
st175255 | You’re right that there can be a bunch of issues in getting CUDA + multiprocessing to work correctly, I’d suggest starting off reading here for more info: Multiprocessing best practices — PyTorch 1.10.0 documentation 5
The main recommendation is to try using the spawn start method via something like multiprocessing.set_start_method(...).
Alternatively, another recommendation is to try using torch.multiprocessing and the mp.spawn method as documented here: Multiprocessing package - torch.multiprocessing — PyTorch 1.10.0 documentation 3 (check torch.multiprocessing.spawn). |
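A minimal sketch of the second suggestion, replacing the hand-rolled Process loop with mp.spawn; the address, port, world size and test body are placeholders:
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    dist.init_process_group(
        backend='nccl',
        init_method='tcp://127.0.0.1:29503',   # placeholder address/port
        rank=rank,
        world_size=world_size)
    torch.cuda.set_device(rank)
    # ... run the actual test body here ...
    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = 2
    # mp.spawn uses the 'spawn' start method, which plays well with CUDA
    mp.spawn(worker, args=(world_size,), nprocs=world_size, join=True)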
st175256 | Hi,
I found this in my code, looked it over literally 100 times, and finally decided to make a replica in an MNIST version. It seems like this happens there as well.
Concept Description
Main Thread:
Trains and tosses the weights to the tester thread at a certain period of time.
Tester Thread:
The thread waits on a queue for the next weights. When it gets new weights, it starts to run a test (pretend this is the evaluation during training). To make a similar environment I put time.sleep(3) right after the tester takes the weights from the queue, so that the main thread still trains some more on its original copy.
Problem:
The saved model weights are not the same as the ones in the loaded model.
Sometimes they are the same, but they don't give the same results.
Code:
For reproducibility, I will attach the test code at the bottom.
This is not reproducible every time, so I gave up; I just mention it in case someone has had the same experience.
Summary:
As you can see, the weights at count 1 and the loaded weights are exactly the same, as shown below:
Weights at test count 1 while training: (screenshot of the printed state_dict)
Weights loaded at count 1: (screenshot of the printed state_dict)
"""
ref: https://korchris.github.io/2019/08/23/mnist/
"""
#Importing Library
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
#--- NN
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 32)
self.fc3 = nn.Linear(32, 10)
def forward(self, x):
x = x.float()
h1 = F.relu(self.fc1(x.view(-1, 784)))
h2 = F.relu(self.fc2(h1))
h3 = self.fc3(h2)
return F.log_softmax(h3, dim=1)
print("init model done")
##--- Define some inits and prepare data
batch_size = 64
test_batch_size = 1000
epochs = 10
lr = 0.1
momentum = 0.5
no_cuda = True
seed = 1
log_interval = 200
use_cuda = not no_cuda and torch.cuda.is_available()
torch.manual_seed(seed)
device = torch.device("cuda" if use_cuda else "cpu")
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
print("set vars and device done")
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=transform),
batch_size = batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, download=True,
transform=transform),
batch_size=test_batch_size, shuffle=True, **kwargs)
##--- Instantiate model
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum)
##--- define train, test
def train(log_interval, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
#----
from torch.multiprocessing import Queue
from threading import Thread
import time
class Tester(Thread):
def __init__(self, model, model_name, queue, print_log=True):
super().__init__()
self.done = False
self.model = model
self.model_name = model_name
self.model.eval()
self.queue = queue
self.print_log = print_log
def set_done(self):
self.done = True
def run(self):
count = 0
while not self.done:
count += 1
weights = self.queue.get()
time.sleep(3)
print(weights)
self.model.load_state_dict(weights)
print('[count: {}] weights: {}'.format(count, self.model.state_dict()))
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item()
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('[count: {}] Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format
(count, test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
save_name = '{}_{}.pth'.format(self.model_name, count)
print('Save path: {}'.format(save_name))
torch.save(model, save_name)
print(self.model.state_dict())
print('##########')
#---
# Train and save model starts here!
from copy import deepcopy
queue = Queue()
tester = Tester(model=Net().to(device), model_name='model', queue=queue, print_log=False)
tester.start()
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum)
count = 0
th_list = list()
for epoch in range(1, 2):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
count += 1
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
weights = {k: deepcopy(v) for k, v in model.state_dict().items()}
queue.put(weights)
tester.set_done()
print('Done')
#---
# Checking loaded model!
model = torch.load('model_1.pth')
model.eval()
print(model.state_dict())
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item()
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format
(test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset))) |
st175257 | This question doesn’t seem to be related to distributed, can you move it to the appropriate category? |
st175258 | Since my method is an autoregressive algorithm, it builds a huge autograd graph (gradient tape), so I am trying to do something like this:
for i in range(len(matrix.shape)):
    output = torch.utils.checkpoint.checkpoint(NNModel, matrix[i])
    loss = -output.mean()
where NNModel is a torch.nn.Module.
It works fine on single GPU but on DDP it throws this error
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 30 with name module.model.decoder.decoder_network.layers.1.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
I am running it with find_unused_parameters=False
Any workaround for this? |
st175259 | @shivammehta007 Can you try with find_unused_parameters=True? Also, can you provide a self contained repro of the issue?
Another option is you could use set_static_graph: pytorch/distributed.py at master · pytorch/pytorch · GitHub 10, if the parameters used in each iteration of your model are always the same. |
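A rough sketch of that second option, with placeholder names (NNModel, rank, optimizer, data_loader); note that _set_static_graph() is a private API as of PyTorch 1.10 and is only valid if the set of parameters used does not change between iterations:
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

model = NNModel().to(rank)
ddp_model = DDP(model, device_ids=[rank], find_unused_parameters=False)
ddp_model._set_static_graph()   # tell DDP the autograd graph is identical every iteration

for batch in data_loader:
    optimizer.zero_grad()
    # activation checkpointing stays inside NNModel.forward, e.g.
    # torch.utils.checkpoint.checkpoint(self.decoder, x)
    output = ddp_model(batch.to(rank))
    loss = -output.mean()
    loss.backward()
    optimizer.step()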
st175260 | Hello,
I'm learning how to train a model using DDP (torch.nn.parallel.DistributedDataParallel). I run my experiments on a cluster with three GPU nodes, each node having one GPU (an NVIDIA T4). I'm (kind of) aware that my setup isn't ideal. I'm not yet trying to get the last drop of FLOPS from my cluster; I'm simply stuck getting only a marginal improvement in training time when the training is distributed across the three workers.
It looks like I'm missing something obvious; can you help me find what it is? More precisely, I'm trying to figure out:
Why is my wall training time only 40% faster when I distribute the training on three nodes?
When distributing the training, is it expected that half of the GPU time is spent in ncclKernel_AllReduce_RING_LL_Sum_float?
Below are more details on what I do. I know that's a lot of reading and I don't expect much… I'll gladly take any advice one can offer.
Thanks a lot !
Hardware
I use a kubernetes cluster with:
Kubeflow CRDs (PyTorchJob, …)
7 nodes, 3 of which have one Tesla T4
A decent network connection between the nodes
Software
I use pytorch 1.10.0
I use CUDA
I use NCCL backend
I train for 20 epochs
I use a 1024 batch-size
I train the model straight from the example in PyTorchJob’s git repository (see below)
Results
Single node training (no distribution of any kind) :
Test accuracy at epoch 20 is 0.8386
WALL training time is 63.598 seconds
Training with 3 nodes :
Test accuracy at epoch 20 is 0.7622
WALL training time is 38.594 seconds (measured on master)
Profiling
I profile using torch.profiler.schedule(wait=1, warmup=1, active=3, repeat=1); a step is an epoch.
Using three GPUs: (profiler screenshot)
For one single GPU: (profiler screenshot)
Code
Model
The Model has 431 080 trainable parameters :
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4 * 4 * 50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4 * 4 * 50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
Complete training code
Complete training code (most of it comes from the pytorch operator example):
import argparse
import logging
import os
import sys
import time
from typing import List, Any
import numpy as np
from torchvision import datasets, transforms
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
WORLD_SIZE = int(os.environ.get('WORLD_SIZE', 0))
RANK = int(os.environ.get('RANK', 0))
EXPE_ID = os.environ.get('EXPE_ID', "no-expe-id")
POD_NAME = os.environ.get("K8S_POD_ID", "unknown pod name")
class MockProfiler(object):
def __enter__(self):
logging.debug(f"Entering {self}")
return self
def __exit__(self, exc_type, exc_val, exc_tb):
logging.debug(f"Exiting {self}")
def step(self):
pass
class WallTime(object):
def __init__(self, name):
self.wall = 0
self.name = name
def __repr__(self):
return f"WallTime({self.name}) @ {time.time()}"
def __enter__(self):
logging.debug(f"Entering {self}")
self.wall = time.time_ns()
def __exit__(self, exc_type, exc_val, exc_tb):
self.wall -= time.time_ns()
logging.debug(f"Exiting {self} after {abs(self.wall) / 1e9:.3f} WALL seconds")
class FashionMNISTInRam(datasets.FashionMNIST):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.buffer: List[Any] = [None] * len(self)
def prefetch(self):
for _ in range(len(self)):
_ = self[_]
return self
def __getitem__(self, index: int):
if self.buffer[index] is None:
self.buffer[index] = super().__getitem__(index)
return self.buffer[index]
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4 * 4 * 50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4 * 4 * 50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def should_distribute():
return dist.is_available() and WORLD_SIZE > 1
def is_distributed():
return dist.is_available() and dist.is_initialized()
def setup_logging():
stderr_handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('[%(asctime)s on {}] %(message)s'.format(POD_NAME))
stderr_handler.setLevel(logging.DEBUG)
stderr_handler.setFormatter(formatter)
logging.root.setLevel(logging.DEBUG)
logging.root.addHandler(stderr_handler)
def setup_cli():
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=1, metavar='N',
help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--save-model', action='store_true', default=False,
help='For Saving the current Model')
parser.add_argument('--dir', default='logs', metavar='L',
help='directory where summary logs are stored')
if dist.is_available():
parser.add_argument('--backend', type=str, help='Distributed backend',
choices=[dist.Backend.GLOO, dist.Backend.NCCL, dist.Backend.MPI],
default=dist.Backend.GLOO)
return parser.parse_args()
def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
logging.info('Train Epoch: {} [{}/{} ({:.0f}%)]\tloss={:.4f}'.format(
epoch, batch_idx * len(data), len(train_loader) * len(data),
100. * batch_idx / len(train_loader), loss.item()))
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
logging.info('accuracy={:.4f}'.format(float(correct) / len(test_loader.dataset)))
def main():
args = setup_cli()
torch.manual_seed(args.seed)
use_cuda = not args.no_cuda and torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
is_master = RANK == 0
if should_distribute():
logging.info('Using distributed PyTorch with {} backend'.format(args.backend))
dist.init_process_group(backend=args.backend)
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
with WallTime("data load"):
train_set = FashionMNISTInRam('../data', train=True, download=True, transform=transform).prefetch()
test_set = FashionMNISTInRam('../data', train=False, transform=transform).prefetch()
kwargs = \
{"sampler": torch.utils.data.distributed.DistributedSampler(
train_set,
num_replicas=WORLD_SIZE,
rank=RANK,
shuffle=True)} if is_distributed() else {"shuffle": True}
test_loader = torch.utils.data.DataLoader(
test_set,
batch_size=args.test_batch_size,
shuffle=False,
num_workers=1,
pin_memory=True)
train_loader = torch.utils.data.DataLoader(
train_set,
batch_size=args.batch_size,
num_workers=1,
pin_memory=True,
**kwargs)
model = Net().to(device)
if is_distributed():
if not use_cuda:
raise RuntimeError("Not using cuda")
model = nn.parallel.DistributedDataParallel(
model,
broadcast_buffers=True,
process_group=None,
bucket_cap_mb=25,
find_unused_parameters=False,
check_reduction=False)
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
profiler = MockProfiler()
if is_master:
trace_handler = torch.profiler.tensorboard_trace_handler(os.path.join(args.dir))
profiler = torch.profiler.profile(
schedule=torch.profiler.schedule(wait=1, warmup=1, active=3, repeat=1),
activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA, ],
on_trace_ready=trace_handler)
model_parameters = filter(lambda p: p.requires_grad, model.parameters())
params = sum([np.prod(p.size()) for p in model_parameters])
logging.info(f"The model has {params} trainable parameters")
with WallTime("training"):
with profiler as p:
for epoch in range(1, args.epochs + 1):
if is_distributed():
train_loader.sampler.set_epoch(epoch)
train(args, model, device, train_loader, optimizer, epoch)
if is_master:
logging.info("Testing")
test(model, device, test_loader)
p.step()
if __name__ == '__main__':
setup_logging()
logging.info(os.environ)
with WallTime("main"):
main()
Kubeflow CRD
I run this POC using the dedicated CRD, which looks like:
apiVersion: "kubeflow.org/v1"
kind: "PyTorchJob"
metadata:
name: "classif-minst-nccl"
spec:
pytorchReplicaSpecs:
Master:
replicas: 1
restartPolicy: OnFailure
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: pytorch
imagePullPolicy: IfNotPresent
image: debug-dist-pytroch-minst:$VERSION
args: ["--backend", "nccl", "--dir", "/tmp/tb/nccl-$VERSION", "--epochs", "20", "--batch-size", "1024"]
resources:
limits:
nvidia.com/gpu: 1
Worker:
# The "replicas" value is 2 when doing distributed training
replicas: 0
restartPolicy: OnFailure
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: pytorch
image: debug-dist-pytroch-minst:$VERSION
args: ["--backend", "nccl", "--dir", "/tmp/tb/nccl-$VERSION", "--epochs", "20", "--batch-size", "1024"]
resources:
limits:
nvidia.com/gpu: 1
Docker
The docker image is simply:
FROM pytorch/pytorch:1.10.0-cuda11.3-cudnn8-runtime
RUN pip install tensorboardX==1.6.0
RUN mkdir -p /opt/mnist
WORKDIR /opt/mnist/src
ADD mnist.py /opt/mnist/src/mnist.py
RUN chgrp -R 0 /opt/mnist \
&& chmod -R g+rwX /opt/mnist
ENTRYPOINT ["python", "/opt/mnist/src/mnist.py"]
Thanks a lot for reading until the end! |
st175261 | The achieved speedup would depend on your overall setup. You could profile the workload with Nsight Systems to see how long each call takes and where the bottleneck might be.
The fastest runner would have to wait for the others. Again, Nsight Systems might be useful here as it should show that other runners might still be executing code while one has to wait. |
st175262 | Long allreduce time could be due to 1) low network bandwidth or 2) data loading or computation causing desync between ranks; as @ptrblck mentioned, the fastest runner will wait for the others. Try to use simulated data and benchmark the performance.
One good way to profile is to use the torch profiler and dump the trace to Chrome for viewing the event timelines. |
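For reference, a minimal way to dump a Chrome trace with the torch profiler; train_loader and train_one_step are placeholders:
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for step, (data, target) in enumerate(train_loader):
        train_one_step(data, target)   # placeholder for forward/backward/optimizer step
        if step >= 10:
            break
prof.export_chrome_trace("trace.json")  # open via chrome://tracing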
st175263 | Hello @ptrblck & @Yanli_Zhao, first thing first, thank you for messages, it helps !
Here’s what I changed :
dist.init_process_group(backend=args.backend) is now done once the dataset is fully loaded. The objective is to get rid of potential sources of desynchronization.
I added a simple model (29 330 trainable parameters, 14x smaller than the original model). My goal is to reduce the communication overhead in distributed training.
I removed the call to the test function from the master’s training loop.
Here’s what I observe :
Training times
To train the simple model with 1 GPU takes 47.328 WALL seconds
To train simple model with 3 GPUs takes 23.765 WALL seconds
To train the original model with 3 GPUs takes 26.433 WALL seconds
Training time is divided by two when I triple the GPU capacity. This looks slightly better, but is still a bit unsatisfying.
60% of the time is spent synchronizing (regardless of the model version: simple or original).
When training the simple model, most of the time is spent synchronizing.
When training the original model, most of the time is spent in data transfer.
Based on the above, I guess that my original observation about time being spent in ncclKernel_AllReduce was wrong.
When training the simple model, the master is not spending time synchronizing. Would that indicate that it is the slowest node and the others wait for it? Or is that expected? If this is expected, could the synchronization time on nodes 1 and 2 simply be them waiting for the master to finish summing all the gradients?
More details
For some reason, until that point I had not noticed the "Distributed" view in the TensorBoard "pytorch profiler" tool; it is quite insightful.
Simpler model: (profiler screenshot)
Original model: (profiler screenshot)
Profiler (simple model)
As @Yanli_Zhao suggested, I loaded the profile in Chrome, but I'm not quite sure what I'm searching for. The dependencies of ncclKernel_AllReduce_RING_LL_Sum_float(ncclWorkElem) do not look problematic to me (but I have no reference point from a working cluster to compare with). Below is a capture of the Chromium profiler.
(Chromium trace screenshot)
Next steps?
Set up and use Nsight Systems to get a grasp on what's going on?
Anything else?
The simple model definition is below.
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.fc = nn.Linear(20*12*12, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 20*12*12)
x = self.fc(x)
return F.log_softmax(x, dim=1)
P.S. I ran a few tests using simulated data; they do not indicate that my test throttles on data I/O (which is expected, since I force-load the dataset into RAM beforehand). |
st175264 | As you mentioned and your profiling showed, for the simpler model the other nodes are waiting for the master, which is why the synchronization time is long; maybe find out why they are waiting for the master. For the original model, also improve the data transfer…? |
st175265 | @Yanli_Zhao thank you for helping me out !
As suggested by @ptrblck I switched profiler and now use “nvidia nsight” profiler.
Original model (high latency in data transfer but low in synchronization)
As expected, there is not much variance from one epoch to another. Each epoch describes a clear pattern, and each epoch pattern has a "batch" sub-pattern. The fact that these have low variance makes me think that the problem is not transient (e.g., network throttling, …).
Master
About a third (≈310 ms per epoch) is spent in cudaMemcpyAsync. Each epoch starts with a suspiciously long call (≈110 ms) to cudaMemcpyAsync. The picture below is a zoom on one epoch.
(Nsight trace screenshot of the master)
Worker 1
Could the worker synchronization time be spent waiting for that first long call to cudaMemcpyAsync on the master to finish?
(Nsight trace screenshot of worker 0)
No distribution
For reference, the suspiciously long call (≈110 ms) to cudaMemcpyAsync does not occur when there is no distribution.
(Nsight trace screenshot of the non-distributed run)
Could those suspiciously long calls to cudaMemcpyAsync have something to do with a hardware issue? |
st175266 | Hello, my batch size is 1024; could my problem boil down to that batch size being too small? |
st175267 | Hi,
The bottleneck of my training routine is its data augmentation, which is "sufficiently" optimized. In order to speed up hyperparameter search, I thought it would be a good idea to train two models, each on a different GPU, simultaneously using one dataloader.
As far as I understand, this could be seen as a form of model parallelism. However, my implementation failed.
Below is an example. After the first epoch, I expect the network weights to be identical. However, loss1 is equal to loss2 only in the first iteration. Detaching and cloning the batch before moving it to the graphics cards didn't change things.
torch.manual_seed(42)
model1 = SomeModel()
torch.manual_seed(42)
model2 = SomeModel()
dev1 = torch.device("cuda:0")
dev2 = torch.device("cuda:1")
o1 = torch.optim.AdamW(model1.parameters())
o2 = torch.optim.AdamW(model2.parameters())
l1 = SomeLoss()
l2 = SomeLoss()
model1 = model1.to(dev1)
model2 = model2.to(dev2)
for batch in train_loader:
o1.zero_grad()
o2.zero_grad()
logits1 = model1(batch.to(dev1))
logits2 = model2(batch.to(dev2))
loss1 = l1(logits1)
loss2 = l2(logits2)
loss1.backward()
loss2.backward()
o1.step()
o2.step()
Do you have any hints about what's going on? I suspect the computation graph is doing funny things…
Edit:
The system runs Debian 11.1, PyTorch 1.9.1 and Cuda 11.12. |