st175668 | Solved by David_Harvey in post #2
Add this line right after the setup(rank, world_size)
setup(rank, world_size)
torch.cuda.set_device(rank)
for x in range(1, 10000):
... |
st175669 | Add this line right after the setup(rank, world_size)
setup(rank, world_size)
torch.cuda.set_device(rank)
for x in range(1, 10000):
... |
st175670 | Hi,
I have a question on how to set the batch size correctly when using DistributedDataParallel.
If I have N GPUs across which I’m training the model, and I set the batch size of the DataLoader to 16, would the effective batch size be 16 or 16 x N?
Here is a small worked example to make it clearer.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data.distributed import DistributedSampler
import torch.multiprocessing as mp
import torch.distributed as dist
from argparse import ArgumentParser
import os

class MyModel(nn.Module):
    def __init__(self, input_dim, inner_layer_1, inner_layer_2, output_dim):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, inner_layer_1)
        self.fc2 = nn.Linear(inner_layer_1, inner_layer_2)
        self.fc3 = nn.Linear(inner_layer_2, output_dim)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.softmax(x, dim=1)
        return x

def train(gpu_number, n_epochs, model, train_data, optimizer, loss_fn, log_interval=2):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    torch.distributed.init_process_group(
        backend='nccl',
        init_method='env://',
        world_size=2,  # total number of gpus
        rank=gpu_number
    )
    sampler = DistributedSampler(train_data, num_replicas=2, rank=gpu_number)
    trainloader = DataLoader(train_data, batch_size=16, sampler=sampler)
    # torch.cuda.set_device(gpu_number)
    model = model.cuda(gpu_number)
    model = DDP(model, device_ids=[gpu_number], output_device=gpu_number)
    for epoch in range(n_epochs):
        for i, batch in enumerate(trainloader):
            inputs, labels = batch[:, :8].cuda(gpu_number), batch[:, -2:].cuda(gpu_number)
            optimizer.zero_grad()
            outputs = model.forward(inputs)
            loss = loss_fn(outputs, labels)
            loss.backward()
            optimizer.step()
    dist.barrier()

if __name__ == "__main__":
    train_data = torch.rand(30000, 100)
    n_epochs = 4
    learning_rate = 0.001
    model = MyModel(8, 800, 300, 2)
    loss_fn = nn.MSELoss()
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    mp.spawn(train, nprocs=2, args=(n_epochs, model, train_data, optimizer, loss_fn, 2))
In the line trainloader = DataLoader(train_data, batch_size=16, sampler=sampler) I set the batch size to 16, but I have two GPUs. What would be the equivalent / effective batch size? Would it be 16 or 32 in this case? |
st175671 | Solved by David_Harvey in post #2
The effective batch size is 16*N. 16 is just the batch size on each GPU. During the backward pass, DDP runs all-reduce to average the gradients across all GPUs, so the effective batch size is 16*N. |
st175672 | The effective batch size is 16*N. 16 is just the batch size on each GPU. During the backward pass, DDP runs all-reduce to average the gradients across all GPUs, so the effective batch size is 16*N. |
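To make the arithmetic concrete, here is a minimal sketch; the world_size value and the linear learning-rate scaling heuristic are illustrative assumptions, not part of the answer above.
per_gpu_batch_size = 16      # the batch_size given to each process's DataLoader
world_size = 2               # N = number of GPUs / processes (assumed value)
effective_batch_size = per_gpu_batch_size * world_size   # 32 in this example

# Optional heuristic (an assumption, not stated in the answer): scale the
# learning rate with the effective batch size relative to a single-GPU baseline.
base_lr = 0.001
scaled_lr = base_lr * world_size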
st175673 | I use the following repo to train a TTS model on my own dataset: Tacotron 2
Everything goes well, but when I set distributed_run = True in hparams.py, the process gets stuck at the init_process_group call in train.py at line 35 (I set my world_size = 3). My GPU driver and CUDA are fine! Is this a bug? If yes, how can I fix it?
Thanks |
st175674 | I really get confused when I use the function torch.multiprocessing.spawn. Consider the following code:
import torch
import torch.multiprocessing as mp

x = [1, 2]

def f(id, a):
    print(x)
    print(a)

if __name__ == '__main__':
    x.append(3)
    mp.spawn(f, nprocs=2, args=(x, ))
For any process the main function spawns, it outputs the following:
[1, 2]
[1, 2, 3]
I have the following questions:
(1) Why is the first line of output [1, 2]? I thought x is a global variable, and forking a new process shares memory on Linux, as described on this page: https://stackoverflow.com/questions/5983159/python-multiprocessing-arguments-deep-copy
(2) Are the parameters in spawn deep-copied to the new processes, or just passed by reference?
Thank you very much! |
st175675 | I have the exact same issue with torch.multiprocessing.spawn (mp.spawn) used for distributed parallel training.
Since I have a large dataset of CSV files, I convert it to a shared multiprocessing numpy array object outside of my main to avoid a memory leak, but mp.spawn makes multiple copies of it anyway. |
st175676 | It looks like python’s multiprocessing module also copies the data if we use a spawn start_method:
import multiprocessing as mp

x = [1, 2]

def foo(a):
    print(x)
    print(a)

if __name__ == '__main__':
    mp.set_start_method("spawn")
    x.append(3)
    p = mp.Process(target=foo, args=(x,))
    p.start()
    p.join()
To answer your question, there is a deepcopy, though shared memory will be used for Tensor data.
From the pytorch multiprocessing “best practices” page (https://pytorch.org/docs/stable/notes/multiprocessing.html):
We recommend using multiprocessing.Queue for passing all kinds of PyTorch objects between processes. It is possible to e.g. inherit the tensors and storages already in shared memory, when using the fork start method, however it is very bug prone and should be used with care, and only by advanced users. Queues, even though they’re sometimes a less elegant solution, will work properly in all cases.
You could thus use the fork start method with pytorch multiprocessing to avoid the copy, though as the docs mention this is not supported. |
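As a complement to the spawn example above, here is a minimal sketch (assuming Linux, where fork is available, and accepting the caveats quoted from the docs) showing that a fork-started child sees the parent's post-import state without the data being copied through args.
import multiprocessing as mp

x = [1, 2]

def foo():
    # With fork, the child inherits the parent's memory at fork time,
    # so it sees the appended 3 without x being passed through args.
    print(x)

if __name__ == '__main__':
    mp.set_start_method("fork")  # default on Linux; not available on Windows
    x.append(3)
    p = mp.Process(target=foo)
    p.start()
    p.join()   # prints [1, 2, 3]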
st175677 | To answer your question, there is a deepcopy, though shared memory will be used for Tensor data.
Is this documented somewhere? I couldn’t find it in [1] or [2], but I observed this behavior by writing a simple script.
[1] Multiprocessing best practices — PyTorch 1.9.0 documentation
[2] Multiprocessing package - torch.multiprocessing — PyTorch 1.9.0 documentation |
st175678 | I attempted to create my own parameter server implementation where the PS only combines gradients, does the optimization step and some synchronization.
my code is something like this:
worker:
def run_training_loop(rank, num_gpus, train_loader, test_loader, param_server_rref):
    model = resnet18(pretrained=True)
    num_ftrs = model.fc.in_features
    model.fc = nn.Linear(num_ftrs, NUMBER_OF_CLASSES)
    optimizer = optim.SGD(model.parameters(), lr=0.0002, momentum=0.9)
    for epoch in range(NUMBER_OF_EPOCHS):
        for i, (data, target) in enumerate(train_loader):
            model_output = model(data)
            loss = F.cross_entropy(model_output, target)
            loss.backward()
            model = param_server_rref.rpc_sync().optimize(rank, model)
        get_accuracy(test_loader, model)
    print("Training complete!")
server:
def optimize(self, rank, model):
    self.barrier.wait()
    if rank == 1:
        self.combined_model = self.combine_model()
        optimizer = optim.SGD(combined_model.parameters(), lr=0.0002, momentum=0.9)
        optimizer.step()
        optimizer.zero_grad()
    self.barrier.wait()
    return copy.deepcopy(self.combined_model)
Unfortunately, the optimization step on the parameter server doesn’t seem to have any effect!
I suspect that autograd is not recording a graph, since the optimizer is created after calling the forward/backward functions?!
How can I get this working without doing all computation on the server?
The distributed_optimizer seems to be only useful for model parallelism and not data parallelism since it does not respect gradients from different workers. Is this correct?
Thank you in advance. |
st175679 | Hi @MichaelZ, thanks for posting the question. It seems to me that one of your issues is that the optimize method on the server side creates a new optimizer every time it is called; this resets the optimizer state for the model parameters on every update, even though autograd might be recording the gradients. You should create the optimizer once and only call the step function inside optimize, I think. Also, I am not sure what combine_model() does, so I can’t tell whether it records gradients correctly or not.
The distributed_optimizer seems to be only useful for model parallelism and not data parallelism since it does not respect gradients from different workers. Is this correct?
Distributed optimizer is mainly used by model parallelism as you said, but it does respect the gradients from different workers in the case of model parallelism. For DDP, the gradients are all reduced so it’s just a local optimizer. |
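Putting the suggestion above together, here is a rough sketch of a parameter server that builds its optimizer once and only calls step() inside optimize(). The barrier, the gradient-averaging helper, and the rank-1 convention are assumptions modeled on the question's code, not a verified implementation.
import copy
import threading
import torch
import torch.nn as nn
import torch.optim as optim

class ParameterServer:
    """Sketch: the optimizer is built once and reused, so its momentum/state
    is not thrown away on every update."""

    def __init__(self, model: nn.Module, num_workers: int, lr=0.0002, momentum=0.9):
        self.combined_model = model
        self.num_workers = num_workers
        self.barrier = threading.Barrier(num_workers)
        self.lock = threading.Lock()
        self.worker_models = {}
        self.optimizer = optim.SGD(self.combined_model.parameters(), lr=lr, momentum=momentum)

    def combine_gradients(self):
        # Average the workers' .grad tensors into the combined model's .grad.
        for i, p in enumerate(self.combined_model.parameters()):
            grads = [list(m.parameters())[i].grad for m in self.worker_models.values()]
            p.grad = torch.stack(grads).mean(dim=0)

    def optimize(self, rank, model):
        with self.lock:
            self.worker_models[rank] = model
        self.barrier.wait()        # every worker has pushed its model (with grads)
        if rank == 1:              # one designated rank applies the update
            self.combine_gradients()
            self.optimizer.step()
            self.optimizer.zero_grad()
        self.barrier.wait()        # update finished before anyone copies the model
        return copy.deepcopy(self.combined_model)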
st175680 | Hi @MichaelZ, I have the same goal of figuring out how to run forward() in each worker instead of on the server. Have you found a solution to this problem? Also, can you share what is in self.combine_model()? Thank you so much.
edit: After briefly reading the distributed-rpc topics, I found a topic where someone mentioned BATCH RPC PROCESSING, and this seems to be what we are looking for. |
st175681 | Hi, sorry for the late reply, I am currently very busy.
I decided to use the Pytorch c10d library instead, thus I can’t tell you how to do it with the RPC package. I found it quite intuitive to use the p2p and collective functions of c10d. The collective functions also allow you to combine gradients very easily (by summing).
Please let me know in case you need any further details about the c10d library and my implementation.
I hope this doesn’t come too late… |
st175682 | Hi, no, it’s really not too late. After implementing my model with a parameter server as well as batch RPC updates, unfortunately with no particular execution-time improvement over single-node Hogwild, I think I have a feel for how to work with PyTorch distributed for my use case. Do you have any other resources for using these functions besides the Distributed communication package (torch.distributed) documentation? By the way, thank you so much for mentioning c10d; I had almost forgotten about it because it was too complicated for me when I first read it while learning PyTorch distributed. |
st175683 | Hi, if you mean the c10d functions, then this might be helpful: Writing Distributed Applications with PyTorch — PyTorch Tutorials 1.9.0+cu102 documentation |
st175684 | When I run "/usr/bin/nvidia_entrypoint.sh" (NGC 21.03), the following NOTE is displayed. Even though nv_peer_mem is set up separately, it keeps appearing. Does it keep happening because it will be deprecated soon?
NOTE: MOFED driver was detected, but nv_peer_mem driver was not detected. Multi-node communication performance may be reduced. |
st175685 | Are you seeing any other errors/warning before the MOFED warning?
Also, are you using multi-node training? |
st175686 | Hello,
There is something I struggle to understand regarding how to use DistributedDataParallel correctly. Below is a reproducible example of my code (I tried to make it as short and general as possible, and removed the evaluation step from the training).
I’m running the code on a machine with two GPUs, and my problem is that the code saves two separate torch models, one for each GPU process I’ve spawned. My assumption is that distributed multiprocessing should eventually bring everything back together into the same model, so a single model should be saved after training. Could someone please tell me what I’m doing wrong in the code below?
Thanks in advance
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data.distributed import DistributedSampler
import torch.multiprocessing as mp
from argparse import ArgumentParser
import os

class MyModel(nn.Module):
    def __init__(self, input_dim, inner_layer_1, inner_layer_2, output_dim):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, inner_layer_1)
        self.fc2 = nn.Linear(inner_layer_1, inner_layer_2)
        self.fc3 = nn.Linear(inner_layer_2, output_dim)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.softmax(x, dim=1)
        return x

def train(gpu_number, n_epochs, model, train_data, optimizer, loss_fn, log_interval=2):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    torch.distributed.init_process_group(
        backend='nccl',
        init_method='env://',
        world_size=2,  # total number of gpus
        rank=gpu_number
    )
    sampler = DistributedSampler(train_data, num_replicas=2, rank=gpu_number)
    trainloader = DataLoader(train_data, batch_size=8, sampler=sampler)
    # torch.cuda.set_device(gpu_number)
    model = model.cuda(gpu_number)
    model = DDP(model, device_ids=[gpu_number], output_device=gpu_number)
    for epoch in range(n_epochs):
        for i, batch in enumerate(trainloader):
            inputs, labels = batch[:, :8].cuda(gpu_number), batch[:, -2:].cuda(gpu_number)
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = loss_fn(outputs, labels)
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), f"model_{gpu_number}.pt")

if __name__ == "__main__":
    train_data = torch.rand(100, 10)
    n_epochs = 3
    learning_rate = 0.001
    model = MyModel(8, 800, 300, 2)
    loss_fn = nn.MSELoss()  # use nn.CrossEntropyLoss() for binary classification
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    mp.spawn(train, nprocs=2, args=(n_epochs, model, train_data, optimizer, loss_fn, 2)) |
st175687 | Solved by yoshitomo-matsubara in post #2
Hi @AndreaSottana
When using the distributed training mode, one of the processes should be treated as the main process, and you can save the model only for the main process.
Check one of the torchvision’s examples, which will give you a good idea for your problem.
Say you’re using two GPUs in the… |
st175688 | Hi @AndreaSottana
When using the distributed training mode, one of the processes should be treated as the main process, and you can save the model only for the main process.
Check one of torchvision’s examples, which will give you a good idea for your problem.
Say you’re using two GPUs in distributed training mode; there will then be two processes running your script.
Both processes will call the save_on_master function in the example above at some point, but only one of them will actually save the model (i.e., the one for which is_main_process() returns True) |
st175689 | Hi @yoshitomo-matsubara
Many thanks for your prompt reply. This is the first time I’ve heard of save_on_master, and I couldn’t find it in any torch.distributed documentation I’ve seen so far; it would be good to make it more visible, I think.
Anyway, I have 3 follow up questions.
If I understand correctly, the functions you linked would simply save the model only if the rank is zero. Is this arbitrary, i.e. could I have just chosen rank one instead for example?
The code you linked does not contain any explicit action of bringing together the two spawned processes, which I’m therefore assuming has to happen in the background. Does this mean that at any point after calling optimiser.step(), the weights on both GPUs will synchronise automatically? If true, this would imply that the two models that my original code would have saved were completely identical and took into account the training from both GPUs, as opposed to being two different models trained on a different GPU each with a different subset of the data. If my understanding is correct, this would also imply that the save_on_master function you linked has the only advantage of not saving the model multiple times, but even if I left my code as original, it would just save multiple copies of the same model with exactly the same weights, and I could use any of these saved copies, so it wouldn’t make any real difference. Please correct me if I’m wrong.
Given that the two (or more) GPUs are independent, one might run the code slightly faster than the other. If you save the model when the rank is zero and the GPU from rank 1 hasn’t yet finished running, is there a chance that you might save the model before the whole training has actually completed?
Thank you very much for your kind help! |
st175690 | Hi @AndreaSottana
Yes, technically you can choose the rank you like as the main process. Since the rank number starts from zero, I’d suggest using rank 0 as the main process
With DDP, the two processes synchronize gradients across processes; you can confirm the details here. If you don’t control the timing of saving your model while allowing both processes to save it, it may cause a file-handling error, e.g. one of the processes attempting to overwrite a file whose save operation is not yet complete.
This can be controlled with torch.distributed.barrier(), which waits for the other processes. For instance, I’d put torch.distributed.barrier() right before torch.save in your code above, to wait for all the iterations in the epoch to be done before saving the model. The same torchvision example uses it in evaluation to compute global accuracy across the processes: synchronize_between_processes() calls torch.distributed.barrier() as dist.barrier() |
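A minimal sketch of that pattern, added for illustration (the rank argument and file name are placeholders): block at a barrier so every process has finished its iterations, then let only rank 0 write the checkpoint.
import torch
import torch.distributed as dist

def save_checkpoint(model, rank, path="model.pt"):
    # Make sure every process has finished its training iterations
    # before any process starts writing the file.
    dist.barrier()
    if rank == 0:          # only the "main" process saves
        torch.save(model.state_dict(), path)
    # Optional second barrier so other ranks don't move on (e.g. to load
    # the file) before the save has completed.
    dist.barrier()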
st175691 | Hi @yoshitomo-matsubara
Many thanks for the very detailed reply.
I will try to study the code you linked and implement your changes, and will let you know if I have further queries down the line.
Thanks again |
st175692 | Hi @yoshitomo-matsubara
I’d have a small follow up from this.
I’ve noticed that you don’t call dist.barrier() before save_on_master (for example here 1) but suggested I do that in my code. Is there a reason why you don’t need to do this in your code?
I’ve also noticed that when you save the model, you save the model_without_ddp.state_dict() instead of simply model.state_dict(). I’ve not seen this done before. Is there any advantage in doing this over simply saving the DDP model’s state dict?
Thanks again |
st175693 | Hi @AndreaSottana
I don’t know which one you referred to as “your code”, but I suggested calling dist.barrier() before torch.save in the code you showed above, as follows:
For instance, I’d put torch.distributed.barrier() right before torch.save in your code above to wait for all the iterations in the epoch done before saving the model.
The example code actually calls dist.barrier() before save_on_master, and that happens in the evaluate function, which is called before save_on_master
I’ve also noticed that when you save the model, you save the model_without_ddp.state_dict() instead of simply model.state_dict() . I’ve not seen this done before. Is there any advantage in doing this over simply saving the DDP model’s state dict?
Since DDP and DP will wrap your model, the state_dict saved by model.state_dict() cannot be directly loaded into a model without DDP/DP.
e.g.,
Try this minimal example
import torch
from torchvision import models

model = models.resnet18()
print(model.state_dict().keys())

model = torch.nn.parallel.DistributedDataParallel(model)
# or use DP instead of DDP
# model = torch.nn.DataParallel(model)
print(model.state_dict().keys())
From the first print statement, you’ll see the plain module paths used in ResNet-18. From the second print statement, you can confirm that “module.” is prefixed to all the module paths shown by the first print statement.
This confirms that your model is referred to as module in the DP and DDP implementations |
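As a follow-up, if you only have a checkpoint that was saved from the wrapped model (like model_0.pt in the question's code above), a common workaround — sketched here as an illustration, not something from this thread — is to strip the "module." prefix before loading it into the plain model.
import torch

state_dict = torch.load("model_0.pt", map_location="cpu")   # saved from a DDP/DP-wrapped model
# Strip the "module." prefix that the DDP/DP wrapper adds to every key.
cleaned = {k[len("module."):] if k.startswith("module.") else k: v
           for k, v in state_dict.items()}

plain_model = MyModel(8, 800, 300, 2)   # the unwrapped model class from the question above
plain_model.load_state_dict(cleaned)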
st175694 | Thanks again!
By “your code” I meant the code here, but I didn’t realise that it actually calls the evaluate function just before save_on_master, which calls dist.barrier; it makes sense now. I should have said the torchvision code that you linked.
I will convert my torch.save to the save_on_master function and call dist.barrier() before it in my code as well, then.
Thanks for the small example with resnet as well, all clear now. I’m still new to multi GPU training but this helped me a lot. |
st175695 | Hi, say I have a dataset that is passed into a DataLoader, and I call the model to do inference (iterating through the data loader).
Say you use a SequentialSampler on a single GPU/TPU with a batch size of, say, 32.
You know that the 32 predicted labels in each batch follow the ascending order of the samples in the dataset, so you know which dataset index each predicted label refers to. You stack the outputs at each iteration until the end, and you have predictions that follow the order of the original dataset’s indices.
Now, if I use distributed training to do inference quickly, how do I know which sample indices each GPU/TPU took from the dataset, so that after getting the predicted labels I can merge them back into the original dataset? Some GPUs/TPUs may finish quicker than others and mess up the order of the indices, and batch 1 will not necessarily be sent to GPU 0, batch 2 to GPU 1, etc. |
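The thread does not include an answer, but one common pattern — sketched below as an assumption, not an official recipe — is to make each rank return (index, prediction) pairs and all_gather them, so the original order can be restored no matter which rank processed which samples or how fast it ran.
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.distributed import DistributedSampler

class IndexedDataset(Dataset):
    """Wraps a dataset so each item also returns its original index."""
    def __init__(self, base):
        self.base = base
    def __len__(self):
        return len(self.base)
    def __getitem__(self, i):
        return i, self.base[i]

def distributed_predict(model, base_dataset, rank, world_size, device):
    wrapped = IndexedDataset(base_dataset)
    sampler = DistributedSampler(wrapped, num_replicas=world_size, rank=rank, shuffle=False)
    loader = DataLoader(wrapped, batch_size=32, sampler=sampler)
    idx_list, pred_list = [], []
    model.eval()
    with torch.no_grad():
        for indices, inputs in loader:
            preds = model(inputs.to(device)).argmax(dim=1)
            idx_list.append(indices.to(device))
            pred_list.append(preds)
    local_idx = torch.cat(idx_list)
    local_pred = torch.cat(pred_list)
    # Gather (index, prediction) pairs from every rank, then sort by index.
    gathered_idx = [torch.zeros_like(local_idx) for _ in range(world_size)]
    gathered_pred = [torch.zeros_like(local_pred) for _ in range(world_size)]
    dist.all_gather(gathered_idx, local_idx)
    dist.all_gather(gathered_pred, local_pred)
    all_idx = torch.cat(gathered_idx)
    all_pred = torch.cat(gathered_pred)
    order = all_idx.argsort()
    return all_idx[order], all_pred[order]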
st175696 | population typically select a sample of individuals or families and compute an estimate of the population inequality index from this sample. |
st175697 | '_MultiProcessingDataLoaderIter' error when num_workers > 0
Environment: remote Linux + PyCharm IDE + PyTorch 1.8.0
When I tried to debug my code with the PyCharm IDE, an error was raised from _MultiProcessingDataLoaderIter.
Then I found that this bug happens whenever I set 'num_workers_per_gpu' > 0.
There is no bug in my code itself, because when I run it directly there is no error.
The bug only happens in debug mode; when I set num_workers_per_gpu = 0 there is no bug in either mode. It looks very strange:
(1) num_workers_per_gpu = 0: debug mode and run mode (screenshots omitted)
(2) num_workers_per_gpu > 0: debug mode and run mode (screenshots omitted)
It seems that in debug mode, the following loop breaks incorrectly (the while-loop condition here evaluates to false):
while self._rcvd_idx < self._send_idx:
    info = self._task_info[self._rcvd_idx]
    worker_id = info[0]
    if len(info) == 2 or self._workers_status[worker_id]:  # has data or is still active
        break
    del self._task_info[self._rcvd_idx]
    self._rcvd_idx += 1
else:
    # no valid `self._rcvd_idx` is found (i.e., didn't break)
    if not self._persistent_workers:
        self._shutdown_workers()
    raise StopIteration
I do not know how to solve it. I checked torch's BatchSampler; the reason the condition becomes false could be that my sampler has not fetched any data yet, but that is just my own guess. |
st175698 | I wonder if there is a particular reason why the MPI initialize of pytorch.distributed only support MPI_THREAD_SERIALIZED rather than MPI_THREAD_MULTIPLE?
I tried to modify it to MPI_THREAD_MULTIPLE and can successfully build pytorch from source. Are there particular cases where MPI_THREAD_MULTIPLE fail for pytorch? |
st175699 | Hi, I have some problems using torch.nn.parallel.DistributedDataParallel (DDP) and torch.utils.checkpoint together.
It is OK if I set find_unused_parameters=False in DDP. The dilemma is that my network is a dynamic CNN that does not forward the whole model during training, which means I have to set find_unused_parameters=True… And if I don’t use torch.utils.checkpoint, my network is too large to run, leading to the OOM problem…
Therefore, what should I do to meet my demands?
There are some links related to this question, but they do not solve my problem.
https://github.com/pytorch/pytorch/issues/43135 15
https://github.com/pytorch/pytorch/issues/24005 9
Part of the error report is attached as a screenshot (1875×360, omitted here).
Thanks for all the suggestions!!! |
st175700 | Solved by mrshenli in post #2
DDP does not work with torch.utils.checkpoint yet. One workaround is to run forward-backward on the local model, and then manually run all_reduce to synchronize gradients after the backward pass. |
st175701 | DDP does not work with torch.utils.checkpoint yet. One workaround is to run forward-backward on the local model, and then manually run all_reduce to synchronize gradients after the backward pass. |
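A minimal sketch of that workaround, added for illustration (it assumes the process group is already initialized and the model is NOT wrapped in DDP).
import torch
import torch.distributed as dist

def average_gradients(model):
    """Manually synchronize gradients across all ranks after backward()."""
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size

# usage inside the training loop:
#   loss = loss_fn(model(inputs), labels)
#   loss.backward()
#   average_gradients(model)
#   optimizer.step()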
st175702 | It's OK with find_unused_parameters=False, so I guess manually defining the checkpoint may solve my problem?
Thus, I modified the checkpoint to save a variable (indicating which part to forward) and to recompute the output in backward with this variable.
But this creates new problems ("Not enough values to unpack" in backward)… |
st175703 | LicharYuan:
It’s ok in find_unused_parameters=False , so I guess maybe manually define checkpoint can solve my problem?
The reason find_unused_parameters=True does not work is because, DDP will try to traverse the autograd graph from output at the end of the forward pass when find_unused_parameters is set to True. However, with checkpoint, the autograd graphs are reconstructed during the backward pass, and hence it is not available when DDP tries to traverse it, which will make DDP think those unreachable parameters are not used in the forward pass (although they are just hidden by checkpoint). When setting find_unused_parameters=False, DDP will skip the traverse, and expect that all parameters are used and autograd engine will compute grad for each parameter exactly once.
But it makes new problems (Not Enough Values to unpack in backward)…
Could you please elaborate more on this problem? |
st175704 | mrshenli:
The reason find_unused_parameters=True does not work is because, DDP will try to traverse the autograd graph from output at the end of the forward pass when find_unused_parameters is set to True. However, with checkpoint, the autograd graphs are reconstructed during the backward pass, and hence it is not available when DDP tries to traverse it, which will make DDP think those unreachable parameters are not used in the forward pass (although they are just hidden by checkpoint). When setting find_unused_parameters=False , DDP will skip the traverse, and expect that all parameters are used and autograd engine will compute grad for each parameter exactly once .
If so, then the new problem is caused by my code. I will try to figure it out.
I understand the effect of find_unused_parameters now. Even though my network has some unused parameters, in each forward pass every GPU skips the same unused parameters. In such a situation, can I still set find_unused_parameters=False? |
st175705 | I fixed my problem in the same way as the FP16Optimizer in mmdetection, which is similar to Apex's delayed all-reduce.
mrshenli:
One work around is to run forward-backward on the local model, and then manually run all_reduce to synchronize gradients after the backward pass.
And I realize the solution is just what you said before… Thank you very much!!!
But it seems to work no matter what the value of find_unused_parameters is. I wonder why it works when find_unused_parameters=True; the traversal should fail, logically. |
st175706 | But it seems work no matter what value of find_unused_parameters . I wonder why it work when find_unused_parameters=True , the traverse should failed logically.
IDK, I would assume this would lead to the autograd hook firing on a ready parameter, which should trigger an error in DDP. BTW, when you manually run all-reduce, DDP is no longer necessary. Any reason for still wrapping the model with DDP? |
st175707 | For me, simply setting find_unused_parameters to False in DistributedDataParallel (DDP) solves the problem. There is no error anymore.
Ref: https://stackoverflow.com/questions/68000761/pytorch-ddp-finding-the-cause-of-expected-to-mark-a-variable-ready-only-once 27 |
st175708 | **-- How to escape if synchronization? It is a huge problem because the evaluation of a simple if alone takes 0.33 s while the entire forward of a large network takes 0.0001 s. Below is an approach that leads to wrong results because of the GPU-to-CPU sync issue with non_blocking=True!!! WARNING: NEVER use a GPU-to-CPU transfer with non_blocking=True. It is NOT safe. See the previous post. **
hi,
It seems that for CUDA tensors a and b, even with 1 element each, the if control flow will create a synchronization point, as in this example:
if a == b:
    return 0
It is not the condition a == b that creates the synchronization, it is the if. The if seems to trigger a transfer of the result of the comparison to the CPU in a blocking way, making it similar to calling torch.cuda.synchronize() before the if.
Pointers: here and here.
We will look at 2 examples here: one where the if evaluates the expression, and a second where we prepare the boolean value ourselves.
The example below is taken from my 'real' code; creating a snippet example won't show this issue.
I tried a solution using non_blocking=True but it GIVES THE WRONG RESULTS. This happens because of the non_blocking=True transfer: the tensors are created on the CPU filled with 0, but the transfer has not finished yet while the CPU has already evaluated the condition. The doc does not seem to mention this possibility, nor the transfer from GPU to CPU: "Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor."
This issue of transferring data from the GPU to the CPU happens implicitly, but in safe mode (i.e. non_blocking=False), whenever you run a Python (CPU) operation over CUDA tensors, such as a control statement, print, casting (int, …), calling cuda_tensor.item(), …
1. ‘if’ evaluates the condition:
example:
x = x.detach()  # no gradient. x device: cuda.
tx = time.perf_counter()
min_x = x.min()
max_x = x.max()
print('time min max {}'.format(time.perf_counter() - tx))
tx = time.perf_counter()
if min_x == max_x:
    return min_x
print('simple test Z {}'.format(time.perf_counter() - tx))
Output:
time min max 0.0002352111041545868
simple test Z 0.3392159678041935
2. we evaluate the condition and provide it to ‘if’:
In this way, we avoid the synchronization by performing the transfer from GPU to CPU with non_blocking=True. The doc says that when this variable is true, the transfer is done asynchronously if possible.
The worst-case scenario is that you end up doing a sync anyway, if the resulting condition tensor is not ready.
But the issue is that the tensors on the CPU are likely to be wrong… because the copy has not finished yet while the CPU has already performed the evaluation…
example:
tx = time.perf_counter()
min_x = x.min()
max_x = x.max()
print('time min max {}'.format(time.perf_counter() - tx))
tx = time.perf_counter()
z = ((min_x - max_x) == 0).to(torch.device("cpu"), non_blocking=True)
print('compute Z {}'.format(time.perf_counter() - tx))
if z:
    return min_x
print('simple test Z {}'.format(time.perf_counter() - tx))
Output:
time min max 0.00020601600408554077
compute Z 0.0007294714450836182
simple test Z 1.7356127500534058e-05
we get back to square one if we set non_blocking to false:
tx = time.perf_counter()
min_x = x.min()
max_x = x.max()
print('time min max {}'.format(time.perf_counter() - tx))
if ((min_x - max_x) == 0).to(torch.device("cpu"), non_blocking=False):
    return min_x
print('simple test Z {}'.format(time.perf_counter() - tx))
Output:
otsu time min max 0.00021892786026000977
otsu simple test Z 0.3317955397069454
3. you wont see this behavior in this snippet:
import time
import torch

if __name__ == '__main__':
    seed = 0
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    a = torch.rand(200, 200).to(device)
    min_ = a.min()
    max_ = a.max()
    t = time.perf_counter()
    if min_ == max_:
        pass
    print('time {}'.format(time.perf_counter() - t))
thanks |
st175709 | Hi, why won't the described behavior be seen in 3?
It seems as though this is a Python problem with the if statement, because of the CPU.
I am unsure if this will help since I can't reproduce the problem, but will torch.equal work for 1? It outputs a bool (pytorch/native_functions.yaml at 1be1c901aabd3ddcf55af3ee869e611b7f3f43b6 · pytorch/pytorch · GitHub). If it does for your if conditions, you could write GPU kernels. |
st175710 | gcramer23:
Hi why won’t the described behavior be seen in 3.
I think it has something to do with the kernels.
In my 'real' code, x is computed using some forward operations; the kernels could take much more time to produce it, plus there is the real-time load on the GPU at that moment. In the snippet code I use torch.rand; I assume it is an easy enough task that the results are ready fast enough that subsequent operations wouldn't be blocked.
gcramer23:
It seems as though this a python problem with the if statement because of the CPU.
Yes, the condition is evaluated on the GPU and results in a boolean tensor, but in order for the CPU to access it, it has to be transferred to the CPU.
gcramer23:
I am unsure if this will help since I can’t reproduce the problem, but I will torch.equal work for 1?
I tried almost every imaginable way to get the value of the CUDA boolean tensor without causing the blocking, but it didn't work. This includes torch.equal, indexing, evaluating the condition beforehand, numel(), … all of them result in blocking (synchronization) in order to make sure the CPU evaluates the right value of the tensor. If that were not the case and access to the tensor were not synchronized, the CPU would produce wrong results.
It doesn't seem there is a way to escape this. A lazy transfer from GPU to CPU is unsafe and leads to wrong results, as mentioned in the header paragraph. |
st175711 | Just skimmed through the topic and I would claim the synchronization for Python data-dependent control flow is expected, since the CPU has to read the values in order to decide which code path to take.
Since the operation might not have finished yet, the host needs to synchronize there.
The same would apply e.g. for print(data), as the host cannot (or rather should not) print the values of a CUDATensor until the result was created. |
st175712 | @sbelharbi
Skimmed through your question. Seems to me that everything is working properly, and the problem is that you don’t (yet) have a correct mental model of what’s going on under the hood.
Let’s start with the bad news - I am quite certain that your forward pass takes much more time than you think. Probably as much time as you think the ‘if’ statement consumes. This time is just being (correctly) hidden from you by the CUDA system. What you have measured as your forward pass timing is just the time required to queue the work onto the CUDA driver and return to your CPU-based program execution.
The good news is that nothing weird is happening and everything you are trying to do is perfectly achievable. With extra work, you can also implement fine grained synchronization avoiding brute force torch.cuda.synchronize(), which is the concurrency equivalent of using a nuke in the battlefield.
Of course you will have to deal with the real run-time of the computation.
Unfortunately the Torch documentation is extremely sparse in regard to synchronization issues and mostly assumes that users are either:
naive, sequential CPU-dwellers who transfer all data to the GPU and then do all computation on the GPU from there on
experienced, scarred CUDA programmers who know everything there is to know about asynchronous kernel execution.
Your case seems to fall in between these categories so you have been seeing what seems like concurrency voodoo.
The issue of synchronization and async execution is much too big for a post here so I will just give a few basic principles, hopefully these and what I’ve written above will give enough information to seek a solution:
Every time you execute a torch operation on the GPU, it is “non-blocking”; in the sense that GPU is told to do some work in the future, and execution resumes on the Python CPU thread almost immediately. This frees up your Python program to keep generating future GPU work, or to prepare new data while the GPU is doing its thing.
The reason this asynchronous operation doesn’t cause immediate mayhem is that work is queued on something called a CUDA Stream; on the stream everything proceeds in the order you have queued it. The reason most torch users are not aware of this is that there is a “default stream”, and unless the user requests something else, all GPU operations are queued on it, making everything look sequential and calm, as long as your tensors remain in CUDA land.
All of this Stream business is a CUDA-only thing. Therefore it applies only to stuff happening on the ‘cuda’ device. On the CPU there is no such thing as streams and everything is simply synchronous with your Python program, just as most users expect.
So we have the GPU kingdom where everything is sequential because it’s on the default stream, and the CPU realm where everything is sequential because it is Python program synchronous. Each of these two domains maintains its own order, and they are largely independent of each other. This of course leaves the border between them as the problem area.
By default when you send a tensor across the boundary (nonblocking=False), the operation is synchronous on both sides, i.e. the copy is queued on the default CUDA stream AND your Python program is blocked until the tensor is completely moved to the other domain. This is nice and safe. And wastes precious time. This is why torch came up with nonblocking=True.
When you use nonblocking transfers, the copy work is queued on the default CUDA stream but the CPU is free to proceed. If you are doing cpu->GPU transfers, then most likely what happens next is that your Python program will queue further GPU operations for the tensor you have just uploaded. This is ok because the compute work is queued on the same stream with the copy work, and placed after it, so everything will end up blissfully fine.
However, when you move data from the GPU to the CPU with nonblocking=True, there is nothing there to protect you implicitly. It will queue the copy on the CUDA default stream, but you Python program is free to proceed. Quite likely your GPU is still busy doing computations that you have previously queued, and hasn’t even “heard” of the CPU-side tensor you are expecting to do further compute on. Therefore that tensor is going to contain zeros or garbage when your CPU starts crunching it.
To properly use a GPU result on the CPU side with a non-blocking transfer, you will need to make sure that the copy to the CPU has completed before you start consuming the data.
Before discussing how this can be accomplished, you have to ask yourself whether this is at all useful to you. In many (simple) cases there is nothing useful for the CPU to do until the compute result from the GPU is available. If this is the case, the easiest way to ensure the copy is complete is to do a blocking transfer (set nonblocking=False). Since the copy work is queued on the default CUDA stream, the copy will begin after all computation on the data has finished. Since Python is blocked for it, you CPU computation will only proceed after the copy has finished. This is also when you need to stop the timer if you want to measure how much time the GPU computation really took.
If you decide that you need more advanced asynchronicity in you program, I advise you to study some more about CUDA kernel launching. Keep in mind that every torch operation on Tensors on the GPU is equivalent to a sequence of one or more CUDA kernel launches in succession. You will need to be familiar with the following CUDA concepts and their torch interfaces: Stream, default stream, Event, overlapped computation and data transfer.
I hope this helps. |
st175713 | Just adding something important I realized I haven’t said explicitly:
The reason why you “tried almost every imaginable way to get the value of the cuda boolean tensor without causing the blocking but it didnt work” is that the work for computing the min and max and possibly the comparison between them hasn’t completed yet. You NEED to wait for the GPU to finish computing before you can have a useful and correct result.
Yes, with non_blocking=True you can free up your CPU to proceed before the GPU is done computing, but you can’t expect the CPU to have a correct result to use before the GPU produces it.
In other words, it’s not that the synchronization “wastes” time, it’s the GPU computation that takes time. The synchronization simply “cures your blindness” to this time. |
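To make this concrete, here is a small sketch (an illustration, not from the thread) of consuming a GPU result on the CPU after a non_blocking copy: queue the copy, optionally do unrelated CPU work, and explicitly synchronize the stream before reading the CPU tensor.
import torch

device = torch.device("cuda")
x = torch.rand(200, 200, device=device)

min_x, max_x = x.min(), x.max()
cond = (min_x == max_x)

# Queue the GPU->CPU copy without blocking the Python thread.
cond_cpu = cond.to("cpu", non_blocking=True)

# ... do unrelated CPU work here while the GPU finishes ...

# Wait for everything queued on the current stream (the min/max kernels and
# the copy) to finish before trusting the CPU tensor's value.
torch.cuda.current_stream().synchronize()

if cond_cpu.item():
    print("all elements are equal")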
st175714 | hi,
what is the difference between torch.distributed.all_gather and torch.distributed.all_gather_multigpu?
They both have the same description: "Gathers tensors from the whole group in a list."
But torch.distributed.all_gather_multigpu has a different usage case…
The *_multigpu variants are supposed to work for multi-node setups, but all_gather should also work across multiple nodes…
This is the example provided in the doc for the *_multigpu variants with 2 nodes (it uses all_reduce_multigpu, but they all work the same way).
node 0
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl",
                        init_method="file:///distributed_test",
                        world_size=2,
                        rank=0)
tensor_list = []
for dev_idx in range(torch.cuda.device_count()):
    tensor_list.append(torch.FloatTensor([1]).cuda(dev_idx))
dist.all_reduce_multigpu(tensor_list)
node 1:
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl",
                        init_method="file:///distributed_test",
                        world_size=2,
                        rank=1)
tensor_list = []
for dev_idx in range(torch.cuda.device_count()):
    tensor_list.append(torch.FloatTensor([1]).cuda(dev_idx))
dist.all_reduce_multigpu(tensor_list)
What usage case is this?
And most importantly, why would a process on a node with multiple GPUs want to access all devices? In this case it is creating tensors on each device; this is not a realistic case…
I assume they want to duplicate the tensor we want to sync on each device of the local node.
In all_gather_multigpu, the output is a list of size world_size * nbr_gpus_in_node.
So, when calling the sync function, each process will have the same output list that contains the tensor from each GPU in the world.
Why do we need all_gather_multigpu when all_gather can already do this easily without duplicating the tensor on each GPU?
Also, the downside of all_gather_multigpu is that it requires that EACH NODE HAS THE SAME NUMBER OF GPUS. In practice, this is less likely to happen on clusters; in SLURM, you can request 8 GPUs and have some of them in the same node while the rest are dispatched over 4 nodes with 1 GPU per node…
Here is an example of how to sync a tensor across all GPUs.
thanks |
st175715 | sbelharbi:
they both have the same definition: Gathers tensors from the whole group in a list.
A single tensor is broadcast from a process when using all_gather. A list of tensors is broadcast from a process when using all_gather_multigpu.
what usage case is this?
and most importantly, why a process in a node with multigpus want to access to all devices. in this case, it is creating tensors on each device? this is not a real case…
It's an example of using the PyTorch API.
also, the downside of all_gather_multigpu is that it requires that EACH NODE NEEDS TO HAVE THE SAME NUMBER OF GPUS. in practice, this is less likely to happen on clusters. in slurm, you can request 8 gpus, you can have in the same node, but the rest are dispatched over 4 nodes with 1 gpu per node…
You would simply need to configure your resources correctly. |
st175716 | gcramer23:
A single tensor is broadcast from a process when using all_gather. A list of tensors is broadcast from a process when using all_gather_multigpu.
Not sure about that.
They are both used to sync one tensor.
The output of both methods is a list of tensors where each tensor comes from a process.
Then you can merge this list as you like to get the final synced tensor, as in here.
Broadcasting is done using torch.distributed.broadcast, I think, and it copies a tensor from the source and diffuses it to the rest. all_gather copies the tensors from all processes into a list and makes sure that all processes have the same exact list. |
st175717 | they are both used to sync one tensor.
all_gather pytorch/distributed_c10d.py at 101a6263309ae2f9e52947c7d02d630e1190b6c3 · pytorch/pytorch · GitHub 7
all_gather_multigpu pytorch/distributed_c10d.py at 101a6263309ae2f9e52947c7d02d630e1190b6c3 · pytorch/pytorch · GitHub 7 |
st175718 | — Sorry for possible redundancy with other threads, but I didn't find an answer.
Hi,
I am trying to do evaluation in DDP.
The forward pass on each GPU works fine.
But how can I gather all the outputs onto a single GPU (the master, for example) to measure metrics once over the ENTIRE minibatch, since each process forwards only a chunk of the minibatch?
Alternatively, we could compute the metric on each GPU but average it on a single one to reduce the communication overhead, then broadcast it to the others if necessary, also to reduce communication.
How to do that? It is an all-reduce op over outputs.
DDP seems to focus only on syncing grads…
What if you want to sync outputs, losses, and other things?
For example, the computed loss is not the full loss over the ENTIRE minibatch, but only over a chunk.
So, is there a way to gather things on demand?
I am still reading the docs.
I may have missed something.
But I don't think I saw any documentation about this; probably DDP does not do this and only syncs grads.
PyTorch Lightning and also here seem to have started working on this…
Still reading their thread.
Please feel free to leave a message or a snippet of code on how to do it.
The idea is to be able to evaluate using distributed computation, then gather the outputs or some intermediate metrics onto a single machine to do the averaging.
Also, you can't evaluate properly because every process has access only to a chunk of the minibatch. So you can't loop over the entire dataset unless you create a NEW dataloader and use ddp.module to do the forward instead of ddp, because if you try to use the DDP forward while only the master is doing the forward, it will get stuck waiting for the other processes to do their forward… Also, we are using DistributedSampler, which prevents a process from seeing the entire dataset.
Again, I may have missed something.
Thanks |
st175719 | First of all, PyTorch Lightning has done it!!! That's cool.
Metrics over distributed models — an entire package just for this.
Based on these threads (one and two), here are some solutions.
1. Drop distributed computation, meaning you lose the distributed compute power and evaluate only on the master, for example. To do this, you need to drop the distributed sampler for validation and use it only for the train set. The master can now see the entire dataset, so you can run the evaluation and get the performance on the master. Either you allow the other processes to do the same thing as the master (a waste of energy, but they will have the correct value of the performance, which is useless because you already have it at the master, the one you will use to log it), or you block the other processes and let only the master do the evaluation; in that case, you need to add some blocking barriers and free only the master. So this solution does not exploit distributed computation for evaluation.
2. Use distributed computation. In this case, each GPU works on a chunk of the minibatch in parallel. For this you need to manually sync the OUTPUT of your model, either between all GPUs using all_gather or just gathering at the master, and then compute the metric. One may think of computing an unnormalized metric on each GPU and sending just that, to avoid sending the entire OUTPUT of the network; some metrics may not like that. Also, you may lose the chunk size, so you need to sync it as well. Now that you have synced all the GPUs, you can compute the metric.
Downsides of solution 2:
a. You need to deal with each type of network output.
b. The sync may create an overhead that slows things down. Does anyone know how much it costs to do torch.distributed.all_gather compared to torch.distributed.gather? The latter is expected to be cheaper, but you need to keep track in your code that only the master has the right metric…
c. Your code will be filled with sync calls…
d. There are metrics that do not like this at all because they need to process each sample individually. If the metric is a class that updates after seeing each sample, there will be a problem… you need to store all the predictions, then sync them, then do the update by looping over each sample… metrics that simply average over all predictions are fine.
Based on code from PyTorch Lightning, here is code to sync, and pseudo-code to use it:
import torch
import torch.distributed as dist
from typing import Union

def sync_tensor_across_gpus(t: Union[torch.Tensor, None]
                            ) -> Union[torch.Tensor, None]:
    # t needs to have dim 0 for torch.cat below.
    # if not, you need to prepare it.
    if t is None:
        return None
    group = dist.group.WORLD
    group_size = torch.distributed.get_world_size(group)
    gather_t_tensor = [torch.zeros_like(t) for _ in
                       range(group_size)]
    dist.all_gather(gather_t_tensor, t)  # this works with the nccl backend when tensors need to be on gpu.
    # for gloo and mpi backends, tensors need to be on cpu. also, this works on a single machine with
    # multiple gpus. for multiple nodes, you should use dist.all_gather_multigpu. both have the
    # same definition... see [here](https://pytorch.org/docs/stable/distributed.html).
    # somewhere on the same page, it is mentioned that dist.all_gather_multigpu is more for
    # multi-nodes. i still don't see the benefit of all_gather_multigpu; the provided working case in
    # the doc is vague...
    return torch.cat(gather_t_tensor, dim=0)
usage,
# size images: (32, 3, 224, 224)
cl_logits = self.model(images) #
# size cl_logits: (32, 100) # one process.
cl_logits = sync_tensor_across_gpus(cl_logits)
# size now: (64, 100)
# gather the 2 processes where each has a chunk of size 32 samples.
# using torch.distributed.all_gather, now every gpu has this full minibatch.
Again, please let me know what the optimal way to sync is: all_gather, or just gather at the master? Is the cost of all_gather huge when the tensors are large?
Also, for some metrics you can simply sync the unnormalized value to avoid syncing the model output; you then need to sync the total number of samples for normalization.
thanks |
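Following up on the last point, here is a small sketch (assuming the metric is a simple average, such as accuracy) of syncing unnormalized statistics instead of the full model output.
import torch
import torch.distributed as dist

def distributed_accuracy(correct: int, total: int, device) -> float:
    # Pack the local unnormalized statistics into one tensor and sum them
    # across all ranks; this is much cheaper than gathering full logits.
    stats = torch.tensor([correct, total], dtype=torch.float64, device=device)
    dist.all_reduce(stats, op=dist.ReduceOp.SUM)
    summed_correct, summed_total = stats.tolist()
    return summed_correct / summed_total

# usage in the evaluation loop (per process):
#   correct += (preds == labels).sum().item()
#   total += labels.size(0)
# after the loop:
#   acc = distributed_accuracy(correct, total, device)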
st175720 | sbelharbi:
how to that? it isa reduce-all op over outputs.
ddp seems to focus only on synch grads…,.
what if you want to synch outputs, losses, other stuff…
for example the computed loss is not the full loss over ENTIRE minibatch, but only a chunk.
so, is there a way to gather things on demand?
still reading the doc.
i may have missed something.
but, i dont think i saw any doc about this, probably ddp does not do this but only synch grads.
DDP provides gradient synchronization across processes. If you require data be shared between processes you need to communicate between the processes Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 22. |
st175721 | hi,
I have a model that is wrapped within a DDP (DistributedDataParallel).
What is the right way to access all model attributes?
I recall I had a similar issue with DataParallel.
In a DDP, the model is stored in ddp.module (see here).
So far, I use ddp_model.module.attribute.
Is there a better way? Because I have to go through the entire code to change this…
Also, I want to make sure that I am using the right model, because I need to get other things from it: grads, and other stuff from some layers.
Is it safe to use the same trick for DataParallel? I also want to avoid if isinstance(model, ddp) / isinstance(model, MYMODEL).
class myDDP(torch.nn.parallel.DistributedDataParallel):
    def __getattr__(self, name):
        try:
            return super().__getattr__(name)
        except AttributeError:
            return getattr(self.module, name)
This comes with the risk of an undetected name clash… until later.
thanks |
st175722 | in a ddp, the model is stored in ddp.module here 10.
so far, i use ddp_model.module.attribute .
That seems correct since you are accessing a python object attribute.
is it safe to use the same trick for dataparallel 1 ?
Yes, it should be safe since you are accessing the attributes of a python object. |
st175723 | One can also keep a reference to the model inside the DDP wrapper and work on that reference, to avoid ddp.module.attr.
So the optimizer can work on the DDP model, but you can do everything else through the reference.
See this discussion.
The example. |
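A tiny sketch of that pattern (the model class and the rank variable are placeholders): wrap the model in DDP for the forward/backward, but keep a reference to the underlying module for attribute access and saving.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

net = MyModel(8, 800, 300, 2).cuda(rank)   # plain model; MyModel/rank are placeholders
ddp_model = DDP(net, device_ids=[rank])    # wrapper used only for forward/backward

# ddp_model.module is the same object as net, so attributes, grads and
# state_dict can be read from net directly, without ddp_model.module.attr.
assert ddp_model.module is net
print(net.fc1.weight.grad)                  # e.g. inspect a layer's gradient
torch.save(net.state_dict(), "model.pt")    # checkpoint without the "module." prefix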
st175724 | hi,
is there a way to further speed up the forward (in train or eval modes) of a model that has part of its layers frozen?
the freezing is done by setting:
param.requires_grad = False
BatchNorm2d_layer.eval()
Dropout_layer.eval()
the frozen layers are included in the forward.
thanks |
st175725 | Hello, I’m trying to figure out what happen behind the scenes in DistributedDataParallel when it comes to parameters that do not require a gradient. I cannot find a clear answer on this in the documentations.
Assume we have three layers: A --> B --> C. Suppose that A and B both have their requires_grad flag as False. If this model is wrapped in DistributedDataParallel, will there be any process communication that needs to be done in the backward pass of layer A or C? Specifically with the sharing of gradients.
My problem is that I have a large model and I am extremely bottle necked by process communication. I would like to freeze some of my layers such that less gradients need to be shared. I understand that depending on my model, the computation cost may be the same but I really need to bring down the communication cost. |
st175726 | Solved by mrshenli in post #2
Hey @ayalaa2, DistributedDataParallel’s (DDP) ctor would go through all parameters and skip the ones whose requires_grad=False. So, there won’t be communication on those grad, but you will have to set their require_grad field before passing it to DDP. After the ctor, changing the requires_grad attri… |
st175727 | Hey @ayalaa2, DistributedDataParallel’s (DDP) ctor will go through all parameters and skip the ones whose requires_grad=False. So there won’t be communication for those grads, but you will have to set their requires_grad field before passing the model to DDP. After the ctor, changing the requires_grad attribute makes no difference. See the code below from the DDP ctor.
github.com/pytorch/pytorch/blob/6bd88f581a6323d026e08b65ffee75bfe162501f/torch/nn/parallel/distributed.py#L456-L464
# Build tuple of (module, parameter) for all parameters that require grads.
modules_and_parameters = [
    [
        (module, parameter)
        for module in replica.modules()
        for parameter in filter(
            lambda parameter: parameter.requires_grad,
            parameters(module, recurse=False))
    ] for replica in self._module_copies]
Another way to skip communication is to use the no_sync 7 context manager. |
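A short sketch of the consequence described above (illustrative only; the chosen layers and the rank variable are assumptions): set requires_grad=False before constructing DDP so those gradients are never communicated.
import torch
from torchvision import models
from torch.nn.parallel import DistributedDataParallel as DDP

model = models.resnet18()

# Freeze some layers BEFORE wrapping: the DDP ctor only registers parameters
# with requires_grad=True for gradient communication.
for p in model.conv1.parameters():
    p.requires_grad = False
for p in model.layer1.parameters():
    p.requires_grad = False

model = model.cuda(rank)                    # rank: this process's GPU index (placeholder)
ddp_model = DDP(model, device_ids=[rank])

# Changing requires_grad after this point does NOT change what DDP communicates.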
st175728 | Hi @mrshenli, you mentioned that DDP can skip the gradient communication for parameters whose requires_grad=False but the flag must be set before wrapping the model with DDP.
I have 2 following questions:
If I set requires_grad=False during training after the DDP ctor, will these parameters be updated anymore, if they still conduct communication?
Is there a way to dynamically freeze more parameters and skip their communication during DDP training after constructing the DDP model?
My use case is gradually freezing more layers during training/fine-tuning. Thanks! |
st175729 | Hey @wydwww
wydwww:
If I set requires_grad=False during training after the DDP ctor, will these parameters be updated anymore, if they still conduct communication?
DDP only builds the communication buffer once (it’s actually twice: 1. using the reverse order of model.parameters, 2. using the autograd order of the first iteration). So DDP will still communicate all parameters’ gradients, even if some of them are marked as requires_grad=False after the DDP ctor.
Is there a way to dynamically freeze more parameters and skip their communication during DDP training after constructing the DDP model?
If this is not very frequent (say, once per epoch), you can destroy the DDP instance and create a new one. |
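A rough sketch of that suggestion (the helper name and call site are assumptions): freeze additional layers, then rebuild the DDP wrapper around the same underlying module so the new requires_grad flags are respected.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def rewrap_with_more_frozen_layers(net, layers_to_freeze, rank):
    """Freeze the given submodules, then build a fresh DDP wrapper."""
    for layer in layers_to_freeze:
        for p in layer.parameters():
            p.requires_grad = False
    # The old DDP instance is simply dropped; the new one rebuilds its
    # communication buckets from the current requires_grad flags.
    return DDP(net, device_ids=[rank])

# e.g. once per epoch:
#   ddp_model = rewrap_with_more_frozen_layers(net, [net.layer1], rank)
#   (also adjust the optimizer's param groups if needed)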
st175730 | I am currently using DistributedDataParallel to speed up model training.
Originally, my code looked like the following and worked well :
train_sampler = DistributedSampler(data, num_replicas=self.world_size, rank=dist.get_rank())
train_sampler.set_epoch(epoch)
data_loader = DataLoader(data, batch_size=self.batch_size, shuffle=False, num_workers=0, pin_memory=self.pin_memory, sampler=train_sampler)
for batch in tqdm(data_loader, leave=False, desc='Batch Training {:<3}'.format(epoch), ncols=100, mininterval=1):
    === train model ===
When running the above code, data gets distributed across all gpus as expected. For instance, let’s say we have a training data of length 1,000 and we want to train with batch size 100. In a single gpu environment, the number of iterations per epoch is 1,000/100 = 10. In a multi gpu (let’s say 2 gpus) environment using DDP & DistributedSampler, the number of iterations per epoch should be 5.
However, when defining a separate function that makes the dataloader, DistributedSampler does not seem to work as expected.
def make_dataloader(data, batch_size, epoch):
    train_sampler = DistributedSampler(data, num_replicas=self.world_size, rank=dist.get_rank())
    train_sampler.set_epoch(epoch)
    data_loader = DataLoader(data, batch_size=batch_size, shuffle=False, num_workers=0, pin_memory=self.pin_memory, sampler=train_sampler)
    return data_loader

data_loader = make_dataloader(data, self.batch_size, epoch)
for batch in tqdm(data_loader, leave=False, desc='Batch Training {:<3}'.format(epoch), ncols=100, mininterval=1):
    === train model ===
When running the code above, data do not get distributed as expected. That is, if I train a model in the same circumstance as in the example (data length : 1000, batch size : 100, num gpus : 2), each process runs 10 iterations per epoch, not 5.
Why would DistributedSampler behave differently?
st175731 | Solved by gcramer23 in post #4
I am not seeing a problem when I do a basic example
train_dataset = []
for _ in range(1000):
train_dataset.append(torch.rand(4, 4))
def basic_sampler(rank, world_size):
sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=world_size,
rank=ra… |
st175732 | When running the code above, data do not get distributed as expected. That is, if I train a model in the same circumstance as in the example (data length : 1000, batch size : 100, num gpus : 2), each process runs 10 iterations per epoch, not 5. Why would DistibutedSampler behave differently?
What is the value self.world_size in the make_dataloader function?
If it is 1 each process will run 10 iterations. |
st175733 | I am not seeing a problem when I do a basic example
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader

train_dataset = []
for _ in range(1000):
    train_dataset.append(torch.rand(4, 4))
def basic_sampler(rank, world_size):
sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=world_size,
rank=rank
)
data_loader = DataLoader(train_dataset, batch_size=100, sampler=sampler)
print(f"rank = {rank} train_dataset sample count = {len(train_dataset)}")
print(f"rank = {rank} data_loader batch count = {len(list(data_loader))}")
if __name__ == "__main__":
mp.spawn(basic_sampler, nprocs=2, args=(2,))
rank = 0 train_dataset sample count = 1000
rank = 0 data_loader batch count = 5
rank = 1 train_dataset sample count = 1000
rank = 1 data_loader batch count = 5
Are you able to add some debugging logs to confirm that everything is set correctly? |
st175734 | Sorry for the late reply. I was using a cloud instance and it worked fine this time. I guess something went wrong elsewhere…
st175735 | When I use RPC on more than one machine, the code gets stuck in init_rpc. I tried opt.num_servers == 1; it works when opt.num_envs_each_server < 30. I use Python 3.7.10 and PyTorch 1.8.1. Is there something wrong with this code?
# server
def main():
opt = Options().parse()
if opt.num_servers == 1:
num = NUM_TRAINER_PROCESSES + opt.num_envs_each_server
else:
num = NUM_TRAINER_PROCESSES
mp.spawn(run_worker, args=(opt,), nprocs=num, join=True)
def run_worker(idx, opt):
os.environ["MASTER_ADDR"] = opt.address
os.environ["MASTER_PORT"] = opt.port
backend = rpc.BackendType.TENSORPIPE
rpc_backend_options = rpc.TensorPipeRpcBackendOptions(
num_worker_threads=1, rpc_timeout=60
)
if idx == 0:
s = socket.socket()
s.bind((opt.address, opt.env_port))
s.listen(opt.num_envs)
for i in tqdm.trange(opt.num_envs, ascii=True):
c, addr = s.accept()
c.send(str(i).encode("utf-8"))
time.sleep(1)
s.close()
logger = utils.get_logger("agent")
logger.info("init rpc for ppo agent")
rpc.init_rpc(
"ppo_agent",
rank=0,
world_size=opt.world_size,
backend=backend,
rpc_backend_options=rpc_backend_options,
)
else:
while 1:
try:
s = socket.socket()
s.connect((opt.address, opt.env_port))
rank = s.recv(1024)
rank = int(rank.decode())
s.close()
break
except:
time.sleep(1)
pass
logger = utils.get_logger("env_{}".format(rank))
logger.info("init rpc for env {}".format(rank))
rpc.init_rpc(
"env_{}".format(rank),
rank=NUM_TRAINER_PROCESSES + rank,
world_size=opt.world_size,
backend=backend,
rpc_backend_options=rpc_backend_options,
)
logger.info("env {} is waiting".format(rank))
rpc.shutdown()
# client
def main():
opt = Options().parse()
mp.spawn(run_worker, args=(opt,), nprocs=opt.num_envs_each_server, join=True)
def run_worker(idx, opt):
os.environ["MASTER_ADDR"] = opt.address
os.environ["MASTER_PORT"] = opt.port
backend = rpc.BackendType.TENSORPIPE
rpc_backend_options = rpc.TensorPipeRpcBackendOptions(
num_worker_threads=1, rpc_timeout=60
)
logger = utils.get_logger("{}".format(idx))
while 1:
try:
s = socket.socket()
s.connect((opt.address, opt.env_port))
rank = s.recv(1024)
rank = int(rank.decode())
s.close()
break
except Exception as e:
logger.info(e)
time.sleep(1)
pass
logger.info("init rpc for env {}".format(rank))
rpc.init_rpc(
"env_{}".format(rank),
rank=NUM_TRAINER_PROCESSES + rank,
world_size=opt.world_size,
backend=backend,
rpc_backend_options=rpc_backend_options,
)
logger.info("env {} is waiting".format(rank))
rpc.shutdown() |
st175736 | Hey @yueyilia, would I be correct if I assume the world_size is NUM_TRAINER_PROCESSES + 2 * num_envs_each_server? Have you tried increasing the value of num_worker_threads to, say, 128?
BTW, when you replace init_rpc with init_process_group and use the same rank and world_size, port, and, address, does it work? |
st175737 | Hey, thanks for your reply.
Specifically, in my simplified test example, opt.world_size=33, opt.num_servers = 2, opt.num_envs_each_server=16, NUM_TRAINER_PROCESSES=1.
It still does not work after modifying the value of num_worker_threads.
When using init_process_group, clients are successfully initialized, but the server still gets stuck in init_process_group. |
st175738 | yueyilia:
When using init_process_group, clients are successfully initialized, but the server still gets stuck in init_process_group.
Hmm, this means some configuration might be wrong (init_process_group has been widely used in multi-machine training). The behavior suggests the client thinks all peers are joined, but the server is still waiting for some peers.
Curious, why do you need to use your own sockets to communicate ranks? Could you please print all ranks and see if those are expected?
If that still doesn’t give a clue, could you please share a min repro? |
st175739 | mrshenli:
Curious, why do you need to use your own sockets to communicate ranks? Could you please print all ranks and see if those are expected?
I plan to use 100 servers to build thousands of environments. I don’t know how to assign a unique rank without sockets. Is there any usage that I don’t know?
In fact, I once ran the same code on PyTorch 1.6. There was no problem with the startup, but I could only run up to 1000 environments because of the port limit (see When I use 1024 nodes in rpc, I meet RuntimeError "listen: Address already in use" - #6 by yueyilia). I don’t know why the error occurs now.
I still haven’t found a solution; the following is a minimal repro.
import os
import time
import tqdm
import socket
import argparse
import logging
import torch
import torch.multiprocessing as mp
import torch.distributed
import torch.distributed.rpc as rpc
parser = argparse.ArgumentParser()
parser.add_argument("--mode", type=str, default="trainer")
parser.add_argument("--address", type=str, default="10.90.224.127", help="")
parser.add_argument("--port", type=str, default="10088", help="")
parser.add_argument("--rank_port", type=int, default="10099", help="")
parser.add_argument("--num_envs_each_server", type=int, default=16, help="")
parser.add_argument("--num_servers", type=int, default=2, help="")
parser.add_argument("--num_envs", type=int, default=32, help="")
parser.add_argument("--world_size", type=int, default=33, help="")
opt = parser.parse_args()
def main():
if opt.mode == "trainer":
mp.spawn(run_worker, args=(opt,), nprocs=1, join=True)
else:
mp.spawn(run_worker, args=(opt,), nprocs=opt.num_envs_each_server, join=True)
def run_worker(idx, opt):
os.environ["MASTER_ADDR"] = opt.address
os.environ["MASTER_PORT"] = opt.port
backend = rpc.BackendType.TENSORPIPE
rpc_backend_options = rpc.TensorPipeRpcBackendOptions(
num_worker_threads=1, rpc_timeout=60
)
if opt.mode == "trainer":
s = socket.socket()
s.bind((opt.address, opt.rank_port))
s.listen(opt.num_envs)
for i in tqdm.trange(opt.num_envs, ascii=True):
c, addr = s.accept()
c.send(str(i).encode("utf-8"))
time.sleep(1)
s.close()
logger = get_logger("agent")
logger.info("init rpc for ppo agent")
rpc.init_rpc(
"ppo_agent",
rank=0,
world_size=opt.world_size,
backend=backend,
rpc_backend_options=rpc_backend_options,
)
# torch.distributed.init_process_group(
# backend="gloo", rank=0, world_size=opt.world_size,
# )
logger.info("end")
else:
while 1:
try:
s = socket.socket()
s.connect((opt.address, opt.rank_port))
rank = s.recv(1024)
rank = int(rank.decode())
s.close()
break
except:
time.sleep(1)
pass
logger = get_logger("env_{}".format(rank))
logger.info("init rpc for env {}".format(rank))
rpc.init_rpc(
"env_{}".format(rank),
rank=1 + rank,
world_size=opt.world_size,
backend=backend,
rpc_backend_options=rpc_backend_options,
)
# torch.distributed.init_process_group(
# backend="gloo", rank=1 + rank, world_size=opt.world_size,
# )
logger.info("env {} is waiting".format(rank))
rpc.shutdown()
def get_logger(name="", level=logging.INFO, stream=True, file=None):
try:
import absl.logging
logging.root.removeHandler(absl.logging._absl_handler)
absl.logging._warn_preinit_stderr = False
except Exception as e:
print("failed to fix absl logging bug", e)
pass
logger = logging.getLogger(name)
logger.setLevel(level)
if stream:
stream_handler = logging.StreamHandler()
stream_formatter = logging.Formatter("%(asctime)s - %(message)s")
stream_handler.setFormatter(stream_formatter)
logger.addHandler(stream_handler)
if file:
path = os.path.join(file, name + ".log")
file_handler = logging.handlers.RotatingFileHandler(
path, "a", 100 * 1024 * 1024, 1, encoding="utf-8"
)
file_formatter = logging.Formatter(
"%(asctime)s %(levelname)s [%(filename)s: %(lineno)d] [%(processName)s: %(process)d] - %(message)s"
)
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)
return logger
if __name__ == "__main__":
main() |
st175740 | Hi @yueyilia, I tried your code locally (python 3.8.5 and nightly build of pytorch which is 1.10 now), it worked for me and is able to get past init_rpc. I think the issue may be with how your cluster of machines is configured where they cannot establish connections with each other, but I’m not exactly sure.
yueyilia:
I plan to use 100 servers to build thousands of environments. I don’t know how to assign a unique rank without sockets. Is there any usage that I don’t know?
You can modify your script to use torchelastic via torch.distributed.run (Elastic Launch) — PyTorch master documentation. This essentially performs the work you are doing by having all nodes rendezvous and then assigning ranks to each process, with added fault tolerance and elasticity if you choose to enable it. You need to modify your script so that it acts as a single process, then run the command:
python -m torch.distributed.run \
    --nnodes=$NUM_NODES \
    --nproc_per_node=$NUM_TRAINERS \
    --rdzv_id=$JOB_ID \
    --rdzv_backend=c10d \
    --rdzv_endpoint=$HOST_NODE_ADDR \
    YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)
The rank is automatically assigned with torchelastic. Each process will be able to read its own rank as an environment variable (e.g. rank = os.environ["RANK"]). Here is the list of the environment variables that get passed into your script: torch.distributed.run (Elastic Launch) — PyTorch master documentation
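As a minimal sketch, a script launched by torch.distributed.run could read those variables like this (the backend and the print are placeholders):
import os
import torch.distributed as dist

def main():
    # torch.distributed.run sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT
    rank = int(os.environ["RANK"])
    local_rank = int(os.environ["LOCAL_RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    dist.init_process_group(backend="gloo", init_method="env://")
    print(f"rank {rank}/{world_size} (local rank {local_rank}) is up")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()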
st175741 | torch.histc 1 performed over a minibatch cuda tensor of n samples is slow on first sample…
is that normal?
torch.histc takes way too long on the first sample, .34sec
and 0.000163sec on the left.
any idea why? and how to speedup this?
the code is in the middle of large code where warmup has already been done…
thanks |
st175742 | Could you describe how you’ve profiled the code, please?
Often profiling code misses e.g. synchronizations and could thus accumulate previously launched kernel times into the current timer and would thus return wrong timings. |
st175743 | i re-run the timing with cuda.event, and i get relatively the same time except the first sample when using python time i get way higher time than torch.cuda.event. i dont know why is that.
not sure what python time was computing? and why this huge time at the first sample? this happens constantly on every first sample of any minibatch.
what is the true time that my code will take (it cant be 2 different times)? is it the one computed with torch.cuda.event or python time? if python time is taking this long, someone must be doing something that torch.cuda.event didnt take in consideration… it could be something to do with synchronization. when serializing the execution, i get consistent time in both timers.
could torch.histc creates a synchronization point at the first sample allowing all the kernels over other samples to be done, therefore it is fast on the rest of samples but not the first one?
the goal of this measurements is to find bottlenecks to improve their speed and speedup the full code.
in this case, for every minibatch, 99% of time is spent on first sample everytime. it is fine if this happens only once. but this happens in every first sample of every minibatch. so, it costs.
thanks
here is all the timings:
time using time.perf_counter (s)
with CUDA_LAUNCH_BLOCKING=1
sample 1 histc 0.0004032440483570099
sample 2 histc 0.0005275830626487732
sample 3 histc 0.000552598387002945
without CUDA_LAUNCH_BLOCKING=1
sample 1 histc 0.3368970938026905 **WHY?**
sample 2 histc 0.00043926388025283813
sample 3 histc 0.0003735348582267761
time using torch.cuda.Event (ms)
with CUDA_LAUNCH_BLOCKING=1
sample 1 histc 0.4039680063724518
sample 2 histc 0.6740800142288208
sample 3 histc 5.139200210571289
without CUDA_LAUNCH_BLOCKING=1
sample 1 histc 0.3036479949951172
sample 2 histc 0.47996801137924194
sample 3 histc 0.5324159860610962
general setup:
def function(tensor):
    # operations ...
    # tx = time.perf_counter()
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)
    start_event.record()
    hist = torch.histc(tensor, bins=bins)
    end_event.record()
    elapsed_time_ms = start_event.elapsed_time(end_event)
    print('histc {}'.format(elapsed_time_ms))
    # print('histc {} '.format(time.perf_counter() - tx))

for epoch in epochs:
    forward(minibatch)
    # prepare tensors (pseudocode: stack the per-sample results)
    l_tensors = None
    for t in range(minibatch_size):
        l_tensors = torch.vstack((l_tensors, compute(tensor_t)))
    # process all tensors.
    for tensor in l_tensors:
        function(tensor)  # <-- where we measure time.
st175744 | Hi, the PyTorch profiler might be beneficial to you: torch.profiler — PyTorch 1.9.0 documentation. You can also use the trace functionality to identify bottlenecks in your code: PyTorch Profiler — PyTorch Tutorials 1.9.0+cu102 documentation.
st175745 | I think I managed to time the histc call specifically, and its parent function where it is called, through the PyTorch profiler.
It doesn’t detail all the functions, I guess, so I had to manually time some functions further.
The profiler also showed that tensor min and max take the most time, but I think that is because I used them to evaluate the condition of an if, which is blocking because of the GPU-to-CPU transfer.
st175746 | Based on your description, you might be profiling the CUDA init in the first iteration when using the Python timer, or the “wake up cycle” in case the GPU was idle before, so I would recommend using torch.utils.benchmark to profile specific ops, as it’ll take care of the warmup iterations as well as the necessary synchronizations.
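For example, a minimal torch.utils.benchmark sketch for timing torch.histc in isolation (the tensor shape and bin count below are made up):
import torch
import torch.utils.benchmark as benchmark

x = torch.rand(1, 1, 244, 244, device='cuda')
timer = benchmark.Timer(
    stmt='torch.histc(x, bins=bins)',
    globals={'x': x, 'bins': 128},
)
print(timer.blocked_autorange())  # takes care of warmup and the needed CUDA synchronizations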
st175747 | in all the results above, torch.utils.benchmark is set to true.
the slowness over the first sample of minbatch is recurrent over all minibatchs not just the first one.
so, the warmup couldnt be the issue.
i did some more digging, and now i am sure that the slowness is caused by cuda implicit synchronization. it is not torch.histc, tensor.min/max, or python if.
to clarify, i work on a tensor z of size (32, 1, h, w) that is loaded from disk using pytorch dataloader sample by sample with multiple workers and stitched together the same way we do with loading images to form a minibatch.
after loading our tensor z, i do the only following cuda operations on it very early in the code:
# z is loaded using datalaoder
with torch.no_grad():
# z: (bsz, 1, h, w)
assert z.ndim == 4
# Quick fix
z = torch.nan_to_num(z, nan=0.0, posinf=1., neginf=0.0)
z = F.interpolate(z,
image_size,
mode='bilinear',
align_corners=False) # (bsz, 1, h, w)
And that is all.
After this, I do some other work that does not involve z at all, such as the model forward.
Then I call a function that iterates over each sample in z and does some CUDA work.
What I learned is that the FIRST CUDA OPERATION ON THE FIRST SAMPLE causes AN IMPLICIT CUDA SYNCHRONIZATION. Whether it is z.min()/max(), torch.histc, a python if min == max, … it does not matter. The moment CUDA is involved, it synchronizes; it is as if the kernels on z were not done!!!
# prepare z
# other stuff that do not involve z...
def func(z_):
for i in range(bsz):
process(z_[i])
func(z)
OK, let’s say they are not done.
If, and only if, I add torch.cuda.synchronize() right before starting the loop, all samples run in 0.0003 sec; otherwise, the first one runs in 0.33 sec while the rest take 0.0003 sec. Meaning the synchronization works, and z is ready.
But if I ask for the synchronization right after preparing z, it won’t work!!!
Later, when I start working on z, CUDA synchronizes on its own… it is so weird.
# prepare z
# torch.cuda.synchronize() # does not work here. we get the same slow behavior.
# other stuff that do not involve z...
def func(z_):
torch.cuda.synchronize() # will work here. z will be ready and no need for cuda implicit synch.
for i in range(bsz):
process(z_[i])
func(z)
This implicit CUDA synchronization happens even if I create z as zeros right before the loop, like so:
# prepare z
# other stuff that do not involve z...
def func(z_):
# forget about our z... lets create one new one full of 0.
z_ = torch.zeros((bsz, 1, h, w), dtype=torch.float, device=cuda0)
for i in range(bsz):
process(z_[i])
func(z)
The same thing happens: the first sample is slow, then the rest are fast.
In the loop, the only manipulated CUDA tensor is z; no external tensors are required.
I don’t understand what is blocking CUDA and why it needs to synchronize.
z should be ready by then, and even creating a zero tensor should be instant.
– The creation of z and func are wrapped within torch.no_grad(). I tried with and without, and I get the same pattern. Also, I simplified func here; it is actually a torch.nn.Module subclass, and I call its forward.
Any advice?
thanks |
st175748 | sbelharbi:
In all the results above, torch.utils.benchmark is set to true.
torch.utils.benchmark is a utility function used to profile operations and is not a flag, so setting it to True won’t do anything besides overriding it.
Based on your description, you are getting the right profiling when you synchronize before starting and stopping the timers, and the expectedly wrong results otherwise.
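For completeness, the host-timer pattern described above looks roughly like this (the tensor shape is made up):
import time
import torch

x = torch.rand(1, 1, 244, 244, device='cuda')
torch.cuda.synchronize()               # wait for previously queued kernels
t0 = time.perf_counter()
hist = torch.histc(x, bins=128)
torch.cuda.synchronize()               # wait for histc itself before stopping the clock
print('histc {:.6f}s'.format(time.perf_counter() - t0))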
st175749 | I did more digging.
I hadn’t gotten to the exact cause of the problem, until now.
Note: I have a network where I perform a forward for training over minibatch x, and then there is the part of the code that processes z.
The network forward operates directly on the minibatch (in parallel).
However, the part operating on z works sequentially; only at the end do I gather whatever is batchable into a single CUDA operation over the entire batch.
Initially, I perform:
network(x)
process(z)
This causes the discussed issue when reaching process(z): the first sample is very slow (0.33 sec), then the rest of the samples are fast (0.003 sec), bringing the minibatch time to 0.45 sec.
Then I switched the order to
process(z)
network(x)
Things work properly: the problem ‘disappears’ at process(z). All samples in z are processed in 0.003 sec and z as a whole in 0.11 sec, without an explicit torch.cuda.synchronize().
But that’s not all.
network(x) is completely independent from process(z) in terms of data and functions, so I don’t know how the order of executing the kernels would affect them. Also, this doesn’t necessarily mean the problem is solved, because the overall time of one epoch didn’t improve much (from 2min14s to 2min03s; I expected more gain, roughly 0.33 sec * 188 minibatches). CUDA is probably doing some implicit synchronization.
So I did some more digging. It turns out I was wrong about the time of network(x): it is not 0.001 sec but 0.33 sec. I was using the wrong tool (the Python timer) to time CUDA operations.
So the extra 0.33 sec was coming from the network forward and not from the other operations.
The order does not matter.
CUDA needs to synchronize to get the model output, which causes the delay.
Sorry for the inconvenience, and thanks for your help.
st175750 | ptrblck:
torch.utils.benchmark is a utility function used to profile operations and is not a flag, so setting it to True won’t do anything besides overriding it.
Sorry, I was thinking of torch.backends.cudnn.benchmark, which is a flag.
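To make the distinction concrete, the two names look like this in code (both are real PyTorch attributes/modules):
import torch
import torch.utils.benchmark as benchmark   # a profiling utility module, not a flag

torch.backends.cudnn.benchmark = True       # the boolean flag: lets cuDNN auto-tune convolution algorithms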
st175751 | I got this error message when using DDP with 2 trainers on 2 machines. The training process worked for a few batches and then returned the following message. I was not able to fully figure out what happened in the training process. My understanding is: one worker didn’t return the gradient of some parameters while the other worker started the next iteration. That is, the gradient updates were not synchronized correctly.
I tried adding the outputs of all training parameters to the loss function and re-ran the distributed training, but still received the same error message.
Could anyone help me figure out where the potential problem is?
Note: The problem goes away when worker number = 1.
File "/home/tiger/usr_name/simgnn-dgl-torch/train_ray.py", line 117, in train_epoch
adv_out, feature_out, feature_loss = model(block)
File "/home/tiger/anaconda3/envs/usr_name/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/tiger/anaconda3/envs/usr_name/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 787, in forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameters which did not receive grad for rank 0: _adv_encoder._ff.ln.2.beta, _adv_encoder._ff.ln.2.alpha, _adv_encoder._ff.layers.0.weight, _adv_encoder._ff.layers.0.bias, _adv_encoder._ff.layers.1.weight, _adv_encoder._ff.layers.1.bias, _adv_encoder._ff.layers.2.weight, _adv_encoder._ff.layers.2.bias, _adv_encoder._ff.ln.0.alpha, _adv_encoder._ff.ln.0.beta, _adv_encoder._ff.ln.1.alpha, _adv_encoder._ff.ln.1.beta
Parameter indices which did not receive grad for rank 0: 11 12 13 14 15 16 19 20 21 22 23 24 |
st175752 | Hey @jwyao, have you tried setting find_unused_parameters=True in DistributedDataParallel (DDP) constructor?
The error message says that the DDP instance didn’t see gradients for parameters 11 12 13 14 15 16 19 20 21 22 23 24.
You can verify this by running one forward + backward and then looping over all parameters to check whether their .grad field is available/updated. Something like:
model = DistributedDataParallel(model, device_ids=[rank])
loss_fn(model(inputs)).backward()
for p in model.parameters():
    if p.grad is None:
        print("found unused param")
If above doesn’t work, could you please share a min-repro? |
st175753 | Hi @mrshenli , thanks for your reply. Setting find_unused_parameters=True generated another error.
Actually, I figured out the issue later. The problem was that the computational graphs on different workers were different. I was training a graph neural network on heterogeneous graphs. At each iteration, the graph batch sampled on each machine is different and may miss some edge types due to random sampling. This leads to the stated problem: the weights associated with an edge type that isn’t sampled do not receive gradients in the backprop, making the gradient updates across different machines asynchronous.
This problem can happen with other networks whose computational graph is random. To me, it is a good idea to make DDP handle this case without hard matching gradient updates from each worker. If a parameter doesn’t appear on one worker, the update for the parameter from this machine is just zero. |
st175754 | jwyao:
This problem can happen with other networks whose computational graph is random. To me, it is a good idea to make DDP handle this case without hard matching gradient updates from each worker. If a parameter doesn’t appear on one worker, the update for the parameter from this machine is just zero.
find_unused_parameters was supposed to handle this case. Which error did you see after setting find_unused_parameters=True? |
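For reference, the flag goes into the DDP constructor; a minimal sketch (model and rank are assumed to exist):
from torch.nn.parallel import DistributedDataParallel

model = DistributedDataParallel(
    model,
    device_ids=[rank],
    find_unused_parameters=True,  # parameters untouched in an iteration are marked ready for reduction
)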
st175755 | Thanks for your suggestion. When setting find_unused_parameters=True, the following warning was returned
Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive your model has flow control causing later iterations to have unused parameters. (function operator())
I think the problem is that the set of parameters used in each trainer changes from time to time. In some batches, some parameters may not appear in the computational graph due to random graph sampling, while it can also happen that every parameter appears in all trainers if the sampling is balanced.
st175756 | @mrshenli Thanks for your suggestion! I found an interesting behavior. Previously the model was trained on CPU and the training procedure generated these messages. But once the training is moved to GPU, using find_unused_parameters=True works totally fine without any error or warning. |
st175757 | This is quite strange, as we don’t expect a CPU → GPU device change with no model/training-side changes to incur unused parameters. This implies there are unused parameters on the GPU.
If you’re curious to dig into it, you can set find_unused_parameters=False and TORCH_DISTRIBUTED_DEBUG=DETAIL (requires PyTorch 1.9), which will log the unused parameter names in the crash. Then you can check whether those parameters are used or unused when running DDP on CPU.
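For example, assuming a training script named train.py (a placeholder name), the debug mode can be enabled when launching:
TORCH_DISTRIBUTED_DEBUG=DETAIL python train.py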
st175758 | Hi. I’m training a model using DDP on 2 P100 GPUs. I notice that when I set num_workers > 0 for my val_dataloader, the validation step on epoch 0 crashes. My train_dataloader has num_workers=4 and the sanity validation check runs fine. I have checked several similar issues but none seem to be the same as the one I’m facing. The model works great when the validation num_workers=0. Please find the exact error output below.
My PyTorch version is installed with CUDA 10.2 but I am running my code on CUDA 11.4. Could this be the source of the error?
Pytorch-lightning version = 1.4.2, torch version = ‘1.9.0+cu102’.
Validation sanity check: 0it [00:00, ?it/s]/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:105: UserWarning: The dataloader, val
dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 24 which is the number of cpus on this m
achine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Validation sanity check: 0%| | 0/1 [00:00<?, ?it/s]
/home/usr/pytorch/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subjec
t to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/home/usr/pytorch/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subjec
t to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Global seed set to 42
Global seed set to 42
Epoch 0: 80%|█████████████████████████████████████████████████████████████████████████████████████▌ | 4/5 [00:14<00:02, 2.80s/it, loss=4.33, v_num=d09et
erminate called after throwing an instance of 'c10::CUDAError' | 0/1 [00:00<?, ?it/s]
what(): CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:1089 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x2b5f7135ca22 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x10d7e (0x2b5f710ecd7e in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a7 (0x2b5f710ee027 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x54 (0x2b5f713465a4 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #4: <unknown function> + 0xa27e1a (0x2b5f1a569e1a in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:1089 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x2b4b41756a22 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x10d7e (0x2b4b414e6d7e in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a7 (0x2b4b414e8027 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x54 (0x2b4b417405a4 in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #4: <unknown function> + 0xa27e1a (0x2b4aea963e1a in /home/usr/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
Traceback (most recent call last):
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
Traceback (most recent call last):
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/queues.py", line 107, in get
data = self._data_queue.get(timeout=timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/queues.py", line 107, in get
if not self._poll(timeout):
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 257, in poll
if not self._poll(timeout):
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
return self._poll(timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 424, in _poll
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
r = wait([self], timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 931, in wait
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
ready = selector.select(timeout)
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/selectors.py", line 415, in select
File "/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
fd_event_list = self._selector.poll(timeout)
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3404) is killed by signal: Aborted.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/usr/mymodel/run.py", line 22, in <module>
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3407) is killed by signal: Aborted.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/usr/mymodel/run.py", line 22, in <module>
main()
File "/home/usr/mymodel/run.py", line 18, in main
main()
File "/home/usr/mymodel/run.py", line 18, in main
return train(CFG)
File "/scratch/usr/mymodel/src/train.py", line 110, in train
return train(CFG)
File "/scratch/usr/mymodel/src/train.py", line 110, in train
trainer.fit(model,dm)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
trainer.fit(model,dm)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
self._run(model)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
self._run(model)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
self._dispatch()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
self._dispatch()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
self.accelerator.start_training(self)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.accelerator.start_training(self)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
return self._run_train()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
self.training_type_plugin.start_training(trainer)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self.fit_loop.run()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self._results = trainer.run_stage()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
return self._run_train()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
epoch_output = self.epoch_loop.run(train_dataloader)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 112, in run
self.on_advance_end()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 177, in on_advance_end
self.fit_loop.run()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self._run_validation()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 256, in _run_validation
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
epoch_output = self.epoch_loop.run(train_dataloader)
self.val_loop.run()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 112, in run
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
self.on_advance_end()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 177, in on_advance_end
dl_outputs = self.epoch_loop.run(
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self._run_validation()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 256, in _run_validation
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 93, in advance
self.val_loop.run()
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
batch_idx, batch = next(dataloader_iter)
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
dl_outputs = self.epoch_loop.run(
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
data = self._next_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
self.advance(*args, **kwargs)
File "/home/usr/pytorch/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 93, in advance
batch_idx, batch = next(dataloader_iter)
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
idx, data = self._get_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1152, in _get_data
data = self._next_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
success, data = self._try_get_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1003, in _try_get_data
idx, data = self._get_data()
File "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1152, in _get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 3404) exited unexpectedly
success, data = self._try_get_data()
ile "/home/usr/pytorch/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1003, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 3407) exited unexpectedly
@ptrblck Would really appreciate it if you could take a look! Thank you! |
st175759 | Hi! Can you please try with a PyTorch nightly build? We made some changes to the CUDA IPC code just recently, and I wonder if you will get the same error (please paste it here if you do).
st175760 | @VitalyFedyunin Hi! Thanks for your reply. I noticed that with my current versions, I also get the error immediately when I set a PyTorch seed, or after a few epochs without a seed and val_workers > 0. I also find that I get the error when I have num_workers > 0 and I try reading from a pickled file containing PyTorch tensors (even if I save them as numpy arrays it errors). The error also occurs sometimes when I try writing a pickled file (I write only from the local_rank=0 process, but read the pickled file from both GPU processes in the DDP).
The first 2 errors are resolved with the nightly build. However, the third one, where I read in PyTorch tensors from a pickled file and have val_workers > 0, still persists. With val_workers=0 there is no error. Any idea on how to resolve this would be great! Please find a few more details below.
I have a small dataset, so I load all the data in the __init__ of my Dataset class. I then save it on my disk using pickle so I can save on dataloading time when I run my code again. Now, since I have 2 GPUs, DDP in pytorch-lightning starts 2 processes and each of these processes start reading from the pickle file. Both the training data and validation data are being read from pickle files. Epoch 0 training happens successfully but as soon as its starts validation, it crashes with the below error.
The error I get is -
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. [17/1779]
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from insert_events at ../c10/cuda/CUDACachingAllocator.cpp:1243 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x2b74fab78a52 in /home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x1c8ce (0x2b74fa8fc8ce in /home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a2 (0x2b74fa8fcee2 in /home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x9c (0x2b74fab6205c in /home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #4: <unknown function> + 0x292b79 (0x2b749d542b79 in /home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0xacd961 (0x2b749dd7d961 in /home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
^C/home/usr/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1047: UserWarning: Detected KeyboardInterrupt, attempting graceful shutdown...
rank_zero_warn("Detected KeyboardInterrupt, attempting graceful shutdown...")
^CException ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x2b74fc2c85e0>
Traceback (most recent call last):
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1328, in __del__
self._shutdown_workers()
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1301, in _shutdown_workers
w.join(timeout=_utils.MP_STATUS_CHECK_INTERVAL)
File "/cvmfs/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)
File "/cvmfs/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/popen_fork.py", line 44, in wait
if not wait([self.sentinel], timeout):
File "/cvmfs/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/cvmfs/server/easybuild/software/2020/avx2/Core/python/3.8.10/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt:
^CTraceback (most recent call last):
File "/home/usr/mydir/mymodel/run.py", line 22, in <module>
main()
File "/home/usr/mydir/mymodel/run.py", line 18, in main
return train(CFG)
File "/mydir/usr/mymodel/mymodelsub/train.py", line 110, in train
trainer.fit(model,dm) File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
self._run(model)
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 921, in _run
self._post_dispatch()
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 976, in _post_dispatch
self.accelerator.teardown()
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu.py", line 57, in teardown
super().teardown()
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 157, in teardown
self.training_type_plugin.teardown()
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/parallel.py", line 143, in teardown
self.lightning_module.cpu()
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 136, in cpu
return super().cpu()
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 673, in cpu
return self._apply(lambda t: t.cpu())
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 533, in _apply
module._apply(fn)
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 533, in _apply
module._apply(fn)
File "/home/usr/pytorch_nightly/lib/python3.8/site-packages/torch/nn/modules/module.py", line 533, in _apply
module._apply(fn) |
st175761 | Hi.
I have the same error message; however, it is transient for me, i.e., it happens randomly either during training or validation.
I am also using multiple data workers for training and validation.
I am using torch==1.9.0+cu111 and Pytorch-lightning==1.4.2.
Here is the start of the error:
terminate called after throwing an instance of 'c10::CUDAError' what(): CUDA error: initialization error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Exception raised from insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:1089 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f73757b9a22 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
... |
st175762 | Hi @tinan5. Oh, that’s interesting. Are you by any chance reading/writing data from/to a pickle file or HDF5 file? For me this was mostly the issue. In my __init__ method, I was previously writing data as a pickle file. When I removed this, my code worked fine. I guess it’s an issue with reading and writing files during multiprocessing. I found this issue Data Loader does not work with Hdf5 file, when num_worker >1 · Issue #11929 · pytorch/pytorch · GitHub to be quite similar. Please let me know if you get any more insights.
Also, you could try with the nightly version. It fixed most of the other issues that were giving the same error for me.
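One common workaround for this class of problem (a general sketch, not something from this thread) is to open the file lazily in __getitem__, once per worker process, instead of in __init__:
import h5py
from torch.utils.data import Dataset

class LazyFileDataset(Dataset):
    def __init__(self, path):
        self.path = path
        self.file = None                 # do not open here; this object is copied into each worker
        with h5py.File(path, "r") as f:  # assumes a dataset named "data" inside the file
            self.length = len(f["data"])

    def __getitem__(self, idx):
        if self.file is None:            # opened lazily, once per worker process
            self.file = h5py.File(self.path, "r")
        return self.file["data"][idx]

    def __len__(self):
        return self.length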
st175763 | Hi @Rohit_R. I am reading from disk but not writing and not with HDF5 format. Perhaps the nightly version can fix the DDP issues. I will try it. |
st175764 | @VitalyFedyunin @tinan5 I got the same error after a few epochs in the nightly version as well… It does not seem to be fixed…
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: initialization error
Exception raised from insert_events at ../c10/cuda/CUDACachingAllocator.cpp:1243 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x2ab1c8b51a52 in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x1c8ce (0x2ab1c88d58ce in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a2 (0x2ab1c88d5ee2 in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x9c (0x2ab1c8b3b05c in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #4: <unknown function> + 0x292b79 (0x2ab16b51bb79 in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0xacd961 (0x2ab16bd56961 in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: initialization error
Exception raised from insert_events at ../c10/cuda/CUDACachingAllocator.cpp:1243 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x2ab1c8b51a52 in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x1c8ce (0x2ab1c88d58ce in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x1a2 (0x2ab1c88d5ee2 in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x9c (0x2ab1c8b3b05c in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #4: <unknown function> + 0x292b79 (0x2ab16b51bb79 in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0xacd961 (0x2ab16bd56961 in /home/me/pytorch_nightly/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #36: <unknown function> + 0x91d1 (0x2ab15dae61d1 in /Core/python/3.8.10/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so)
frame #37: <unknown function> + 0x11821 (0x2ab15daee821 in /Core/python/3.8.10/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so)
frame #38: <unknown function> + 0x1052d (0x2ab15daed52d in /Core/python/3.8.10/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so)
frame #39: <unknown function> + 0x12263 (0x2ab15daef263 in /Core/python/3.8.10/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so)
frame #40: <unknown function> + 0x113d1 (0x2ab15daee3d1 in /Core/python/3.8.10/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so)
frame #41: <unknown function> + 0x10c62 (0x2ab15daedc62 in /Core/python/3.8.10/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so)
frame #42: <unknown function> + 0x1052d (0x2ab15daed52d in /Core/python/3.8.10/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so)
frame #43: <unknown function> + 0x15090 (0x2ab15daf2090 in /Core/python/3.8.10/lib/python3.8/lib-dynload/_pickle.cpython-38-x86_64-linux-gnu.so)
frame #61: <unknown function> + 0x7f27 (0x2ab15c357f27 in /cvmfs/server/gentoo/2020/lib64/libpthread.so.0)
frame #62: clone + 0x3f (0x2ab15c47087f in /cvmfs/server/gentoo/2020/lib64/libc.so.6)
Traceback (most recent call last):
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/Core/python/3.8.10/lib/python3.8/multiprocessing/queues.py", line 107, in get
if not self._poll(timeout):
File "/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/Core/python/3.8.10/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/Core/python/3.8.10/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 20907) is killed by signal: Aborted.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
self.fit_loop.run()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
epoch_output = self.epoch_loop.run(train_dataloader)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 112, in run
self.on_advance_end()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 177, in on_advance_end
self._run_validation()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 256, in _run_validation
self.val_loop.run()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
dl_outputs = self.epoch_loop.run(
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 93, in advance
batch_idx, batch = next(dataloader_iter)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
idx, data = self._get_data()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1152, in _get_data
success, data = self._try_get_data()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1003, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 20907) exited unexpectedly
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me/scratch/myproj/run.py", line 20, in <module>
main()
File "/home/me/scratch/myproj/run.py", line 16, in main
return train(CFG)
File "/scratch/me/myproj/myprojsub/train.py", line 111, in train
trainer.fit(model,dm)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
self._run(model)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
self._dispatch()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
self.accelerator.start_training(self)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
return self._run_train()
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1058, in _run_train
self.training_type_plugin.reconciliate_processes(traceback.format_exc())
File "/home/me/pytorch_nightly/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 451, in reconciliate_processes
os.kill(pid, signal.SIGKILL)
ProcessLookupError: [Errno 3] No such process |
st175765 | @VitalyFedyunin @tinan5 It’s mostly a PL bug. Check this out: PyTorch Lightning 1.4.1 crashes during training · Issue #8821 · PyTorchLightning/pytorch-lightning · GitHub.
st175766 | I notice that PyTorch has added a zero redundancy optimizer since 1.8. However, it seems the ZeRO optimizer in PyTorch partitions optimizer states differently from DeepSpeed. DeepSpeed partitions each parameter in the param group equally and then allocates the pieces to different ranks, while PyTorch allocates each whole parameter directly to a different rank. That is to say, DeepSpeed partitions at a finer granularity.
I am wondering if this is intended and whether anyone has verified the memory performance.
st175767 | Hey Frank, yep, the difference is intentional. ZeroRedundancyOptimizer in PyTorch is supposed to be used in conjunction with DDP. Since DDP already holds the full model replica, it will be more memory efficient for ZeroRedundancyOptimizer to directly use those parameters as broadcast buffers instead of creating new intra-parameter shards.
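For reference, a minimal sketch of that usage (model, rank, inputs, targets and loss_fn are assumed; init_process_group is assumed to have been called already):
import torch
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

ddp_model = DDP(model, device_ids=[rank])
optimizer = ZeroRedundancyOptimizer(
    ddp_model.parameters(),
    optimizer_class=torch.optim.Adam,   # each rank only materializes Adam state for its own shard
    lr=1e-3,
)
loss_fn(ddp_model(inputs), targets).backward()  # DDP all-reduces gradients as usual
optimizer.step()                                # update the local shard, then broadcast updated params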