st175068 | Hello,
I am facing an issue when back-propagating the loss: the effective batch size is large because of the accumulation across the 8-GPU DataParallel. Is there a way to divide the loss calculation into 8 parts, normalize the loss, and then call loss.backward() so that it is smaller in size, or how can I work around this? |
st175069 | Solved by cbalioglu in post #2
I assume you are using DataParallel. I would suggest using DDP as it calculates the loss separately on each worker after syncing the gradients. |
st175070 | I assume you are using DataParallel. I would suggest using DDP as it calculates the loss separately on each worker after syncing the gradients. |
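For reference, a minimal single-GPU-per-process DDP sketch, loosely following the official tutorial; the linear model, the random data, and the localhost rendezvous settings are placeholders you would replace with your own, and it assumes one CUDA device per spawned process:
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(10, 5).cuda(rank)                 # placeholder model
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.001)
    loss_fn = torch.nn.MSELoss()

    # each rank computes the loss on its own shard of the batch;
    # DDP all-reduces (averages) the gradients during backward()
    inputs = torch.randn(20, 10).cuda(rank)
    labels = torch.randn(20, 5).cuda(rank)
    optimizer.zero_grad()
    loss = loss_fn(ddp_model(inputs), labels)
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)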
st175071 | @cbalioglu can you recommend some resources please to learn how to efficiently apply DDP to my large model using multiple gpus ? |
st175072 | Inconsistent multi-node latency with NCCL
Hi,
I deployed PyTorch on 2 servers (with 1 GPU each), and I am trying to measure the communication latency using the following codes, which simply execute AllReduce operation for multiple times and calculate the average time spent.
def run(vector_size, rank, steps):
    elapsedTime = 0
    for step in range(1, steps + 1):
        tensor = torch.randint(10, (vector_size,)).cuda()
        start = time.monotonic_ns()
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM, async_op=False)
        latency = (time.monotonic_ns() - start) / 1e3
        elapsedTime += latency
        # time.sleep(0.1)
    elapsedTime /= steps
    print(vector_size * 4, elapsedTime)
I found the measured latency abnormally high with PyTorch 1.10 + NCCL 2.10:
size(B) latency(us)
8 826.9433000000001
16 908.8419
32 1479.80385
64 2279.2819499999996
128 504.1064
256 1348.6622499999999
512 1123.6129000000003
1024 2590.2159
2048 1715.0593000000001
4096 5227.415999999999
8192 3131.40595
16384 3009.81275
32768 1614.2130499999998
65536 6010.794950000001
131072 6169.70775
262144 6595.7269
524288 4651.931450000001
1048576 5800.938
2097152 7393.041899999999
However, if I add time.sleep(0.1) at the end of each iteration, the latency becomes much smaller:
size(B) latency(us)
8 153.83099999999996
16 157.3773
32 157.008
64 157.7295
128 140.99030000000002
256 130.14204999999998
512 107.28104999999998
1024 117.73960000000002
2048 87.42374999999997
4096 86.94415000000002
8192 110.07860000000001
16384 116.90845000000002
32768 224.9045
65536 113.87135
131072 409.87255000000016
262144 370.2254
524288 837.1048000000001
1048576 1105.72925
2097152 3323.8366499999997
The inconsistency also happens between different versions of NCCL. I have recompiled PyTorch with the official build of NCCL 2.11, which gives similar results. However, with NCCL 2.7 the latency is always small regardless of the interval. The interval does not impact the latency of GLOO, either.
What may be the reason for these different latency values? And what is the correct way to measure the performance of AllReduce operations in PyTorch? Thanks!
Some of our other system information:
OS: Ubuntu 20.04
GPU: Tesla V100 (No GPUDirect support)
Network Interface: Mellanox mlx5 (We use RoCEv2 for NCCL) |
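One common pitfall when timing CUDA collectives, which may or may not be the cause here, is that NCCL kernels run asynchronously with respect to the host, so the clock can stop before the op has actually finished. A minimal sketch of timing with an explicit synchronize, assuming an already-initialized process group and CUDA tensors:
import time
import torch
import torch.distributed as dist

def timed_all_reduce(tensor, iters=100, warmup=10):
    # warm up so that NCCL communicator setup is not counted
    for _ in range(warmup):
        dist.all_reduce(tensor)
    torch.cuda.synchronize()
    start = time.monotonic_ns()
    for _ in range(iters):
        dist.all_reduce(tensor)
    # wait for the enqueued NCCL kernels to actually finish before stopping the clock
    torch.cuda.synchronize()
    return (time.monotonic_ns() - start) / 1e3 / iters  # average time per all_reduce in us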
st175073 | Thanks @Qiaofeng! Do you mind opening a GitHub issue for this? Looks like it needs some investigation on our end. |
st175074 | Thank you for the reply! We also found that the MPI environment might have some impact on this problem. I have opened a GitHub issue with more detailed descriptions. |
st175075 | I want (the proper and official - bug free way) to do:
resume from a checkpoint to continue training on multiple gpus
save checkpoint correctly during training with multiple gpus
For that my guess is the following:
to do 1 we have all the processes load the checkpoint from the file, then call DDP(mdl) for each process. I assume the checkpoint saved a ddp_mdl.module.state_dict().
to do 2 simply check who is rank = 0 and have that one do the torch.save({'model': ddp_mdl.module.state_dict()})
Approximate code:
def save_ckpt(rank, ddp_model, path):
    if rank == 0:
        state = {'model': ddp_model.module.state_dict(),
                 'optimizer': optimizer.state_dict(),
                 }
        torch.save(state, path)

def load_ckpt(path, distributed, map_location=torch.device('cpu')):
    # load the checkpoint onto the given device (CPU by default)
    checkpoint = torch.load(path, map_location=map_location)
    model = Net(...)
    optimizer = ...
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    if distributed:
        model = DDP(model, device_ids=[gpu], find_unused_parameters=True)
    return model
Is this correct?
One of the reasons that I am asking is that distributed code can go subtly wrong. I want to make sure this does not happen to me. Of course I want to avoid deadlocks but that would be obvious if it happens to me (e.g. perhaps it could happen if all the processes somehow tried to open the same ckpt file at the same time. In that case I’d somehow make sure that only one of them loads it one at a time or have rank 0 only load it and then send it to the rest of the processes).
I am also asking because the official docs don’t make sense to me 1. I will paste their code and explanation since links can die sometimes:
Save and Load Checkpoints
It’s common to use torch.save and torch.load to checkpoint modules during training and recover from checkpoints. See SAVING AND LOADING MODELS for more details. When using DDP, one optimization is to save the model in only one process and then load it to all processes, reducing write overhead. This is correct because all processes start from the same parameters and gradients are synchronized in backward passes, and hence optimizers should keep setting parameters to the same values. If you use this optimization, make sure all processes do not start loading before the saving is finished. Besides, when loading the module, you need to provide an appropriate map_location argument to prevent a process to step into others’ devices. If map_location is missing, torch.load will first load the module to CPU and then copy each parameter to where it was saved, which would result in all processes on the same machine using the same set of devices. For more advanced failure recovery and elasticity support, please refer to TorchElastic.
def demo_checkpoint(rank, world_size):
    print(f"Running DDP checkpoint example on rank {rank}.")
    setup(rank, world_size)

    model = ToyModel().to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    CHECKPOINT_PATH = tempfile.gettempdir() + "/model.checkpoint"
    if rank == 0:
        # All processes should see same parameters as they all start from same
        # random parameters and gradients are synchronized in backward passes.
        # Therefore, saving it in one process is sufficient.
        torch.save(ddp_model.state_dict(), CHECKPOINT_PATH)

    # Use a barrier() to make sure that process 1 loads the model after process
    # 0 saves it.
    dist.barrier()
    # configure map_location properly
    map_location = {'cuda:%d' % 0: 'cuda:%d' % rank}
    ddp_model.load_state_dict(
        torch.load(CHECKPOINT_PATH, map_location=map_location))

    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10))
    labels = torch.randn(20, 5).to(rank)
    loss_fn = nn.MSELoss()
    loss_fn(outputs, labels).backward()
    optimizer.step()

    # Not necessary to use a dist.barrier() to guard the file deletion below
    # as the AllReduce ops in the backward pass of DDP already served as
    # a synchronization.
    if rank == 0:
        os.remove(CHECKPOINT_PATH)

    cleanup()
Related:
Checkpointing DDP.module instead of DDP itself 2
Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.10.1+cu102 documentation 1
DDP and Gradient checkpointing
DistributedDataParallel: resume training from a checkpoint results in additional processes on GPU 0 · Issue #23138 · pytorch/pytorch · GitHub
Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.10.1+cu102 documentation 1
python - What is the proper way to checkpoint during training when using distributed data parallel (DDP) in PyTorch? - Stack Overflow |
st175076 | Hi @Brando_Miranda,
Yes, your understanding is correct. Save the model (or any other sync’ed artifacts) to a permanent store on rank 0 during checkpointing, and load it on all ranks while resuming. What our documentation describes is the common idiom for saving/loading checkpoints, but if you have different requirements, you don’t have to follow it. For instance if your underlying store requires some form of sequential access, you can coordinate your workers using collective calls or coordinate them via a Store (e.g. TCPStore). In our example we have a basic demonstration of this coordination via a barrier call.
Do you mind explaining why the doc did not make sense to you? We always aim to improve our documentation, so any concrete feedback would be greatly appreciated. |
st175077 | How do I train on two machines, one with 4 gpus and one with 8 gpus? I find that the documentation of torch elastic run (torchrun (Elastic Launch) — PyTorch 1.10.0 documentation) suggests
" 4. This module only supports homogeneous LOCAL_WORLD_SIZE. That is, it is assumed that all nodes run the same number of local workers (per role)."
Thanks! |
st175078 | cc @Kiuk_Chung
As of today DDP does not officially support running jobs with different number of GPUs on different machines. One workaround might be to use the CUDA_VISIBLE_DEVICES environment variable to split 8 GPUs into two sets and then launch two DDP processes. |
st175079 | Hi!
Is it possible to divide a single physical GPU into several logical like in TF?
I have a node with 4 GPUs that I want to subdivide in 8 logical GPUs in order to train 8 models in parallel.
Is this possible?
4 x NVIDIA GTX1080 Ti with 11GB into 8 x logical GPUs with 5GB each.
Thanks! |
st175080 | I don’t know how TF “divides” the GPU, but you can use torch.cuda.set_per_process_memory_fraction to share the device memory between different processes. |
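A minimal sketch of that API (the 0.5 fraction and device index are just illustrative); note that it only caps the caching allocator of the calling process rather than creating separate logical devices:
import torch

torch.cuda.set_per_process_memory_fraction(0.5, device=0)  # this process may use roughly half of device 0's memory
torch.cuda.empty_cache()
model = torch.nn.Linear(1024, 1024).cuda(0)                # placeholder workload; allocations beyond the cap raise OOM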
st175081 | Why are loss modules (criterion) often not wrapped by DDP?
Is it because loss modules typically have no learned parameters?
(and thus if they have, they should be part of DDP?) |
st175082 | Solved by ptrblck in post #2
The loss calculation is performed on each node and doesn’t need any communication. In case your loss needs to communicate some gradients, I guess it could be used as part of the model (and would thus be treated as any nn.Module). |
st175083 | The loss calculation is performed on each node and doesn’t need any communication. In case your loss needs to communicate some gradients, I guess it could be used as part of the model (and would thus be treated as any nn.Module). |
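For illustration, a hedged sketch of folding a criterion with learnable parameters into the module that gets wrapped by DDP (ModelWithLoss, the linear backbone, and the learnable loss weight are made-up stand-ins):
import torch
from torch import nn
import torch.nn.functional as F

class ModelWithLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(10, 5)                # placeholder model
        self.log_sigma = nn.Parameter(torch.zeros(1))   # example of a learnable loss weight

    def forward(self, x, target):
        pred = self.backbone(x)
        mse = F.mse_loss(pred, target)
        # because the weight lives inside the wrapped module, DDP also syncs its gradient
        return mse * torch.exp(-self.log_sigma) + self.log_sigma

# wrapped = nn.parallel.DistributedDataParallel(ModelWithLoss().cuda(rank), device_ids=[rank])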
st175084 | Hi,
I am working on a multi-task model with uneven dataset size and have a custom sampler and using the sampler in dataloader (below)
sampler = BalancedBatchSchedulerSampler(dataset, batch_size)
dataloader = DataLoader(
    dataset,
    sampler=sampler,
    batch_size=batch_size,
    collate_fn=collate_fn,
    num_workers=num_workers,
)
BalancedBatchSchedulerSampler is the custom sampler. Also, I have set replace_sampler_ddp to False. With this custom sampler I don’t see the checkpoint folder getting created for the model. When I don’t pass the sampler argument and use the default RandomSampler, the checkpoint is getting created without any other change in the code.
Is it possible that the sampler is affecting the model checkpoint somehow?
Thank you! |
st175085 | Solved by mMagmer in post #4
That was my guess.
Based on the code, I think you’re implementing a BatchSampler not a sampler.
I don’t know how you can fix your code, |
st175086 | Maybe your custom sampler somehow has a very large size, and you won’t reach the end of it. Check the implementation of the __len__ method. |
st175087 | mMagmer: __len__
Thanks so much for pointing to the __len__ method. I have a multi-task model and the three tasks have different sizes. The __len__ method is implemented to return an equal proportion of samples from each dataset. I went through the PyTorch documentation on Sampler and DataLoader but am not sure what I should change. Did you mean that the number of samples is too large?
Thank you
import math
from random import shuffle
import torch
from torch.utils.data.sampler import RandomSampler
class BalancedBatchSchedulerSampler(torch.utils.data.sampler.Sampler):
    """
    iterate over tasks and provide a balanced batch per task in each mini-batch
    """
    def __init__(self, dataset, batch_size):
        super(BalancedBatchSchedulerSampler, self).__init__(dataset)
        self.dataset = dataset
        self.batch_size = batch_size
        self.number_of_datasets = len(dataset.datasets)
        self.largest_dataset_size = max(
            [len(cur_dataset) for cur_dataset in dataset.datasets]
        )

    def __len__(self):
        return (
            self.batch_size
            * math.ceil(self.largest_dataset_size / self.batch_size)
            * len(self.dataset.datasets)
        )

    def __iter__(self):
        samplers_list = []
        sampler_iterators = []
        for dataset_idx in range(self.number_of_datasets):
            cur_dataset = self.dataset.datasets[dataset_idx]
            sampler = RandomSampler(cur_dataset)
            samplers_list.append(sampler)
            cur_sampler_iterator = sampler.__iter__()
            sampler_iterators.append(cur_sampler_iterator)
        push_index_val = [0] + list(self.dataset.cumulative_sizes[:-1])
        step = self.batch_size
        samples_to_grab = math.ceil(self.batch_size / self.number_of_datasets)
        # for this case we want to get all samples in dataset, this force us to resample from the smaller datasets
        epoch_samples = self.largest_dataset_size * self.number_of_datasets
        final_samples_list = []  # this is a list of indexes from the combined dataset
        for _ in range(0, epoch_samples, step):
            cur_batch_samples = []
            for i in range(self.number_of_datasets):
                cur_batch_sampler = sampler_iterators[i]
                cur_samples = []
                for _ in range(samples_to_grab):
                    try:
                        cur_sample_org = cur_batch_sampler.__next__()
                        cur_sample = cur_sample_org + push_index_val[i]
                        cur_samples.append(cur_sample)
                    except StopIteration:
                        # got to the end of iterator - restart the iterator and continue to get samples
                        # until reaching "epoch_samples"
                        sampler_iterators[i] = samplers_list[i].__iter__()
                        cur_batch_sampler = sampler_iterators[i]
                        cur_sample_org = cur_batch_sampler.__next__()
                        cur_sample = cur_sample_org + push_index_val[i]
                        cur_samples.append(cur_sample)
                cur_batch_samples.extend(cur_samples)
            shuffle(cur_batch_samples)
            final_samples_list.extend(cur_batch_samples)
        return iter(final_samples_list) |
st175088 | Rini:
Did you mean that the number of samples is too large?
That was my guess.
Based on the code, I think you’re implementing a BatchSampler not a sampler.
I don’t know how you can fix your code, |
st175089 | I turned off the __len__ function and checkpoint is getting created. I think I will play around a bit with batch sampler.
Thanks again for the help! |
st175090 | maybe you can get what you want by playing with batchsize,
something like:
import torch

d1 = torch.utils.data.TensorDataset(torch.ones(100, 2))
d2 = torch.utils.data.TensorDataset(2 * torch.ones(200, 3))
d3 = torch.utils.data.TensorDataset(3 * torch.ones(300, 4))

dl1 = torch.utils.data.DataLoader(dataset=d1, batch_size=5, shuffle=True)
dl2 = torch.utils.data.DataLoader(dataset=d2, batch_size=10, shuffle=True)
dl3 = torch.utils.data.DataLoader(dataset=d3, batch_size=15, shuffle=True)

dl = zip(dl1, dl2, dl3)
# next(iter(dl))
# len(list(dl))
for m, n, p in dl:
    pass  # do the job |
st175091 | Hi,
I have 2 EC2 machines with 4 GPUs each. That makes 8 GPUs in total.
I want to train a PyTorch DeepLab in data-parallel over those 8 cards. What should I do:
A. Launch a DDP training with 2 scripts/processes (1 per node), each doing torch.nn.DataParallel to data-parallel within node on the 4 cards
B. Launch a DDP training with 8 scripts/processes (1 per GPU), each executing pure DDP + PyTorch code and using only 1 GPU (leaving DDP doing the allreduces).
In both options: how to launch the processes with torchrun/launch.py/torch.distributed: once for the whole cluster, from some remote client? Once per node? Once per GPU?.. |
st175092 | Hey @Olivier-CR
A. Launch a DDP training with 2 scripts/processes (1 per node), each doing torch.nn.DataParallel to data-parallel within node on the 4 cards
DataParallel is single-machine multi-GPU. It won’t work in the multi-machine scenario. DistributedDataParallel is the appropriate feature to use.
B. Launch a DDP training with 8 scripts/processes (1 per GPU), each executing pure DDP + PyTorch code and using only 1 GPU (leaving DDP doing the allreduces).
Yep, this should work. One caveat is that you need to make sure each DDP process exclusively operate on a dedicated GPU. You can do this by either setting the CUDA_VISIBLE_DEVICES to different GPU for different processes, or use the local_rank within each process. See: Distributed communication package - torch.distributed — PyTorch master documentation 1
In both options: how to launch the processes with torchrun/launch.py/torch.distributed: once for the whole cluster, from some remote client? Once per node? Once per GPU?..
If you are using torch run.py/launch.py, you just need to do it once for each machine. If you directly call your user script without run/launch, you will need to do that once for each GPU.
I do recommend using TorchElastic 1 to launch jobs, as that will also provide failure recoveries.
Hey @Kiuk_Chung what’s the best tutorial to get started with run/launch? Thanks! |
st175093 | @mrshenli , @Olivier-CR the torch.distributed.run docs isn’t a tutorial but a great place to start: torchrun (Elastic Launch) — PyTorch 1.10.0 documentation 2
For a more out-of-the-box experience, TorchX is where we are trying to make distributed job launching much easier: Distributed — PyTorch/TorchX main documentation 1 |
st175094 | wow, what is that torchX thing? some other new library again?! I thought anything distributed would go to TorchElastic / Torchrun now? I’m a bit confused. Or is it just a thick client to launch distributed jobs? |
st175095 | torchrun will launch LOCAL processes for you. To run a distributed job it’s still on you to run torchrun on each of the nodes. TorchX has builtins to launch the job for you, and in doing so sets sensible defaults so that you don’t have to manually set configurations like --rdzv_backend, --rdzv_id etc.
Try:
$ pip install torchx-nightly
$ torchx run -s local_cwd dist.ddp -j 1x4 --script YOUR_SCRIPT.py <args to script>
Where -j is of the form {nnodes}x{nproc_per_node} so if you wanted to simulate a 2 node (each node running 4 procs per node) you’d set -j 2x4.
No need to worry about the different rendezvous settings now. Once you are ready to submit to a remote cluster (assuming you’ve setup kubernetes or slurm) then you’d run
$ torchx run -s kubernetes dist.ddp -j 1x4 --image YOUR_DOCKER_IMG --script SCRIPT.py <args to script> |
st175096 | @Kiuk_Chung what if you want to run via torchX on a plain EC2 cluster? still have to install etcd servers? “(needs you to start a local etcd server on port 2379! and have a python-etcd library installed)” (from here) |
st175097 | I am using DDP to distribute training across multiple gpu.
model = Net(...)
ddp_model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
ddp_model = DDP(ddp_model, device_ids=[gpu], find_unused_parameters=True)
When checkpointing, is it ok to save ddp_model.module instead of ddp_model? I need to be able to use the checkpoint for 1. evaluation using a single gpu 2. resume training with multiple gpus
def save_ckpt(ddp_model, path):
    state = {'model': ddp_model.module.state_dict(),
             'optimizer': optimizer.state_dict(),
             }
    torch.save(state, path)

def load_ckpt(path, distributed):
    checkpoint = torch.load(path, map_location=map_location)
    model = Net(...)
    optimizer = ...
    model.load_state_dict(checkpoint['model'], strict=False)
    optimizer.load_state_dict(checkpoint['optimizer'])
    if distributed:
        model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
        model = DDP(model, device_ids=[gpu], find_unused_parameters=True)
    return model
I am not sure how this would behave, does manipulating ddp_model.module in general break things?
Similarly what if the number of gpus changes, would that impact the optimizer for example? does it need to be reinitialized? Thank you! |
st175098 | Solved by mrshenli in post #2
Yep, this is actually the recommended way:
On save, use one rank to save ddp_model.module to checkpoint.
On load, first use the checkpoint to load a local model, and then wrap the local model instance with DDP on all processes (i.e., sth like DistributedDataParallel(local_model, device_ids=[rank]… |
st175099 | mazabou3:
When checkpointing, is it ok to save ddp_model.module instead of ddp_model? I need to be able to use the checkpoint for 1. evaluation using a single gpu 2. resume training with multiple gpus
Yep, this is actually the recommended way:
On save, use one rank to save ddp_model.module to checkpoint.
On load, first use the checkpoint to load a local model, and then wrap the local model instance with DDP on all processes (i.e., sth like DistributedDataParallel(local_model, device_ids=[rank])). |
st175100 | I’d like to double-check because I see multiple questions and the official doc is confusing to me. I want to do:
resume from a checkpoint to continue training on multiple gpus
save checkpoint correctly during training with multiple gpus
For that my guess is the following:
to do 1 we have all the processes load the checkpoint from the file, then call DDP(mdl) for each process. I assume the checkpoint saved a ddp_mdl.module.state_dict().
to do 2 simply check who is rank = 0 and have that one do the torch.save({'model': ddp_mdl.module.state_dict()})
Is this correct?
I am not sure why there are so many other posts (also this) or what the official doc is talking about. I don’t get why I’d want to optimize writing (or reading) once, given that I will train my model for anywhere from 100 epochs to even 600K iterations… that one write seems useless to me.
I think the correct code is this one:
def save_ckpt(rank, ddp_model, path):
    if rank == 0:
        state = {'model': ddp_model.module.state_dict(),
                 'optimizer': optimizer.state_dict(),
                 }
        torch.save(state, path)

def load_ckpt(path, distributed, map_location=torch.device('cpu')):
    # load the checkpoint onto the given device (CPU by default)
    checkpoint = torch.load(path, map_location=map_location)
    model = Net(...)
    optimizer = ...
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    if distributed:
        model = DDP(model, device_ids=[gpu], find_unused_parameters=True)
    return model
also, I added more details here: python - What is the proper way to checkpoint during training when using distributed data parallel (DDP) in PyTorch? - Stack Overflow 5 |
st175101 | Hi everyone,
I tried to use torch.utils.checkpoint along with DDP. However, after the first iteration, the program hanged. I read a thread last year in the forum where someone said that DDP and checkpointing haven’t worked together yet. Is that true? Any suggestions for my case? Thank you. |
st175102 | We currently have a prototype API _set_static_graph which can be applied to DDP if your training is static across all iterations (i.e. there is no conditional execution in the model). Documentation: pytorch/distributed.py at master · pytorch/pytorch · GitHub 21.
With static graph training, DDP will record the # of times parameters expect to get gradient and memorize this, which solves the issue around activation checkpointing and should make it work. |
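For reference, a minimal sketch of applying that prototype API (Net and the checkpointed layer are placeholders, and it assumes the process group is already initialized; being a prototype, the call may change between releases):
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.checkpoint import checkpoint

class Net(torch.nn.Module):                       # placeholder model using activation checkpointing
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(10, 10)
        self.layer2 = torch.nn.Linear(10, 10)

    def forward(self, x):
        x = checkpoint(self.layer1, x)            # re-runs layer1 in backward instead of storing activations
        return self.layer2(x)

def build(rank):
    model = Net().cuda(rank)
    ddp_model = DDP(model, device_ids=[rank])
    ddp_model._set_static_graph()                 # call once right after wrapping, before the first iteration
    return ddp_model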
st175103 | I don’t understand what the issue is. Why did your code hang - that is essential information to put in here. Did you try any of the following:
Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.10.1+cu102 documentation 3
Checkpointing DDP.module instead of DDP itself - #2 by mrshenli 5
Checkpointing DDP.module instead of DDP itself - #3 by Brando_Miranda 2
if none of them worked can you provide more details? In particular your original post does not describe enough to know what the problem is. Things can hang for many reasons - especially in complicated multiprocessing code. |
st175104 | Hi, everyone,
I used DistributedDataParallel and nccl as backend in my code. I ran my code as python3 -m torch.distributed.launch --nproc_per_node=8 train.py. However, Pytorch creates a lot of redundant processes on other GPUs with memory 0 as shown below.
I used 8 * 3090 and Pytorch 1.7. It creates 8 processes on every single GPU.
[image: 1002×1432, 204 KB]
Does anyone know the reason? Thanks! |
st175105 | Looks like each process is using multiple GPUs. Is this expected? If not, can you try setting CUDA_VISIBLE_DEVICES env var properly for each process before creating any CUDA context? |
st175106 | Hi, mrshenli,
Thanks for your reply. I did not know how to set CUDA_VISIBLE_DEVICES for each process. Could you please give me more information?
Thanks a lot! |
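In case it is useful, a hedged sketch of the usual pattern: each launched process binds itself to one GPU before doing anything CUDA-related (LOCAL_RANK is set by the launcher when --use_env is passed; with the older --local_rank flag you would read it from argparse instead):
import os
import torch
import torch.distributed as dist

local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)        # bind this process to a single GPU before any CUDA context is created
dist.init_process_group(backend="nccl")  # assumes MASTER_ADDR/PORT, RANK, WORLD_SIZE were exported by the launcher
# alternatively, export CUDA_VISIBLE_DEVICES=<one gpu id> per process before Python starts,
# in which case each process only ever sees "its" device as cuda:0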
st175107 | Hello,
Because GPU usage is not high during model training, I used DLProf to profile my model. The result shows that all_gather takes a lot of time:
[image: 1920×1095, 103 KB]
[image: 1920×567, 79.5 KB]
Through the profiling result I found these all_gather calls are made by syncbn (pytorch/_functions.py at v1.8.2 · pytorch/pytorch · GitHub).
I want to know why it took so much time and how to speed up the training.
By the way, because of the large number of model parameters, I set batch_size=1 and use 8 V100s in training. |
st175108 | I’m not familiar with your setup, but you could check how large each communication between the devices is and how much bandwidth your system allows (e.g. using NVLink could speed it up significantly etc.). |
st175109 | Thank you for your reply.
I tested the communication speed with p2pBandwidthLatencyTest; the result is as follows:
P2P Connectivity Matrix
D\D 0 1 2 3 4 5 6 7
0 1 1 1 1 0 0 0 1
1 1 1 1 1 0 0 1 0
2 1 1 1 1 0 1 0 0
3 1 1 1 1 1 0 0 0
4 0 0 0 1 1 1 1 1
5 0 0 1 0 1 1 1 1
6 0 1 0 0 1 1 1 1
7 1 0 0 0 1 1 1 1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 767.74 6.80 6.70 6.76 6.36 6.48 6.51 6.43
1 6.74 770.33 6.72 6.78 6.40 6.48 6.53 6.44
2 6.90 6.93 770.53 6.58 6.39 6.48 6.52 6.44
3 7.02 7.10 6.49 771.49 6.42 6.48 6.52 6.44
4 6.72 6.70 6.57 6.57 770.70 6.33 6.49 6.40
5 6.78 6.86 6.69 6.73 6.30 771.35 6.47 6.39
6 6.70 6.73 6.56 6.64 6.48 6.48 771.28 6.29
7 6.80 6.82 6.71 6.75 6.58 6.58 6.43 770.33
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 769.29 48.48 48.49 24.25 6.71 6.66 6.76 24.26
1 48.48 771.21 24.25 48.49 6.72 6.68 24.25 6.68
2 48.48 24.26 770.81 24.25 6.72 48.48 6.74 6.69
3 24.25 48.48 24.25 771.18 48.49 6.66 6.74 6.69
4 6.86 6.89 6.70 48.48 770.87 24.25 48.48 24.25
5 6.84 6.85 48.49 6.75 24.25 771.44 24.25 48.49
6 6.86 24.25 6.75 6.79 48.48 24.25 771.74 48.48
7 24.25 6.90 6.75 6.79 24.25 48.48 48.48 771.11
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 769.95 8.07 10.58 10.36 10.25 10.09 10.22 9.92
1 8.32 771.01 10.50 10.38 10.22 10.12 10.18 9.99
2 10.48 10.76 770.73 7.88 10.05 9.88 10.29 9.91
3 10.38 10.68 7.89 772.27 10.01 10.18 10.14 10.17
4 10.15 10.40 10.22 10.00 772.02 7.57 10.18 9.76
5 10.03 10.29 10.08 10.20 7.64 772.21 9.99 9.95
6 10.17 10.30 10.29 10.13 10.05 9.81 770.79 7.56
7 10.06 10.09 10.03 10.38 9.81 10.07 7.57 771.87
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 770.84 96.91 96.91 48.49 10.37 10.10 10.35 48.49
1 96.86 771.08 48.49 96.92 10.52 10.29 48.49 10.05
2 96.85 48.49 769.93 48.50 10.16 96.91 10.32 10.05
3 48.49 96.91 48.49 769.68 96.90 10.18 10.11 10.40
4 10.30 10.50 10.14 96.92 771.43 48.49 96.85 48.49
5 9.97 10.16 96.91 10.18 48.49 771.37 48.49 96.91
6 10.32 48.49 10.25 10.05 96.85 48.48 771.73 96.91
7 48.50 9.99 9.92 10.30 48.48 96.91 96.86 771.20
By the way, torch 1.10 can speed up training, but communication overhead is still high. |
st175110 | Thanks for the follow-up! Were you able to check the data size you would need to communicate and map it to the expected bandwidth? |
st175111 | Dear all,
I see some interesting examples with Python (https://pytorch.org/tutorials/intermediate/dist_tuto.html).
I want to make an example with libtorch C++.
Please share with me a tutorial or similar functions in C++ to do that.
Thank you so much,
Best regards, |
st175112 | Hey @ph0123 DistributedDataParallel API is not yet available in C++. We are working on that. |
st175113 | mrshenli:
DistributedDataParallel
Thank you so much! Could you estimate when the API will be released? |
st175114 | We can aim for in 3 months, but cannot promise that. If we see more people requesting this, we can try to allocate more resources to it and bump up the priority. |
st175115 | I want to execute a function at the worker side and return the results to the master. However, I find that the result is different when placing rpc_async in a different .py file
Method 1
master.py:
import os
import torch
import torch.distributed.rpc as rpc
from torch.distributed.rpc import RRef
from test import sub_fun
os.environ['MASTER_ADDR'] = '10.5.26.19'
os.environ['MASTER_PORT'] = '5677'
rpc.init_rpc("master", rank=0, world_size=2)
rref = torch.Tensor([0])
sub_fun(rref)
rpc.shutdown()
test.py
def f(rref):
    print("function is executed on master")

def sub_fun(rref):
    x = rpc.rpc_async("worker", f, args=(rref,))
worker.py:
import os
import torch
import torch.distributed.rpc as rpc
from torch.distributed.rpc import RRef
os.environ['MASTER_ADDR'] = '10.5.26.19'
os.environ['MASTER_PORT'] = '5677'
def f(rref):
    print("function is executed on worker")
rpc.init_rpc("worker", rank=1, world_size=2)
rpc.shutdown()
I found that the output is “function is executed on master” at the worker side.
Method 2
When I put the two functions sub_fun and f in master.py rather than test.py, the result is “function is executed on worker”.
Why do the two ways output different results, and how can I get result 2 with method 1?
My torch version is ‘1.5.0+cu92’ |
st175116 | The outputs differ based on where you put sub_fun due to the order in which python will load the pickled function. RPC will pickle based on function name not based on function definition. If you want to keep both function implementations they should be renamed to something like f_worker and f_master
For your case, to get the results you want, you can format your files like this:
master.py
import os
import torch
import torch.distributed.rpc as rpc
from torch.distributed.rpc import RRef
from test import sub_fun
os.environ['MASTER_ADDR'] = '10.5.26.19'
os.environ['MASTER_PORT'] = '5677'
if __name__ == "__main__":
    rpc.init_rpc("master", rank=0, world_size=2)
    rref = torch.Tensor([0])
    sub_fun(rref)
    rpc.shutdown()
test.py
import torch.distributed.rpc as rpc
from worker import f
def sub_fun(rref):
    x = rpc.rpc_async("worker", f, args=(rref,))
worker.py
import os
import torch
import torch.distributed.rpc as rpc
from torch.distributed.rpc import RRef
os.environ['MASTER_ADDR'] = '10.5.26.19'
os.environ['MASTER_PORT'] = '5677'
def f(rref):
    print("function is executed on worker")

if __name__ == "__main__":
    rpc.init_rpc("worker", rank=1, world_size=2)
    rpc.shutdown() |
st175117 | Really thanks for your reply!
However, I found it deadlocks if I import the function “f” from worker.py. Did your code work fine? |
st175118 | I have access to 18 nodes, each with a different number of GPUs (all with at least 1). To my understanding, you have to declare all the nodes to have the same number of GPUs.
First of all, am I correct?
And second, if I am, is there any way around this? |
st175119 | Hi, I’m now training a point cloud analysis model with distributed data parallel (ddp). I follow all the rules to create the ddp training. It is normal at the beginning, but the losses on different gpus do not occur consistently when it comes to the end of the 1st epoch, for example:
come to epoch: 0, step: 429, loss: 0.046092418071222385
come to epoch: 0, step: 429, loss: 0.046092418071222385
come to epoch: 0, step: 429, loss: 0.046092418071222385
come to epoch: 0, step: 429, loss: 0.046092418071222385
come to epoch: 0, step: 430, loss: 0.04677124587302715
come to epoch: 0, step: 430, loss: 0.04677124587302715
come to epoch: 0, step: 430, loss: 0.04677124587302715
come to epoch: 1, step: 0, loss: 0.04677124587302715
come to epoch: 0, step: 431, loss: 0.03822317679159113
come to epoch: 0, step: 431, loss: 0.03822317679159113
come to epoch: 1, step: 1, loss: 0.03822317679159113
come to epoch: 0, step: 431, loss: 0.03822317679159113
come to epoch: 0, step: 432, loss: 0.0431362825095357
come to epoch: 0, step: 432, loss: 0.0431362825095357
come to epoch: 0, step: 432, loss: 0.0431362825095357
come to epoch: 1, step: 2, loss: 0.0431362825095357
come to epoch: 0, step: 433, loss: 0.04170320917830233
come to epoch: 1, step: 3, loss: 0.04170320917830233
come to epoch: 0, step: 433, loss: 0.04170320917830233
come to epoch: 0, step: 433, loss: 0.04170320917830233
come to epoch: 0, step: 434, loss: 0.042295407038902666
come to epoch: 0, step: 434, loss: 0.042295407038902666
come to epoch: 1, step: 4, loss: 0.042295407038902666
come to epoch: 0, step: 434, loss: 0.042295407038902666
come to epoch: 0, step: 435, loss: 0.040262431528578634
come to epoch: 1, step: 5, loss: 0.040262431528578634
come to epoch: 0, step: 435, loss: 0.040262431528578634
come to epoch: 0, step: 435, loss: 0.040262431528578634
come to epoch: 0, step: 436, loss: 0.04188207677967013
come to epoch: 0, step: 436, loss: 0.04188207677967013
come to epoch: 0, step: 436, loss: 0.04188207677967013
come to epoch: 1, step: 6, loss: 0.04188207677967013
Can anyone help?
Thanks. |
st175120 | Hi,
I was running ResNet-50 on a DGX with 4 GPUs using the PyTorch ImageNet example, and I set bucket_cap_mb=90 MB, so I guess the backpropagation and All-Reduce will not be overlapped.
In my opinion, All-Reduce and Broadcast should launch at similar times on all GPUs, but in the visible profile data All-Reduce and Broadcast are unsynced.
Also, why do two GPUs broadcast for a long time? I want to know why the GPUs call Broadcast.
[image: 1066×767, 165 KB]
Thanks. |
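For context, a hedged sketch of where those knobs sit on the DDP constructor (the linear model and rank are placeholders, and an initialized process group is assumed):
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

rank = 0                                           # placeholder; normally the local rank of this process
model = torch.nn.Linear(10, 10).cuda(rank)         # placeholder for the real ResNet-50
ddp_model = DDP(
    model,
    device_ids=[rank],
    bucket_cap_mb=90,        # larger buckets mean fewer, larger all-reduces and less overlap with backward
    broadcast_buffers=True,  # default; the per-forward buffer broadcast is one source of Broadcast calls,
                             # in addition to the one-time parameter broadcast at construction
)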
st175121 | In my opinion, All-Reduce and Broadcast should launch at similar times on all GPUs, but in the visible profile data All-Reduce and Broadcast are unsynced.
The attached profile is not very clear, so I’m not sure what sort of synchronization issue you’re referring to wrt broadcast and all-reduce.
Also, why do two GPUs broadcast for a long time? I want to know why the GPUs call Broadcast.
There are a few scenarios where DDP performs a broadcast. First during initialization a broadcast is done to ensure all parameters on all nodes are the same before we start training. Secondly, if you’re using single process multi device mode (one process driving many GPUs) a broadcast is done to synchronize the parameters across multiple GPU devices. |
st175122 | @pritamdamania87 Thanks for the reply.
Can I ask about another problem…?
[image: 882×756, 140 KB]
Now I guess the very long allreduce and Broadcast are caused by a late Memcpy H2D. As you can see in the above figure, GPUs 1, 3, and 4 call the H2D (=green bar) at a similar time, but GPU 2 calls the H2D late. This unsynced behavior makes multi-GPU training inefficient.
So all GPUs sync on GPU 2; can I solve this problem…? For example, by allocating the memcpyH2D to a new stream, or is there any other nice idea…?
Thanks. |
st175123 | Hmm, do you know why GPU2 calls the H2D late? It seems like the GPU is idle for a while and calls H2D later. Is it possible the CPU thread was busy doing something else so it didn’t schedule the H2D call earlier? If you look at the CPU trace, it might give some idea about why the H2D copy was late for GPU2. |
st175124 | @pritamdamania87 Thanks.
I checked the CPU trace. It has 20 cores and 40 threads, and many of them were idle. It is strange that this phenomenon occurred every batch. |
st175125 | Does the profile have the ability to track when the copy kernel was actually launched on the CPU side? Can you check if the copy kernel itself was launched late or for some reason it was launched earlier but somehow execution got delayed on the GPU itself? |
st175126 | @pritamdamania87 Thanks.
So you suggested that I find the cause of the late H2D.
I found the solution. When I set the number of workers to 5, it operated as I expected. My system has 1 node with 4 GPUs, so I had given workers=4, which affects the performance of the other GPUs. I can’t find out why DGX allocates more workers than other systems, but I checked the profiler and it seems correct.
My concern is whether allocating more workers causes any other problem? |
st175127 | @sangheonlee Which API are you passing number of workers to? Is this a PyTorch API or some other API? If you’re running 5 workers on a 4 GPU system, what is the additional worker doing? I’m assuming the 4 workers are driving the 4 GPUs. |
st175128 | @pritamdamania87 I don’t know what the additional worker is doing exactly.
Previously, I thought 4 workers would be enough for a 4-GPU system, but they could not MemcpyH2D simultaneously, so I just added 1 more worker. The profiler shows what I expected…
The workers are passed to the data loader, I guess. If you look at the PyTorch documentation, it says a worker is a sort of subprocess of the data loader. So I think it will not be any problem for the performance comparison… |
st175129 | @sangheonlee Sorry to bother you. I wonder which tool you used to profile the nccl kernels in this post. I also want to profile the multi-gpu training process, but I met some problems. Can you share how you got the profiled timeline? Thanks a lot! |
st175130 | I have a Variational Autoencoder model with the following forward function:
def forward(self, x):
    y = self.encoder(x)
    means = self.linear_means(y)
    log_var = self.linear_log_var(y)
    z = reparameterize(means, log_var)
    recon_x = self.decoder(z)
    return VaeOutput(output=recon_x, mean=means, log_var=log_var)
VaeOutput is an object:
@dataclass
class VaeOutput:
    output: torch.Tensor
    log_var: torch.Tensor
    mean: torch.Tensor
When I am training my model on a single GPU everything runs like a charm, but whenever my model is wrapped for multi-gpu with nn.DataParallel() I get an error about VaeOutput not being serializable.
Traceback (most recent call last):
File "train.py", line 135, in <module>
main()
File "train.py", line 131, in main
model = model_runner.train(model, train_loader, learning_config, val_loader, feature_config)
File "/buzz-based-anomaly/utils/model_runner.py", line 375, in train
return self._train(model, train_dataloader, train_config, self._train_step, self._val_step,
File "/buzz-based-anomaly/utils/model_runner.py", line 455, in _train
train_epoch_loss = train_step_func(model, train_dataloader, optimizer, experiment, epoch, log_interval)
File "/buzz-based-anomaly/utils/model_runner.py", line 671, in _train_step
model_output = model(batch)
File "/root/miniconda3/envs/pytorch-cuda-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/buzz-based-anomaly/utils/sm_data_parallel.py", line 10, in forward
return self.model(*input_data)
File "/root/miniconda3/envs/pytorch-cuda-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/envs/pytorch-cuda-env/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 169, in forward
return self.gather(outputs, self.output_device)
File "/root/miniconda3/envs/pytorch-cuda-env/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 181, in gather
return gather(outputs, output_device, dim=self.dim)
File "/root/miniconda3/envs/pytorch-cuda-env/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 78, in gather
res = gather_map(outputs)
File "/root/miniconda3/envs/pytorch-cuda-env/lib/python3.8/site-packages/torch/nn/parallel/scatter_gather.py", line 73, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
TypeError: 'VaeOutput' object is not iterable
How should I approach that kind of problem? Is it possible to use custom model output with multigpu support? |
st175131 | Solved by wanchaol in post #4
actually DDP might also have this issue as internally it’s using Tuple[Tensor] to detect bucket and do optimizations, etc. Looking at your case, Could you try turning VaeOutput into Named Tuple, this way it still preserves its name and will be fully compatible with DDP |
st175132 | Would it be possible to wrap the model in another Class that contains a DataParallel instantiation of the model which then returns the object? |
st175133 | Thanks for posting @tymons, I noticed that you are using nn.DataParallel to do mutigpu training, since we don’t recommend using nn.DataParallel right now, could you try using distributed data parallel (nn.DistributedDataParallel) and see if the issue still exist? You can see the tutorial here Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.10.0+cu102 documentation |
st175134 | actually DDP might also have this issue as internally it’s using Tuple[Tensor] to detect bucket and do optimizations, etc. Looking at your case, Could you try turning VaeOutput into Named Tuple, this way it still preserves its name and will be fully compatible with DDP |
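For illustration, one way to turn the dataclass into a NamedTuple as suggested (field names kept from the original VaeOutput):
from typing import NamedTuple
import torch

class VaeOutput(NamedTuple):
    output: torch.Tensor
    log_var: torch.Tensor
    mean: torch.Tensor

# the forward pass can still construct it by keyword:
# return VaeOutput(output=recon_x, mean=means, log_var=log_var)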
st175135 | Hello,
I used to launch multi-node multi-gpu code using torch.distributed.launch on two cloud servers, using a different .sh script on each machine:
#machine 1 script
export NUM_NODES=2
export NUM_GPUS_PER_NODE=4
export HOST_NODE_ADDR=10.70.202.133
python -m torch.distributed.launch \
--nproc_per_node=$NUM_GPUS_PER_NODE \
--nnodes=$NUM_NODES \
--node_rank=0 \
--master_addr=$HOST_NODE_ADDR \
--master_port=1234 \
train.py --debug
#machine 2 script
export NUM_NODES=2
export NUM_GPUS_PER_NODE=4
export HOST_NODE_ADDR=10.70.202.133
python -m torch.distributed.launch \
--nproc_per_node=$NUM_GPUS_PER_NODE \
--nnodes=$NUM_NODES \
--node_rank=1 \
--master_addr=$HOST_NODE_ADDR \
--master_port=1234 \
train.py --debug
So when I started to work with PyTorch 1.9, it says that torch.distributed.launch is deprecated and I have to migrate to torch.distributed.run. Unfortunately, there is not enough information in the documentation about how this module replaces torch.distributed.launch. When I try to work with the new method and use the two new scripts on each machine:
#machine 1
export NUM_NODES=2
export NUM_GPUS_PER_NODE=4
export HOST_NODE_ADDR=10.70.202.133:1234
export JOB_ID=22641
python -m torch.distributed.run \
--nnodes=$NUM_NODES \
--nproc_per_node=$NUM_GPUS_PER_NODE \
--node_rank=0 \
--rdzv_id=$JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint=$HOST_NODE_ADDR \
train.py --debug
#machine 2
export NUM_NODES=2
export NUM_GPUS_PER_NODE=4
export HOST_NODE_ADDR=10.70.202.133:1234
export JOB_ID=22641
python -m torch.distributed.run \
--nnodes=$NUM_NODES \
--nproc_per_node=$NUM_GPUS_PER_NODE \
--node_rank=1 \
--rdzv_id=$JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint=$HOST_NODE_ADDR \
train.py --debug
I get this error:
[ERROR] 2021-07-09 19:37:35,417 error_handler: {
"message": {
"message": "RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.",
"extraInfo": {
Here’s how I setup my training script:
if "RANK" in os.environ and "WORLD_SIZE" in os.environ:
config["RANK"] = int(os.environ["RANK"])
config["WORLD_SIZE"] = int(os.environ["WORLD_SIZE"])
config["GPU"] = int(os.environ["LOCAL_RANK"])
else:
config["DISTRIBUTED"] = False
config["GPU"] = 1
config["WORLD_SIZE"] = 1
return
config["DISTRIBUTED"] = True
torch.cuda.set_device(config["GPU"])
config["DIST_BACKEND"] = "nccl"
torch.distributed.init_process_group(
backend=config["DIST_BACKEND"],
init_method=config["DIST_URL"],
world_size=config["WORLD_SIZE"],
rank=config["RANK"],
)
and I also parse the following argument:
parser.add_argument("--local_rank", type=int)
Am I missing something? |
st175136 | Thanks for raising this issue! Since this seems like it could be a possible bug, or at the very least, a migration issue, can you file an issue (essentially this post) over at Issues · pytorch/pytorch · GitHub 1 so that we can take a deeper look?
cc @cbalioglu @H-Huang @Kiuk_Chung @aivanou |
st175137 | Also, IIUC, torch.distributed.run should be fully backward-compatible with torch.distributed.launch. Have you tried simply dropping in torch.distributed.run with the same launch arguments, and if so what sort of issues did you hit there? |
st175138 | The docs for torch.distributed.launch|run needs some improvements to match the warning message. This issue is being tracked here: dist docs need an urgent serious update · Issue #60754 · pytorch/pytorch · GitHub 1. And most of it has been addressed in the nightly docs: torch.distributed.run (Elastic Launch) — PyTorch master documentation.
For the time being here are the things to note:
torch.distributed.run does not support parsing --local_rank as cmd arguments. If your script does this, then change it to getting local rank from int(os.environ["LOCAL_RANK"]). If you can’t change the script, then stick to torch.distributed.launch for now.
As @rvarm1 mentioned, torch.distributed.run's arguments are mostly backwards compatible with torch.distributed.launch (the exception is --use_env which is now set as True by default since we are planning to deprecate reading local_rank from cmd args in favor of env). |
st175139 | I tried torch.distributed.run with the same legacy arguments, and it works. Seems like it has a problem with the new rndvz arguments (or maybe I am not setting them up correctly). |
st175140 | Hi,
Did you understand whether one has to initialize the c10d backend manually (and in that case how)?
From the latest documentation I see that the ‘torchrun’ should be used. However, when using it on a setup of two nodes, I get rendezvous connection error on the worker node (the master node remains in a waiting state). |
st175141 | Hello! This usually means that nodes cannot reach each other due to hostname or ports being unavailable. What configuration are you using to launch jobs? Are you launching it on ec2? |
st175142 | import os
os.environ["CUDA_LAUNCH_BLOCKING"]= "1"
import torch
from torch import nn
from torch.nn import DataParallel
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()

    def forward(self, feed, index, out, mask=None):
        feed = torch.index_select(feed, 0, index.flatten()).view(*index.size(), -1)
        o = out.repeat(1, index.size(1)).view(*index.size(), -1)
        feed = torch.cat([feed, o], dim=-1)
        return feed
feed = torch.tensor([[0., 1., 2.], [2., 3., 4.], [4., 5., 6.]])
index = torch.tensor([[0, 1], [1, 0], [1, 2], [2, 1]])
out = torch.tensor([[0., 1.], [1., 2.], [3., 4.], [5., 6.]])
feed = feed.cuda()
index = index.cuda()
out = out.cuda()
test = DataParallel(Test().cuda())
test(feed, index, out)
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_12948/3522224863.py in <module>
----> 1 test(feed, index, out)
~/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
166 return self.module(*inputs[0], **kwargs[0])
167 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
--> 168 outputs = self.parallel_apply(replicas, inputs, kwargs)
169 return self.gather(outputs, self.output_device)
170
~/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs)
176
177 def parallel_apply(self, replicas, inputs, kwargs):
--> 178 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
179
180 def gather(self, outputs, output_device):
~/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices)
84 output = results[i]
85 if isinstance(output, ExceptionWrapper):
---> 86 output.reraise()
87 outputs.append(output)
88 return outputs
~/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
423 # have message field
424 raise self.exc_type(message=msg)
--> 425 raise self.exc_type(msg)
426
427
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File "~/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "~/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/tmp/ipykernel_12948/397267949.py", line 11, in forward
feed = torch.index_select(feed, 0, index.flatten()).view(*index.size(), -1)
RuntimeError: CUDA error: device-side assert triggered
The above code works on a CPU and on a single GPU, but the error comes up on multiple GPUs with DataParallel |
st175143 | Seems like it has to do with the tensor operations in the forward pass because if you comment those out then the script will run. I’m not sure what the CUDA error is referring to.
Have you tried using DistributedDataParallel 1? It is preferred over DataParallel for multi-GPU training. |
st175144 | @H-Huang Thank you for the comment. Not yet with DistributedDataParallel, because DataParallel seems easy to use. It’s better to flip my script to DistributedDataParallel. |
st175145 | Are regular BatchNorm buffers (running_mean, running_var I guess) also broadcast and synchronized during forward pass when using DistributedDataParallel?
I thought that only SyncBatchNorm does this.
Also, will this broadcast/sync also happen in eval mode? (for SyncBatchNorm)
How is it controlled which buffers are synchronized and which are not? |
st175146 | Hi @vadimkantorov, I think only SyncBatchNorm will broadcast and sync when using DDP, and it syncs mean/invstd/count across GPUs instead of syncing buffers; the buffers are maintained and updated locally. You can refer to the detailed implementation in pytorch/_functions.py at 251686fc4cb1962944ed99c938df2d54f3d62e46 · pytorch/pytorch · GitHub
For SyncBatchNorm, broadcast/sync will not happen in eval mode, only in train mode. |
st175147 | I’m asking because we had this problem recently when a model without SyncBatchNorm was getting deadlocked during broadcasting of buffers.
The reason for the deadlock was different: Checkpointing may cause the NCCL error · Issue #1166 · speechbrain/speechbrain · GitHub 1, but it was still strange that it went into the code path of broadcasting the buffers. In debugging, the buffers were the stats of regular BatchNorm1d. |
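For context, the knob on the DDP side is broadcast_buffers (True by default), which broadcasts module buffers such as BatchNorm running stats from rank 0 at the start of each forward pass; a hedged sketch of turning it off (the model and rank are placeholders, and an initialized process group is assumed):
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

rank = 0                                            # placeholder local rank
model = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.BatchNorm1d(10)).cuda(rank)
ddp_model = DDP(
    model,
    device_ids=[rank],
    broadcast_buffers=False,   # skip the per-forward broadcast of running_mean / running_var
)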
st175148 | Hi. I’m using the following 1D ResNet. I can train it in non-distributed mode without any error, but when switching to distributed data parallel mode I get a “gradient computation has been modified by an in-place operation” error, which usually occurs for in-place operations.
class MyConv1dPadSame(nn.Module):
    """
    extend nn.Conv1d to support SAME padding
    """
    def __init__(self, in_channels, out_channels, kernel_size, stride, groups=1):
        super(MyConv1dPadSame, self).__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.stride = stride
        self.groups = groups
        self.conv = torch.nn.Conv1d(
            in_channels=self.in_channels,
            out_channels=self.out_channels,
            kernel_size=self.kernel_size,
            stride=self.stride,
            groups=self.groups)

    def forward(self, x):
        net = x
        # compute pad shape
        in_dim = net.shape[-1]
        out_dim = (in_dim + self.stride - 1) // self.stride
        p = max(0, (out_dim - 1) * self.stride + self.kernel_size - in_dim)
        pad_left = p // 2
        pad_right = p - pad_left
        net = F.pad(net, (pad_left, pad_right), "constant", 0)
        net = self.conv(net)
        return net
class MyMaxPool1dPadSame(nn.Module):
    """
    extend nn.MaxPool1d to support SAME padding
    """
    def __init__(self, kernel_size):
        super(MyMaxPool1dPadSame, self).__init__()
        self.kernel_size = kernel_size
        self.stride = 1
        self.max_pool = torch.nn.MaxPool1d(kernel_size=self.kernel_size)

    def forward(self, x):
        net = x
        # compute pad shape
        in_dim = net.shape[-1]
        out_dim = (in_dim + self.stride - 1) // self.stride
        p = max(0, (out_dim - 1) * self.stride + self.kernel_size - in_dim)
        pad_left = p // 2
        pad_right = p - pad_left
        net = F.pad(net, (pad_left, pad_right), "constant", 0)
        net = self.max_pool(net)
        return net
class BasicBlock(nn.Module):
    """
    ResNet Basic Block
    """
    def __init__(self, in_channels, out_channels, kernel_size, stride, groups, downsample, use_bn, use_do, is_first_block=False):
        super(BasicBlock, self).__init__()
        self.in_channels = in_channels
        self.kernel_size = kernel_size
        self.out_channels = out_channels
        self.stride = stride
        self.groups = groups
        self.downsample = downsample
        if self.downsample:
            self.stride = stride
        else:
            self.stride = 1
        self.is_first_block = is_first_block
        self.use_bn = use_bn
        self.use_do = use_do

        # the first conv
        self.bn1 = nn.BatchNorm1d(in_channels)
        self.relu1 = nn.ReLU(inplace=True)
        self.do1 = nn.Dropout(p=0.5)
        self.conv1 = MyConv1dPadSame(
            in_channels=in_channels,
            out_channels=out_channels,
            kernel_size=kernel_size,
            stride=self.stride,
            groups=self.groups)

        # the second conv
        self.bn2 = nn.BatchNorm1d(out_channels)
        self.relu2 = nn.ReLU(inplace=True)
        self.do2 = nn.Dropout(p=0.5)
        self.conv2 = MyConv1dPadSame(
            in_channels=out_channels,
            out_channels=out_channels,
            kernel_size=kernel_size,
            stride=1,
            groups=self.groups)

        self.max_pool = MyMaxPool1dPadSame(kernel_size=self.stride)

    def forward(self, x):
        identity = x

        # the first conv
        out = x
        if not self.is_first_block:
            if self.use_bn:
                out = self.bn1(out)
            out = self.relu1(out)
            if self.use_do:
                out = self.do1(out)
        out = self.conv1(out)

        # the second conv
        if self.use_bn:
            out = self.bn2(out)
        out = self.relu2(out)
        if self.use_do:
            out = self.do2(out)
        out = self.conv2(out)

        # if downsample, also downsample identity
        if self.downsample:
            identity = self.max_pool(identity)

        # if expand channel, also pad zeros to identity
        if self.out_channels != self.in_channels:
            identity = torch.transpose(identity, -1, -2)
            ch1 = (self.out_channels - self.in_channels) // 2
            ch2 = self.out_channels - self.in_channels - ch1
            identity = F.pad(identity, (ch1, ch2), "constant", 0)
            identity = torch.transpose(identity, -1, -2)

        # shortcut
        out = out + identity

        return out
class ResNet1D(nn.Module):
"""
Input:
X: (n_samples, n_channel, n_length)
Y: (n_samples)
Output:
out: (n_samples)
Pararmetes:
in_channels: dim of input, the same as n_channel
base_filters: number of filters in the first several Conv layer, it will double at every 4 layers
kernel_size: width of kernel
stride: stride of kernel moving
groups: set larget to 1 as ResNeXt
n_block: number of blocks
n_classes: number of classes
"""
def __init__(self, in_channels, base_filters, kernel_size, stride, groups, n_block, n_classes, downsample_gap=2, increasefilter_gap=4, use_bn=True, use_do=True, verbose=False):
super(ResNet1D, self).__init__()
self.verbose = verbose
self.n_block = n_block
self.kernel_size = kernel_size
self.stride = stride
self.groups = groups
self.use_bn = use_bn
self.use_do = use_do
self.downsample_gap = downsample_gap # 2 for base model
self.increasefilter_gap = increasefilter_gap # 4 for base model
# first block
self.first_block_conv = MyConv1dPadSame(in_channels=in_channels, out_channels=base_filters, kernel_size=self.kernel_size, stride=1)
self.first_block_bn = nn.BatchNorm1d(base_filters)
self.first_block_relu = nn.ReLU()
out_channels = base_filters
# residual blocks
self.basicblock_list = nn.ModuleList()
for i_block in range(self.n_block):
# is_first_block
if i_block == 0:
is_first_block = True
else:
is_first_block = False
# downsample at every self.downsample_gap blocks
if i_block % self.downsample_gap == 1:
downsample = True
else:
downsample = False
# in_channels and out_channels
if is_first_block:
in_channels = base_filters
out_channels = in_channels
else:
# increase filters at every self.increasefilter_gap blocks
in_channels = int(base_filters*2**((i_block-1)//self.increasefilter_gap))
if (i_block % self.increasefilter_gap == 0) and (i_block != 0):
out_channels = in_channels * 2
else:
out_channels = in_channels
tmp_block = BasicBlock(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=self.kernel_size,
stride = self.stride,
groups = self.groups,
downsample=downsample,
use_bn = self.use_bn,
use_do = self.use_do,
is_first_block=is_first_block)
self.basicblock_list.append(tmp_block)
# final prediction
self.final_bn = nn.BatchNorm1d(out_channels)
self.final_relu = nn.ReLU(inplace=True)
# self.do = nn.Dropout(p=0.5)
self.dense = nn.Linear(out_channels, n_classes)
# self.softmax = nn.Softmax(dim=1)
def forward(self, x):
out = x
# first conv
if self.verbose:
print('input shape', out.shape)
out = self.first_block_conv(out)
if self.verbose:
print('after first conv', out.shape)
if self.use_bn:
out = self.first_block_bn(out)
out = self.first_block_relu(out)
# residual blocks, every block has two conv
for i_block in range(self.n_block):
net = self.basicblock_list[i_block]
if self.verbose:
print('i_block: {0}, in_channels: {1}, out_channels: {2}, downsample: {3}'.format(i_block, net.in_channels, net.out_channels, net.downsample))
out = net(out)
if self.verbose:
print(out.shape)
# final prediction
if self.use_bn:
out = self.final_bn(out)
out = self.final_relu(out)
out = torch.mean(out, -1)
if self.verbose:
print('final pooling', out.shape)
# out = self.do(out)
out = self.dense(out)
if self.verbose:
print('dense', out.shape)
# out = self.softmax(out)
if self.verbose:
print('softmax', out.shape)
return out
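For concreteness, a minimal usage sketch (the constructor values mirror the training script shared later in this thread; the shapes follow the docstring above):
model = ResNet1D(in_channels=768, base_filters=128, kernel_size=16, stride=2,
                 groups=32, n_block=48, n_classes=128)
x = torch.randn(4, 768, 256)   # (n_samples, n_channel, n_length)
out = model(x)                 # logits of shape (n_samples, n_classes) == (4, 128)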
Here is the error traceback:
[W python_anomaly_mode.cpp:104] Warning: Error detected in CudnnBatchNormBackward. Traceback of forward call that caused the error:
File "<string>", line 1, in <module>
File "/usr/lib64/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/usr/lib64/python3.6/multiprocessing/spawn.py", line 118, in _main
return self._bootstrap()
File "/usr/lib64/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib64/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/alto/nima/textAnomaly/train_encoder_dd.py", line 168, in train
h1 = h_net(x1_rep)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/textAnomaly/resent1D.py", line 275, in forward
out = self.final_bn(out)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 136, in forward
self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/functional.py", line 2058, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
(function _print_stack)
0%| | 0/235 [00:03<?, ?it/s]
[W python_anomaly_mode.cpp:104] Warning: Error detected in CudnnBatchNormBackward. Traceback of forward call that caused the error:
File "<string>", line 1, in <module>
File "/usr/lib64/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/usr/lib64/python3.6/multiprocessing/spawn.py", line 118, in _main
return self._bootstrap()
File "/usr/lib64/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib64/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/alto/nima/textAnomaly/train_encoder_dd.py", line 168, in train
h1 = h_net(x1_rep)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/textAnomaly/resent1D.py", line 275, in forward
out = self.final_bn(out)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 136, in forward
self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/functional.py", line 2058, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
(function _print_stack)
[W python_anomaly_mode.cpp:104] Warning: Error detected in CudnnBatchNormBackward. Traceback of forward call that caused the error:
File "<string>", line 1, in <module>
File "/usr/lib64/python3.6/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/usr/lib64/python3.6/multiprocessing/spawn.py", line 118, in _main
return self._bootstrap()
File "/usr/lib64/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib64/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/alto/nima/textAnomaly/train_encoder_dd.py", line 168, in train
h1 = h_net(x1_rep)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/textAnomaly/resent1D.py", line 275, in forward
out = self.final_bn(out)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 136, in forward
self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/nn/functional.py", line 2058, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
(function _print_stack)
0%| | 0/235 [00:04<?, ?it/s]
0%| | 0/235 [00:03<?, ?it/s]
Traceback (most recent call last):
File "train_encoder_dd.py", line 210, in <module>
mp.spawn(train, nprocs=args.num_gpus, args=(args,))
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 2 terminated with the following error:
Traceback (most recent call last):
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/alto/nima/textAnomaly/train_encoder_dd.py", line 174, in train
loss.backward()
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/alto/nima/torch-env/lib/python3.6/site-packages/torch/autograd/__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1024]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! |
st175149 | Solved by nima_rafiee in post #10
@shaoming20798
Yes. my problem was the normal batch norm is not working with DDP so I replaced it with syncedbatchnorm
the way I used the synced batch norm
def train(rank, conf):
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '8888'
dist.init_process_group(backend… |
st175150 | Hey @nima_rafiee, which PyTorch release are you using? Can you try v1.7+ if you are not using it? This bug is likely fixed by: [v1.7] Quick fix for view/inplace issue with DDP by albanD · Pull Request #46407 · pytorch/pytorch · GitHub 39 |
st175151 | Hey @nima_rafiee, could you please share a self-contained repro, including how DDP was called?
cc @albanD, have you seen similar errors recently? Since the same model works in local training, it looks like the only difference is the scatter operator in DDP’s forward function. But I recall that was fixed by #46407 in v1.7?
st175152 | nima_rafiee:
ResNet1D
@mrshenli here is the code
import wget
import os
os.environ["CUDA_VISIBLE_DEVICES"]= "1,2,3,4,5,6"
import argparse
from tqdm import tqdm
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
import torch.distributed as dist
import torch.multiprocessing as mp
from apex import amp
from resent1D import ResNet1D
from torch.utils.data.distributed import DistributedSampler
from torch.nn.parallel import DistributedDataParallel
import numpy as np
vis = True
try:
from torch.utils.tensorboard import SummaryWriter
except:
vis = False
wt = 0.001
def train(rank, args):
print("Passed GPU:" ,rank)
dist.init_process_group(backend='nccl',init_method='env://',world_size=args.num_gpus ,rank=rank)
torch.cuda.set_device(rank)
exp_num = f'h_{args.h_dim}_bs{args.bs}_hlr{args.h_lr}'
if vis == True :
args.writer = SummaryWriter(f'runs/{exp_num}')
h_net = ResNet1D( in_channels=768,
base_filters=128, # 64 for ResNet1D, 352 for ResNeXt1D
kernel_size=16,
stride=2,
groups=32,
n_block=48,
n_classes=args.h_dim,
downsample_gap=6,
increasefilter_gap=12,
use_do=True)
h_net_opt = optim.Adam(h_net.parameters(), lr=args.h_lr, weight_decay=args.weight_decay)
h_net.cuda(rank)
model_lst , h_net_opt = amp.initialize([h_net,], h_net_opt, opt_level="O1")
h_net = model_lst[0]
h_net = DistributedDataParallel(h_net, device_ids=[rank], find_unused_parameters=True)
ds = TextDataset(dataset_name='AG_NEWS', out_cls=[])
sampler = DistributedSampler(ds, num_replicas=args.num_gpus)
loader = DataLoader(ds, batch_size=args.bs, sampler=sampler, shuffle=False)
loader = tqdm(loader)
for epoch in range(args.epoch):
sampler.set_epoch(epoch)
for i , (x1 , x2 , x_rnd, label) in enumerate(loader):
h_net_opt.zero_grad()
h1 = h_net(x1)
h2 = h_net(x2)
loss = loss(h1, h2, temprature = args.temprature )
loss.backward()
h_net_opt.step()
loader.set_description(
(
f' Epoch: {epoch + 1}; iter: {i} Loss: {loss.item()} '
)
)
if vis == True :
with torch.no_grad():
args.writer.add_scalar("Loss", loss.item(), global_step=epoch, walltime=wt)
if rank == 0:
#model_to_save = h_net.module if hasattr(h_net, 'module') else h_net # Take care of distributed/parallel training
torch.save(h_net.state_dict(), 'checkpoint/dd/h_net.checkpoint')
args.writer.close()
dist.destroy_process_group()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--bs', type=int, default=32)
parser.add_argument('--epoch', type=int, default=50)
parser.add_argument('--h_lr', type=float, default=3e-4)
parser.add_argument('--h_dim', type=int, default=128)
parser.add_argument('--weight_decay', type=float, default=1e-6)
parser.add_argument('--temprature', type=float, default=0.5)
args = parser.parse_args()
device_ids = list(range(torch.cuda.device_count()))
gpus = len(device_ids)
args.num_gpus = gpus
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '8888'
mp.spawn(train, nprocs=args.num_gpus, args=(args,)) |
st175153 | @mrshenli I don’t think this is related to the view/inplace fix that is mentioned there.
There are no views involved in this example, and the error is a "classic" in-place change of a Tensor that is needed for gradients. Since the failing Node is batchnorm, I guess that would either be the input to the Module or the weights of the affine transformation.
Is DDP modifying weights inplace by any chance? Or the optimizer is not used properly? |
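To make the failure mode concrete, here is a minimal, self-contained illustration (not the poster's model) that raises the same RuntimeError:
import torch

w = torch.randn(3, requires_grad=True)
y = w.exp()     # autograd saves exp()'s output to compute its backward
z = y.sum()
y.mul_(2)       # in-place edit of a tensor that was saved for backward
z.backward()    # RuntimeError: one of the variables needed for gradient
                # computation has been modified by an inplace operation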
st175154 | Thanks for your reply. To my understanding, I could not find any of the typical problematic in-place operations in the network structure (it works in non-distributed mode), and I don't know whether there are new issues with in-place operations in DDP mode. For the optimizer, I'm using the standard approach from the tutorials.
st175155 | @mrshenli @albanD I found this: SyncBatchNorm — PyTorch 1.7.0 documentation. It seems normal BN cannot be used with DDP, but I don't know how to use it in my code; specifically, I cannot understand this line:
process_group = torch.distributed.new_group(process_ids)
I have already initialised a process group using:
dist.init_process_group(backend='nccl',init_method='env://',world_size=args.num_gpus ,rank=rank)
Why should I make a new_group(), and how can I get process_ids?
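For reference, a minimal sketch (my own example, not taken from the linked docs) of how convert_sync_batchnorm is usually called; the extra group is only needed to restrict synchronization to a subset of ranks:
import torch.nn as nn
import torch.distributed as dist

# assumes dist.init_process_group(...) has already run on every rank
model = nn.Sequential(nn.Conv1d(16, 32, 3), nn.BatchNorm1d(32), nn.ReLU())

# default: process_group=None, i.e. BN statistics are synced across the whole world
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

# only if BN stats should be synced within subsets of ranks (e.g. per 8-GPU node);
# note that every rank must execute every new_group() call:
# node_groups = [dist.new_group(list(range(i, i + 8)))
#                for i in range(0, dist.get_world_size(), 8)]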
st175156 | Hi @nima_rafiee, have you solved your problem? I have just run into a similar issue in this topic: Using ‘DistributedDataParallel’ occurs a weird error - distributed - PyTorch Forums
st175157 | @shaoming20798
Yes. My problem was that plain BatchNorm was not working with DDP, so I replaced it with SyncBatchNorm.
Here is the way I used the synced batch norm:
def train(rank, conf):
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '8888'
dist.init_process_group(backend='nccl',init_method='env://',world_size=world_size ,rank=rank)
torch.cuda.set_device(rank)
process_group = torch.distributed.new_group()
model = Resnet18()
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model, process_group)
model.cuda(rank)
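After the conversion, the model is wrapped in DDP as before (a sketch, assuming the rest of the setup is unchanged):
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank])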
I hope this solves your problem |
st175158 | @nima_rafiee I was trying to look into why the existing code didn’t work with BatchNorm. In the code you provided, there is the following import statement:
from resent1D import ResNet1D
Where can I find the resnet1D module? |
st175159 | here is the link:
github.com
hsd1503/resnet1d/blob/master/resnet1d.py
"""
resnet for 1-d signal data, pytorch version
Shenda Hong, Oct 2019
"""
import numpy as np
from collections import Counter
from tqdm import tqdm
from matplotlib import pyplot as plt
from sklearn.metrics import classification_report
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
class MyDataset(Dataset):
def __init__(self, data, label):
(file preview truncated)
st175160 | @nima_rafiee Thanks for sharing the resnet1d dependencies, although I still can’t repro this locally since I’m now missing TextDataset:
NameError: name 'TextDataset' is not defined |
st175161 | nima_rafiee:
why should I make a new_group() and
Hi, thanks for posting your solution. It perfectly solves my problem.
But I'm still wondering why, in our situations, DDP doesn't work with normal BN.
Both BN and SyncBN are supported by DDP and work well in my toy model:
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU(inplace=True)
self.bn = nn.BatchNorm1d(10)
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.bn(self.relu(self.net1(x))))
Found the cause in my case: Inplace error if DistributedDataParallel module that contains a buffer is called twice · Issue #22095 · pytorch/pytorch · GitHub 24 |
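For later readers: the trigger is two forward passes through one DDP-wrapped module that owns BatchNorm buffers before a single backward, which is exactly what the training script above does with h1 = h_net(x1); h2 = h_net(x2). A hedged sketch of that pattern and the workarounds usually suggested for it:
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# assumes init_process_group() has run and `rank` is this process's GPU index
net = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8)).cuda(rank)
ddp = DDP(net, device_ids=[rank])

x1, x2 = torch.randn(4, 8).cuda(rank), torch.randn(4, 8).cuda(rank)
h1, h2 = ddp(x1), ddp(x2)            # second forward touches the BN buffers again
((h1 - h2) ** 2).mean().backward()   # -> "modified by an inplace operation"

# Workarounds usually suggested (pick one):
#  * convert BatchNorm to SyncBatchNorm before wrapping in DDP (as done in this thread)
#  * wrap with DDP(net, device_ids=[rank], broadcast_buffers=False)
#  * use a single forward: h = ddp(torch.cat([x1, x2])); h1, h2 = h.chunk(2)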
st175162 | Using SyncBatchNorm in favor of BatchNorm worked perfectly in my case. Thank you for the advice. |
st175163 | I was wondering if gradients are scaled from fp16 to fp32 before all-reducing, as in Apex distributed data parallel's default flag allreduce_always_fp32.
Also, what is the equivalent of delay_allreduce in torch.nn.parallel.DistributedDataParallel?
st175164 | Solved by ptrblck in post #2
I wouldn’t assume to see any automatic transformation of gradients if no ddp hooks etc. are used.
The native mixed-precision training util. via torch.cuda.amp uses FP32 parameters and thus also FP32 gradients.
You could try to increase the bucket_cap_mb to kick off the gradient sync later, if need… |
st175165 | I wouldn’t assume to see any automatic transformation of gradients if no ddp hooks etc. are used.
The native mixed-precision training util. via torch.cuda.amp uses FP32 parameters and thus also FP32 gradients.
You could try to increase the bucket_cap_mb to kick off the gradient sync later, if needed (unsure if there is a cleaner way). |
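A small sketch of those two knobs (my own example; model, rank, scaler and the losses are assumed to come from a standard AMP + DDP training loop):
from torch.nn.parallel import DistributedDataParallel as DDP

# a larger bucket_cap_mb (default 25 MB) makes DDP kick off each all-reduce later,
# which is roughly the closest analogue of Apex's delay_allreduce
ddp_model = DDP(model, device_ids=[rank], bucket_cap_mb=512)

# no_sync() skips the all-reduce entirely for gradient-accumulation steps
with ddp_model.no_sync():
    scaler.scale(loss_a).backward()   # gradients accumulate locally
scaler.scale(loss_b).backward()       # all-reduce happens on this backward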
st175166 | Hi,
I'm training a model called VQVAE using DDP. When I ran the code, I got the following error message:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 1: 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
return forward_call(*input, **kwargs)
I used the following code to detect unused parameters:
for n, p in model.named_parameters():
if p.grad is None:
print(f'{n} has no grad')
The above code gives the following messages:
module.postnet.convolutions.0.0.conv.weight has no grad
module.postnet.convolutions.0.0.conv.bias has no grad
module.postnet.convolutions.0.1.weight has no grad
module.postnet.convolutions.0.1.bias has no grad
module.postnet.convolutions.1.0.conv.weight has no grad
module.postnet.convolutions.1.0.conv.bias has no grad
module.postnet.convolutions.1.1.weight has no grad
module.postnet.convolutions.1.1.bias has no grad
module.postnet.convolutions.2.0.conv.weight has no grad
module.postnet.convolutions.2.0.conv.bias has no grad
module.postnet.convolutions.2.1.weight has no grad
module.postnet.convolutions.2.1.bias has no grad
module.postnet.convolutions.3.0.conv.weight has no grad
module.postnet.convolutions.3.0.conv.bias has no grad
module.postnet.convolutions.3.1.weight has no grad
module.postnet.convolutions.3.1.bias has no grad
module.postnet.convolutions.4.0.conv.weight has no grad
module.postnet.convolutions.4.0.conv.bias has no grad
module.postnet.convolutions.4.1.weight has no grad
module.postnet.convolutions.4.1.bias has no grad
I do have a postnet in my model and I’m pretty sure this module is used in the forward computation, so I don’t know how to solve this problem.
I’ve tried two methods to work around this bug:
set find_unused_parameters=True, this makes the code work, but the model cannot converge.
single GPU training, no error, but training is slow.
Is there any method to solve this problem? |
st175167 | Thanks for posting @Alethia. Looking into the issue, it appears that your model did not produce gradients for those postnet parameters after the backward call. Is this normal, or should the postnet actually produce gradients? If so, there might be some issue in your model; did you try it locally, and does the postnet produce gradients correctly?
It would also be valuable if you can provide a minimal reproducible model so that we can help debugging. |
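One way to narrow this down, sketched with stand-in names (compute_loss and batch are not from the poster's code): run a single step without the DDP wrapper and list the parameters whose .grad is still None afterwards; the TORCH_DISTRIBUTED_DEBUG=DETAIL environment variable mentioned in the error message reports the same information per rank.
model = model.cuda()                  # plain module, no DDP wrapper
loss = compute_loss(model(batch))     # stand-in for the real forward + loss
loss.backward()
missing = [n for n, p in model.named_parameters()
           if p.requires_grad and p.grad is None]
print(missing)   # should be empty if the postnet really contributes to the loss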