st176768 | I have designed a big model, its architecture looks like this:
class BigModel(nn.Module):
    def __init__(self, encoder: nn.Module, component1: nn.Module, component2: nn.Module, component3: nn.Module):
        super(BigModel, self).__init__()
        self.encoder = nn.DataParallel(encoder, device_ids=["cuda:0", "cuda:1", "cuda:2", "cuda:3"])
        self.component1 = component1
        self.component2 = component2
        self.component3 = component3

    def deploy(self):
        self.component1 = self.component1.to("cuda:4")
        self.component2 = self.component2.to("cuda:4")
        self.component3 = self.component3.to("cuda:5")
I have a single machine with 6 Tesla V100 GPUs (32 GB each).
The encoder is very big (a BERT-like model), so I want to use 4 GPUs for the encoding and the remaining GPUs for the other components of the model.
It can work, but sometimes some of the outputs of the encoder are lost.
For example, I input a batch of sequences of size (16, 256) to the encoder; data parallel should split it into 4 tensors of size (4, 256) and encode them in parallel, then gather those outputs and merge them into a tensor of size (16, 256, 1024).
But sometimes I only get an output of size (12, 256, 1024), which means one split of the data is lost. I cannot figure out the reason for this problem…
So can anyone explain this problem or suggest a way to combine model parallelism and data parallelism? |
st176769 | kstarxin:
For example, I input a batch of sequences of size (16, 256) to the encoder, data parallel should split it into 4 tensor of size (4, 256) and encode them in parallel. Then gather those output and merge into a tensor of size (16, 256, 1024) .
This is very weird. Are you using DataParallel instead of DistributedDataParallel? In your encoder's forward function, could you please add some prints to check whether parallel_apply indeed spawned four threads and each thread is getting a (4, 256) input and emitting a (4, 256, 1024) output? |
st176770 | I do use nn.DataParallel instead of nn.DistributedDataParallel.
I have checked the forward function of the encoder, and I found that when the program goes wrong, there are only 3 encode calls instead of 4. I don't know what is going on… |
st176771 | kstarxin:
when the program went wrong
You mean there is no error message at all? Could you please share a minimal repro? |
st176772 | I think I have found the cause of the problem. Here is my code; it is a runnable demo and you need to install:
torch==1.7.1+cu110
transformers==4.3.2
import transformers
import torch
from transformers import ElectraModel
from transformers import AdamW
from tqdm import tqdm

class Encoder(torch.nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.encoder = ElectraModel.from_pretrained("google/electra-large-discriminator")

    def forward(self, x1, x2):
        return [self.encoder(x1), self.encoder(x2)]

class BigModel(torch.nn.Module):
    def __init__(self):
        super(BigModel, self).__init__()
        self.encoder = torch.nn.DataParallel(Encoder(), device_ids=["cuda:0", "cuda:1", "cuda:2", "cuda:3"])
        self.l1 = torch.nn.Linear(1024, 1024)
        self.l2 = torch.nn.Linear(1024, 1024)
        self.l3 = torch.nn.Linear(1024, 2)
        self.loss = torch.nn.CrossEntropyLoss()

    def deploy(self):
        self.l1 = self.l1.to("cuda:4")
        self.l2 = self.l2.to("cuda:4")
        self.l3 = self.l3.to("cuda:5")
        self.loss = self.loss.to("cuda:5")

    def forward(self, x: torch.LongTensor, x1: torch.LongTensor, y: torch.LongTensor):
        res = self.encoder(x, x1)
        x2 = self.l1(res[0][0].to("cuda:4")).to("cuda:5")
        x3 = self.l2(res[1][0].to("cuda:4")).to("cuda:5")
        print(x2.shape)
        print(x3.shape)
        x4 = self.l3((x2.unsqueeze(1) + x3.unsqueeze(0)).mean(1))
        return self.loss(x4.reshape((-1, 2)), y.reshape((-1,))).mean()

def main():
    model = BigModel().cuda()
    model.deploy()
    optimizer = AdamW(model.parameters(), lr=3e-5)
    # assume we train a token binary classification task for 10 epochs, and the dataset has 100 batches
    for epoch in range(10):
        for d in tqdm(range(100)):
            ids = torch.randint(0, 30000, (16, 256))
            ids1 = torch.randint(0, 30000, (6, 256))  # strange size that leads to the problem
            y = torch.randint(0, 2, (16, 256))
            model.zero_grad()
            ids = ids.cuda()
            ids1 = ids1.cuda()
            y = y.to("cuda:5")
            loss = model(ids, ids1, y)
            loss.backward()
            optimizer.step()

main()
After I start this Python program, I get the following error output:
0%| | 0/100 [00:00<?, ?it/s]
torch.Size([12, 256, 1024])
torch.Size([6, 256, 1024])
0%| | 0/100 [00:09<?, ?it/s]
Traceback (most recent call last):
File "demo.py", line 58, in <module>
main()
File "demo.py", line 53, in main
loss = model(ids, ids1, y)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "demo.py", line 36, in forward
return self.loss(x4.reshape((-1, 2)), y.reshape((-1,))).mean()
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py", line 962, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2468, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2262, in nll_loss
.format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (3072) to match target batch_size (4096).
This error happens in the loss function, and torch does not check that the batch sizes of all the input tensors match. At the same time, a split of one encode result is lost. |
st176773 | I can reproduce this locally, and here is what happened:
DP will try to scatter the inputs to the given devices:
github.com
pytorch/pytorch/blob/679f07a017d7387dce76936ced0eca55881d99f5/torch/nn/parallel/data_parallel.py#L157 1
def forward(self, *inputs, **kwargs):
    if not self.device_ids:
        return self.module(*inputs, **kwargs)

    for t in chain(self.module.parameters(), self.module.buffers()):
        if t.device != self.src_device_obj:
            raise RuntimeError("module must have its parameters and buffers "
                               "on device {} (device_ids[0]) but found one of "
                               "them on device: {}".format(self.src_device_obj, t.device))

    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
    # for forward function without any inputs, empty list and dict will be created
    # so the module can be executed on one device which is the first one in device_ids
    if not inputs and not kwargs:
        inputs = ((),)
        kwargs = ({},)
    if len(self.device_ids) == 1:
        return self.module(*inputs[0], **kwargs[0])
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
    outputs = self.parallel_apply(replicas, inputs, kwargs)
In this case, the given devices are [0, 1, 2, 3], and the inputs are 16X* and 6X* tensors. The scatter function traverses the list and tries to scatter every tensor in the list.
github.com
pytorch/pytorch/blob/454dd7ba8647ac11735c6563ac8e2b60313789ad/torch/nn/parallel/scatter_gather.py#L11-L39
def scatter(inputs, target_gpus, dim=0):
    r"""
    Slices tensors into approximately equal chunks and
    distributes them across given GPUs. Duplicates
    references to objects that are not tensors.
    """
    def scatter_map(obj):
        if isinstance(obj, torch.Tensor):
            return Scatter.apply(target_gpus, None, dim, obj)
        if is_namedtuple(obj):
            return [type(obj)(*args) for args in zip(*map(scatter_map, obj))]
        if isinstance(obj, tuple) and len(obj) > 0:
            return list(zip(*map(scatter_map, obj)))
        if isinstance(obj, list) and len(obj) > 0:
            return [list(i) for i in zip(*map(scatter_map, obj))]
        if isinstance(obj, dict) and len(obj) > 0:
            return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))]
        return [obj for targets in target_gpus]

    # After scatter_map is called, a scatter_map cell will exist. This cell
(This file has been truncated.)
In your code, it's the same as calling x.chunk(4, 0) and x1.chunk(4, 0). x.chunk(4, 0) returns 4 tensors, since 16 is divisible by 4. But x1.chunk(4, 0) only returns 3 tensors: when the size is not divisible, the chunk algorithm puts 6 / (chunks - 1) rows into each of the first chunks - 1 splits and the remainder into the last split. But 6 / (4 - 1) divides evenly, so the last split has nothing.
See: torch.chunk — PyTorch 1.8.0 documentation 1
Then the scatter tries to zip together splits from x and x1, as a result, x's last split is dropped.
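To make the chunking behaviour concrete, here is a small standalone illustration (my own sketch, not from the original post) of what scatter effectively does with these two batch sizes:

import torch

x = torch.randn(16, 256)
x1 = torch.randn(6, 256)

print([c.shape[0] for c in x.chunk(4, 0)])   # [4, 4, 4, 4] -> 4 splits
print([c.shape[0] for c in x1.chunk(4, 0)])  # [2, 2, 2]    -> only 3 splits

# scatter zips the per-argument splits together, so only 3 pairs survive
pairs = list(zip(x.chunk(4, 0), x1.chunk(4, 0)))
print(len(pairs))  # 3 -> the 4th split of x (4 rows) is silently dropped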
github.com
pytorch/pytorch/blob/454dd7ba8647ac11735c6563ac8e2b60313789ad/torch/nn/parallel/scatter_gather.py#L27
def scatter_map(obj):
    if isinstance(obj, torch.Tensor):
        return Scatter.apply(target_gpus, None, dim, obj)
    if is_namedtuple(obj):
        return [type(obj)(*args) for args in zip(*map(scatter_map, obj))]
    if isinstance(obj, tuple) and len(obj) > 0:
        return list(zip(*map(scatter_map, obj)))
    if isinstance(obj, list) and len(obj) > 0:
        return [list(i) for i in zip(*map(scatter_map, obj))]
    if isinstance(obj, dict) and len(obj) > 0:
        return [type(obj)(i) for i in zip(*map(scatter_map, obj.items()))]
    return [obj for targets in target_gpus]

# After scatter_map is called, a scatter_map cell will exist. This cell
# has a reference to the actual function scatter_map, which has references
# to a closure that has a reference to the scatter_map cell (because the
# fn is recursive). To avoid this reference cycle, we set the function to
# None, clearing the cell
try:
    res = scatter_map(inputs)
finally:
Curious, what is the expected behavior here? is it [4, 2], [4, 2], [4, 2], [4, 0], or [4, 2], [4, 2], [4, 1], [4, 1]? |
st176774 | Hi,
I am running a script for distributed processing on Windows with 1 GPU. For that, I am using
torch.distributed. However, I am confused about how to set the dist_url flag. Among many others, I went through this 5 documentation but couldn't get it.
When I used args.dist_url = 'tcp://localhost:58472' or args.dist_url = 'env://' or even when I manually calculate my IP address and a free port and then set the value as: args.dist_url = "tcp://{}:{}".format(ip,port), it gives me the following error:
raise RuntimeError(“No rendezvous handler for {}://”.format(result.scheme))
RuntimeError: No rendezvous handler for tcp://
Alternatively, I tried to set args.dist_url = "file:///E:/tmp.txt" but then I get the following error:
raise RuntimeError("Distributed package doesn’t have NCCL "
RuntimeError: Distributed package doesn’t have NCCL built in
All these errors are raised when the init_process_group() function is called as following:
torch.distributed.init_process_group(backend='nccl', init_method=args.dist_url, world_size=args.world_size, rank=args.rank)
Here, note that args.world_size=1 and rank=args.rank=0. Any help on this would be appreciated, especially on how to set up the tcp init method. |
st176775 | Solved by mbehzad in post #7
Hi @mrshenli you are right indeed. I updated to PyTorch 1.8.1 and used the gloo backend and file init_method() and it works.
For anyone wondering, following is a sample code that works on Windows. However, I do not suggest distributed processing on windows.
import torch.distributed as dist
dist.i… |
st176776 | The following is a simple piece of code which I tried on Ubuntu and it works, but it does not work on Windows. On Windows, it gives the same error (RuntimeError: No rendezvous handler for tcp://) when init_process_group() is called.
import torch.distributed as dist

dist_url = 'tcp://127.0.0.1:23456'
dist.init_process_group(
    backend='nccl',
    rank=0,
    init_method=dist_url,
    world_size=1)
print("Done") |
st176777 | @mrshenli Do you have any idea why this TCP init method does not work on Windows? |
st176778 | Hey @mbehzad, I noticed you are using NCCL. IIUC, NCCL is not available on Windows. Could you please try gloo instead?
Another question: which PyTorch version are you using? In v1.7.*, the distributed package only supports FileStore rendezvous on Windows; TCPStore rendezvous was added in v1.8. |
st176779 | Hi @mrshenli you are right indeed. I updated to PyTorch 1.8.1 and used the gloo backend and file init_method() and it works.
For anyone wondering, the following is sample code that works on Windows. However, I do not recommend distributed processing on Windows.
import torch.distributed as dist

dist.init_process_group(
    backend='gloo',
    rank=0,
    init_method='file:///e:/tmp/some_file',
    world_size=1)
print("Done") |
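Not part of the original answer, but since TCPStore rendezvous is stated above to be available from v1.8, a TCP-based variant on Windows would presumably look like this (untested sketch; gloo backend, single process, address and port are placeholders):

import torch.distributed as dist

dist.init_process_group(
    backend='gloo',
    rank=0,
    init_method='tcp://127.0.0.1:23456',
    world_size=1)
print("Done")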
st176780 | Hi there,
I was wondering if there was a way to do an all_reduce between two GPUs on different nodes. For example if I have a tensor on GPU0 of machine 0 and another tensor on GPU0 of machine 1, is it possible to issue a dist.all_reduce call across the nodes using the NCCL backend.
The following code hangs:
import argparse
import torch
import torch.distributed as dist

def ProcessArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int)
    return parser.parse_args()

def MultiNodeTest(global_rank, local_rank):
    torch.cuda.set_device(local_rank)
    t = torch.ones(2, 2).to(local_rank)
    dist.all_reduce(t)

if __name__ == '__main__':
    args = ProcessArgs()
    dist.init_process_group(backend='nccl', init_method='env://')
I am launching this simple script using the following:
On Node 0:
python3 -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=0 --master_addr=192.168.11.2 --master_port=12347 multinode.py
On Node 1:
python3 -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr=192.168.11.2 --master_port=12347 multinode.py |
st176781 | This code runs just fine when I use gloo as my backend. It only deadlocks when I use NCCL. Any ideas what could be happening? |
st176782 | is it possible to issue a dist.all_reduce call across the nodes using the NCCL backend.
This is definitely supported.
Your code seems correct to me, according to the launch utility tutorial.
Just my guess, have you tried to specify world_size arg in init_process_group?
Or have you tried to use TCP as the init_method? Like init_method="tcp://{}:{}".format(args.master_addr, args.master_port)
@mrshenli Any idea? |
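For reference, those two suggestions would look roughly like this inside the script (a sketch; global_rank, args.master_addr, and args.master_port are assumed to come from the launcher's environment or your own argument parsing):

# option 1: keep env:// but pass world_size (and rank) explicitly
dist.init_process_group(backend='nccl', init_method='env://',
                        world_size=2, rank=global_rank)

# option 2: use TCP rendezvous instead of env://
dist.init_process_group(backend='nccl',
                        init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                        world_size=2, rank=global_rank)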
st176783 | I found a workaround by recreating a new group object with all the ranks and then passing that group to all comm calls. If I do this it doesn’t hang anymore. Not sure why just using group.WORLD as the group in the comm calls hangs. Is this a known issue / am I missing something? |
st176784 | If you still need more investigation, please set the NCCL environment variable NCCL_SOCKET_IFNAME (probably as “eth1”) for debugging. |
st176785 | When I use DataParallel on one machine with two GPUs and a batch size of 8 (4 on each GPU), I get a satisfying training result. But if I use DistributedDataParallel on two single-GPU machines with a batch size of 8 (4 on each node), the training result is unsatisfactory and convergence is slower than with DataParallel.
After checking the docs of DataParallel and DistributedDataParallel, I noticed that DataParallel sums the gradients from each GPU, while DistributedDataParallel averages the gradients from each node (a single GPU per node in my case).
I think this difference is the reason for the different training results.
Is averaging the correct way to handle gradients in DistributedDataParallel with multiple nodes? Should I modify DistributedDataParallel to sum the gradients from each node in order to reproduce the same training result in my experiment? |
st176786 | Solved by GeoffreyChen777 in post #11
@Lausanne I think you should keep the original learning rate.
If you use DistributedDataParallel, the gradients will be averaged between the processes. DataParallel sums the gradients, which is equivalent to DistributedDataParallel. The reason is that the loss will be averaged by the 128 batch size and … |
st176787 | Yes, average across processes is the expected behavior here.
Right now this behavior is not configurable. |
st176788 | @GeoffreyChen777 Yes, averaging is the correct way for gradient reduction among nodes. DataParallel adding the gradients together is correct too.
The difference is that DataParallel splits the batch into sub-batches on each of the GPUs. When each GPU completes its computation, the gradients are reduced (added) onto the master GPU. Think about it this way: (1) this is a master-worker mode rather than true data parallelism, since only the master GPU scatters the batch and gathers the results; (2) we actually want the gradient of the total batch, which is why adding each worker's gradient is the expected behavior. By comparison, DistributedDataParallel goes completely parallel among distributed processes. If a process itself has more than one GPU, a similar scatter/gather master-worker mode is employed, just like DataParallel, and the gradients are added among the worker GPUs and then averaged across the distributed processes. The bottom line is: the gradient is averaged across data-parallel workers (processes), not across the worker GPUs within a single process. |
st176789 | @teng-li Thank you!
I am reproducing a huge network. The authors use a batch size of 16 with 4-GPU training, and they use DataParallel. I don't have a 4-GPU machine, so I want to use 2 machines (2 GPUs on each machine) to train the network with a batch size of 16. If averaging is the default operation of DistributedDataParallel, there is no way to reproduce the training process. Right? |
st176790 | @GeoffreyChen777
You can do this in three ways:
(1) If you can fit a batch size of 16 on 2 GPUs, do that directly.
(2) If you cannot, use two nodes (two processes) with DistributedDataParallel, where each node (process) has a batch size of 8. Here you should use the base LR for a batch size of 8.
(3) You can use four processes on two nodes with DistributedDataParallel (this is the fastest way of doing distributed training): each node will have two processes, and each process and DistributedDataParallel operate on one GPU (the local rank, which is rank % number_of_gpus_per_node; here your rank goes from 0 to 3, since you have four processes across two nodes). But then you have to use the base LR for a batch size of 4.
Hope this clarifies and helps |
st176791 | @teng-li Thank you! If LR for batch size 16 is 0.01, LR for batch size 8 should be set to 0.02. Right? |
st176792 | Thank you for your reply. After reading your answer, I have understood DataParallel but still confused by Distributed DataParallel. In my case, I have one machine with 4 GPUs. According to pytorch 1.0 tutorial , distributed dataparallel can also be used in single machine. Now I have 4 processes , each process has one GPU. So if batch_size is set to 128, that means each process (or single GPU) will be allocated batch_size 32 ? And some hyperparameters like LR should be set with batchsize 32 ? |
st176793 | @Lausanne I think you should keep the original learning rate.
If you use DistributedDataParallel, the gradients are averaged between the processes. DataParallel sums the gradients, which ends up equivalent to DistributedDataParallel. The reason is that in DataParallel the loss is averaged over the batch size of 128 and then backpropagated through the DataParallel model, so the gradients are reduced across each sub-batch. In DistributedDataParallel, the loss is averaged over 32 samples and backpropagated in each distributed model, so we need to average the gradients between the distributed processes. |
st176794 | @GeoffreyChen777
Thank you very much! Briefly speaking, DataParallel first sums and then averages, because each GPU calculates part of the 128 batch and then must send its result to the master GPU to update the parameters. DistributedDataParallel has an independent model and parameters on each GPU, so the loss calculated on one GPU is already the average over a batch size of 32, and then we average between the different GPUs. That is, in DistributedDataParallel we average within each model and then average between the different GPUs. If I do not understand correctly, please let me know. Thank you again! |
st176795 | @Lausanne
You are right. I think the final gradient in DataParallel should be equal to the gradient in DistributedDataParallel. |
st176796 | Hi,
Just to make sure I have understood correctly: if I train on one GPU with batch size 16 and lr=0.01, what would be the correct lr if I train on two GPUs in torch.distributed mode with the same total batch size of 16 (8 on each GPU)? |
st176797 | Thanks. Does this mean that in distributed mode the gradients of the different GPUs are summed up rather than averaged, so that I should reduce the lr to compensate for the summation? |
st176798 | My guess: I think it depends on how you compute the loss and the backward pass.
– DataParallel: if you merge the 2 sub-batches at the end and then compute a single loss = the average loss over all examples, and then do loss.backward(), then summing is mathematically correct. With 1, 2, or more GPUs, the gradient computed this way should be the same.
– DistributedDataParallel: if you use 2 separate losses, one per GPU: loss1 = average over the examples in batch 1, and loss2 = average over the examples in batch 2. To simulate loss = average over all examples = (loss1 + loss2) / 2, you can call loss1.backward() and loss2.backward(), then average the parameter gradients; that is equivalent to loss.backward() |
st176799 | According to @GeoffreyChen777's answer, I think the learning rate should stay the same, i.e. 0.01. |
st176800 | teng-li:
and gradient will be added among worker GPU, and then averaged across distributed processes.
Why are the gradients averaged across distributed processes? |
st176801 | I guess because the vast majority of loss functions in PyTorch have the default behavior to average losses across all samples in the batch, i.e. they have reduction=mean. To get the mathematically equivalent gradients in a DDP experiment (like the ones you’d get by running the 1-GPU experiment), you have to average them. If your loss function has reduction=sum, then you have to multiply the loss value at each GPU process with the world_size to cancel out this averaging. |
st176802 | I have a cuda9-docker with tensorflow and pytorch installed, I am doing cross validation on an image dataset. Currently I am using a for loop to do the cross validation. Something like
for data_train, data_test in sklearn.kfold(5, all_data):
    train(data_train)
    test(data_test)
But the for loop takes too long; will the following code work to parallelize it? Maybe there is already a solution. But this is not data parallelization.
from multiprocessing import Pool

def f(trainset, testset):
    train_result = train(trainset)
    test_result = test(testset)
    save_train_result()
    save_test_result()

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, sklearn.cvfold(5, all_data)))
I am not sure whether multiprocessing will only parallelize on the CPU or on both CPU and GPU. This might be easier than doing the parallelism inside a model, I guess, like Parallelize simple for-loop for single GPU 12,
since in my case there is no need to communicate across processes? |
st176803 | I have the same issue. Does anyone have a solution?
I have currently found that this 9 may help, but I haven't checked the performance. |
st176804 | I am not sure if the multiprocessing will only paralize the cpu or both cpu and gpu?
According to the official documentation of Process Pools:
processes is the number of worker processes to use. If processes is None then the number returned by os.cpu_count() is used.
Therefore, I believe it should only do the parallelization on CPU.
There is no need to communicate across each process?
I think even if you parallelize the loops here, you will still have to sync gradients/weights across processes, because there is actually an implicit data dependency on the model weights.
If you parallelize the cross-validation for loop without communication, you will evolve a separate model in each process, and this will slow down convergence. |
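If the folds really are independent models (as in plain cross-validation), one possible sketch is to pin each fold to its own GPU with one process per fold. This is my own illustration, not from the thread: train, test, sklearn_kfold, and all_data are assumed to be user-defined, and each fold is assumed to fit on a single GPU.

import torch
import torch.multiprocessing as mp

def run_fold(fold_id, folds):
    # each spawned process gets its index as the first argument
    device = torch.device(f"cuda:{fold_id % torch.cuda.device_count()}")
    data_train, data_test = folds[fold_id]
    train(data_train, device=device)   # assumed user-defined
    test(data_test, device=device)     # assumed user-defined

if __name__ == "__main__":
    folds = list(sklearn_kfold(5, all_data))  # assumed: materialize the 5 splits first
    mp.spawn(run_fold, args=(folds,), nprocs=5, join=True)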
st176805 | Hi there,
I am noticing some peculiar behavior when training using gloo as my backend. I am running on an 8-GPU node and it seems like the processes with ranks 1 through 7 are creating some footprint on GPU 0 when they shouldn't. I am setting the device in each process with torch.cuda.set_device(rank). This is what the output from nvidia-smi looks like.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.39 Driver Version: 460.39 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro RTX 5000 On | 00000000:1A:00.0 Off | Off |
| 33% 30C P2 53W / 230W | 16115MiB / 16125MiB | 14% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Quadro RTX 5000 On | 00000000:1C:00.0 Off | Off |
| 33% 35C P2 61W / 230W | 7200MiB / 16125MiB | 17% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Quadro RTX 5000 On | 00000000:1D:00.0 Off | Off |
| 33% 37C P2 58W / 230W | 7200MiB / 16125MiB | 16% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 Quadro RTX 5000 On | 00000000:1E:00.0 Off | Off |
| 33% 35C P2 57W / 230W | 7200MiB / 16125MiB | 16% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 Quadro RTX 5000 On | 00000000:3D:00.0 Off | Off |
| 33% 31C P2 49W / 230W | 7200MiB / 16125MiB | 15% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 Quadro RTX 5000 On | 00000000:3F:00.0 Off | Off |
| 33% 34C P2 50W / 230W | 7200MiB / 16125MiB | 16% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 6 Quadro RTX 5000 On | 00000000:40:00.0 Off | Off |
| 0% 40C P2 55W / 230W | 7200MiB / 16125MiB | 16% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 7 Quadro RTX 5000 On | 00000000:41:00.0 Off | Off |
| 33% 34C P2 58W / 230W | 7200MiB / 16125MiB | 16% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 22213 C DlrmTrainer:0 10059MiB |
| 0 N/A N/A 22214 C DlrmTrainer:1 853MiB |
| 0 N/A N/A 22215 C DlrmTrainer:2 873MiB |
| 0 N/A N/A 22216 C DlrmTrainer:3 799MiB |
| 0 N/A N/A 22217 C DlrmTrainer:4 1057MiB |
| 0 N/A N/A 22218 C DlrmTrainer:5 773MiB |
| 0 N/A N/A 22219 C DlrmTrainer:6 843MiB |
| 0 N/A N/A 22220 C DlrmTrainer:7 765MiB |
| 1 N/A N/A 22214 C DlrmTrainer:1 7177MiB |
| 2 N/A N/A 22215 C DlrmTrainer:2 7177MiB |
| 3 N/A N/A 22216 C DlrmTrainer:3 7177MiB |
| 4 N/A N/A 22217 C DlrmTrainer:4 7177MiB |
| 5 N/A N/A 22218 C DlrmTrainer:5 7177MiB |
| 6 N/A N/A 22219 C DlrmTrainer:6 7177MiB |
| 7 N/A N/A 22220 C DlrmTrainer:7 7177MiB |
+-----------------------------------------------------------------------------+
Why are processes 1-7 are allocating memory on GPU0 also?
This does not happen when I use NCCL as my backend. |
st176806 | Solved by wayi in post #2
My guess is that on Gloo backend, all the data from different GPUs need to somehow first implicitly gather the data to GPU 0 and then transfer it to CPU.
Please not that as also summarized in Distributed communication package - torch.distributed — PyTorch 1.8.0 documentation
This is the rule of th… |
st176807 | My guess is that with the Gloo backend, all the data from the different GPUs somehow needs to first be implicitly gathered on GPU 0 and then transferred to the CPU.
Please note that, as also summarized in Distributed communication package - torch.distributed — PyTorch 1.8.0 documentation 3
This is the rule of thumb:
Use the NCCL backend for distributed GPU training.
Use the Gloo backend for distributed CPU training.
Therefore, we should always favor NCCL on GPUs. |
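For reference, the usual NCCL setup looks roughly like this (a sketch; one process per GPU, with rank and world_size assumed to come from your launcher), which the original poster reports does not show the extra GPU-0 allocations:

torch.cuda.set_device(rank)                       # bind this process to its own GPU first
dist.init_process_group(backend='nccl', init_method='env://',
                        rank=rank, world_size=world_size)
t = torch.ones(2, 2, device=f"cuda:{rank}")
dist.all_reduce(t)                                # communication stays on each rank's own GPU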
st176808 | Does PyTorch provide any options to train a model in parallel using two GPUs installed in two different virtual machines? If so, could someone please help me implement it?
I found that the DataParallel function is meant to run on multiple GPUs present on the same machine, but I couldn't find any suitable information for my requirement. TIA |
st176809 | Does PyTorch provide any options to train a model using two GPUs installed in two different virtual machines parallelly?
Relevant link: Distributed communication package - torch.distributed — PyTorch 1.8.0 documentation 4
Not sure if you can do this with virtual machines.
I found that the DataParallel function is supposed to run multi GPUs present on the same machine, but couldn’t able to find any suitable information for my requirement.
You are right: DataParallel is not designed for multi-node training. |
st176810 | Thanks for your response. I've checked the distributed documentation and I don't really understand the Multi-Node multi-process distributed training section.
Why do the --master_addr="192.168.1.1" and --master_port=1234 arguments have the same values on both nodes? Why aren't the IP and port values different? Will it work if I assign the two respective virtual machines' IP addresses and port numbers in those arguments? |
st176811 | According to “Environment variable initialization” section in PyTorch Distributed tutorial 1, rank 0 node will be used to set up all connections, and both MASTER_PORT and MASTER_ADDR must be free port and address of rank 0 node. Therefore, you should always use the same values for all the nodes in the same process group. |
st176812 | How do the two virtual machines connect to each other and run GPUs from another machine parallelly? Out of two virtual machine, on which machine should I run the below distribution command?
python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
--nnodes=2 --node_rank=0 --master_addr="192.168.1.1"
--master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
and all other arguments of your training script)
python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
--nnodes=2 --node_rank=1 --master_addr="192.168.1.1"
--master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
and all other arguments of your training script) |
st176813 | How do the two virtual machines connect to each other and run GPUs from another machine parallelly?
Once the two machines forms a process group, you can use collective APIs like allreduce for communication.
Out of two virtual machine, on which machine should I run the below distribution command?
You should run these commands on two virtual machines, respectively. There is no “out of two virtual machine”. |
st176814 | wayi:
Once the two machines forms a process group, you can use collective APIs like allreduce for communication.
I don’t have an idea of the process group and allreduce API. Could you please elaborate on this process or share relevant links to follow up?
wayi:
You should run these commands on two virtual machines, respectively. There is no “out of two virtual machine”.
Do you mean to run the first distribution command on one VM and the second command on the other VM or run both the commands on both the VMs? |
st176815 | I don’t have an idea of the process group and allreduce API. Could you please elaborate on this process or share relevant links to follow up?
Check out “Collective functions” in the tutorial 2. This is lower-level API for communication. Ideally, you probably don’t need to do any explicit communication, and you can just try DDP 2, which does not let you feel any difference compared with the training on a single process.
Do you mean to run the first distribution command on one VM and the second command on the other VM or run both the commands on both the VMs?
Run the first distribution command on one VM and the second command on the other VM |
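To make that concrete, a minimal sketch of the training script that each VM would run via the launch commands above (one GPU per node assumed; MyModel is a placeholder, and the launcher is assumed to set MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE in the environment):

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend='nccl', init_method='env://')
    torch.cuda.set_device(0)                 # single GPU per VM in this example
    model = MyModel().cuda()                 # placeholder model
    ddp_model = DDP(model, device_ids=[0])
    # ...build a DataLoader with DistributedSampler and train as usual...

if __name__ == "__main__":
    main()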
st176816 | Hello everyone,
I am facing some memory issues running my model on multiple GPUs with DDP. I want to report to you the experiments I made to understand the memory utilization of the combination of workers + DDP. In all the experiments I will report in this post I will use always the same model, so the size of the model is always the same.
I have 4 GPUs with 16 GB each. First of all, I used only one to find the largest batch that fits on it. The maximum batch size is 64 (I started to get CUDA OOM with 128). Then, in order to increase the batch size, I started to use DDP with a batch of 64 for each process (1 process per GPU). Using 2 GPUs with a total batch of 128 caused CUDA OOM. I have read online that DDP uses part of the GPU memory to store communication buckets, so I assumed that these communication buckets did not fit into memory. I started to decrease the per-GPU batch size to 60, then 55, and finally to 50 (resulting in a total batch size of 100 on 2 GPUs). Unfortunately, none of these batch sizes worked. Out of curiosity, I started to analyze the memory usage of a single GPU with a batch size of 1: the total memory usage is 3.4G. I then tried DDP with a batch of 2 (1 per GPU), resulting in 3.9G. This experiment demonstrates the overhead introduced by DDP. Finally, I tried to increase the number of workers and found that the final memory usage on the GPU increases. This last finding shocked me because I thought that the workers simply increase RAM usage, not GPU memory.
More specifically, whenever I use DDP I see several processes per GPU. The first process is the one performing the training (3.9G); then there are N_GPUS-1 processes that I suppose are in charge of keeping the communication with the other processes open (0G each); and finally, the other ones should be the workers (0.6G each). Thus the memory usage on each GPU is 3.9G + num_workers * 0.6G.
It’s not clear to me why the workers have a memory footprint on the GPU. Can someone help me to understand? |
st176817 | Solved by ptrblck in post #10
Using the model or any CUDA operations in the collate_fn is uncommon, I think (at least I haven’t seen a use case so far) and I guess not well tested.
The additional CUDA context creation might come from the usage of multiprocessing.
I also don’t know what kind of 0MB process is created as I haven… |
st176818 | Seo:
Finally, I tried to increase the number of workers and I have found that the final memory usage on GPU increases. This last founding shocked me because I thought that the workers simply increase the RAM usage, not GPU.
This shouldn’t be the case. Could you share an executable code snippet, which shows an increase in GPU memory using CPUTensors in the Dataset after increasing the workers in the DataLoader? |
st176819 | Hi @ptrblck , the code I am using is spread among different files so as soon as I can I will create a code snippet and try to reproduce this issue.
In the meantime I want to share with you the findings:
1 GPU, batch=64, 0 workers:
1 GPU, batch=64, 7 workers:
2 GPUs, batch=128, 0 workers:
2GPUs, batch=128, 1 worker:
As you can see the last one has different processes spawned per GPU and is the only one causing OOM (batch=128 with 0 workers instead is fine). The screenshot was taken just before the OOM.
My DataLoader uses pin_memory=True, so I can rule out having CUDA tensors inside __getitem__; otherwise an exception would occur. I also used .to(device, non_blocking=True). |
st176820 | It seems that an additional CUDA context is created, which is wrong.
A code snippet would be helpful and needed to isolate this issue. |
st176821 | I created a simpler version of my code. If you run this code you will find the unusual worker processes being spawned on the GPUs. I have found out that these processes are spawned only if I set a custom collate_fn on my DataLoader. If you use the default collate_fn, the strange processes do not seem to spawn. It is not yet clear to me why.
code: PyTorch DDP memory inspection - Pastebin.com 6 |
st176822 | Could you upload the script to e.g. GitHub as a Gist, as I cannot access pastebin? |
st176823 | @ptrblck I tried again to debug the code. The problem appears every time I use a collate_fn that is a method of a class that contains the model as one of its attributes (self.model). Each time this happens, a new CUDA context is created. I don't know whether this should be considered a bug or whether it is normal behavior known to the developers. |
st176824 | Another thing that I have noticed: even if I use the default collate_fn (without specifying a custom one), additional processes are created on the GPUs with 0MiB each (even with 0 workers). The number of these processes is equal to N_GPUS-1, so I suppose they are in charge of keeping the connection open with the other GPUs. Am I right? |
st176825 | Using the model or any CUDA operations in the collate_fn is uncommon, I think (at least I haven’t seen a use case so far) and I guess not well tested.
The additional CUDA context creation might come from the usage of multiprocessing.
I also don’t know what kind of 0MB process is created as I haven’t seen it before.
In your code snippet you are also using multiprocessing with DDP. What’s the reason for not using DDP directly? |
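For reference, one way to keep the DataLoader workers CUDA-free is to let collate_fn do only CPU batching and touch the model in the training step instead. This is my own sketch with assumed names (dataset, device, ddp_model), not the original code:

import torch
from torch.utils.data import DataLoader

def collate_cpu(samples):
    # pure CPU work: stack tensors, no self.model, no .cuda()/.to(device) here
    xs = torch.stack([s[0] for s in samples])
    ys = torch.tensor([s[1] for s in samples])
    return xs, ys

loader = DataLoader(dataset, batch_size=64, num_workers=4,
                    pin_memory=True, collate_fn=collate_cpu)

for xs, ys in loader:
    xs = xs.to(device, non_blocking=True)   # move to GPU in the main process
    ys = ys.to(device, non_blocking=True)
    out = ddp_model(xs)                     # the model is only used here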
st176826 | I implemented again everything with DDP without multiprocessing. Now it works, thanks! |
st176827 | Hi there –
TL;DR: I’m running into some performance issues where the inter-process communication cost increases linearly in the number of processes that I spawn, and wanted to know what options are available to improve communication performance.
I’m using PyTorch for a distributed learning application that requires me to create a number of process groups that is ~linear in the number of processes that I’ve created, and I have to perform a decent amount of communication within those groups during the forward and backward passes through my model. Unfortunately, it seems like the cost of communication is increasing linearly in the number of processes I’m using for training, which is negating any performance benefits I’d otherwise be getting by scaling out training across multiple CPUs / GPUs.
Here’s a MWE showing how the communication costs are scaling for me:
import datetime
import os
import sys
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def target(process_group, n_iters=1024):
    for _ in range(n_iters):
        x = torch.randn(10, 10)
        dist.all_reduce(x, group=process_group)

def _launch(rank, world_size):
    # Initialize process groups for every pair of consecutive processes
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    if world_size % 2 != 0:
        raise ValueError("The world size must be a multiple of 2")

    for i in range(0, world_size, 2):
        new_group = dist.new_group([i, i+1])
        if rank == i or rank == i + 1:
            process_group = new_group

    start = datetime.datetime.now()
    target(process_group)
    end = datetime.datetime.now()

    if rank == 0:
        print(f"Time to run target() = {end - start}")

if __name__ == "__main__":
    if len(sys.argv) != 2:
        raise RuntimeError("Usage: ./test.py <n_process_groups>")

    n_process_groups = int(sys.argv[1])
    world_size = 2 * n_process_groups

    mp.spawn(
        _launch,
        args=(world_size,),
        nprocs=world_size,
        join=True,
    )
Here’s my results when I run the script:
$ for p in {1..8}; do echo -n "p = $p; " && python3 test.py $p; done
p = 1; Time to run target() = 0:00:00.386286
p = 2; Time to run target() = 0:00:00.478979
p = 3; Time to run target() = 0:00:00.728055
p = 4; Time to run target() = 0:00:00.920900
p = 5; Time to run target() = 0:00:01.150142
p = 6; Time to run target() = 0:00:01.505277
p = 7; Time to run target() = 0:00:01.937280
p = 8; Time to run target() = 0:00:02.068987
I’m not surprised that communication performance decreases as the number of processes increases, but I don’t understand why the cost is growing as quickly as it is.
So my question is – are the growing communication costs I’m seeing here just something that’s expected out of the gloo backend or other elements of the PyTorch distributed communication internals? Or is there a way I could improve the script I have written above so that the target() function runs faster?
Any help would be greatly appreciated. Thank you! |
st176828 | Solved by wayi in post #5
A few other thoughts:
Do you have a chance to set async_op=True in dist.all_reduce, so that you can overlap some computation with allreduce?
Additionally, probably you can try to tune some Gloo environment variables. I am not familiar with Gloo, but I see tuning some NCCL parameters can help commu… |
st176829 | (Incidentally, I’m currently doing all of my training on CPU. Here’s some additional info about the machine I’m currently running this on, if it helps:)
$ lscpu | grep -E "^(CPU\(s\):|Model name:)"
CPU(s): 48
Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz |
st176830 | Hi! The Gloo backend is much slower than NCCL if you can do training on GPUs. You may want to switch to NCCL if possible.
A few relatively easy options on my mind:
It will be much more efficient if you use DDP 1 instead of implementing your own communication by allreduce primitive.
Try no_sync context manager on the tutorial, which can reduce the sync frequency by accumulating gradients locally.
If you can use NCCL, it will be much faster. You can also try DDP communication hooks 1 for gradient compression. |
st176831 | Thanks for the reply!
wayi:
It will be much more efficient if you use DDP instead of implementing your own communication by allreduce primitive.
Try no_sync context manager on the tutorial, which can reduce the sync frequency by accumulating gradients locally.
Unfortunately I can’t currently wrap my models with DDP (I’m integrating with another library that doesn’t play very well with DDP), but my plan is to do so as soon as possible.
That said, the application I’m developing for requires inter-process communication in both the forwards and backwards passes. The total communication is quite a bit higher than just the setup communication / gradient synchronization performed by DDP, so I suspect that wrapping my model with DDP won’t do very much to reduce communication costs.
wayi:
If you can use NCCL, it will be much faster. You can also try DDP communication hooks for gradient compression.
I’d like to use NCCL, but it doesn’t directly support some of the communication primitives I need. In any case, for now I’m less concerned about raw communication speed and more concerned with how it scales as the number of processes increases.
If I used the NCCL backend, would the communication speed decrease at the same rate as Gloo (so that e.g. doubling the number of processes / process groups causes all_reduce to run at half speed)? |
st176832 | A few other thoughts:
Do you have a chance to set async_op=True in dist.all_reduce, so that you can overlap some computation with allreduce?
Additionally, probably you can try to tune some Gloo environment variables. I am not familiar with Gloo, but I see tuning some NCCL parameters can help communication performance a lot.
I’m integrating with another library that doesn’t play very well with DDP
Also interested in knowing the reasons that cause DDP not feasible for this case.
If I used the NCCL backend, would the communication speed decrease at the same rate as Gloo (so that e.g. doubling the number of processes / process groups causes all_reduce to run at half speed)?
NCCL will have both a better scaling efficiency and higher raw speed than Gloo. |
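For the async_op suggestion, the pattern inside target() would look roughly like this (a sketch; some_local_work is a placeholder for computation that does not depend on x):

work = dist.all_reduce(x, group=process_group, async_op=True)
y = some_local_work()   # placeholder: anything that does not need the reduced x
work.wait()             # only block when the reduced x is actually required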
st176833 | wayi:
Do you have a chance to set async_op=True in dist.all_reduce, so that you can overlap some computation with allreduce?
I have tried this, yes! It gives a pretty good speedup for the test script (and a smaller speedup for my actual code), but not enough to counteract the growth of communications costs. There may be some other comms operations I could make async, though; I’ll definitely look into it.
wayi:
Additionally, probably you can try to tune some Gloo environment variables. I am not familiar with Gloo, but I see tuning some NCCL parameters can help communication performance a lot.
I’d definitely like to know if there are any environmental variables I can tune for Gloo. I think I’ll need to dig in some more and see if anything like that exists.
wayi:
Also interested in knowing the reasons that cause DDP not feasible for this case.
The library I’m using on top of PyTorch wraps a model and mostly tries to imitate PyTorch’s API for training. Unfortunately, the only API it currently exposes for updating parameters after backprop is a single function that basically just loops over the model parameters and applies a single step of SGD on them.
I decided after some experimentation that the amount of effort required to either (a) modify the library to be able to wrap DDP or (b) create a modified version of DDP that could wrap the library was nontrivial. I might end up having a flash of inspiration that’ll help me figure out how to do one or the other, but for now it’s not possible to use DDP for my purposes.
wayi:
NCCL will have both a better scaling efficiency and higher raw speed than Gloo.
Thank you, that’s good to know. If I can’t figure out how to speed up Gloo, I’ll try to see if I can modify my code to use NCCL. |
st176834 | I am setting a different seed for each process (distributed training setting), but the model weights are still the same on all the processes. Is this expected behavior?
PS: I was under the impression that since weight init depends on the seed, different seeds would lead to different weight initialization on each process. |
st176835 | Yes. At the time of DDP wrapping, parameters and buffers (i.e., model.state_dict()) on rank0 are broadcasted to all other ranks. So although weights on different ranks were initialized from different seeds, they will have the same starting point (rank0’s state) after DDP construction. |
st176836 | sio277 already gave the right answer. This behavior is expected.
DDP must keep the weights on the different processes in sync from the beginning, so starting from the same initial weights and applying synced gradients leads to the same updated weights, and this makes distributed training equivalent to the sequential version. Otherwise distributed training would be mathematically wrong. |
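A quick way to convince yourself of this (my own sketch, assuming rank and an un-wrapped model are already set up in each process) is to print a parameter checksum on every rank right after DDP construction; the values should all match even though the seeds differ:

ddp_model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank])
with torch.no_grad():
    checksum = sum(p.sum().item() for p in ddp_model.parameters())
print(f"rank {rank}: parameter checksum = {checksum:.6f}")  # identical on all ranks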
st176837 | I am facing a thread deadlock issue when I use multiple GPUs with DataParallel(). The model is training on a medium-size dataset with 240K training samples. The model successfully trains for one epoch. In the second epoch, the training progresses smoothly until it reaches 50%. After that, it is simply stuck with no progress. When I kill the process using Ctrl+C or kill -s SIGKILL, it becomes a zombie process!
Here is what I get when I do ctrl+c
File "run_E2E_EL_RE.py", line 962, in <module>
main()
File "run_E2E_EL_RE.py", line 913, in main
global_step, tr_loss = train(args, model, tokenizer)
File "run_E2E_EL_RE.py", line 249, in train
el_loss, re_loss, _, _ = model.forward(**ned_inputs)
File "/dresden/users/rb897/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/dresden/users/rb897/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/dresden/users/rb897/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
thread.join()
File "/dresden/users/rb897/anaconda3/lib/python3.7/threading.py", line 1044, in join
self._wait_for_tstate_lock()
File "/dresden/users/rb897/anaconda3/lib/python3.7/threading.py", line 1060, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
Code Snippet:
model.zero_grad()
train_iterator = trange(
    epochs_trained, int(args.num_train_epochs), desc="Epoch", disable=args.local_rank not in [-1, 0]
)
set_seed(args)  # Added here for reproducibility

for epoch_num in train_iterator:
    epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])
    for step, batch in enumerate(epoch_iterator):
        model.train()
        batch = tuple(t.to(args.device) for t in batch)
        inputs_1 = {...}
        inputs_2 = {...}

        loss_1, last_hidden_states = model.forward(**inputs_1)
        inputs_2["last_hidden_states"] = last_hidden_states
        loss_2, loss_3 = model.forward(**inputs_2)

        if args.n_gpu > 1:  # mean() to average on multi-gpu parallel training
            loss_1 = loss_1.mean()
            loss_2 = loss_2.mean()
            loss_3 = loss_3.mean()

        loss = loss_1 + loss_2 + loss_3
        loss.backward()
        optimizer.step()
        scheduler.step()
        model.zero_grad()
per gpu batch size = 4
OS: Ubuntu 18.04.5
CPU: Intel Xeon® - 64 cores
CUDA Version: 11.2
GPU: Quadro RTX 6000
GPU memory: 24G
PyTorch: 1.4.0 |
st176838 | Do you consistently see it stuck in the second epoch? Does your job exclusively occupy all the GPUs? |
st176839 | Besides what @mrshenli asked: are you also seeing this issue using the latest PyTorch stable or nightly release? |
st176840 | Yes, I performed three runs and in each case it got stuck at the same point. I am using 4 out of 8 available GPUs and no other jobs are running on those GPUs. |
st176841 | I reran the code using the latest PyTorch. I am seeing the following error message. This time it happens with the first batch of the first epoch.
Traceback (most recent call last):
File "run_E2E_EL_RE.py", line 962, in <module>
main()
File "run_E2E_EL_RE.py", line 913, in main
global_step, tr_loss = train(args, model, tokenizer)
File "run_E2E_EL_RE.py", line 246, in train
ner_loss, last_hidden_states = model.forward(**ner_inputs)
File "/home/rajarshi_kingsaint_bhowmik/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/rajarshi_kingsaint_bhowmik/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/rajarshi_kingsaint_bhowmik/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/rajarshi_kingsaint_bhowmik/anaconda3/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/rajarshi_kingsaint_bhowmik/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/rajarshi_kingsaint_bhowmik/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/rajarshi_kingsaint_bhowmik/E2E-EL-RE/modeling_E2E_EL_RE.py", line 65, in forward
mention_outputs = self.bert_mention.bert(
File "/home/rajarshi_kingsaint_bhowmik/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/rajarshi_kingsaint_bhowmik/E2E-EL-RE/modeling_bert.py", line 758, in forward
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
StopIteration |
st176842 | Hello,
I am trying to do gradient accumulation
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i+1) % accumulation_steps == 0:             # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i+1) % evaluation_steps == 0:           # Evaluate the model when we...
            evaluate_model()
as mentioned here 6
But my model is on multiple GPUs.
So when the loss comes out of the model, how do I scale it? I also want to clip grad norms while training. How should my training loop change?
My code is below; does this look correct?
for i, batch in enumerate(train_loader_with_distributed_sampler):
    for param_group in optimizer.param_groups:
        param_group['lr'] = learning_rate

    x, y = batch
    y_hat = model(x)
    loss = criterion(y_hat, y).mean()
    loss = loss / hparams.gradient_accumulation_step
    reduced_loss = reduce_tensor(loss.data, n_gpus).item()
    loss.backward()

    current_accumulation_run = iteration % hparams.gradient_accumulation_step + 1

    # my grad clip thres is 1.0 so it will be multiplied with 1, 2, 3, 4, 5 based on my gradient accumulation step size
    # Or maybe I don't need to manage this?
    grad_norm = torch.nn.utils.clip_grad_norm_(
        model.parameters(), hparams.grad_clip_thresh * current_accumulation_run)
    grad_norm = grad_norm

    if (i + 1) % hparams.gradient_accumulation_step == 0:
        optimizer.step()
        model.zero_grad()

    if rank == 0:
        print("Optimizing Step")
        print("Train loss {} {:.6f} Grad Norm {:.6f}".format(
            i, reduced_loss, grad_norm)) |
st176843 | shivammehta007:
But my model is on multiple GPUs
So when my loss is output from the model how do I scale it?
You mean your gradients are on multiple GPUs, so you cannot directly pass them to the same operator? @ptrblck @albanD does this mean the gradients need to be moved to the same device first before calculating the scaling ratio? Or is the recommended way to do per-device/model-shard scaling? |
st176844 | I think I misunderstood in the above comment. Would I be correct if I assume each model replica is on one GPU, and then you have DDP on top for gradient averaging? |
st176845 | If you have single-GPU model replicas + DDP, would it be acceptable to let DDP first do the gradient averaging, and then do gradient scaling/clipping independently on every process before calling optimizer.step()? Since DDP makes sure that all model replicas have the same gradients, they should reach the same scaling/clipping result.
Another thing is that, to accumulate gradients from multiple iterations, you can try using the ddp.no_sync() 24, which can help avoid unnecessary communication overheads. |
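Putting both points together, an accumulation loop might look like this (my own sketch with assumed names such as ddp_model, loader, criterion, accumulation_steps; not a definitive recipe):

import contextlib

for i, (x, y) in enumerate(loader):
    last_micro_batch = (i + 1) % accumulation_steps == 0
    # skip the gradient allreduce on all but the last micro-batch
    ctx = contextlib.nullcontext() if last_micro_batch else ddp_model.no_sync()
    with ctx:
        loss = criterion(ddp_model(x), y) / accumulation_steps
        loss.backward()
    if last_micro_batch:
        # gradients are already averaged across ranks here, so every process clips identically
        torch.nn.utils.clip_grad_norm_(ddp_model.parameters(), max_norm=1.0)
        optimizer.step()
        optimizer.zero_grad()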
st176846 | mrshenli:
I think I misunderstood in the above comment. Would I be correct if I assume each model replica is on one GPU, and then you have DDP on top for gradient averaging?
I think this is the default behaviour of DDP, no? Since it is initialised with distributed.init and triggered by the launcher script where I have set the world size and everything, as mentioned in the DDP basic tutorial. So let's assume I have 4 GPUs: my model is replicated on each GPU and the distributed sampler is providing each GPU with an equal number of batches. |
st176847 | shivammehta007:
I think this is the default behaviour of DDP no?
Yep, it’s the default behavior. Just wanna make sure this is how DDP is used in your application so that my comment can be relevant.
Could you please elaborate more on your requirements for gradient clipping? Do you need to do that before DDP gradient averaging comm or after?
If you need to before DDP comm, then you probably can use the DDP communication hook 9. More specifically, you can wrap the gradient bucket clipping with the allreduce communication in the hook.
If it is OK to do clipping after DDP comm, then you can run the clipping ops after DDP backward() and before optimizer step. |
st176848 | I have no specific requirements actually, I just want to prevent gradient explosion in my implementation, so I was clipping the norm, but then I switched to DDP module and was accumulating the gradients as well and was not sure where to put the clipping now. |
st176849 | I have a dataset class which reads samples.
Sample attributes: (domain, image, label)
In each batch, I want to enforce that every single domain is represented. For example, 8 domains would warrant a batch size of 8K with K samples/domain.
I have written a custom shuffle function which sorts the entire pool of images and rearranges the index such that each batch will have K samples from each domain.
This sorting occurs as train_loader.dataset.sort('my_custom_shuffler') at every reset point (like an epoch or some other logic). To ensure that train_loader does not jumble my list again, train_loader.shuffle has been set to False.
In my application, DistributedSampler(shuffle=False) has been defined and assigned to train_loader before the training loop starts. Will my manual sorting affect DistributedSampler (DS)? Does DS assign samples to each rank at the very beginning and maintain that order? If yes, then manual shuffling should not affect DS operation. |
st176850 | Hi there,
I am trying to use DistributedDataParallel for multi-GPU use with multiple nn.Modules. But, my programme consists of two nn.Modules: 1. a model (containing model parameters) and 2. a constraint (containing Lagrangian parameters). They depend on the same backward pass, but have their own optimisers, something along the lines of:
model_optimiser = optimiser(model.parameters(), ...)
constraint_optimiser = optimiser(constraint.parameters(), ...)
predictions = model(inputs)
loss_1, loss_2 = calculate_loss(predictions)
total_loss = loss_1 + constraint(loss_2)
total_loss.backward()
model_optimiser.step()
constraint_optimiser.step()
Now, this seems to run okay on one GPU without DistributedDataParallel, but when moving to multiple GPUs with DistributedDataParallel it errors.
I found this answer and tried what is suggested there, something along the lines of:
# in main
mp.spawn(train, nprocs=int(config.n_gpus * config.n_nodes), args=(config,))
...
# in train (the distributed function)
torch.cuda.set_device(device_rank)
os.environ['MASTER_ADDR'] = get_ip()
os.environ['MASTER_PORT'] = str(port_nr)
dist.init_process_group(backend='nccl', init_method='env://', world_size=int(config.n_gpus * config.n_nodes), rank=device_rank)
pg_model = torch.distributed.new_group(list(range(world_size)))
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[device_rank], find_unused_parameters=True, process_group=pg_model)
pg_constraint = torch.distributed.new_group(list(range(world_size)))
constraint = torch.nn.parallel.DistributedDataParallel(constraint, device_ids=[device_rank], find_unused_parameters=True, process_group=pg_constraint)
The error I get is:
File "/home/cbarkhof/code-thesis/NewsVAE/loss_and_optimisation.py", line 401, in assemble_loss
beta_kl = self.manager["beta_KL"]["constraint"](kl_loss)
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 617, in forward
inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 643, in scatter
return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 36, in scatter_kwargs
inputs = scatter(inputs, target_gpus, dim) if inputs else []
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 28, in scatter
res = scatter_map(inputs)
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 15, in scatter_map
return list(zip(*map(scatter_map, obj)))
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 13, in scatter_map
return Scatter.apply(target_gpus, None, dim, obj)
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 92, in forward
outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
File "/home/cbarkhof/.local/lib/python3.6/site-packages/torch/nn/parallel/comm.py", line 186, in scatter
return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: chunk expects at least a 1-dimensional tensor
which errors for the part constraint(loss_2)
loss_2 is indeed zero-dimensional, but that is expected as it is just an average loss term. I guess I have something wrong in my set-up. Can anyone point me in the right direction? That would help me a lot, thanks!
Cheers,
Claartje |
st176851 | Solved by mrshenli in post #2
Hey @ClaartjeBarkhof, can you try wrapping your model and constraint into one nn.Module first and then wrap that module with DistributedDataParallel. Sth like:
class WrapperModule(nn.Module):
def __init__(self, model, constraint):
super().__init__()
self.model = model
self.constraint… |
st176852 | Hey @ClaartjeBarkhof, can you try wrapping your model and constraint into one nn.Module first and then wrap that module with DistributedDataParallel. Sth like:
class WrapperModule(nn.Module):
    def __init__(self, model, constraint):
        super().__init__()
        self.model = model
        self.constraint = constraint

    def forward(self, inputs):
        predictions = self.model(inputs)
        loss_1, loss_2 = calculate_loss(predictions)
        total_loss = loss_1 + self.constraint(loss_2)
        return total_loss

ddp = DistributedDataParallel(
    WrapperModule(model, constraint),
    device_ids=[device_rank],
    find_unused_parameters=True,
)

model_optimiser = optimiser(model.parameters(), ...)
constraint_optimiser = optimiser(constraint.parameters(), ...)

ddp(inputs).backward()
model_optimiser.step()
constraint_optimiser.step()
BTW, I noticed that the code uses two different process groups for the model and the constraint. When using the NCCL backend, this could lead to a deadlock: as of NCCL 2.8, only one communicator can be used per CUDA device at any given time (i.e., it is not thread-safe). |
st176853 | I I need to use Pytorch with open MPI, so I’m having to install it from source. I’m using CentOS 8.
while runnning - python setup.py install i’m running into the following error.
CMake Warning at caffe2/CMakeLists.txt:755 (add_library):
Cannot generate a safe runtime search path for target torch_cpu because
files in some directories may conflict with libraries in implicit
directories:
runtime library [libgomp.so.1] in /usr/lib/gcc/x86_64-redhat-linux/8 may be hidden by files in:
/home/aditya/anaconda3/lib
Some of these libraries may not be found correctly.
-- Generating done
-- Build files have been written to: /home/aditya/pytorch/build
cmake3 --build . --target install --config Release -- -j 1
[2/2038] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Version.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Version.cpp.o
/usr/bin/g++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -isystem third_party/gloo -isystem ../cmake/../third_party/gloo -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem ../third_party/gemmlowp -isystem ../third_party/neon2sse -isystem ../third_party/XNNPACK/include -I../cmake/../third_party/benchmark/include -isystem ../third_party -isystem ../cmake/../third_party/eigen -isystem /home/aditya/anaconda3/include/python3.8 -isystem /home/aditya/anaconda3/lib/python3.8/site-packages/numpy/core/include -isystem ../cmake/../third_party/pybind11/include -isystem /home/aditya/anaconda3/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -isystem ../third_party/ideep/mkl-dnn/include -isystem ../third_party/ideep/include -I../torch/csrc/api -I../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../torch/csrc -I../third_party/miniz-2.0.8 -I../third_party/kineto/libkineto/include -I../third_party/kineto/libkineto/src -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../caffe2/core/nomnigraph/include -isystem include -I../third_party/FXdiv/include -I../c10/.. 
-Ithird_party/ideep/mkl-dnn/include -I../third_party/ideep/mkl-dnn/src/../include -I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../third_party/QNNPACK/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/NNPACK/include -I../third_party/fbgemm/include -I../third_party/fbgemm -I../third_party/fbgemm/third_party/asmjit/src -I../third_party/FP16/include -I../third_party/tensorpipe -Ithird_party/tensorpipe -I../third_party/tensorpipe/third_party/libnop/include -I../third_party/fmt/include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O2 -DNDEBUG -DNDEBUG -fPIC -DCAFFE2_USE_GLOO -DHAVE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -DASMJIT_STATIC -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Version.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Version.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Version.cpp.o -c ../aten/src/ATen/Version.cpp
../aten/src/ATen/Version.cpp: In function ‘std::__cxx11::string at::get_mkldnn_version()’:
../aten/src/ATen/Version.cpp:45:13: error: ‘mkldnn_version_t’ does not name a type; did you mean ‘dnnl_version_t’?
const mkldnn_version_t* ver = mkldnn_version();
^~~~~~~~~~~~~~~~
dnnl_version_t
../aten/src/ATen/Version.cpp:46:37: error: ‘ver’ was not declared in this scope
ss << "Intel(R) MKL-DNN v" << ver->major << "." << ver->minor << "." << ver->patch
^~~
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "setup.py", line 864, in <module>
build_deps()
File "setup.py", line 354, in build_deps
build_caffe2(version=version,
File "/home/aditya/pytorch/tools/build_pytorch_libs.py", line 58, in build_caffe2
cmake.build(my_env)
File "/home/aditya/pytorch/tools/setup_helpers/cmake.py", line 345, in build
self.run(build_args, my_env)
File "/home/aditya/pytorch/tools/setup_helpers/cmake.py", line 140, in run
check_call(command, cwd=self.build_dir, env=env)
File "/home/aditya/anaconda3/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake3', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '1']' returned non-zero exit status 1. |
st176854 | This seems to be a general build issue. cc @seemethere
@ptrblck, have you seen a similar error before? If not, I can create an issue on GitHub to report this failure. |
st176855 | No, I haven’t seen this MKL issue before.
Based on the code usage, it seems that either MKL might not be available while AT_MKL_ENABLED() is set (lines of code), or there is some kind of mix between MKL and oneAPI on the system.
@bogoman are you seeing the same issue after a python setup.py clean and a new git submodule update --init --recursive? |
st176856 | Hey!
I uninstalled conda mkl, and installed the pip libraries. It worked just fine. |
st176857 | Hi,
I've been trying to train a GNN with PyTorch. My sample graphs can have 1-8 nodes.
Unfortunately, the computation graph is too large to fit inside the resources I have. I split up the computation for each node in the graph network using a for loop that computes/propagates the messages, calculates the loss and calls backward. Inside this loop, I call a couple of functions from my model, not forward(). This lets me fit the whole thing inside a Titan with 12 GB of memory, but I have two GPUs and the second one has remained unused.
I couldn't use DataParallel or DistributedDataParallel because the wrapped models only parallelize the forward function. I could technically fit everything in a single forward function and call it multiple times instead of once, but sample graphs do not have the same number of nodes. For example, I can't call losses.backward() 3 times in one process and 5 times in another.
Is there any other way for me to utilize both GPUs?
I appreciate any tips or advice. |
st176858 | Solved by mrshenli in post #6
here is the link: DistributedDataParallel — PyTorch master documentation
no_sync basically disables DDP comm within the context. |
st176859 | Hey @hos-b does model parallelism work in your case? i.e., split your model into two shards and put each shard into one GPU.
If the two GPUs are on the same machine, here is a single-machine model parallel tutorial:
Single-Machine Model Parallel Best Practices — PyTorch Tutorials 1.8.0 documentation
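As a rough illustration of the single-machine variant (this is not the tutorial code; the layer sizes and device ids are made up), a model can be split into two shards on different GPUs, with the activations moved between them in forward:
import torch
import torch.nn as nn

class TwoShardNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shard1 = nn.Sequential(nn.Linear(64, 128), nn.ReLU()).to("cuda:0")
        self.shard2 = nn.Linear(128, 10).to("cuda:1")

    def forward(self, x):
        x = self.shard1(x.to("cuda:0"))
        return self.shard2(x.to("cuda:1"))  # move activations to the second GPU

model = TwoShardNet()
out = model(torch.randn(8, 64))
out.sum().backward()  # autograd handles the cross-device backward pass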
If the two GPUs are on different machines, here are some tutorials on RPC:
RPC: Distributed RPC Framework — PyTorch master documentation 3
tutorials: PyTorch Distributed Overview — PyTorch Tutorials 1.8.0 documentation |
st176860 | Hello @mrshenli. Thank you for your reply.
Model parallelism will probably also not work because even though the torch calls are async, I have to call backward() at the end of the for loop, creating a blocking bottleneck. The computation graph would otherwise get too large for a single GPU.
Your reply made me realize I had silently forgotten about DataParallel. Since my batches contain whole graphs, I somehow forgot I could further divide the graph into batches of size 2. I had stopped thinking about DataParallel when I found out about the advantages of DistributedDataParallel. |
st176861 | hos-b:
Model parallelism will probably also not work because even though the torch calls are async, I have to call backward() at the end of the for loop, creating a blocking bottleneck. The computation graph would otherwise get too large for a single GPU.
I see. Have you tried checkpointing some part of the autograd graph? This would drop the activations and autograd graph in the forward pass and recompute them in the backward pass. It's like paying more compute to reduce the memory footprint.
https://pytorch.org/docs/stable/checkpoint.html
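A minimal sketch of what that looks like with torch.utils.checkpoint (the layer sizes here are arbitrary, and newer releases may also expect a use_reentrant argument):
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
        self.head = nn.Linear(256, 1)

    def forward(self, x):
        # Activations inside self.block are dropped during forward and
        # recomputed during backward, trading compute for memory.
        x = checkpoint(self.block, x)
        return self.head(x)

net = Net()
x = torch.randn(4, 256, requires_grad=True)
net(x).sum().backward()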
I have to call backward() at the end of the for loop
Curious, is this because you need to compute the global loss across all iterations, so that you cannot run multiple fwd-bwd-fwd-bwd to accumulate grads (DDP can support this with no_sync() context manager) and then run optimizer.step() once to update parameters? |
st176862 | I did not know about checkpointing. It looks promising. Thank you
mrshenli:
Curious, is this because you need to compute the global loss across all iterations, so that you cannot run multiple fwd-bwd-fwd-bwd to accumulate grads (DDP can support this with no_sync() context manager) and then run optimizer.step() once to update parameters?
Sorry, I didn't explain my training well. Each graph has up to 8 nodes. A full forward pass for a single node plus the detached messages from the other nodes takes about 8 GB of memory. I go through the nodes using a for loop; at the end of each iteration I compute the loss and call backward(), so the gradients get accumulated across iterations. I do a single optimizer.step() after the loop to update the parameters. So technically I do fwd-bwd x [1-8], followed by a single step().
From my understanding of DDP, the gradient reduction across multiple processes starts with the backward() call. That's why I thought it wouldn't work if different processes called it a different number of times. I did not know about no_sync() either; I'll have to read about it. Thanks.
st176863 | here is the link: DistributedDataParallel — PyTorch master documentation 5
no_sync basically disables DDP comm within the context. |
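A sketch of the pattern, assuming ddp_model is the DDP-wrapped module and compute_loss is a placeholder for the per-node forward and loss computation; the number of nodes can differ across processes:
optimizer.zero_grad()
with ddp_model.no_sync():
    for node_input in nodes[:-1]:
        compute_loss(ddp_model, node_input).backward()  # gradients accumulate locally only
# The last backward runs outside no_sync(), so DDP all-reduces the
# accumulated gradients exactly once per process.
compute_loss(ddp_model, nodes[-1]).backward()
optimizer.step()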
st176864 | Thanks. It took me some time to adapt the code to distributed training but no_sync() did the trick. |
st176865 | HI, Thank you for your interest in my writing.
I’ve seen a lot of posts on the pytorch forum but couldn’t find a solution to what I was wondering about.
Can 3 dataloaders be applied to DistributedDataParallel??
The process I think is as follows.
GPU1 GPU2 GPU3
model model model
dataloader1 dataloader2 dataloader3
I want to backpropagate by averaging the loss values from each dataloader.
Are there any possible methods or examples?
Each data loader has a different image size and batch size, so ConcatDataset cannot be used.
If there is a way, please comment.
Thank you. |
st176866 | Solved by mrshenli in post #6
Yep, here is a starter example: Distributed Data Parallel — PyTorch 1.8.0 documentation
You can replace the torch.randn(20, 10).to(rank) random input tensor by input and labels from a dataloader example
Here is a complete list of DDP tutorials: PyTorch Distributed Overview — PyTorch Tutorials 1.8.… |
st176867 | The process I think is as follows.
GPU1 GPU2 GPU3
model model model
dataloader1 dataloader2 dataloader3
This is possible and is actually the recommended use case for DDP. For dataloader, you can use the DistributedSampler 1 and set num_replicas properly to match DDP world size.
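A rough sketch of the per-rank loader setup, assuming the process group is already initialised and that dataset and num_epochs are your own objects:
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler

sampler = DistributedSampler(
    dataset,
    num_replicas=dist.get_world_size(),
    rank=dist.get_rank(),
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=16, sampler=sampler)

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # reshuffle consistently across ranks each epoch
    for images, labels in loader:
        ...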
I want to backpropagate by averaging the loss values from each dataloader.
This is a bit tricky. With DDP, outputs and losses are local to each process. DDP synchronizes models by synchronizing gradients. More details are available in this PyTorch Distributed paper.
Each data loader has a different image size and batch size, so ConcatDataset cannot be used.
I see. Independent dataloaders could still work with DDP, but you will need to be aware of the implications on model accuracy. Is this the reason why you have to average loss instead of gradients? |
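If the averaged loss value is only needed for monitoring, one hedged sketch is to let DDP average the gradients for the parameter update (by linearity this matches averaging the per-rank losses, up to per-rank weighting) and all_reduce a detached copy of the loss purely for reporting:
import torch.distributed as dist

optimizer.zero_grad()
loss = criterion(ddp_model(images), labels)
loss.backward()  # DDP averages gradients across the three processes
optimizer.step()

loss_avg = loss.detach().clone()
dist.all_reduce(loss_avg, op=dist.ReduceOp.SUM)
loss_avg /= dist.get_world_size()  # average loss across ranks, for logging only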