st177268 | I ran into an issue related to the DataParallel class that I’m not sure how to solve. Here’s a minimal example:
import torch
import torch.nn as nn
from torch.nn import functional as F
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class Model(nn.Module):
    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    '''
    def forward(self, input, param):
        x = self.fc(input)
        return x
    '''

    def forward(self, input, param):
        w, b = param
        print("Shape of w: ", w.shape)
        output = F.linear(input, w, b)
        return output
input = torch.randn(10, 3).cuda()
w, b = torch.randn(4, 3).cuda(), torch.randn(4).cuda()
model = Model(3, 4)
model.to(device)
# Single GPU
output = model(input, [w, b])
print("Outside: input size", input.size(), "output_size", output.size())
# DataParallel
model = nn.DataParallel(model)
output = model(input, [w, b])
print("Outside: input size", input.size(), "output_size", output.size())
Let’s say that somehow I need to redefine the forward method with nn.functional calls, so every forward propagation comes with newly defined input parameters. The problem is that when I try to parallelize the model, the DataParallel class seems to split up not only the data but also my input parameters, causing the output sizes above to be different.
This is what I get with 2 GPUs available:
Shape of w: torch.Size([4, 3])
Outside: input size torch.Size([10, 3]) output_size torch.Size([10, 4])
Shape of w: torch.Size([2, 3])
Shape of w: torch.Size([2, 3])
Outside: input size torch.Size([10, 3]) output_size torch.Size([10, 2])
How can I fix this?
I’m doing this for a MAML implementation and this seems to be the only way to do it, as load_state_dict() isn’t able to preserve previous computational graphs. |
st177269 | DataParallel will scatter/split the input and the args [w, b] across the devices, but replicate the model.parameters().
Why do you need to pass [w, b] as args? model.parameters() will include them, right? |
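A possible workaround (a hedged sketch, not taken from the thread, reusing input, w and b from the snippet above): if I read torch/nn/parallel/scatter_gather.py correctly, scatter recurses into tensors, lists, tuples and dicts but replicates any other Python object as-is, so wrapping the per-step parameters in a plain container (the ParamHolder name is made up) keeps them from being chunked; each replica then moves them onto its own device.
class ParamHolder:
    # plain object, so DataParallel's scatter passes the same reference to every replica
    def __init__(self, w, b):
        self.w, self.b = w, b

class FunctionalModel(nn.Module):
    def forward(self, input, holder):
        # .to() is differentiable, so gradients still flow back to holder.w / holder.b
        w = holder.w.to(input.device)
        b = holder.b.to(input.device)
        return F.linear(input, w, b)

model = nn.DataParallel(FunctionalModel().cuda())
output = model(input, ParamHolder(w, b))  # only `input` is chunked across GPUs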
st177270 | Thank you for your response. In the MAML setting, we need two sets of parameters to be preserved, so I use one of them as model.parameters() and the other loaded with nn.functional operations.
Because eventually the loss comes from forward propagations through both parameters, the computational graphs need to be preserved as well, so it can’t be done with load_state_dict operations, or storing them in separate models.
This is related to the discussion here: Gradient computation in meta-learning algorithms 2 |
st177271 | I got this error when initializing torch.distributed.
File "run_classifier_gcn.py", line 393, in <module>
main()
File "run_classifier_gcn.py", line 309, in main
rank=args.local_rank, world_size=args.world_size)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 422, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/home/xdwang/ml4h/lib/python3.7/site-packages/torch/distributed/rendezvous.py", line 119, in _tcp_rendezvous_handler
raise _error("rank parameter missing")
ValueError: Error initializing torch.distributed using tcp:// rendezvous: rank parameter missing
However, I have actually added the rank parameter at initialization:
dist.init_process_group(backend=args.dist_backend, init_method='tcp://{}:{}'.format(ip_address, args.port),
rank=args.local_rank, world_size=args.world_size) |
st177272 | xdwang0726:
rank parameter missing
‘rank parameter missing’ ==> do you want to check whether the ‘args.local_rank’ is valid? |
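A quick way to check (a sketch, assuming the script defines --local_rank via argparse and is launched with torch.distributed.launch): print the value right before init_process_group; if it is still a default such as -1, the launcher never set it, and the tcp:// rendezvous will complain that the rank is missing.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1)  # expected to be filled in by the launcher
args = parser.parse_args()

print('local_rank =', args.local_rank)  # should be in [0, world_size), never -1
assert args.local_rank >= 0, 'local_rank was not set; check how the script is launched'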
st177273 | I’m building an NLP application with a dataloader that builds batches out of sequential blocks of text in a file. I have been using an IterableDataset since my text file won’t fit into memory. However, when I use it with DistributedDataParallel, the dataloader is replicated across processes and each GPU ends up with the same batch of data. How can I give each GPU a different batch of data to take advantage of distributed training?
Note: My dataloader can load ~300 batches/second on a single GPU and each GPU takes ~2 seconds to process a batch, so dataloader speed should not be a significant limiting factor even if batches were sent serially to different GPUs. |
st177274 | How did you verify that all the gpus are using the same batch. Thats not how DDP works, it will take different chunks automatically.
https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html 127
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged. |
st177275 | I met a similar problem recently, and I think the batches should be the same across different GPUs according to the source code.
If you look at the DistributedSampler class which we use in DDP, the chunking is done by that class. However, if you look at the source code of DataLoader, the sampler does not affect the data-fetching behavior of iterable datasets.
see line 34 in https://github.com/pytorch/pytorch/blob/8fd9fe93be08c9f0ab81507081fac1387a4aca56/torch/utils/data/_utils/fetch.py#L18 |
st177276 | @bsridatta I verified that my data was being replicated across GPUs by actually printing the batches out. I’m not sure, but this problem may be a product of using pytorch-lightning, which makes a copy of the dataloader for each GPU.
In any case, I was able to fix the problem by creating an array of pointers to the start of each training example in my file using an approach similar to the one used here 79. This allowed me to quickly sample random training contexts from my text file without needing to read it into memory. |
st177277 | Hello @kartch, thanks a lot for explaining the workaround. You may be right about pytorch-lightning; I had a few crazy issues, some of the drawbacks of abstraction, I guess. |
st177278 | @VitalyFedyunin @SimonW I’m wondering if we officially support using DistributedSampler with IterableDataset? |
st177279 | In your dataset class, you can take in a shard ID to shard the dataset properly. Then using the distributed training rank as the shard ID should work. |
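A minimal sketch of that idea (not from the thread; the ShardedLineDataset name is made up): each DDP process reads its rank and world size and only yields every num_shards-th line, also accounting for DataLoader workers inside each process.
import torch.distributed as dist
from torch.utils.data import IterableDataset, get_worker_info

class ShardedLineDataset(IterableDataset):
    def __init__(self, path):
        self.path = path
        self.rank = dist.get_rank() if dist.is_initialized() else 0
        self.world_size = dist.get_world_size() if dist.is_initialized() else 1

    def __iter__(self):
        info = get_worker_info()
        workers = info.num_workers if info else 1
        worker_id = info.id if info else 0
        shard = self.rank * workers + worker_id
        num_shards = self.world_size * workers
        with open(self.path) as f:
            for i, line in enumerate(f):
                if i % num_shards == shard:  # every process/worker pair sees a disjoint slice
                    yield line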
st177280 | I had a similar use case and ended up implementing an IterableDataset that handles both multiprocessing via DataLoader and distributed training.
I shared my code here 245.
But I have to say that I turned away from that approach in the end, as it is relatively inflexible and turned the IterableDataset simply into an indexable one. |
st177281 | I have some questions on Data Parallel broadcasting. Does it broadcast layer by layer, or gather all parameters of the whole model? I can find some C++ code for it in “/torch/csrc/cuda/comm.cpp” (screenshots of the source omitted).
I have two GPUs, one called GPU0 which is the main GPU and another is GPU1.
I have added some logs in the Python files (torch/nn/parallel/comm.py). The results show that it broadcasts layer by layer.
If it broadcasts layer by layer, another question is: does GPU1 receive the next layer’s parameters from GPU0 and perform computation at the same time?
Thanks for your attention and answers ! |
st177282 | looking at ‘torch/nn/parallel/replicate.py’, computation starts after all layers are coalesced and replicated on all devices |
st177283 | Oh! Thanks a lot! I’m clear now. Thank you very much!!
The “params” in the function “replicate” are the layers of my model, and the “buffers” are the buffers, but I still don’t know what “modules” means in the function “replicate” of ‘torch/nn/parallel/replicate.py’. Can you help me? |
st177284 | The number of “modules” is only one more than the number of “params”, and is the “for key, param in module._parameters.items():” loop really the place where every layer is replicated? |
st177285 | Hi all! I think “DistributedDataParallel” automatically averages the gradients when calling “loss.backward()”. But is it possible to first compute the local gradients of the parameters, then do some modification to the local gradients, and finally average the gradients among the workers?
Thanks! |
st177286 | @anxu tensor.register_hook(customHook) may work for your case; you need to write customHook to modify the grad of the tensor. |
st177287 | Hi Yanli,
I am not sure whether tensor.register_hook will work, but the documentation mentioned that,
Forward and backward hooks defined on module and its submodules won’t be invoked anymore, unless the hooks are initialized in the forward() method.
Besides, I need to first collect the whole gradient and then do some modification. Now I am turning to torch.distributed.all_reduce, but it would be easier if there were a way to do this via DistributedDataParallel. |
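For reference, a bare-bones sketch of the manual all_reduce route (assuming the model is not wrapped in DDP, so nothing else is averaging the gradients; my_modification is a placeholder for whatever local edit is needed, and model/loss/optimizer come from the surrounding training loop):
import torch.distributed as dist

loss.backward()                                    # local gradients only
for p in model.parameters():
    if p.grad is None:
        continue
    p.grad = my_modification(p.grad)               # hypothetical local change
    dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)  # sum across workers
    p.grad /= dist.get_world_size()                # turn the sum into an average
optimizer.step()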
st177288 | Hi, anxu @anxu, the “DistributedDataParallel” automatically averages the gradients when calling “loss.backward()”, but I didn’t find the corresponding code in the PyTorch source. Do you know where it is? |
st177289 | Hi, Yanli @Yanli_Zhao, the “DistributedDataParallel” automatically averages the gradients when calling “loss.backward()”, but I didn’t find the corresponding code in the PyTorch source. Do you know where it is? |
st177290 | DDP averages gradients by all-reducing them across participating processes (see https://pytorch.org/docs/stable/_modules/torch/distributed/distributed_c10d.html#all_reduce 53). Some specific bits that include gradient averaging can be found in the allReduce calls here: https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/reducer.cpp 23 |
st177291 | @Yanli_Zhao’s solution works great. You can register the hook either before or after DDP’ing the model. Though the docs say that hooks are removed, that’s either not actually the case or it doesn’t apply to hooks on the tensors themselves.
Here’s some demo code:
from torch.nn.parallel import DistributedDataParallel as DDP
from torch import nn
import torch
import os
import torch.distributed as dist
import torch.multiprocessing as mp
def setup(rank, world_size):
    """Setup code comes directly from the docs:
    https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
    """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.manual_seed(42)

def cleanup():
    dist.destroy_process_group()

def pre_average(g):
    print(f'Pre-DDP hook ({g.device}): {g[0, 0]}')

def post_average(g):
    print(f'Post-DDP hook ({g.device}): {g[0, 0]}')

def worker(rank, world_size):
    # Set up multiprocessing stuff
    setup(rank, world_size)

    # Create a trivial model
    model = nn.Linear(1, 1, bias=False).to(rank)
    torch.nn.init.constant_(model.weight, 1.)

    # Create some trivial data.
    # Gradients for x = (1, 2) should be (2, 8)
    x = torch.tensor([rank+1]).float().to(rank)

    # Register a hook before and after DDP'ing the model
    model.weight.register_hook(pre_average)
    model = DDP(model, device_ids=[rank])
    model.module.weight.register_hook(post_average)

    # Backprop!
    l = model(x).pow(2).sum()
    l.backward()

    # Check what's left in the gradient tensors
    print(f'Final ({x.device}): {model.module.weight.grad[0, 0]}')
    cleanup()

if __name__ == '__main__':
    world_size = 2
    mp.spawn(worker,
             args=(world_size,),
             nprocs=world_size,
             join=True)
Run from the terminal, this should print
Pre-DDP hook (cuda:0): 2.0
Post-DDP hook (cuda:0): 2.0
Pre-DDP hook (cuda:1): 8.0
Post-DDP hook (cuda:1): 8.0
Final value (cuda:0): 5.0
Final value (cuda:1): 5.0 |
st177292 | Hi @anxu, I did some tests. Tensor.register_hook() does work in DDP. I first wrap my model with DDP, then I modify the grad manually with a custom function in the forward code where I construct my model (screenshot omitted).
Here are the gradients after calling loss.backward(), without and with the modification (screenshots omitted).
You can see the modified gradients are exactly 3 times the unmodified ones, which means .register_hook() works well in DDP. |
st177293 | Hi, there.
I have implemented a Cifar10 classifier using the Data Parallel of PyTorch, and then I changed the program to use Distributed Data Parallel. I was surprised that the program became very slow. Using 8 GPUs (K80) with a batch size of 4096, the Distributed Data Parallel program spends 47 seconds to train a Resnet 34 model for one epoch, while the Data Parallel program took only 32 seconds.
I run the program in a cloud environment with 8 vCPUs and 52 GB of memory, and it does not seem to be a data transfer problem. So, I roughly measured the time spent on each task within the DP and DDP processes. The results are shown below.
DP (timing screenshot omitted)
DDP (timing screenshot omitted)
In the screenshots, the left-most number is the value of the loss and the other numbers are the execution times of each task in seconds. “out” represents the forward pass and “back” the backward pass. As you can see, DDP takes more than twice the computation time of DP for both the forward and the backward pass. I do not understand why this happens.
I suppose that this post discusses the same issue, and it seems that the issue has been addressed.
However, it still happens in my program. The torch version of the program is 1.4.0. Should I update the version to solve the problem? Or should I use Apex Distributed Data Parallel? |
st177294 | TT_YY:
Using 8 GPUs (K80) with a batch size of 4096, the Distributed Data Parallel program spends 47 seconds to train a Resnet 34 model for one epoch, while the Data Parallel program took only 32 seconds.
Does this mean each DDP process is consuming 4096 samples per iteration and the DP process is consuming 4096 * 8 = 32768 samples?
I suppose that this post 25 discusses the same issue, and it seems that the issue has been addressed.
For the post you mentioned, it is only true for BERT models and it has been addressed in PyTorch v1.6.
BTW, how did you initialize DDP? Could you please share a repro? |
st177295 | Thank you for your reply.
Does this mean each DDP process is consuming 4096 samples per iteration and the DP process is consuming 4096 * 8 = 32768 samples?
No, I’m talking about the global batch size, which means a DDP process consumes 512 samples per iteration.
BTW, how did you initialize DDP? Could you please share a repro?
OK, I will try to provide a short version that can reproduce the performance shortly, since the original program is very long because of automation and visualization. Thank you. |
st177296 | I share here two code listings to reproduce the issue. The first one is the DP program, provided for comparison. The second one is the DDP program, which takes longer for the forward and backward passes than DP.
DP
""" Training Resnet34 for Cifar10 by Data Parallel """
from __future__ import print_function
import torch
import torch.nn as nn
import torch.optim as optim
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import sys
import time
import argparse
from models import *
from sync_batchnorm import convert_model, DataParallelWithCallback
def main() :
parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--net', default='res34')
parser.add_argument('--batch_size', default=4096)
parser.add_argument('--optimizer', default="Adam")
parser.add_argument('--epochs', default=2)
parser.add_argument('--n_nodes', default=1)
parser.add_argument('--nr', default=0)
args = parser.parse_args()
if torch.cuda.is_available() :
args.n_gpus = torch.cuda.device_count()
print(args.n_gpus, " GPU(s) available")
print(torch.cuda.get_device_name(0))
else :
print("GPU is NOT available.")
sys.exit()
print("Total batch size = ", args.batch_size)
print("Batch size = ", int(args.batch_size / args.n_gpus), "/ GPU")
print("Optimizer = ", args.optimizer)
train(args)
print()
# Training
def train(args):
epochs = args.epochs
batch_size = args.batch_size # total batch_size.
n_gpus = args.n_gpus
worker = 8
if args.net=='res18':
net = ResNet18()
elif args.net=='res34':
net = ResNet34()
elif args.net=='res50':
net = ResNet50()
elif args.net=='res101':
net = ResNet101()
print("Model = ", net.__class__.__name__)
print()
d_list = list(range(n_gpus))
net = convert_model(net).cuda() # Convert BatchNorm into SyncBatchNorm
net = DataParallelWithCallback(net, device_ids = d_list) # Data Parallel
cudnn.benchmark = True
criterion = nn.CrossEntropyLoss()
if args.optimizer == "Adam" :
optimizer = optim.Adam(net.parameters())
elif args.optimizer == "SGD" :
optimizer = optim.SGD(net.parameters(), lr = 0.1)
transform_list = [
transforms.RandomChoice([
transforms.RandomCrop(32, padding=4),
transforms.RandomResizedCrop(32, scale=(0.7, 1.0), ratio = (1.0, 1.0)),
]),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(degrees = 20),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
transform_train = transforms.Compose(transform_list)
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=worker)
for epoch in range(epochs):
print()
print("epoch : ",epoch + 1, " / ", epochs)
net.train()
""" ------- Training loop -------- """
for batch_idx, (inputs, targets) in enumerate(trainloader):
inputs, targets = inputs.to('cuda'), targets.to('cuda')
message = ""
t0 = time.time()
optimizer.zero_grad()
t1 = time.time()
message += " zero grad: {0:.5f}".format(t1 - t0)
outputs = net(inputs)
t2 = time.time()
message += " out: {0:.5f}".format(t2 - t1)
loss = criterion(outputs, targets)
t3 = time.time()
message += " loss: {0:.5f}".format(t3 - t2)
loss.backward()
t4 = time.time()
message += " back: {0:.5f}".format(t4 - t3)
loss_val = optimizer.step(loss.item) # loss value is given through optimizer.
t5 = time.time()
message += " step: {0:.5f}".format(t5 - t4)
print("{0:.6f}".format(loss_val) + message)
if __name__ == '__main__':
main()
DDP
""" Training Resnet34 for Cifar10 by Distributed Data Parallel """
from __future__ import print_function
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import sys
import os
import time
import argparse
from models import *
from sync_batchnorm import convert_model, DataParallelWithCallback
def main() :
parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--net', default='res34')
parser.add_argument('--batch_size', default=4096)
parser.add_argument('--optimizer', default="Adam")
parser.add_argument('--epochs', default=1)
parser.add_argument('--n_nodes', default=1)
parser.add_argument('--nr', default=0)
args = parser.parse_args()
if torch.cuda.is_available() :
args.n_gpus = torch.cuda.device_count()
print(args.n_gpus, " GPU(s) available")
print(torch.cuda.get_device_name(0))
# for DDP
args.world_size = args.n_gpus * args.n_nodes
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '8888'
else :
print("GPU is NOT available.")
sys.exit()
print("Total batch size = ", args.batch_size)
args.batch_size = int(args.batch_size / args.world_size) # for DDP
print("Batch size = ", args.batch_size, "/ GPU")
print("Optimizer = ", args.optimizer)
""" Distributed Data Parallel (DDP)"""
mp.spawn(train, nprocs=args.n_gpus, args=(args,))
print()
# Training
def train(gpu, args):
rank = args.nr * args.n_gpus + gpu
dist.init_process_group(
backend='nccl',
init_method='env://',
world_size=args.world_size,
rank=rank
)
epochs = args.epochs
batch_size = args.batch_size # batch_size is per GPU size.
torch.manual_seed(0)
if args.net=='res18':
net = ResNet18()
elif args.net=='res34':
net = ResNet34()
elif args.net=='res50':
net = ResNet50()
elif args.net=='res101':
net = ResNet101()
if rank == 0 :
print("Model = ", net.__class__.__name__)
print()
torch.cuda.set_device(gpu)
net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net)
net = net.cuda(gpu)
criterion = nn.CrossEntropyLoss().cuda(gpu)
if args.optimizer == "Adam" :
optimizer = optim.Adam(net.parameters())
elif args.optimizer == "SGD" :
optimizer = optim.SGD(net.parameters(), lr = 0.1)
net = nn.parallel.DistributedDataParallel(net, device_ids=[gpu])
transform_list = [
transforms.RandomChoice([
transforms.RandomCrop(32, padding=4),
transforms.RandomResizedCrop(32, scale=(0.7, 1.0), ratio = (1.0, 1.0)),
]),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(degrees = 20),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
transform_train = transforms.Compose(transform_list)
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
train_sampler = torch.utils.data.distributed.DistributedSampler(
trainset,
num_replicas = args.world_size,
rank = rank
)
trainloader = torch.utils.data.DataLoader(trainset, batch_size = batch_size,
shuffle=False, num_workers=0,
pin_memory = False, sampler=train_sampler)
for epoch in range(epochs):
if rank == 0 :
print()
print("epoch : ",epoch + 1, " / ", epochs)
net.train()
""" ------- Training loop -------- """
for batch_idx, (inputs, targets) in enumerate(trainloader):
inputs = inputs.cuda(non_blocking=True)
targets = targets.cuda(non_blocking=True)
message = ""
t0 = time.time()
optimizer.zero_grad()
t1 = time.time()
message += " zero grad: {0:.5f}".format(t1 - t0)
outputs = net(inputs)
t2 = time.time()
message += " out: {0:.5f}".format(t2 - t1)
loss = criterion(outputs, targets)
t3 = time.time()
message += " loss: {0:.5f}".format(t3 - t2)
loss.backward()
t4 = time.time()
message += " back: {0:.5f}".format(t4 - t3)
loss_val = optimizer.step(loss.item) # loss value is given through optimizer.
t5 = time.time()
message += " step: {0:.5f}".format(t5 - t4)
if rank == 0 :
print("{0:.6f}".format(loss_val) + message)
dist.destroy_process_group()
if __name__ == '__main__':
main()
Please let me know if something is wrong. Thank you. |
st177297 | Hey @TT_YY, at a quick glance, I noticed that you are using time.time() to measure the time consumption. This does not work for CUDA ops, as they return immediately after the op is inserted into the CUDA stream, before it is actually done. You will need to create CUDA events and then use the elapsed_time API. |
st177298 | The code snippet in this comment 14 can serve as an example. Search for torch.cuda.Event. |
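For reference, the basic event-timing pattern looks roughly like this (a generic sketch, not the exact snippet being linked; net, inputs, criterion and targets are the objects from the training script above):
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
outputs = net(inputs)              # any CUDA work to be timed
loss = criterion(outputs, targets)
loss.backward()
end.record()

torch.cuda.synchronize()           # wait until the recorded events have completed
print('elapsed ms:', start.elapsed_time(end))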
st177299 | This does not work for CUDA ops, as they return immediately after the op inserted into the CUDA stream before they are actually done.
Ok, I didn’t know the detail that you explained, but I meant it as a rough estimate, so I thought that was good enough. Nevertheless, the result matches the perceived and actual time spent for one epoch of training, as you can see if you run it. The DDP program takes 47 sec. while DP takes 32 sec. in my environment. |
st177300 | Hey @TT_YY, I took a closer look at the code and noticed that you converted BatchNorm to SyncBatchNorm for DDP, which might be the source of the slowness. If you look at SyncBatchNorm's implementation (see below), it launches its own communication, which is not handled by DDP. This additional comm leads to ~10% slowdown in your program when running on 2 GPUs. When I use BatchNorm instead of SyncBatchNorm, DDP is faster than DP. In general, when comparing DDP and DP speed, we need to make sure that they run the same model.
github.com
pytorch/pytorch/blob/573940f8d71de45356b1e6c851f876a32cb8a0ac/torch/nn/modules/_functions.py#L79 6
self.needs_input_grad[0],
self.needs_input_grad[1],
self.needs_input_grad[2]
)
if self.needs_input_grad[0]:
# synchronizing stats used to calculate input gradient.
# TODO: move div_ into batch_norm_backward_elemt kernel
num_channels = sum_dy.shape[0]
combined = torch.cat([sum_dy, sum_dy_xmu], dim=0)
torch.distributed.all_reduce(
combined, torch.distributed.ReduceOp.SUM, process_group, async_op=False)
sum_dy, sum_dy_xmu = torch.split(combined, num_channels)
divisor = count_tensor.sum()
mean_dy = sum_dy / divisor
mean_dy_xmu = sum_dy_xmu / divisor
# backward pass for gradient calculation
grad_input = torch.batch_norm_backward_elemt(
grad_output,
saved_input,
This is how I measure the latency.
# run one iteration to warm up
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, targets)
loss.backward()
loss_val = optimizer.step(loss.item)
# measure latency of the second iteration
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, targets)
loss.backward()
loss_val = optimizer.step(loss.item)
end.record()
torch.cuda.synchronize()
print(f"world size = {args.world_size}, batch size = {batch_size}, latency = {start.elapsed_time(end)}")
I tried to run the DDP script with the following configs on two GPUs:
Run as is
world size = 2, batch size = 2048, latency = 506.9587707519531
world size = 2, batch size = 2048, latency = 506.40606689453125
Comment out the following line, as SyncBatchNorm has its own way to communicate buffers, which can be slower.
#net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net)
world size = 2, batch size = 2048, latency = 456.42352294921875
world size = 2, batch size = 2048, latency = 457.8104248046875
Made the following edits and set args.n_gpus = 1. So the program runs DataParallel.
#net = torch.nn.SyncBatchNorm.convert_sync_batchnorm(net)
...
#net = nn.parallel.DistributedDataParallel(net, device_ids=[gpu])
net = nn.parallel.DataParallel(net)
world size = 1, batch size = 4096, latency = 496.3483581542969 |
st177301 | Thank you for your detailed analysis.
In general, when comparing DDP and DP speed, we need to make sure that they run the same model.
I have converted BatchNorm into SyncBatchNorm in DP too, as you can find “convert_model()” in the above code list of DP.
As you pointed out, removing convert_model() from the DP program significantly improves the performance (2500 msec/iter -> 1500). However, I could not see such a difference in latency for the DDP program. The improvement observed in my experiment is less than 4%, with a batch size of 4096 and 8 GPUs. I used the same time-measurement method as you did.
I can tolerate the 4% difference if I can make my DDP program faster than the DP one.
DP with convert_model() -------------------------- DP without convert_model() (screenshots omitted)
DDP with convert_sync_batchnorm() --------- DDP without convert_sync_batchnorm() (screenshots omitted)
I use convert_model(), which converts BatchNorm into a SyncBatchNorm for DP, but it is different from the torch version for DDP. The torch.nn.SyncBatchNorm.convert_sync_batchnorm() supports only DDP.
By the way, I wonder why the latency in your experiment is one digit lower than mine. Are you using the same model (resnet34) and CIfar10?
If there is an example program for image classification using DDP, I will be curious to see the latency. I am trying to test my original optimizer with a practical settings. So far, the test comparing it with Adam has been successful in terms of the number of steps to reach a target accuracy. Now I have to improve the wall clock time and trying to find a way to scale the speed with the batch size, like the experiment you have shown. However, DDP is still not working in my program for that purpose now. |
st177302 | @mrshenli,
I have been trying to reproduce your results, but, for some reason, my DDP experiment with two GPUs ends up at about 4000 msec, unlike your result of about 500 msec.
I also tried DP with the same settings and the time was 1100 msec. The experiment indicates that DP is faster than DDP for a batch size of 2048 per GPU.
I used a VGG11 model for the experiment, because a K80 cannot fit 2048 Cifar10 samples with a large model like Resnet34 in its memory. I tried an even smaller model, but DP was still faster than DDP.
Would you please specify the model and data that you used to produce the results in your former post?
Do the GPUs that you used utilize NVLink?
Thank you for your cooperation. |
st177303 | TT_YY:
By the way, I wonder why the latency in your experiment is one digit lower than mine. Are you using the same model (resnet34) and CIfar10?
As I don’t have a models package locally, I replaced it with torchvision.models, and then replaced all ResNet* with resnet*. I am not sure if that’s also what you use.
Would you please specify the model and data that you used to produce the results in your former post?
I used the default model type in your code, so it should be torchvision.models.resnet34() after my replacement, and was using the same data loader.
Do the GPUs that you used utilize NVLink?
Yes, the comm should go through nvlink
GPU0 GPU1 CPU Affinity
GPU0 X NV4 12-23
GPU1 NV4 X 12-23
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.116.00 Driver Version: 418.116.00 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro GP100 Off | 00000000:81:00.0 Off | 0 |
| 26% 31C P0 30W / 235W | 10MiB / 16278MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Quadro GP100 Off | 00000000:82:00.0 Off | 0 |
| 26% 31C P0 30W / 235W | 10MiB / 16278MiB | 0% Default |
+-------------------------------+----------------------+----------------------+ |
st177304 | @mrshenli,
Thank you for sharing your GPU spec. That’s what I should have asked in the first place. The GP100 far exceeds the K80 in almost every aspect of performance. In addition, on my platform, the K80 does not use NVLink. (Maybe it does not support it at all.) I suppose that the spec difference is the source of the difference in DDP performance.
On the other hand, I still don’t understand why DP is faster than DDP with the same GPUs in my environment. My guess is that DP, performing all-reduce directly with a high-spec CPU, can be faster than DDP performing all-reduce on gpu0, which communicates through the PCI bus and on-board memory. But I’m not sure.
Thank you very much. |
st177305 | TT_YY:
My guess is that the DP directly performing all-reduce by a high spec CPU can be faster than the DDP performing all-reduce by gpu0, which communicates through PCI bus and memory on board.
DP uses replicate, scatter, and gather operations, which are basically cudaMemcpy under the hood. I suspect cudaMemcpy can be faster than allreduce operations used by DDP on some hardware. |
st177306 | Hi, I am also facing a similar problem. Were you able to find the root cause and solve the problem? |
st177307 | The default value of the dataloader’s multiprocessing_context seems to be “spawn” in a spawned process on Unix. I get OOM unless I set multiprocessing_context="fork" explicitly.
It doesn’t behave as documentation says:
On Unix, fork() is the default multiprocessing 3 start method. Using fork() , child workers typically can access the dataset and Python argument functions directly through the cloned address space. |
st177308 | Do you have any logs indicating spawn is being used? Also, by any chance are you on MacOS? (Asking because spawn is the default on MacOS). |
st177309 | Running on Ubuntu.
I use torch.multiprocessing.spawn(main, ...) to run DDP training. The dataloaders are initialized in main(). The memory usage without explicitly setting multiprocessing_context exceeds the limit, as does setting multiprocessing_context="spawn". However, setting multiprocessing_context="fork" makes memory usage much smaller, like 25G/63G. The OOM happens at enumerate(dataloader). |
st177310 | I tried running a simple script that mimics your described code, but I am not able to reproduce the issue - this might be due to having different environments, etc. Do you mind creating a GitHub issue in the PyTorch repo and filling out the detailed environment description there (PT version, CUDA, NCCL version, etc.)? That way, we can try to reproduce this issue in an environment similar to yours. Thanks! |
st177311 | I have a script that is set to be deterministic using the following lines:
seed = 0
torch.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
I don’t use any non-deterministic algorithms. I use 5 machines with 8 GPUs each. I use DDP with the NCCL backend, with one process per GPU. I always check the first loss value at the first feedforward to check determinism. With the same script, the very first run gives a different value from every run after it. So for the first run let’s say I get a loss value of 1.52, but any run after that gives me a different loss value that is now always constant. So, for example, I get a loss value of 1.45 in runs 2, 3, 4, 5, etc. I checked all my conda envs and I did not see any changes in the libraries. Is this normal? Is determinism possible with DDP? What do you think could be the cause of the first run giving a different loss value? I have read in this forum that I might need to fix the NCCL rings, but I do not know how!
I have also checked the data stream and it hasn’t changed between runs from the beginning! |
st177312 | There can certainly be additional non-determinism at the NCCL level. Here is a GH thread with some discussion on this: how to avoid the precision loss(float32) caused by the gradient accumulation of Ring Allreduce in the case of ddp · Issue #48576 · pytorch/pytorch · GitHub 40.
Essentially you can try using the NCCL_RINGS environment variable to fix the rings. Depending on your compute setup, if the network topology changes between runs, there may be additional differences in terms of the allreduce configuration used by NCCL, and I would suggest reaching out to the NCCL team to see how it can be made completely deterministic. |
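As a general starting point (a hedged sketch; it does not guarantee bit-wise identical allreduce results across runs, for the reasons given above), the usual per-process determinism knobs look like this:
import os, random
import numpy as np
import torch

seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# torch.use_deterministic_algorithms(True)   # available in newer PyTorch releases

# NCCL-side settings must be exported before the processes start; the exact ring
# specification depends on the cluster topology, so it is left unfilled here.
# os.environ['NCCL_RINGS'] = '...'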
st177313 | I’ll echo what Natalia mentioned on the GH issue - it’s not guaranteed that the collectives (specifically allreduce) in both NCCL and Gloo will produce exactly the same results. In fact, there may be slight variations in different runs with the NCCL backend if there are slight changes in the network, topology, ring configuration, etc.
Both NCCL and Gloo support several different allreduce implementations, which can further be configured extensively using environment variables. These libraries choose an implementation based on the topology and other factors, so it is completely possible that they choose different implementations/configurations which leads to small differences in the allreduce results. |
st177314 | I am writing PyTorch code. In between the torch inference code, I add some peripheral code for my own interest. This code works fine, but it is too slow. The reason is probably the for-loops. So, I need a parallel and fast way of doing this.
It is okay to do this with tensors, numpy, or just Python arrays.
I made a function named ‘selective_max’ to find maximum values in arrays. But the problem is that I don’t want the maximum over the whole array, but over specific candidates designated by a mask array. Let me show the gist of this function (the code itself is below).
input
x [batch_size, dim, num_points, k] : x is the original input, but it becomes [batch_size, num_points, dim, k] via ‘x.permute(0,2,1,3)’.
‘batch_size’ is a well-known notion in the deep learning community. In every mini-batch, there are many points, and a single point is represented by a feature of length ‘dim’. For each feature element, there are ‘k’ potential candidates which are the target of the ‘max’ operation later.
mask [batch_size, num_points, k] : This array is similar to ‘x’ but without ‘dim’. Its elements are either ‘0’ or ‘1’, so I use it as a mask signal, i.e. do the max operation only over values masked with ‘1’.
Please see the code below with this explanation. I use 3 for-loops. Let’s say we target a specific batch and a specific point. For that batch and point, ‘x’ is a [dim, k] array and mask is a [k] array consisting of ‘0’s and ‘1’s. So, I extract the non-zero indices from the [k] array and use them to extract specific elements of ‘x’, dim by dim (‘for k in range(dim)’).
toy example
Let’s say we are in the second for-iteration, so we now have [dim, k] for ‘x’ and [k] for ‘mask’. For this toy example, I presume k=3 and dim=4: x = [[3,2,1],[5,6,4],[9,8,7],[12,11,10]], mask=[0,1,1]. So, the output would be [2,6,8,11], not [3, 6, 9, 12].
previous try
I tried { mask.repeat(0,0,1,0) * (element-wise mul) x } and then the max operation. But ‘0’ might become the max value, because x might have only negative values in some array, so this would give a wrong result.
Thank you in advance.
def selective_max2(x, mask):  # x : [batch_size, dim, num_points, k], mask : [batch_size, num_points, k]
    batch_size = x.size(0)
    dim = x.size(1)
    num_points = x.size(2)
    k = x.size(3)
    device = torch.device('cuda')
    x = x.permute(0, 2, 1, 3)  # : [batch, num_points, dim, k]
    #print('permuted x dimension : ', x.size())
    x = x.detach().cpu().numpy()
    mask = mask.cpu().numpy()
    output = np.zeros((batch_size, num_points, dim))
    for i in range(batch_size):
        for j in range(num_points):
            query = np.nonzero(mask[i][j])  # among mask entries, we get the index of nonzero values.
            for k in range(dim):  # for different k values, we get the max value.
                # query is index of nonzero values. so, using query, we can get the values that we want.
                output[i][j][k] = np.max(x[i][j][k][query])
    output = torch.from_numpy(output).float().to(device=device)
    output = output.permute(0, 2, 1).contiguous()
    return output |
st177315 | Hihi Hoho!
hohohihi:
previous try
I try { mask.repeat(0,0,1,0) *(element-wise mul) x } and do the max operation. But, ‘0’ might the max value, because the x might have minus values in all array. So, this would result in wrong operation.
Rather than use multiplication to set your masked elements to zero
(which, as you note, will be incorrect), use subtraction to set your
masked values to a large negative value.
Please see this “masked argmax” post:
Masked argmax in Pytorch?
Hello Dragon!
This won’t work if tensor t is negative (or, more precisely, if its
largest unmasked element is negative).
I would do this:
large = torch.finfo (t.dtype).max # assumes t is a kind of float
# assume msk has zeros where elements t should be masked out
# and ones where they should be kept
(t - large * (1 - msk) - large * (1 - msk)).argmax()
Best.
K. Frank
Best.
K. Frank |
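Applied to the shapes in the question, a fully vectorized version of selective_max2 could look roughly like this (a sketch, assuming every [batch, point] slice of mask contains at least one 1, as the original loop version also requires):
import torch

def selective_max_vectorized(x, mask):
    # x: [batch_size, dim, num_points, k], mask: [batch_size, num_points, k] with 0/1 entries
    neg_inf = torch.finfo(x.dtype).min
    m = mask.unsqueeze(1).bool()            # [batch, 1, num_points, k], broadcasts over dim
    masked = x.masked_fill(~m, neg_inf)     # masked-out candidates can never win the max
    return masked.max(dim=-1).values        # [batch_size, dim, num_points], stays on the GPU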
st177316 | Hello, Frank. Thanks for the reply. I see your point and it makes sense. I’ll let you know the result. |
st177317 | Hi everyone, I’m using nn.DataParallel to do multi-GPU training. But strangely, I didn’t get any speedup in the forward or backward pass via DataParallel. Is there anything I’m missing when using DataParallel?
My code for profiling forward is as follows
import torch
import torch.nn as nn
import time
DIM = 128
class Generator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.preprocess = nn.Sequential(
nn.Linear(128, 4 * 4 * 4 * DIM),
nn.BatchNorm1d(4 * 4 * 4 * DIM),
nn.ReLU(True),
)
self.main_module = nn.Sequential(
nn.ConvTranspose2d(
4 * DIM, 2 * DIM, kernel_size=4, stride=2, padding=1),
nn.BatchNorm2d(2 * DIM),
nn.ReLU(True),
nn.ConvTranspose2d(2 * DIM, DIM, kernel_size=4,
stride=2, padding=1),
nn.BatchNorm2d(DIM),
nn.ReLU(True),
nn.ConvTranspose2d(DIM, 3, kernel_size=4, stride=2, padding=1),
nn.Tanh(),
)
def forward(self, input):
output = self.preprocess(input)
output = output.view(-1, 4 * DIM, 4, 4)
output = self.main_module(output)
return output.view(-1, 3, 32, 32)
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.main_module = nn.Sequential(
nn.Conv2d(3, DIM, kernel_size=4, stride=2, padding=1),
nn.LeakyReLU(),
# 16x16
nn.Conv2d(DIM, 2 * DIM, kernel_size=4, stride=2, padding=1),
nn.LeakyReLU(),
# 8x8
nn.Conv2d(2 * DIM, 4 * DIM, kernel_size=4, stride=2, padding=1),
nn.LeakyReLU(),
# 4 x 4
)
self.linear = nn.Linear(4 * 4 * 4 * DIM, 1)
def forward(self, input):
output = self.main_module(input)
output = output.view(-1, 4 * 4 * 4 * DIM)
output = self.linear(output)
return output
if __name__ == "__main__":
torch.backends.cudnn.benchmark = True
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
print(torch.__version__)
batch_size = 256
print('====Single GPU test====')
D = Discriminator().to(device)
G = Generator().to(device)
data = (torch.rand((batch_size, 3, 32, 32), device=device) - 0.5) / 0.5
z = torch.randn((batch_size, 128), device=device)
for i in range(2):
torch.cuda.synchronize()
start = time.time()
loss = D(data) - D(G(z))
torch.cuda.synchronize()
end = time.time()
if i != 0:# skip reporting for the first iteration because of cudnn.benchmark
print('Iter: %d; Forward time cost: %.6fs' % (i, end - start))
print('====Two GPUs test====')
D2 = Discriminator().to(device)
G2 = Generator().to(device)
D2 = nn.DataParallel(D2, list(range(2)))
G2 = nn.DataParallel(G2, list(range(2)))
for i in range(2):
torch.cuda.synchronize()
start = time.time()
loss = D(data) - D(G(z))
torch.cuda.synchronize()
end = time.time()
if i != 0: # skip reporting for the first iteration because of cudnn.benchmark
print('Iter: %d; Forward time cost: %.6fs' % (i, end - start))
Output is
cuda:0
1.7.0+cu101
====Single GPU test====
Iter: 1; Forward time cost: 0.013395s
====Two GPUs test====
Iter: 1; Forward time cost: 0.012617s
DataParallel doesn’t seem to speed up the forward pass. |
st177318 | Hey @Hongkai_Zheng, it’s possible that DataParallel’s overhead overshadows its benefits. In the forward function of DataParallel, it will replicate the model, scatter the input, run parallel_apply, and gather the outputs in every iteration. Besides, as DataParallel is multi-thread parallelism, different threads need to compete for the Python GIL. To verify whether this is the case, you can try a larger batch_size to make the per-GPU forward computation more expensive and check if parallelizing the more expensive forward makes the speedup more visible. If tuning batch_size is not a good option for you, I would suggest using DistributedDataParallel. See this overview: PyTorch Distributed Overview — PyTorch Tutorials 1.7.1 documentation |
st177319 | Hi Shen, thanks for your explanation. Indeed, benefits outweigh overheads when batch size goes to 512, which is not a practical option for me though.
Regarding PyTorch DDP, my code currently relies on autograd.grad() to compute Hessian-vector products. But as the DDP docs say, DDP doesn’t work with autograd.grad(). Is there any way to do the Hessian-vector product so that we can utilize PyTorch distributed parallel training? It seems to me that nn.DataParallel is the only parallel technique I can use so far. I’m also wondering how much difference there is between DDP and DataParallel regarding training speedup. |
st177320 | cc autograd expert @albanD for autograd and Hessian vector product questions
I’m also wondering how much difference is there between DDP and DataParallel regarding training speedup.
This depends on things like model sizes, number of GPUs, GPU interconnects, etc. DDP can avoid replicating models in every iteration and avoid GIL contention. These overhead might weigh differently in different applications. |
st177321 | Hi,
You can use a function similar to the following to do the same thing as autograd.grad but with backward (requires a nightly build as of writing).
It will be more expensive than a vanilla autograd.grad but should be fairly similar.
def my_autograd_grad(outputs, inputs, grad_outputs, create_graph=False):
    # Save existing .grad if any
    grads = []
    for i in inputs:
        grads.append(i.grad)
        del i.grad
    # Do the backward
    autograd.backward(outputs, grad_outputs, inputs=inputs, create_graph=create_graph)
    # Get the result
    res_grad = tuple(i.grad for i in inputs)
    # Restore previous gradients
    for i, g in zip(inputs, grads):
        i.grad = g
    return res_grad |
st177322 | Cool! Minor question: why do we need to delete i.grad before backward() and restore it afterwards? Is this where “more expensive” comes from? |
st177323 | Oh, is that because we do a backward() to compute gradients before the Hessian-vector product, and i.grad is kept so that DDP can gather gradients across the devices? |
st177324 | That is because the backward accumulates into the .grad field. But you might already need those for other reasons, so you have to save/restore them. |
st177325 | I have a problem with this code piece
import os
import torch
class Dataset(torch.utils.data.Dataset):
    arg = {'batch_size': 1}

    def __init__(self, arg):
        print('__init__')
        self.arg.update(arg)
        print(self.arg)

    def _worker_init_fn(self, *args):
        print('worker init')
        print(self.arg)

    def get_dataloader(self):
        return torch.utils.data.DataLoader(self, batch_size=None,
                                           num_workers=3,
                                           worker_init_fn=self._worker_init_fn,
                                           pin_memory=True,
                                           multiprocessing_context='spawn')

    def __getitem__(self, idx):
        return 0

    def __len__(self):
        return 5

def main():
    dataloader = Dataset({'batch_size': 2}).get_dataloader()
    for _ in dataloader:
        pass

if __name__ == '__main__':
    main()
Basically I want the workers to have {'batch_size': 2}, but they actually have {'batch_size': 1}. The code prints the following:
__init__
{'batch_size': 2}
worker init
{'batch_size': 1}
worker init
{'batch_size': 1}
worker init
{'batch_size': 1}
How can I make the workers to have the correct batch_size? |
st177326 | chaonan99:
self.arg.update(arg)
Solved. If I add self.arg = self.arg after this line it works. |
st177327 | Set multiprocessing_context to fork works fine on linux.
I guess the reason is that the arg is a class variable (rather than an instance variable) and mutuable. As spawn pickles instances to use in new process, the old class variable is inaccessible in dataloader workers. And it makes sense that multiprocessing_context=fork works fine, for forked processes share some resources. |
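In other words, a version that survives pickling under spawn keeps the merged dict on the instance, e.g. (sketch of the changed __init__ only):
    def __init__(self, arg):
        # build an instance attribute instead of mutating the class-level dict, so the
        # merged value is pickled together with the instance under 'spawn'
        self.arg = {**Dataset.arg, **arg}
        print(self.arg)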
st177328 | Hello everyone,
I am currently implementing my research idea, illustrated in a figure (omitted here).
Based on the information on asynchronous execution, I know I don’t need to worry too much about the “forward” function. However, is it possible that using “torch.cuda.stream” would be faster than not using it? I hope someone can answer my question. Thank you very much.
H
To clarify my question, I show two sample implementations below (incomplete code, just to illustrate the question).
without " torch.cuda.stream"
class PointConvResNet(nn.Module):
    def __init__(self):
        self.spixelnet = SpixelNet()
        self.layer1 = ResNet().layer1
        self.layer2 = self._make_layer(block, 128, layers[1])
        self.layer3 = self._make_layer(block, 256, layers[2])
        self.layer4 = self._make_layer(block, 512, layers[3])
        self.avgpool = nn.AdaptiveAvgPool1d((1,))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def forward(self, images):
        superpixels = self.spixelnet(images)
        x = self.conv1(images)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.layer1(x)
        xy = convert_xy(superpixels, self.imsize[0], self.imsize[1])
        points = convert_points(x, superpixels, self.imsize[0], self.imsize[1])
        xy, points = self._forward_impl(xy, points, self.wsize[0], self.layer2)
        xy, points = self._forward_impl(xy, points, self.wsize[1], self.layer3)
        xy, points = self._forward_impl(xy, points, self.wsize[2], self.layer4)
        x = self.avgpool(points)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x
2 use " torch.cuda.stream"
class PointConvResNet(nn.Module):
    def __init__(self):
        self.spixelnet = SpixelNet()
        self.layer1 = ResNet().layer1
        self.layer2 = self._make_layer(block, 128, layers[1])
        self.layer3 = self._make_layer(block, 256, layers[2])
        self.layer4 = self._make_layer(block, 512, layers[3])
        self.avgpool = nn.AdaptiveAvgPool1d((1,))
        self.fc = nn.Linear(512 * block.expansion, num_classes)
        self.stream1 = torch.cuda.Stream()
        self.stream2 = torch.cuda.Stream()

    def forward(self, images):
        with torch.cuda.stream(stream1):
            superpixels = self.spixelnet(images)
        with torch.cuda.stream(stream2):
            x = self.conv1(images)
            x = self.bn1(x)
            x = self.relu(x)
            x = self.layer1(x)
        xy = convert_xy(superpixels, self.imsize[0], self.imsize[1])
        points = convert_points(x, superpixels, self.imsize[0], self.imsize[1])
        xy, points = self._forward_impl(xy, points, self.wsize[0], self.layer2)
        xy, points = self._forward_impl(xy, points, self.wsize[1], self.layer3)
        xy, points = self._forward_impl(xy, points, self.wsize[2], self.layer4)
        x = self.avgpool(points)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x |
st177329 | weikunhan:
However, is it that possible to use " torch.cuda.stream" could be faster than without using it?
Yep, using multiple streams could be faster, if operators from one stream cannot fully utilize all computing resources on the GPU. To check if it is the case for your application, you can measure the total execution time using either CUDA events elapsed_time or torch.cuda.synchronize() with time.time().
However, in your second forward function, there is a potentially unsafe usage of streams. The line below consumes the value of superpixels on the default stream, but the preceding assignment superpixels = self.spixelnet(images) ran on stream1. In this case, it’s possible that the default stream reads superpixels before stream1 has written it.
convert_xy(superpixels, self.imsize[0], self.imsize[1])
The same thing also applies to x. You can use wait_stream to synchronize streams. |
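Concretely, the synchronization could look something like this (a sketch based on the reply above, keeping the rest of the forward unchanged):
    def forward(self, images):
        with torch.cuda.stream(self.stream1):
            superpixels = self.spixelnet(images)
        with torch.cuda.stream(self.stream2):
            x = self.conv1(images)
            x = self.bn1(x)
            x = self.relu(x)
            x = self.layer1(x)
        # make the default stream wait for both side streams before consuming their outputs
        torch.cuda.current_stream().wait_stream(self.stream1)
        torch.cuda.current_stream().wait_stream(self.stream2)
        xy = convert_xy(superpixels, self.imsize[0], self.imsize[1])
        points = convert_points(x, superpixels, self.imsize[0], self.imsize[1])
        # ... rest as before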
st177330 | mrshenli:
convert_xy(superpixels, self.imsize[0], self.imsize[1])
Thanks for helping. wait_stream is necessary in my case. I can use torch.cuda.synchronize() and timing to check which implementation is better. |
st177331 | I am trying to get an output from two different pre-trained models using the same input in parallel.
I tried threading and multiprocessing; with threading the code gets slower, and with multiprocessing the functions responsible for running the pre-trained models never fire.
The code used is:
from multiprocess import Pool, Process, set_start_method, Queue
from threading import Thread
output1parallel = None
output2parallel = None
def getOutput1(q):
    print("IM HERE 1\n")
    global loader, resnet18, output1parallel
    for i, batch in enumerate(loader):
        currentBatch = batch.cuda()
        resnet18 = resnet18.cuda()
        output1parallel = resnet18(currentBatch).cpu()
        del currentBatch
    q.put('hello')

def getOutput2(q):
    print("IM HERE 2\n")
    global loader, densenet, output, output2parallel
    for i, batch in enumerate(loader):
        currentBatch = batch.cuda()
        densenet = densenet.cuda()
        output2parallel = densenet(currentBatch).cpu()
        del currentBatch
    q.put('hello')

if __name__ == '__main__':
    set_start_method('spawn', force=True)
    densenet.share_memory()
    resnet18.share_memory()
    start = time.time()
    q = Queue()
    p1 = Process(target=getOutput1, args=(q,))
    p2 = Process(target=getOutput2, args=(q,))
    p1.start()
    p2.start()
    print(p1, p1.is_alive())
    print(p2, p2.is_alive())
    p1.join()
    p2.join()
    print("Time for parallel implementation: {}".format(time.time() - start)) |
st177332 | Hey @Mamdouh_Aljoud, please use torch.multiprocessing instead. See examples here: Multiprocessing best practices — PyTorch 1.7.0 documentation 7
Besides, I would recommend first try using multiple CUDA streams in the same process, using one stream for each model. See examples here: CUDA semantics — PyTorch 1.7.0 documentation 5 |
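A rough sketch of the two-streams idea (assuming both models and the input batch already live on the same GPU, and only inference is needed, so gradients are disabled):
s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()

# make the side streams wait for any work that produced `batch` on the default stream
s1.wait_stream(torch.cuda.current_stream())
s2.wait_stream(torch.cuda.current_stream())

with torch.no_grad():
    with torch.cuda.stream(s1):
        out1 = resnet18(batch)
    with torch.cuda.stream(s2):
        out2 = densenet(batch)

# wait for both streams before using out1/out2 on the default stream
torch.cuda.current_stream().wait_stream(s1)
torch.cuda.current_stream().wait_stream(s2)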
st177333 | I am hyperparameter searching and want to utilize all GPUs with DistributedDataParallel. Let’s say all my distributed training code is in main.py. I want to test different hyperparameters in a bash script like this, so that I can train both models at once:
python main.py --param1 45 --param2 20 &
python main.py --param1 33 --param2 41
Should I expect any funny business from DistributedDataParallel training like this? |
st177334 | Assuming you main.py script assigns different MASTER_PORT for different experiments (so that processes in the sample experiment can successfully rendezvous).
It should work with GLOO backend. But if you are using NCCL backend, it could hang, because NCCL requires using one communicator per device at a time. If there are multiple processes operating on the same CUDA device, DDP instances in different processes might launch AllReduce concurrently. If you are lucky and the experiments do not hang, the result should still be correct I think. |
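One way to wire that up inside main.py (a sketch; the --master_port flag is made up for illustration and must be given a different value for each concurrent run):
import os
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--param1', type=int)
parser.add_argument('--param2', type=int)
parser.add_argument('--master_port', default='29500')   # hypothetical extra flag
args = parser.parse_args()

os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = args.master_port
# ... then spawn one process per GPU and call dist.init_process_group as usual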
st177335 | Hi Everyone, first post
I am working on a robotics project where an RL agent (DDPG/SAC/…) interacts with many environments, which are run on multiple processors. The responses from these environments are collected batch-wise in each run of the program’s main loop. In each of these loops, after processing the environments’ responses, agent.train() is called once.
My problem is that if I increase the number of environments, the agent collects more experience generated by actions from a more novice agent than with fewer environments, i.e. the ratio of environment steps per unit of training increases. A remedy would be using more GPUs per unit of training, thereby increasing my sweeps over the replay buffer and bringing that ratio down again. In particular, I want to spawn n workers, each having its own slice of the replay-buffer sample, which then collectively update either my loss value or my gradients (I think this depends on the type of optimizer).
Is it possible to use the DistributedDataParallel (DDP) class with Reinforcement Learning, where we need the multiprocessing workers only for training after interaction with the environments?
Do I need to choose a specific optimizer (e.g. I’ve seen SGD used a lot in conjunction with DDP) or can i choose any Optimizer?
Furthermore, I’m having trouble with the various tutorials online stating that I should use the DistributedSampler class. How can I integrate a normal RL replay buffer with this? Is it possible to update the DistributedSampler memory at runtime, or to use DDP without DistributedSampler?
All the best
Edit: the weight update is not touched by distributed gradients: Comparison Data Parallel Distributed data parallel - #4 by mrshenli |
st177336 | Hey @Thunfisch, I am not familiar with RL, and cannot comment on the optimizer part, but I can share some pointers on using DDP/RPC for RL.
also cc @VitalyFedyunin for DataLoader/Sampler
DDP applications usually have local loss tensors, and only gradients are synchronized during the backward pass. If you need to compute a global loss, you can combine DDP with RPC, using DDP to synchronize gradients and using RPC to calculate global loss and properly propagate gradients back to each DDP instance. Here are pointers:
RPC RL example: Implementing Batch RPC Processing Using Asynchronous Executions — PyTorch Tutorials 1.7.1 documentation 12
Combine DDP with RPC: Combining Distributed DataParallel with Distributed RPC Framework — PyTorch Tutorials 1.7.1 documentation 8 |
st177337 | Hi there,
I have a question regarding how to leverage torch for general tensor operations (e.g., matmul, cdist, etc.) other than deep learning. So I do not have a training process, just a simple calculation.
For instance, I would like to calculate the pairwise distance of two large matrices (100,000 samples, 128 dimensions) with four GPUs (cuda:0,1,2,3). A single GPU does not have enough memory for doing so.
A = torch.randn(100000, 128).cuda()
B = torch.randn(100000, 128).cuda()
pdist = torch.nn.PairwiseDistance(p=2)
pairwise_distance = pdist(A, B)
My questions are:
how to easily split the task to multiple GPUs (just like joblib with native Python)?
how to do the calculation in parallel (I could retrieve the gpu id and split the matrix into multiple fold with a for loop)
does pytorch multiprocessing also handle data split with multiple GPU? I am afraid that is not the case.
Thanks for the help and happy new year! |
st177338 | Hey @yzhao062
how to easily split the task to multiple GPUs (just like joblib with native Python)?
Not sure if this is helpful. The code below is how DataParallel parallelizes work on multiple GPUs using multi-threading. But matmul is more complicated than data parallel. You will need to implement your own parallel version if you would like to parallelize one matmul operation across multiple GPUs.
github.com
pytorch/pytorch/blob/963f7629b591dc9750476faf1513bc7f1fb4d6de/torch/nn/parallel/parallel_apply.py#L23-L88
(excerpt of parallel_apply truncated)
You can also use torch.multiprocessing and use shared_memory 2 to share tensors.
how to do the calculation in parallel (I could retrieve the gpu id and split the matrix into multiple fold with a for loop).
It depends on which operator you need. Some operators like add and abs can be easily parallelized, and you can use the same strategy as used by DataParallel to achieve that. Other operators will be harder, especially if they require cross-device communication/data-sharing during the computation.
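For the row-wise pairwise distance above, which parallelizes trivially, a simple approach is to chunk the rows and send each chunk to a different GPU. A rough sketch (assuming the full tensors fit in host memory; overlap between GPUs comes only from asynchronous kernel launches):
import torch

def multi_gpu_pairwise_distance(A, B, devices=(0, 1, 2, 3)):
    # A, B: CPU tensors of shape (N, D); returns the row-wise L2 distances (N,)
    chunks_a = torch.chunk(A, len(devices), dim=0)
    chunks_b = torch.chunk(B, len(devices), dim=0)
    partial = []
    for dev, a, b in zip(devices, chunks_a, chunks_b):
        a = a.to(f"cuda:{dev}", non_blocking=True)
        b = b.to(f"cuda:{dev}", non_blocking=True)
        # kernels are launched asynchronously, so the GPUs overlap to some extent
        partial.append(torch.nn.functional.pairwise_distance(a, b, p=2))
    return torch.cat([p.cpu() for p in partial])

A = torch.randn(100000, 128)
B = torch.randn(100000, 128)
d = multi_gpu_pairwise_distance(A, B)  # shape: (100000,)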
does pytorch multiprocessing also handle data split with multiple GPU? I am afraid that is not the case.
PyTorch supports splitting a tensor in one process and then sharing each split with a different process via torch.multiprocessing and shared_memory tensors, but the difficult part will be how to implement the multi-device computation itself. |
st177339 | Hello everyone. I am wondering if someone knows of a workaround for the following issue (or if it’s a bug). I have a function defined in a superclass that calls the forward of a custom module. When I call this function in the subclass in a data-parallel setup, the custom module ends up on the wrong device. Is this supposed to happen?
Minimal Example
import torch
import torch.nn as nn
class SuperClass(nn.Module):
def __init__(self):
super(SuperClass, self).__init__()
self.fc2 = nn.Linear(100, 1)
self.mid = nn.Linear(100, 100)
self.f = self.my_layer_func
def my_layer_func(self, x):
return self.mid(x)
class SubClass(SuperClass):
def __init__(self):
super(SubClass, self).__init__()
def forward(self, x):
t = self.f(x)
d = self.fc2(t)
return d
print("Devices", torch.cuda.device_count())
mod = SubClass()
mod = nn.DataParallel(mod)
mod = mod.to(0)
input = torch.randn(2, 100, device=0)
out = mod(input)
Errors
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "<stdin>", line 5, in forward
File "<stdin>", line 8, in my_layer_func
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/dfs/scratch0/lorr1/py3.8/lib/python3.8/site-packages/torch/nn/functional.py", line 1690, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: Expected tensor for 'out' to have the same device as tensor for argument #2 'mat1'; but device 0 does not equal 1 (while checking arguments for addmm)
If I instead change my_layer_func to take mid as an argument and pass it in during the forward of the subclass (i.e., self.f(x, self.mid)), things work fine. I’d rather not have to do that if possible.
Any suggestions? |
st177340 | lorr1:
t = self.f(x)
Question: Can’t you directly call self.my_layer_func(x)? |
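For reference, the suggested change is to drop the bound-method attribute and call the method directly in forward, so it re-binds to each replica. A sketch based on the example above:
import torch
import torch.nn as nn

class SuperClass(nn.Module):
    def __init__(self):
        super(SuperClass, self).__init__()
        self.fc2 = nn.Linear(100, 1)
        self.mid = nn.Linear(100, 100)
        # no `self.f = self.my_layer_func` here: storing the bound method keeps a
        # reference to *this* instance, so DataParallel replicas would still call
        # the original module's layers, which live on the original device

    def my_layer_func(self, x):
        return self.mid(x)

class SubClass(SuperClass):
    def forward(self, x):
        t = self.my_layer_func(x)  # binds to the replica at call time
        return self.fc2(t)

mod = nn.DataParallel(SubClass()).to(0)
out = mod(torch.randn(2, 100, device=0))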
st177341 | Hi, I have an interactive pipe1 -> pipe2 -> NN workflow which is explained here
I want to parallelize this on a distributed-memory system which has 2 GPUs per node.
I want to put one pipe1 -> pipe2 -> NN apparatus per process (rank) and map one rank per GPU.
As a result I will have two (pipe1 -> pipe2 -> NN) pipelines per node.
I am using this example 1
I am sending you my file so you can see everything there, but the error occurs in the following lines of code
210 # For distributed training, wrap the model with torch.nn.parallel.DistributedDataParallel.
211 if args.distributed:
212 model = DDP(model, device_ids=[args.gpu], output_device=args.gpu)
213 if args.verbose:
214 print('Since we are in a distributed setting the model is replicated here in rank {}' .format(args.local_rank))
the output from the run is the following:
please ignore the first three ERRORs, since I guess they are related to containerization issues
Singularity> python3.8 -m torch.distributed.launch --nproc_per_node=2 Contrastive_Learning.py --b 128 -v /projects/neurophon/
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
distributed is True, then rank number 1 is mapped in device number 1
Using device cuda:1 in rank number 1
distributed is True, then rank number 0 is mapped in device number 0
Using device cuda:0 in rank number 0
function_f created from rank 1
function_f created from rank 0
function_g created from rank 1
SimCLR_Module created from rank 1
function_g created from rank 0
SimCLR_Module created from rank 0
pipe1 built by rank number 1 in 1.4155216217041016 seconds
Initialating fixation
pipe2 built by rank number 1 in 0.0034644603729248047 seconds
pipe1 built by rank number 0 in 1.418199062347412 seconds
Initialating fixation
pipe2 built by rank number 0 in 0.003275632858276367 seconds
Traceback (most recent call last):
File "Contrastive_Learning.py", line 294, in <module>
main()
File "Contrastive_Learning.py", line 212, in main
model = DDP(model, device_ids=[args.gpu], output_device=args.gpu)
File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 335, in __init__
self._ddp_init_helper()
File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 441, in _ddp_init_helper
self.reducer = dist.Reducer(
RuntimeError: CUDA error: the launch timed out and was terminated
terminate called after throwing an instance of 'dali::CUDAError'
what(): CUDA runtime API error cudaErrorLaunchTimeout (6):
the launch timed out and was terminated
Traceback (most recent call last):
File "Contrastive_Learning.py", line 294, in <module>
main()
File "Contrastive_Learning.py", line 212, in main
model = DDP(model, device_ids=[args.gpu], output_device=args.gpu)
File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 335, in __init__
self._ddp_init_helper()
File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 441, in _ddp_init_helper
self.reducer = dist.Reducer(
RuntimeError: CUDA error: the launch timed out and was terminated
terminate called after throwing an instance of 'dali::CUDAError'
what(): CUDA runtime API error cudaErrorLaunchTimeout (6):
the launch timed out and was terminated
Traceback (most recent call last):
File "/usr/local/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/site-packages/torch/distributed/launch.py", line 261, in <module>
main()
File "/usr/local/lib/python3.8/site-packages/torch/distributed/launch.py", line 256, in main
raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/usr/local/bin/python3.8', '-u', 'Contrastive_Learning.py', '--local_rank=1', '--b', '128', '-v', '/projects/neurophon/']' died with <Signals.SIGABRT: 6>.
Singularity> exit
Basically the error is on line 212
What am I doing wrong?
Thanks!
Here I provide the complete code
import argparse
import sys
import os
import torch
import torch.optim as optim
import torch.distributed.autograd as dist_autograd
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.optim import DistributedOptimizer
from torch.distributed.rpc import RRef
from time import time
sys.path.append('SimCLR/NVIDIA DALI')
import NVIDIA_DALI_Pipelines as NDP
sys.path.append('SimCLR/ResNet')
import ResNet as rn
sys.path.append('SimCLR/MLP')
import multilayerPerceptron as mlp
sys.path.append('SimCLR')
import SimCLR
def parse():
parser = argparse.ArgumentParser(prog='Contrastive_Learning',
description='This program executes the Contrastive Learning Algorithm using foveated saccades')
parser.add_argument('data', metavar='DIR', default='/projects/neurophon', type=str,
help='path to the MSCOCO dataset')
parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('--epochs', default=90, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
parser.add_argument('-b', '--batch-size', default=256, type=int,
metavar='N', help='mini-batch size per process (default: 256)')
parser.add_argument('--lr', '--learning-rate', default=0.1, type=float,
metavar='LR', help='Initial learning rate. Will be scaled by <global batch size>/256: args.lr = args.lr*float(args.batch_size*args.world_size)/256. A warmup schedule will also be applied over the first 5 epochs.')
parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
help='momentum')
parser.add_argument('--temperature', default=0.05, type=float, metavar='T',
help='SimCLR temperature')
parser.add_argument('--weight-decay', '--wd', default=1e-4, type=float,
metavar='W', help='weight decay (default: 1e-4)')
parser.add_argument('--print-freq', '-p', default=10, type=int,
metavar='N', help='print frequency (default: 10)')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest checkpoint (default: none)')
parser.add_argument('--dali_cpu', action='store_true',
help='Runs CPU based version of DALI pipeline.')
parser.add_argument('--prof', default=-1, type=int,
help='Only run 10 iterations for profiling.')
parser.add_argument('--deterministic', action='store_true')
parser.add_argument("--local_rank", default=0, type=int)
parser.add_argument('-t', '--test', action='store_true',
help='Launch test mode with preset arguments')
parser.add_argument('-v', '--verbose', action='store_true',
help='provides additional details as to what the program is doing')
args = parser.parse_args()
return args
def main():
global args
args = parse()
if not len(args.data):
raise Exception("error: No data set provided")
args.distributed = False
if 'WORLD_SIZE' in os.environ:
args.distributed = int(os.environ['WORLD_SIZE']) > 1
args.gpu = 0
args.world_size = 1
if args.distributed:
args.gpu = args.local_rank
torch.cuda.set_device(args.gpu)
torch.distributed.init_process_group(backend='nccl', init_method='env://')
args.world_size = torch.distributed.get_world_size()
if args.verbose:
print('distributed is True, then rank number {} is mapped in device number {}' .format(args.local_rank, args.gpu))
args.total_batch_size = args.world_size * args.batch_size
# Set the device
device = torch.device('cpu' if args.dali_cpu else 'cuda:' + str(args.gpu))
if args.verbose:
print('Using device {} in rank number {}' .format(device, args.local_rank))
# Set fuction_f
function_f = rn.ResNet.ResNet18()
function_f.to(device)
if args.verbose:
print('function_f created from rank {}' .format(args.local_rank))
# Set function_g
function_g = mlp.MLP(512*4*4, 1024, 128)
function_g.to(device)
if args.verbose:
print('function_g created from rank {}' .format(args.local_rank))
# Set SimCLR model
img_size = (30,30)
model = SimCLR.SimCLR_Module(args.temperature, function_f, function_g, args.batch_size, img_size, device)
model.to(device)
if args.verbose:
print('SimCLR_Module created from rank {}' .format(args.local_rank))
# # Set optimizer
# args.lr = args.lr*float(args.batch_size*args.world_size)/256.
# optimizer = optim.SGD(model.parameters(), args.lr,
# momentum=args.momentum,
# weight_decay=args.weight_decay)
# Optionally resume from a checkpoint
if args.resume:
# Use a local scope to avoid dangling references
def resume():
if os.path.isfile(args.resume):
print("=> loading checkpoint '{}'" .format(args.resume))
checkpoint = torch.load(args.resume, map_location = lambda storage, loc: storage.cuda(args.gpu))
args.start_epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
print("=> loaded checkpoint '{}' (epoch {})"
.format(args.resume, checkpoint['epoch']))
else:
print("=> no checkpoint found at '{}'" .format(args.resume))
resume()
path = args.data
os.environ['DALI_EXTRA_PATH']=path
test_data_root = os.environ['DALI_EXTRA_PATH']
file_root = os.path.join(test_data_root, 'MSCOCO', 'cocoapi', 'images', 'val2014')
annotations_file = os.path.join(test_data_root, 'MSCOCO', 'cocoapi', 'annotations', 'instances_val2014.json')
# This is pipe1, using this we bring image batches from MSCOCO dataset
pipe1 = NDP.COCOReader(batch_size=args.batch_size,
num_threads=args.workers,
device_id=args.local_rank,
file_root=file_root,
annotations_file=annotations_file,
shard_id=args.local_rank,
num_shards=args.world_size,
dali_cpu=args.dali_cpu)
start = time()
pipe1.build()
total_time = time() - start
if args.verbose:
print('pipe1 built by rank number {} in {} seconds' .format(args.local_rank, total_time))
# This is pipe2, which is used to augment the batches brought by pipe1 utilizing foveated saccades
images = NDP.ImageCollector()
fixation = NDP.FixationCommand(args.batch_size)
pipe2 = NDP.FoveatedRetinalProcessor(batch_size=args.batch_size,
num_threads=args.workers,
device_id=args.local_rank,
fixation_information=fixation,
images=images,
dali_cpu=args.dali_cpu)
start = time()
pipe2.build()
total_time = time() - start
if args.verbose:
print('pipe2 built by rank number {} in {} seconds' .format(args.local_rank, total_time))
# For distributed training, wrap the model with torch.nn.parallel.DistributedDataParallel.
if args.distributed:
model = DDP(model, device_ids=[args.gpu], output_device=args.gpu)
if args.verbose:
print('Since we are in a distributed setting the model is replicated here in rank {}' .format(args.local_rank))
if __name__ == '__main__':
main() |
st177342 | Perhaps you are running into this issue: https://forums.developer.nvidia.com/t/xid-8-in-various-cuda-deep-learning-applications-for-nvidia-gtx-1080-ti/66433 29?
It seems like one of the underlying causes of that error is the cuda kernel taking too long due to the GPU being used for other processes (like driving the display), which starves your training process and eventually aborts it. Is that forum post applicable here? |
st177343 | Thank you @osalpekar, I don’t think so. I am runing this on a dedicated node in a cluster. No display, nobody is using such node except me. |
st177344 | Is your code working without DDP and without DALI?
If so, have you tried the use case without one of these packages?
Also, which PyTorch, DALI, CUDA versions and which GPU are you using? |
st177345 | Thank you @ptrblck
My code is working only withour DDT (when I comment out the code below)
210 # For distributed training, wrap the model with torch.nn.parallel.DistributedDataParallel.
211 if args.distributed:
212 model = DDP(model)
213 # model = DDP(model, device_ids=[args.gpu], output_device=args.gpu)
214 if args.verbose:
215 print('Since we are in a distributed setting the model is replicated here in rank {}' .format(args.local_rank))
This is the output without DDP
Singularity> python3.8 -m torch.distributed.launch --nproc_per_node=2 Contrastive_Learning.py --b 128 -v /projects/neurophon/
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
distributed is True, then rank number 1 is mapped in device number 1
Using device cuda:1 in rank number 1
distributed is True, then rank number 0 is mapped in device number 0
Using device cuda:0 in rank number 0
function_f created from rank 0
function_g created from rank 0
SimCLR_Module created from rank 0
function_f created from rank 1
function_g created from rank 1
SimCLR_Module created from rank 1
pipe1 built by rank number 0 in 3.062119245529175 seconds
Initialating fixation
pipe2 built by rank number 0 in 0.0029039382934570312 seconds
pipe1 built by rank number 1 in 2.993931531906128 seconds
Initialating fixation
pipe2 built by rank number 1 in 0.003136157989501953 seconds
Singularity> exit
This is the output when I comment out everything that has to do with DALI and uncomment the DDP section
Singularity> python3.8 -m torch.distributed.launch --nproc_per_node=2 Contrastive_Learning.py --b 128 -v /projects/neurophon/
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
distributed is True, then rank number 1 is mapped in device number 1
Using device cuda:1 in rank number 1
distributed is True, then rank number 0 is mapped in device number 0
Using device cuda:0 in rank number 0
function_f created from rank 0
function_f created from rank 1
function_g created from rank 0
SimCLR_Module created from rank 0
function_g created from rank 1
SimCLR_Module created from rank 1
Traceback (most recent call last):
File "Contrastive_Learning.py", line 295, in <module>
main()
File "Contrastive_Learning.py", line 213, in main
model = DDP(model, device_ids=[args.gpu], output_device=args.gpu)
File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 335, in __init__
self._ddp_init_helper()
File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 441, in _ddp_init_helper
self.reducer = dist.Reducer(
RuntimeError: CUDA error: the launch timed out and was terminated
NCCL error in: /pytorch/torch/lib/c10d/../c10d/NCCLUtils.hpp:69, unhandled cuda error, NCCL version 2.4.8
Traceback (most recent call last):
File "Contrastive_Learning.py", line 295, in <module>
main()
File "Contrastive_Learning.py", line 213, in main
model = DDP(model, device_ids=[args.gpu], output_device=args.gpu)
File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 335, in __init__
self._ddp_init_helper()
File "/usr/local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 441, in _ddp_init_helper
self.reducer = dist.Reducer(
RuntimeError: CUDA error: the launch timed out and was terminated
NCCL error in: /pytorch/torch/lib/c10d/../c10d/NCCLUtils.hpp:69, unhandled cuda error, NCCL version 2.4.8
Traceback (most recent call last):
File "/usr/local/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/site-packages/torch/distributed/launch.py", line 261, in <module>
main()
File "/usr/local/lib/python3.8/site-packages/torch/distributed/launch.py", line 256, in main
raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/usr/local/bin/python3.8', '-u', 'Contrastive_Learning.py', '--local_rank=1', '--b', '128', '-v', '/projects/neurophon/']' died with <Signals.SIGABRT: 6>.
Singularity> exit
Now, regarding versions:
In the container I install the following version of DALI:
# nvidia dali
pip3.8 --no-cache-dir --disable-pip-version-check install --extra-index-url https://developer.download.nvidia.com/compute/redist nvidia-dali-cuda100
Also:
Singularity> cat /etc/centos-release
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
CentOS Linux release 7.8.2003 (Core)
Singularity> python3.8
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
Python 3.8.3 (default, Oct 16 2020, 20:24:57)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.6.0
>>>
Singularity> nvidia-smi
ERROR: ld.so: object '/soft/buildtools/trackdeps/${LIB}/trackdeps.so' from LD_PRELOAD cannot be preloaded: ignored.
Tue Oct 27 12:46:37 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00 Driver Version: 440.64.00 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 On | 00000000:07:00.0 Off | 0 |
| N/A 27C P8 33W / 149W | 70MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla K80 On | 00000000:08:00.0 Off | 0 |
| N/A 21C P8 29W / 149W | 69MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1181 G /usr/bin/Xorg 68MiB |
| 1 1181 G /usr/bin/Xorg 68MiB |
+-----------------------------------------------------------------------------+
Singularity> |
st177346 | @dariodematties To simplify the investigation here, can you check if simple torch CUDA operations work fine (ex: creating tensors on GPUs and performing some basic ops on them)? In addition to this, testing NCCL collective operations like allreduce independently would be a good idea to see if there is something related to NCCL here.
model = DDP(model)
In this case, I’m assuming the model is actually on CPU and that’s why it’s working? |
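For the standalone NCCL allreduce check mentioned above, a minimal script could look like this (a sketch, assuming it is launched with python -m torch.distributed.launch --nproc_per_node=2):
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl", init_method="env://")

# simple CUDA op plus an allreduce; with 2 ranks every element should become 3.0
x = torch.ones(10, device=args.local_rank) * (args.local_rank + 1)
dist.all_reduce(x)
print("rank {}: {}".format(dist.get_rank(), x[0].item()))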
st177347 | Thank you very much @pritamdamania87
I tested it by changing the backend in torch.distributed.init_process_group(backend='nccl', init_method='env://')
from nccl to gloo, and it worked.
As far as I could see here 4, there are several communication operations in gloo which are not available on GPUs.
With gloo, only broadcast and all_reduce are available on GPU. I do not know whether this will affect the performance of the code with DDP in some way.
From the DDP documentation I understand that the Reducer uses the allreduce operation, and then all gradients are averaged in each replicated model in each process.
Yet, I am not sure how using gloo instead of nccl may affect performance
Thanks! |
st177348 | Yet, I am not sure how using gloo instead of nccl may affect performance
NCCL is much more performant than GLOO when dealing with GPU tensors (we’ve seen a difference of 1.5x - 1.8x in some cases). If performance is important for you, you should try to stick with NCCL and get it to work. |
st177349 | Thank you @pritamdamania87!
At least I know where the problem is now and can run some tests before implementing larger models.
I will surely ask the cluster support team, since this seems to be a deeper issue with the packages. |
st177350 | @dariodematties So if we use DALI with distributed computing, we should use gloo? Any update with nccl? |
st177351 | I do not think so @Johnson_Mark. I guess it depends on your container configuration (if you are using containerization) and on the NVIDIA libraries installed on the machine, but I am not sure.
The truth is that I have used nccl with DALI on another machine with another container. |
st177352 | Hi, I find it quite difficult to use DDP to train a model with an additional loss function outside the forward function.
Training Procedure
The model (M, based on ProxylessNAS) has two sets of parameters,
neural network weights W,
architecture parameters (operator weights) A,
The steps to update A are,
randomly sample a sub-network Msub, with parameters Wsub, based on the probability matrix A
loss1 = Msub(data)
loss2 is directly calculated from A, e.g. Latency(A) = 3xA01 + 2xA02
loss = f(loss1, loss2); loss.backward()
update A…
The Problem
I set find_unused_parameters=True and it raises an error during backward propagation.
RuntimeError: Expected to mark a variable ready only once. This error is caused by use of a module parameter outside the `forward` function. The return value of the `forward` function is inspected by the distributed data parallel wrapper to figure out if any of the module's parameters went unused. If this is the case, it knows they won't receive gradients in a backward pass. If any of those parameters are then used outside `forward`, this error condition is triggered. You can disable unused parameter detection by passing the keyword argument `find_unused_parameters=False` to `torch.nn.parallel.DistributedDataParallel`.
The problem is that since only a part of the model (Msub) is used in each iteration, DDP won’t get gradients for the parameters that do not belong to Wsub. If I set find_unused_parameters=False, it will crash in the next forward pass.
Does anyone have any idea to solve the problem? |
st177353 | In this case, I’m assuming that A is a part of the network M, although during backwards, since according to DDP, A went unused, it is marked as ready for reduction, but then we attempt to re-mark A’s parameters when it gets grads in the backwards pass.
Is it possible to create a wrapper network that wraps steps 2 and 3 into a single module and returns both loss1 and loss2/the values needed to compute it? That seems like it would avoid the double mark issue.
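A rough sketch of such a wrapper is below; sample_and_forward and latency are placeholder names for whatever the supernet actually exposes, not a real API:
import torch.nn as nn

class SearchWrapper(nn.Module):
    def __init__(self, supernet):
        super().__init__()
        self.supernet = supernet  # owns both W and the architecture params A

    def forward(self, data, target, criterion):
        logits = self.supernet.sample_and_forward(data)  # M_sub sampled based on A
        loss1 = criterion(logits, target)
        loss2 = self.supernet.latency()                  # computed directly from A
        return loss1, loss2

# after wrapping in DDP:
#   loss1, loss2 = ddp_model(data, target, criterion)
#   loss = f(loss1, loss2); loss.backward()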
Also, the version of PyTorch you are using and a script to reproduce the issue would be valuable. |
st177354 | In this case, I’m assuming that A is a part of the network M, although during backwards, since according to DDP, A went unused, it is marked as ready for reduction, but then we attempt to re-mark A’s parameters when it gets grads in the backwards pass.
Yes, this is the cause. Thanks for the explanation.
Is it possible to create a wrapper network that wraps steps 2 and 3 into a single module and returns both loss1 and loss2/the values needed to compute it? That seems like it would avoid the double mark issue.
That’s a good idea, although it means a lot of work.
I’m using PyTorch 1.4. The code is a little bit messy; I’ll try to make a demo if the above suggestion doesn’t work. |
st177355 | I trained a 3D UNet model with PyTorch 1.1.0 Distributed Data Parallel, and the test result is good. But after I updated PyTorch to v1.7.0, the same code gives a different result. Can anyone help me with that? Is it a syncbatchnorm problem or something else?
Even when I finetune the pretrained model (trained on 1.1.0), both the training and validation losses increase a lot using v1.7.0.
Below is the training epoch loss of the two versions. |
st177356 | Are you only seeing the increase in the losses while using DDP or also while training the model on a single GPU? |
st177357 | I only tested with DDP, because my input data is too large to fit on a single GPU, so I have to use DDP with syncbatchnorm.
P.S. Could it be a CUDA version problem? I used PyTorch 1.7+cu101, however my server is still using CUDA 10. |
st177358 | It’s unclear where the difference is coming from, and I would recommend checking the loss curves on a single GPU first (by reducing the batch size or slimming down the model).
The next steps depend on the outcome of this run (e.g. using DDP again and removing SyncBN etc.). |
st177359 | The input data is so large that it only fits on the GPU with batch size = 1. This is why I have to use syncbatchnorm with DDP to simulate a larger batch size. By the way, groupnorm took much more GPU memory (which is why I didn’t use it). Do you have any idea what I should check first, apart from trying a single GPU? |
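(For reference, "syncbatchnorm with DDP" here means a setup roughly like the sketch below; model and local_rank are placeholders, not the actual training script:)
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# model is the 3D UNet with regular BatchNorm3d layers
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model = model.to(local_rank)
model = DDP(model, device_ids=[local_rank])
# with batch_size=1 per process, SyncBatchNorm makes the BN statistics behave
# as if the effective batch size were the number of processes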
st177360 | Can you give me more details? Like an example?
P.S.
Even when I train on 4 GPUs (simulating bs=4), it still converges much more slowly than before (2 GPUs, simulating bs=2). |
st177361 | I have also met this problem when trying to reproduce GLOW using this code, GitHub - rosinality/glow-pytorch: PyTorch implementation of Glow 9, with torch 1.7+cu101. It works fine with DP. |
st177362 | Hi. I’m currently working with Facebook’s Detectron2 framework based on PyTorch 1.7.0, and I notice the training uses gloo by default, which seems prone to causing the runtime error: connection closed by peer. Now I want to give nccl a try; what should I do to enforce this? |
st177363 | Hi @Hawk,
You can simply pass backend="nccl" to dist.init_process_group to enable training with the NCCL backend (assuming that NCCL is installed on your system and PyTorch is installed with NCCL support). |
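For example (a sketch; rank and world_size come from however you launch the processes, and can be omitted when a launcher sets the corresponding environment variables for env://):
import torch.distributed as dist

dist.init_process_group(
    backend="nccl",        # instead of "gloo"
    init_method="env://",  # or a tcp:// / file:// rendezvous
    world_size=world_size,
    rank=rank,
)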
st177364 | Hawk:
notice the training is using gloo by default
That’s not precise. Detectron2 uses both NCCL and Gloo under the hood. |
st177365 | The program crashed with an initialization error. From the call stack, it seems like it crashed when trying to release some resource. The whole program is fairly large. The basic logic is that it first calls train(), which creates the model and runs the training. Then it calls predict(), which also creates the model and runs the prediction. The program runs distributed across 8 GPUs; each GPU only predicts part of the test files, and then rank=0 concatenates those files.
The crash is in the prediction phase. Only one GPU crashes with this error; all other GPUs finish their predictions fine.
Any idea on the potential issues?
pytorch = 1.7.1
cuda = 10.2
(attached screenshot, 3568×768) |
st177366 | To be clear, you’re not seeing any crash during training, however there is a crash at inference time? Do you have a script to reproduce this issue? |
st177367 | Yes, no issue during training; it crashes only during inference. The code is kind of large and it is hard to extract a repro. One strange thing is that if I skip the training and run only the inference, loading the previously trained model, it runs fine. |