st176868
Thanks for the answer. Sorry, I didn’t understand your reply to “Is this the reason why you have to average loss instead of gradients?” My question doesn’t seem to have been clear. To put it more simply: I want to train one model using multiple dataloaders. All I want to do is update the model using the average of the losses computed from the multiple dataloaders. I just want to use different loss values coming from different image sizes and batch sizes. Please tell me what you need to know about my question.
st176869
I just want to use various loss values using various image size and batch size. If this is all you need, you don’t need to average the loss. You can let each DDP process create its own independent data loader and produce a local loss. Then, during the backward pass, DDP will synchronize the gradients for you.
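A minimal sketch of that suggestion (my own illustration, assuming a single node with two GPUs; the toy dataset, model, and port are placeholders, not from the original post): each rank builds its own loader and computes a local loss, and DDP averages the gradients during backward().

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset

def run(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    # each process creates its own independent data loader
    dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    model = DDP(torch.nn.Linear(10, 1).to(rank), device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for x, y in loader:
        x, y = x.to(rank), y.to(rank)
        loss = torch.nn.functional.mse_loss(model(x), y)  # local loss on this rank
        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces (averages) the gradients here
        optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2)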
st176870
Yep, here is a starter example: Distributed Data Parallel — PyTorch 1.8.0 documentation. You can replace the torch.randn(20, 10).to(rank) random input tensor with inputs and labels from a dataloader example. Here is a complete list of DDP tutorials: PyTorch Distributed Overview — PyTorch Tutorials 1.8.0 documentation. It should work, but it will have different accuracy implications for different applications, depending on your model, data, batch size, loss function, etc. I would suggest taking a look at the DDP paper, or at least the design notes, and verifying whether DDP’s gradient averaging algorithm is OK for your application.
st176871
Hi everyone. I am currently using DDP (NCCL backend) to train a network on a machine with 8 GPUs. I do a validation pass after each epoch, but I don’t want to do the same validation step on all 8 GPUs. So, in order to only use one GPU for validation, I am using torch.distributed.barrier(). But the process seems to hang once it reaches the barrier statement. Here is an example of the training loop:

for epoch in range(opt['epochs']):  #[1]
    model.train()  #[2]
    for batch_i, (imgs, targets) in enumerate(dataloader):  #[3]
        imgs = Variable(imgs.cuda(gpu))
        targets = Variable(targets.cuda(gpu), requires_grad=False)
        loss, outputs = model(imgs, targets)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    scheduler.step()
    if epoch % opt['evaluation_interval'] == 0 and gpu == 0:
        print("\n---- Evaluating Model ----")
        evaluation_metrics = evaluate(model)  #[4]

I have tried to put the barrier statement in four different places (marked in the code as comments), and no matter where I put it, the code hangs once it reaches that point. For cases (1, 2) the code executes well on the first pass, but after validation it hangs. For case (3) the code never reaches that point after the validation pass. For case (4), once the validation is done, it also hangs. I have also tested running the validation on all GPUs without using the barrier, and it does not hang. Does anyone have any idea why this is happening? I read two other posts (1, 2), but I think their problem is not similar to mine. Any help would be very appreciated! Thanks!
st176872
Solved by Manuel_Alejandro_Dia in post #8 As @rvarm1 suggested in the Github issue, the problem is solved by using the local model when running the validation, not the DDP one. So instead of using: evaluation_metrics = evaluate(model) I should use: evaluation_metrics = evaluate(model.module) Thanks!
st176873
Thanks for reporting this issue! If barrier() is indeed not working properly, it seems like this is a bug and it would be great to create an issue over at http://github.com/pytorch/pytorch/issues/ with an example code snippet that reproduces the issue. Although I’m a little confused, while the barrier at any of your points should work, I’m not sure how it helps you use only one GPU for validation? A barrier will just block all processes until all processes have entered the barrier.
st176874
rvarm1: Although I’m a little confused, while the barrier at any of your points should work, I’m not sure how it helps you use only one GPU for validation? A barrier will just block all processes until all processes have entered the barrier. I am also confused about this. My thought process is just that it seems like a waste of power to do the same validation step on all GPUs at the same time. Right now, my validation is coded to be done by a single process (and by consequence a single GPU), so there wouldn’t be any performance gain running it across multiple GPUs. I will try to reproduce the error and post an issue on Monday, since currently I don’t have access to the machine. I guess in the end I will have to just run validation on all GPUs.
st176875
What important information would you recommend I put in the issue, @rvarm1? Thanks!
st176876
It would be best if, in the issue description, you could provide a minimal example script in which you call barrier() as you intend to here and it fails. Thank you!
st176877
This is resolved, please see discussion in Using torch.distributed.barrier() makes the whole code hang · Issue #54059 · pytorch/pytorch · GitHub
st176878
As @rvarm1 suggested in the Github issue, the problem is solved by using the local model when running the validation, not the DDP one. So instead of using: evaluation_metrics = evaluate(model) I should use: evaluation_metrics = evaluate(model.module) Thanks!
st176879
I’m using DDP for training (the DDP wrapper around the model), and when I spawn n jobs I see the processes with nvidia-smi/gpustat, but then (n-1) other GPU processes are also created. I’m guessing these are for communication between gpu0 and the other processes, but I don’t see anything in the documentation or this forum about that. Below is output from gpustat for a 2-GPU job: the two GPU processes using 9763M are the model, and then there is a 565M process on gpu0. The latter process seems to be about the same size for any model. [screenshot: ddp_gpustat] So is this normal, or is it an incorrect setup? Thanks.
st176880
Could you please check the process ids? Each DDP process is supposed to work on only one device. Suppose the expected case is process rank 0 on cuda:0 and process rank 1 on cuda:1. It’s likely that process rank 1 somehow also created a CUDA context on cuda:0; the size of 565M also looks like a CUDA context. If that is indeed the case, a few actions can help avoid the problem:
1. Run torch.cuda.set_device(rank) in each process to properly set the current device before running any CUDA ops.
2. If 1 still does not solve the problem, you can set the CUDA_VISIBLE_DEVICES env var to make sure that process 0 only sees gpu0 and process 1 only sees gpu1.
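A rough sketch of suggestion 1 in a typical mp.spawn setup (illustrative names and port; not code from this thread):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29501")
    # 1) pin this process to its own GPU *before* any CUDA op,
    #    so it never creates a stray context on cuda:0
    torch.cuda.set_device(rank)
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    model = torch.nn.Linear(10, 10).cuda()  # now lands on cuda:<rank>
    ddp_model = DDP(model, device_ids=[rank])
    dist.destroy_process_group()

if __name__ == "__main__":
    # 2) alternatively, export CUDA_VISIBLE_DEVICES before launching so each
    #    process only ever sees the device it is allowed to use
    mp.spawn(worker, args=(2,), nprocs=2)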
st176881
Hi, is there a demo example for using torch.nn.distributed? I am trying to test whether my university’s cluster is working correctly or not.
st176882
These docs give you an overview of the different distributed implementations and link to examples.
st176883
Hi all, is there a way to specify a list of GPUs that should be used on a node? The documentation only shows how to specify the number of GPUs to use: python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE ... This was already asked in this thread but not answered. Cheers!
st176884
Solved by ptrblck in post #2 You can make certain GPUs visible via CUDA_VISIBLE_DEVICES=1,3,7 python -m ..., which would map GPU1, GPU3, and GPU7 to cuda:0, cuda:1, cuda:2 inside the script and execute the workload (DDP in your case) only on these devices.
st176885
You can make certain GPUs visible via CUDA_VISIBLE_DEVICES=1,3,7 python -m ..., which would map GPU1, GPU3, and GPU7 to cuda:0, cuda:1, cuda:2 inside the script and execute the workload (DDP in your case) only on these devices.
st176886
Hi, my system has 8 RTX 2080Ti GPUs, which are Turing architecture, so I have to use ncu instead of nvprof. When I run PyTorch under ncu with a metric, profiling on a single GPU captures exactly the kernels I want. But when I run on multiple GPUs, where ncclAllReduce may be called, ncu cannot profile and stops before the PyTorch ImageNet training starts. Can I ask why it cannot profile ImageNet on multiple GPUs, or can you recommend another profiler? I want to know how to profile with ncu or nvprof. The figure below is a screenshot of the stalled ImageNet run on multiple GPUs. [screenshot omitted] Thanks
st176887
Oh, thanks @ptrblck. Hmm, if I use Nsight, will it show the cache hit rate on multiple GPUs? I also hope collective communication will be profiled.
st176888
I have set up my model with DistributedDataParallel, which is working well (i.e. it runs). The problem is that during early epochs some processes compute infinite/nan losses. Can I simply omit the call to backward() in those cases or will this confuse the synchronization that’s happening under the hood?
st176889
Calls to backward() are independent across successive iterations, but not independent across different processes within the same iteration. To skip backward() arbitrarily, you can use the no_sync context manager: https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html?highlight=no_sync#torch.nn.parallel.DistributedDataParallel.no_sync However, be aware that it must be used on all ranks, otherwise there will be synchronization issues. If you only need to skip backward() a few times early in training, you could also consider having all processes agree on whether to skip backward() or not:

should_skip_backwards = ...  # each process computes this
should_skip_bwd_list = [None for _ in range(nranks)]
torch.distributed.all_gather_object(should_skip_bwd_list, should_skip_backwards)
globally_skip_bwd = any(should_skip_bwd_list)  # all ranks agree
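To make that concrete, here is a rough sketch (my own wiring, not from the thread) of a training step that skips the update whenever any rank sees a non-finite loss, while keeping all ranks in lockstep; ddp_model, optimizer, and criterion are assumed to already exist:

import torch
import torch.distributed as dist

def train_step(ddp_model, optimizer, criterion, inputs, targets, world_size):
    outputs = ddp_model(inputs)
    loss = criterion(outputs, targets)
    # every rank reports whether its local loss is unusable
    skip_local = not torch.isfinite(loss).all().item()
    skip_list = [None for _ in range(world_size)]
    dist.all_gather_object(skip_list, skip_local)
    if any(skip_list):
        # all ranks agree to skip this iteration, so no rank is left
        # waiting in DDP's gradient all-reduce
        optimizer.zero_grad()
        return None
    optimizer.zero_grad()
    loss.backward()  # DDP synchronizes gradients here
    optimizer.step()
    return loss.detach()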
st176890
I am using the DataParallel module to train my network, but I am facing a GPU device error when the MultivariateNormal distribution module is used with my tensors.

# Using DataParallel
prior = MultivariateNormal(torch.zeros(dim).to(device), torch.eye(dim).to(device))
model = Flow(dim, prior, n_block)
model = model.to(device)
if use_cuda and torch.cuda.device_count() > 1:
    model = DataParallel(model, device_ids=[0, 1, 2, 3])

# Calculating log probability of my multivariate distribution
logprob = prior.log_prob(z).view(x.size(0), -1).sum(1)

# Error
File "/data/saandeepaath/flow_based/modules/flows.py", line 95, in forward
    logprob = self.prior.log_prob(z).view(x.size(0), -1).sum(1)
File "/home/saandeepaath/.local/lib/python3.7/site-packages/torch/distributions/multivariate_normal.py", line 207, in log_prob
    diff = value - self.loc
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
st176891
Solved by ptrblck in post #6 Thanks for the update. The workaround is invalid, as you would call the underlying model on the default device and would skip the nn.DataParallel wrapper. The device mismatch is raised, since torch.distributions do not have a to() method and are not registered as modules. Here is a minimal code s…
st176892
Your code snippet is unfortunately not executable and doesn’t use the model at all. Could you add the missing parts so that we could reproduce this issue and debug it, please?
st176893
Certainly. Sorry for missing out the details. Here is my full code. One strange thing is I was executing my code now to make sure I provided the right one and it worked without issues. I am certain the same code threw the error message with CUDA devices the last time I ran. Below classes is for my model. The Flow() class creates the entire model. import torch from torch import nn from sys import exit as e class SimpleNet(nn.Module): def __init__(self, inp, parity): super(SimpleNet, self).__init__() self.net = nn.Sequential( nn.Linear(inp//2, 256), nn.LeakyReLU(True), nn.Linear(256, 256), nn.LeakyReLU(True), nn.Linear(256, inp//2), nn.Sigmoid() ) self.inp = inp self.parity = parity def forward(self, x): z = torch.zeros_like(x) x0, x1 = x[:, :, ::2, ::2], x[:, :, 1::2, 1::2] if self.parity % 2: x0, x1 = x1, x0 z1 = x1 log_s = self.net(x1) t = self.net(x1) s = torch.exp(log_s) z0 = (s * x0) + t if self.parity%2: z0, z1 = z1, z0 z[:, :, ::2, ::2] = z0 z[:, :, 1::2, 1::2] = z1 logdet = torch.sum(torch.log(s), dim = 1) return z, logdet def reverse(self, z): x = torch.zeros_like(z) z0, z1 = z[:, :, ::2, ::2], z[:, :, 1::2, 1::2] if self.parity%2: z0, z1 = z1, z0 x1 = z1 log_s = self.net(z1) t = self.net(z1) s = torch.exp(log_s) x0 = (z0 - t)/s if self.parity%2: x0, x1 = x1, x0 x[:, :, ::2, ::2] = x0 x[:, :, 1::2, 1::2] = x1 return x class Block(nn.Module): def __init__(self, inp, n_blocks): super(Block, self).__init__() parity = 0 self.blocks = nn.ModuleList() for _ in range(n_blocks): self.blocks.append(SimpleNet(inp, parity)) parity += 1 def forward(self, x): logdet = 0 out = x xs = [out] for block in self.blocks: out, det = block(out) logdet += det xs.append(out) return out, logdet def reverse(self, z): out = z for block in self.blocks[::-1]: out = block.reverse(out) return out class Flow(nn.Module): def __init__(self, inp, prior, n_blocks): super(Flow, self).__init__() self.prior = prior self.flow = Block(inp, n_blocks) def forward(self, x): z, logdet = self.flow(x) logprob = self.prior.log_prob(z).view(x.size(0), -1).sum(1) #Error encountered here return z, logdet, logprob def reverse(self, z): x = self.flow.reverse(z) return x def get_sample(self, n): z = self.prior.sample(sample_shape = torch.Size([n])) return self.reverse(z) I define my prior variable and instantiate the model as follows and train it on MNIST dataset def startup(opt, use_cuda): torch.manual_seed(1) device = "cuda" if not opt.no_cuda and torch.cuda.is_available() else "cpu" kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {} inpt = 100 dim = 1 img_size = 28 n_block = 4 epochs = 50 lr = 0.01 wd=1e-3 old_loss = 1e6 best_loss = 0 batch_size = 128 prior = MultivariateNormal(torch.zeros(img_size).to(device), torch.eye(img_size).to(device)) #MNIST transform = transforms.Compose([transforms.ToTensor()]) train_dataset = MNIST(root=opt.root, train=True, transform=transform, \ download=True) train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs) model = Flow(dim, prior, n_block) model = model.to(device) if use_cuda and torch.cuda.device_count()>1: model = DataParallel(model, device_ids=[0, 1, 2, 3]) optimizer = optim.Adam(model.parameters(), lr) scheduler = StepLR(optimizer, step_size=1, gamma=0.7) t0 = time.time() for epoch in range(epochs): model.train() train_data(opt, model, device, train_loader, optimizer, epoch) scheduler.step() print(f"time to complete {epochs} epoch: {time.time() - t0} seconds") Training is done the usual way with some preprocessing def preprocess(x): x = x * 
255 x = torch.floor(x/2**3) x = x/32 - 0.5 return x for b, (x, _) in enumerate(train_loader): optimizer.zero_grad() x = x.to(device) x = preprocess(x) x = x.view(x.size(0), -1) z, logdet, logprob = model.module(x) # Encounters the original error (I have skipped the rest of the steps as the program stops here)
st176894
saandeep_aathreya: One strange thing is I was executing my code now to make sure I provided the right one and it worked without issues. I am certain the same code threw the error message with CUDA devices the last time I ran. This would make it quite impossible to debug. Let me know, if you could come up with a code snippet to reproduce this issue.
st176895
I identified the root cause. The main difference was how I was calling the forward method. I simply changed from z, logdet, logprob = model.forward(x) to z, logdet, logprob = model.module.forward(x) Below is my complete code to reproduce the issue. Line 2 of the forward method of the Flow class causes the error. I am exploring more on how calling forward with module resolves the issue but it would be great if you could help me understand and provide more clarity as it gets a little confusing for me to work with MultiVariateNormal and GPUs. Thank you so much. ERROR File "/flows.py", line 91, in forward logprob = self.prior.log_prob(z).view(x.size(0), -1).sum(1) #Error encountered here File "/home/saandeepaath/.local/lib/python3.7/site-packages/torch/distributions/multivariate_normal.py", line 207, in log_prob diff = value - self.loc RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! flows.py (Model classes) import torch from torch import nn from sys import exit as e class SimpleNet(nn.Module): def __init__(self, inp, parity): super(SimpleNet, self).__init__() self.net = nn.Sequential( nn.Conv2d(inp, 32, 3, 1, 1), nn.Tanh(), nn.Conv2d(32, 64, 3, 1, 1), nn.Tanh(), nn.Conv2d(64, 32, 3, 1, 1), nn.Tanh(), nn.Conv2d(32, inp, 3, 1, 1), nn.Tanh(), ) self.inp = inp self.parity = parity def forward(self, x): z = torch.zeros_like(x) x0, x1 = x[:, :, ::2, ::2], x[:, :, 1::2, 1::2] if self.parity % 2: x0, x1 = x1, x0 z1 = x1 log_s = self.net(x1) t = self.net(x1) s = torch.exp(log_s) z0 = (s * x0) + t if self.parity%2: z0, z1 = z1, z0 z[:, :, ::2, ::2] = z0 z[:, :, 1::2, 1::2] = z1 logdet = torch.sum(torch.log(s), dim = 1) return z, logdet def reverse(self, z): x = torch.zeros_like(z) z0, z1 = z[:, :, ::2, ::2], z[:, :, 1::2, 1::2] if self.parity%2: z0, z1 = z1, z0 x1 = z1 log_s = self.net(z1) t = self.net(z1) s = torch.exp(log_s) x0 = (z0 - t)/s if self.parity%2: x0, x1 = x1, x0 x[:, :, ::2, ::2] = x0 x[:, :, 1::2, 1::2] = x1 return x class Block(nn.Module): def __init__(self, inp, n_blocks): super(Block, self).__init__() parity = 0 self.blocks = nn.ModuleList() for _ in range(n_blocks): self.blocks.append(SimpleNet(inp, parity)) parity += 1 def forward(self, x): logdet = 0 out = x xs = [out] for block in self.blocks: out, det = block(out) logdet += det xs.append(out) return out, logdet def reverse(self, z): out = z for block in self.blocks[::-1]: out = block.reverse(out) return out class Flow(nn.Module): def __init__(self, inp, prior, n_blocks): super(Flow, self).__init__() self.prior = prior self.flow = Block(inp, n_blocks) def forward(self, x): z, logdet = self.flow(x) logprob = self.prior.log_prob(z).view(x.size(0), -1).sum(1) #Error encountered here return z, logdet, logprob def reverse(self, z): x = self.flow.reverse(z) return x def get_sample(self, n): z = self.prior.sample(sample_shape = torch.Size([n])) return self.reverse(z) Define multivariate distribution and instantiate the model import time import torch from torch import optim from torch.distributions import MultivariateNormal from torchvision.datasets import MNIST from torchvision import transforms from torch.utils.data import DataLoader from torch.optim.lr_scheduler import StepLR from torch.nn.parallel import DataParallel def startup(opt, use_cuda): torch.manual_seed(1) device = "cuda" if not opt.no_cuda and torch.cuda.is_available() else "cpu" kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {} inpt = 100 dim = 1 img_size = 28 n_block = 9 epochs = 5 lr = 0.01 
wd=1e-3 old_loss = 1e6 best_loss = 0 batch_size = 128 prior = MultivariateNormal(torch.zeros(img_size).to(device), torch.eye(img_size).to(device)) #MNIST transform = transforms.Compose([transforms.ToTensor()]) train_dataset = MNIST(root=opt.root, train=True, transform=transform, \ download=True) train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs) model = Flow(dim, prior, n_block) model = model.to(device) if use_cuda and torch.cuda.device_count()>1: model = DataParallel(model, device_ids=[0, 1, 2, 3]) optimizer = optim.Adam(model.parameters(), lr) scheduler = StepLR(optimizer, step_size=1, gamma=0.7) t0 = time.time() for epoch in range(epochs): model.train() train_data(opt, model, device, train_loader, optimizer, epoch) scheduler.step() print(f"time to complete {epochs} epoch: {time.time() - t0} seconds") Training step def preprocess(x): x = x * 255 x = torch.floor(x/2**3) x = x/32 - 0.5 return x def train_data(opt, model, device, train_loader, optimizer, epoch): for b, (x, _) in enumerate(train_loader): optimizer.zero_grad() x = x.to(device) x = preprocess(x) z, logdet, logprob = model.forward(x) # Causes error. Change to model.module.forward(x) to resolve (rest of the step skipped as the above line is where my code throws error)
st176896
Thanks for the update. The workaround is invalid, as you would call the underlying model on the default device and skip the nn.DataParallel wrapper. The device mismatch is raised because torch.distributions do not have a to() method and are not registered as modules. Here is a minimal code snippet to reproduce this issue:

class MyModel(nn.Module):
    def __init__(self, prior):
        super(MyModel, self).__init__()
        self.prior = prior

    def forward(self, x):
        y = self.prior.log_prob(x)
        return y

img_size = 1
device = 'cuda:0'
prior = torch.distributions.MultivariateNormal(torch.zeros(img_size).to(device), torch.eye(img_size).to(device))
model = MyModel(prior)
print(model.prior)
model.to('cuda:1')
print(model.prior)

x = torch.randn(1).to('cuda:1')
out = model(x)
print(out)

As you can see, even after calling model.to('cuda:1'), model.prior is still on cuda:0 and the forward pass will raise the same error. It also seems to be a known issue, so feel free to add your use case to it. I wanted to suggest the same workaround from this post, i.e. to register the loc and scale as buffers and recreate the distribution in the forward method.
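A rough sketch of that suggested workaround (my own illustration, with arbitrary buffer names): register the distribution's parameters as buffers so .to() and nn.DataParallel move them, and rebuild the distribution inside forward().

import torch
import torch.nn as nn
from torch.distributions import MultivariateNormal

class FlowWithPrior(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # buffers move with the module (.to(), DataParallel replication)
        self.register_buffer("prior_loc", torch.zeros(dim))
        self.register_buffer("prior_cov", torch.eye(dim))

    def forward(self, z):
        # recreate the distribution on whatever device this replica lives on
        prior = MultivariateNormal(self.prior_loc, self.prior_cov)
        return prior.log_prob(z)

model = FlowWithPrior(28).to("cuda:0")
model = nn.DataParallel(model, device_ids=[0, 1])
z = torch.randn(8, 28).to("cuda:0")
logprob = model(z)  # each replica builds its prior on its own device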
st176897
Thank you for your response. I was able to bypass this using the solution from this post.
st176898
I have a model trained with DataParallel whose checkpoint is saved as below:

model_single = MyModel()
model = nn.DataParallel(model_single)
model = model.to(device)
torch.save(model.state_dict(), checkpoint_path)

If I try to load this model using the code below, I get an error:

model = MyModel()
checkpoint = torch.load(checkpoint_path, map_location="cpu")
model.load_state_dict(checkpoint)  # ERROR HERE

RuntimeError: Error(s) in loading state_dict for Glow: Missing key(s) in state_dict: "blocks.0.flows.0.actnorm.loc"..., "blocks.3.prior.conv.weight", "blocks.3.prior.conv.bias". Unexpected key(s) in state_dict: "module.blocks.0.flows.0.actnorm.loc...,"

My model initialization parameters are the same as they were during training. I have searched for solutions in the PyTorch discussions, which mostly suggest saving the model using model.module.state_dict() instead. Is there any other way I could load the model without having to train it again and save it differently? Please let me know if you require my complete code for better analysis. Thanks in advance.
st176899
Seems like you have an extra ‘module.’ prefix in your saved model: in the example above you are saving the DataParallel model and loading it into MyModel. A similar issue was discussed in the GitHub issue “torch.save() and nn.DataParallel()” (github.com/pytorch/pytorch, opened Jul 5, 2018, closed Jul 9, 2018): “As pointed out in this post, when model is trained using DataParallel(), the saved model state_dict will have prefix "module." before…”
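A common alternative (my own sketch, not something from this thread) to re-wrapping the loaded model in DataParallel — which the reply below does — is to strip the "module." prefix from the checkpoint keys before loading; MyModel and checkpoint_path are assumed from the question above:

import torch

checkpoint = torch.load(checkpoint_path, map_location="cpu")
# drop the "module." prefix that nn.DataParallel added, so the keys
# match a plain (unwrapped) model
cleaned = {k[len("module."):] if k.startswith("module.") else k: v
           for k, v in checkpoint.items()}
model = MyModel()
model.load_state_dict(cleaned)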
st176900
Thanks for your response. I was able to solve the issue by passing the loaded model through DataParallel again:

model_single = MyModel()
# solution
model = nn.DataParallel(model_single)
model = model.to(device)
# end
checkpoint = torch.load(checkpoint_path, map_location="cpu")
model.load_state_dict(checkpoint)  # ERROR HERE
st176901
My RAM usage keeps on increasing after first epoch. RAM remains at 30% around 12GB usage during first epoch of train and validation. But at second epoch it keeps on rising to 100% 62GB and then the process is killed. The entire time GPU memory remains constant. Only RAM increases. It gives the following warning after process is killed: /home/msi_55/Sowmen_2016331055/sowmen_conda_rootenv/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 6 leaked semaphores to clean up at shutdown len(cache)) All solutions talk about detaching tensors from the computation graph. I’ve done that but it still didn’t solve the problem. This is my training code: def train(name, df, patch_size, VAL_FOLD=0, resume=False): encoder = SRM_Classifer(encoder_checkpoint='weights/Changed classifier+COMBO_ALL_FULLSRM+ELA_[08|03_21|22|09].h5', freeze_encoder=True) model = UnetPP(encoder, num_classes=1, sampling=config.sampling, layer='end') SRM_FLAG=1 train_geo_aug = albumentations.Compose( [ albumentations.HorizontalFlip(p=0.5), albumentations.VerticalFlip(p=0.5), albumentations.RandomRotate90(p=0.1), albumentations.ShiftScaleRotate(shift_limit=0.01, scale_limit=0.04, rotate_limit=35, p=0.25), ], additional_targets={'ela':'image'} ) normalize = { "mean": [0.4535408213875562, 0.42862278450748387, 0.41780105499276865], "std": [0.2672804038612597, 0.2550410416463668, 0.29475415579144293], } transforms_normalize = albumentations.Compose( [ albumentations.Normalize(mean=normalize['mean'], std=normalize['std'], always_apply=True, p=1), albumentations.pytorch.transforms.ToTensorV2() ], additional_targets={'ela':'image'} ) # -------------------------------- CREATE DATASET and DATALOADER -------------------------- train_dataset = DATASET( dataframe=df, mode="train", val_fold=VAL_FOLD, test_fold=TEST_FOLD, patch_size=patch_size, resize=256, transforms_normalize=transforms_normalize, geo_augment=train_geo_aug ) train_loader = DataLoader(train_dataset, batch_size=config.train_batch_size, shuffle=True, num_workers=16, pin_memory=True, drop_last=False) valid_dataset = DATASET( dataframe=df, mode="val", val_fold=VAL_FOLD, test_fold=TEST_FOLD, patch_size=patch_size, resize=256, transforms_normalize=transforms_normalize, ) valid_loader = DataLoader(valid_dataset, batch_size=config.valid_batch_size, shuffle=True, num_workers=16, pin_memory=True, drop_last=False) optimizer = optim.Adam(model.parameters(), lr=config.learning_rate, weight_decay=config.weight_decay ) model = nn.DataParallel(model) model.to(device) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( optimizer, patience=config.schedule_patience, mode="min", factor=config.schedule_factor, ) criterion = losses.DiceLoss(mode='binary', log_loss=True, smooth=1e-7) es = EarlyStopping(patience=15, mode="min") start_epoch = 0 for epoch in range(start_epoch, config.epochs): print(f"Epoch = {epoch}/{config.epochs-1}") print("------------------") # if epoch == 2: # model.module.encoder.unfreeze() train_metrics = train_epoch(model, train_loader, optimizer, criterion, epoch, SRM_FLAG) valid_metrics = valid_epoch(model, valid_loader, criterion, epoch) scheduler.step(valid_metrics["valid_loss_segmentation"]) print( f"TRAIN_LOSS = {train_metrics['train_loss_segmentation']}, \ TRAIN_DICE = {train_metrics['train_dice']}, \ TRAIN_JACCARD = {train_metrics['train_jaccard']}," ) print( f"VALID_LOSS = {valid_metrics['valid_loss_segmentation']}, \ VALID_DICE = {valid_metrics['valid_dice']}, \ VALID_JACCARD = {valid_metrics['valid_jaccard']}," 
) es( valid_metrics["valid_loss_segmentation"], model, model_path=os.path.join(OUTPUT_DIR, f"{name}_[{dt_string}].h5"), ) if es.early_stop: print("Early stopping") break def train_epoch(model, train_loader, optimizer, criterion, epoch, SRM_FLAG): model.train() segmentation_loss = AverageMeter() targets = [] outputs = [] for batch in tqdm(train_loader): images = batch["image"].to(device) elas = batch["ela"].to(device) gt = batch["mask"].to(device) optimizer.zero_grad() out_mask = model(images, elas) loss_segmentation = criterion(out_mask, gt) loss_segmentation.backward() optimizer.step() if SRM_FLAG == 1: bayer_mask = torch.zeros(3,3,5,5).cuda() bayer_mask[:,:,5//2, 5//2] = 1 bayer_weight = model.module.encoder.bayer_conv.weight * (1-bayer_mask) bayer_weight = (bayer_weight / torch.sum(bayer_weight, dim=(2,3), keepdim=True)) + 1e-7 bayer_weight -= bayer_mask model.module.encoder.bayer_conv.weight = nn.Parameter(bayer_weight) # ---------------------Batch Loss Update------------------------- segmentation_loss.update(loss_segmentation.detach().item(), train_loader.batch_size) with torch.no_grad(): out_mask = torch.sigmoid(out_mask).squeeze(1) out_mask = out_mask.cpu().detach() gt = gt.cpu().detach() targets.extend(list(gt)) outputs.extend(list(out_mask)) gc.collect() print("~~~~~~~~~~~~~~~~~~~~~~~~~") dice, _ = seg_metrics.dice_coeff(outputs, targets) jaccard, _ = seg_metrics.jaccard_coeff(outputs, targets) print("~~~~~~~~~~~~~~~~~~~~~~~~~") train_metrics = { "train_loss_segmentation": segmentation_loss.avg, "train_dice": dice.item(), "train_jaccard": jaccard.item(), "epoch" : epoch } return train_metrics def valid_epoch(model, valid_loader, criterion, epoch): model.eval() segmentation_loss = AverageMeter() targets = [] outputs = [] example_images = [] image_names = [] with torch.no_grad(): for batch in tqdm(valid_loader): images = batch["image"].to(device) elas = batch["ela"].to(device) gt = batch["mask"].to(device) out_mask = model(images, elas) loss_segmentation = criterion(out_mask, gt) # ---------------------Batch Loss Update------------------------- segmentation_loss.update(loss_segmentation.item(), valid_loader.batch_size) out_mask = torch.sigmoid(out_mask).squeeze(1) out_mask = out_mask.cpu().detach() gt = gt.cpu().detach() targets.extend(list(gt)) outputs.extend(list(out_mask)) print("~~~~~~~~~~~~~~~~~~~~~~~~~") dice, best_dice = seg_metrics.dice_coeff(outputs, targets) jaccard, best_iou = seg_metrics.jaccard_coeff(outputs, targets) print("~~~~~~~~~~~~~~~~~~~~~~~~~") valid_metrics = { "valid_loss_segmentation": segmentation_loss.avg, "valid_dice": dice.item(), "valid_jaccard": jaccard.item(), "epoch" : epoch } return valid_metrics How to solve this error? I’ve detached everything and tried everything. But I don’t understand why 1st epoch is ok but memory increases at 2nd epoch.
st176902
Could you try to narrow down the issue a bit further by removing specific parts from your code and checking if the memory is still increasing? E.g. you could start with the (metric) logging, then the transformations, then using a single worker, etc.
st176903
Okay, found the error. It was happening in these lines:

targets.extend(list(gt))
outputs.extend(list(out_mask))

Since all predictions are stored in RAM, the space gets exhausted. But still, this doesn’t explain why the 1st epoch runs without any problems.
st176904
I found that when I build a model like this:

model = nn.Sequential(*[nn.Linear(2000, 2000).to(rank) for _ in range(20)])
torch.cuda.synchronize()
print_peak_memory("Max memory allocated after creating local model", rank)

# construct DDP model
ddp_model = DDP(model, device_ids=[rank])
print_peak_memory("Max memory allocated after creating DDP", rank)

the memory will definitely double, and this is quite unacceptable when training a large-scale model. Are there any solutions to help with this? PS: the code is from ZeRO 2
st176905
Solved by H-Huang in post #2 Hello! Yes, the memory required in this example will double when using DDP. This is because the world_size is 2 and the purpose of DDP is for data parallelism (same model, multiple data) where the model architecture is copied across multiple machines or GPUs to handle data in parallel. In the case …
st176906
Hello! Yes, the memory required in this example will double when using DDP. This is because the world_size is 2 and the purpose of DDP is for data parallelism (same model, multiple data) where the model architecture is copied across multiple machines or GPUs to handle data in parallel. In the case of a large-scale model that does not fit on a single machine or GPU, you should look into Distributed RPC. RPC allows for model parallelism (split model, same data) where you can split the model across machines and use RPC to communicate between the different layers. The RPC framework also handles autograd and optimizer steps internally.
st176907
I’m using ZeroRedundancyOptimizer in torch 1.8, and I notice that in the step function of ZeRO there is an update_param_groups call before self.optim.step(). I wonder if this function broadcasts the gradients that self.optim uses to calculate the new parameters. I don’t see any docstring explaining where the gradients go. Looking forward to any replies, please~~
st176908
Solved by rvarm1 in post #3 Exactly, regardless of the optimizer, before step() is called it is guaranteed that all grad communication has taken place and each replica has allreduced gradients computed in the parameter’s .grad fields.
st176909
Ummm, I found the key: DDP has already done the parameter and gradient communication.
st176910
Exactly. Regardless of the optimizer, before step() is called it is guaranteed that all gradient communication has taken place and each replica has the all-reduced gradients in its parameters’ .grad fields.
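As a quick sanity check of this (my own snippet, not from the thread), you could compare a gradient checksum across ranks right after loss.backward() and before optimizer.step():

import torch
import torch.distributed as dist

def check_grads_synced(ddp_model):
    # checksum of all gradients on this rank
    device = f"cuda:{torch.cuda.current_device()}"
    local = torch.tensor(
        [sum(p.grad.norm().item() for p in ddp_model.parameters() if p.grad is not None)],
        device=device,
    )
    gathered = [torch.zeros_like(local) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, local)
    # right after loss.backward(), every rank should print (almost) the same value
    print([t.item() for t in gathered])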
st176911
Hi, a question about the appropriate ordering of defining the optimizer in a DDP + amp scenario. Define the optimizer before DDP (torch.nn.parallel.DistributedDataParallel):

optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
model, optimizer = amp.initialize(model, optimizer, opt_level='O2')
model = DDP(model)

or define the optimizer after DDP:

model = DDP(model)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
model, optimizer = amp.initialize(model, optimizer, opt_level='O2')

The official video classification script adopts the first approach, yet I have seen a blog saying that you should initialize the optimizer over a DDP model when you use DDP (though no amp context was involved in that article). I wonder which one is correct, or whether it doesn’t matter and both are fine. Thanks.
st176912
DDP does not change model.parameters(), and the optimizer works entirely locally, so defining the optimizer before or after wrapping model with DDP should not make a difference.
st176913
I have a dataset that I’m feeding into a NN, but the dataset is larger than the available memory on my machine (training on a single CPU with 24 cores and 32 GB memory). I’m trying to load the data in batches and train in parallel, but for some reason all of the memory for all of the batches is being used at once, and the system crashes (if my total dataset is 40 GB, instead of loading (40GB/nbatches) * nprocesses, it loads 40GB at once). I’ve tried various implementations of DDP, torch.multiprocessing, torch.distributed, and DataParallel, but I haven’t been able to figure it out. The code is essentially:

class Dataset(torch.utils.data.Dataset):
    'Characterizes a dataset for PyTorch'
    def __init__(self):
        'Initialization'
        self.srcDir = os.getcwd() + '/batches'
        self.data = os.listdir(self.srcDir)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        fname = os.path.join(self.srcDir, 'batches_{}.pckl'.format(index))
        with open(fname, 'rb') as f1:
            X = pickle.load(f1)

def parallelFit():
    from torch.nn.parallel import DistributedDataParallel as DDP
    train_dataset = Dataset()
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    batches = torch.utils.DataLoader(dataset=train_dataset, sampler=train_sampler)
    ddp_model = DDP(self.model, device_ids=[])
    optimizer = LBFGS(ddp_model.parameters())
    optimizer.step()
    ### I see the same issue if I don't use DDP and if I don't use DistributedSampler

Then everything is run with:

processes = []
for rank in range(num_processes):
    p = Process(target=calc.parallelFit, args=(partitions.use(rank)))
    p.start()
    processes.append(p)
for p in processes:
    p.join()

I’ve been reading through the forums and the documentation, but I feel like I’m just missing something.
st176914
Based on your __getitem__ it seems like you have pre-batched the data and are now loading the files individually. The pickle.load() is a little suspicious because it might use a lot of memory. Have you tried reading from one source and setting the batch_size parameter on the DataLoader instead? Also, it would be helpful to see how you are using the data loader during the training loop.
st176915
Thanks for the reply! The data is pre-batched because setting up our training data is pretty complex and requires some pre-processing. Mainly there was no easy way to save our input to one file without running out of memory. For smaller training sets I’ve tried saving everything to one file and then loading with different batch sizes in DataLoader. I seem to get the same behavior either way. As for pickle.load(), it does have some excess memory usage, but the excess memory seems to always be a constant factor. After loading the batches as shown above, the training loop is more or less: inside of parallelFit(): ` batches = torch.utils.DataLoader(dataset=train_dataset, sampler=train_sampler) optimizer = LBFGSScipy(ddp_model.parameters(), max_iter=maxEpochs, logger=logger, rank=rank def closure(): loss = 0 energyloss = 0 forceloss = 0 energyRMSE = 0 forceRMSE = 0 batchid = 0 for batch in batches: epoch = 0 #print('allFPs', batch.allElement_fps) predEnergies, predForces = self.model(batch.allElement_fps, batch.dgdx, batch) loss += criterion(predEnergies, predForces, batch.energies, batch.forces, natomsEnergy = batch.natomsPerimageEnergy, natomsForce = batch.natomsPerimageForce) lossgrads = torch.autograd.grad(loss, self.model.parameters(), retain_graph = True, create_graph=False) for p, g in zip(self.model.parameters(), lossgrads): #if batchid == 0: p.grad = g #else: # p.grad += g batchid += 1 energyloss += criterion.energyloss forceloss += criterion.forceloss #if parallel: # average_gradients(self.model) # dist.all_reduce(loss, dist.ReduceOp.SUM) # dist.all_reduce(energyloss, dist.ReduceOp.SUM) # dist.all_reduce(forceloss, dist.ReduceOp.SUM) #if rank == 0: #logger.info('%s', "{:12d} {:12.8f} {:12.8f} {:12.8f}".format(epoch, loss.item(), energyRMSE, forceRMSE)) if epoch % self.logmodel_interval == 0: self.saveModel() # if parallel: # average_gradients(self.model) # dist.all_reduce(loss, dist.ReduceOp.SUM) # dist.all_reduce(energyloss, dist.ReduceOp.SUM) # dist.all_reduce(forceloss, dist.ReduceOp.SUM) energyRMSE = np.sqrt(energyloss.item()/self.nimages) forceRMSE = np.sqrt(forceloss.item()/self.nimages) if energyRMSE < self.energyRMSEtol and forceRMSE < self.forceRMSEtol: logger.info('Minimization converged') self.saveModel() io.saveFF(self.model, self.preprocessParas, filename="mlff.pyamff") sys.exit() return loss, energyRMSE, forceRMSE optimizer.step(closure) self.saveModel() sys.close() ` I left in some extra code that’s commented out. They’re just different ways i’ve tried to approach the problem. Left it in in case I was on the right path with them. Let me know if I should show any additional parts of the code.
st176916
What happens when we do not give a distributed sampler? Does it essentially iterate over all samples with as many ranks as we have?
st176917
Solved by rvarm1 in post #2 If the data is not sharded across different DDP ranks (i.e. with a distributed sampler or some custom sharding logic that you may have), then yes, DDP will use all samples on all ranks (in your example I guess there’s 2 ranks). This is why in general you want want to partition your data appropriate…
st176918
If the data is not sharded across different DDP ranks (i.e. with a distributed sampler or some custom sharding logic that you may have), then yes, DDP will use all samples on all ranks (in your example I guess there are 2 ranks). This is why in general you want to partition your data appropriately across ranks to ensure different model replicas get different data.
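A minimal sketch of the usual way to get that partitioning with DistributedSampler (illustrative toy data; assumes the process group is already initialized):

import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
sampler = DistributedSampler(dataset)        # splits indices across ranks
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(10):
    sampler.set_epoch(epoch)                 # reshuffle differently each epoch
    for x, y in loader:
        pass                                 # each rank sees a disjoint shard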
st176919
Hi, I am wondering what happens in the case of a broken connection and lost messages, for example on the connection from the master to a worker process: 1. Will torch handle reconnecting? 2. If the connection gets restored somehow, will torch retry sending the lost message and let the job keep running, or does it just rely on TCP reliability and simply fail the job? Thanks!
st176920
Hi! Currently torch doesn’t handle node failures well. We rely on TCP reliability for node connectivity, but we also have some robustness mechanisms like RPC retries. You can also look into TorchElastic — PyTorch/Elastic master documentation — to handle node failures.
st176921
Hello, my forward call returns a dictionary, out_dict, as follows:

out_dict = {'main_predict_op': main_differentiable_op, 'secondary_predict_op': second_differentiable_op}

It seems DDP does not like this and throws the following error:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).

I have set find_unused_parameters=True. From what I understand, the output tensor being encapsulated in a dictionary is not favorable for DDP. How would I go about this?
st176922
Solved by Rakshit_Kothari in post #2 Responding to anyone who faces this. The fact that a tensor is wrapped in a dictionary is not relevant. DDP requires that all output variables be used in the graph. Calling ‘unused_parameters’ option did pretty much nothing.
st176923
Responding to anyone who faces this. The fact that a tensor is wrapped in a dictionary is not relevant. DDP requires that all output variables be used in the graph. Calling ‘unused_parameters’ option did pretty much nothing.
st176924
Hi, the find_unused_parameters option in DDP handles the case where certain params are not used in the forward pass, but all params that are used in the forward pass get gradients in the backward pass. There is another case where all params are used in the forward pass but some don’t get gradients in the backward pass, for example if you have:

a, b = forward()
loss = a.sum()
loss.backward()

It seems like this is what may be happening in your training, but I would need your training loop to verify this.
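One common workaround for that second case — sketched here as a general suggestion, not something prescribed in this thread — is to make every returned tensor participate in the loss with a zero-weighted term, so DDP sees all parameters receiving gradients; the dict keys follow the question above:

import torch

def compute_loss(out_dict, target, criterion):
    # real loss from the output we actually care about
    loss = criterion(out_dict['main_predict_op'], target)
    # zero-weighted term so the other forward output also participates in
    # the graph and DDP's reducer sees all of its parameters being used
    loss = loss + 0.0 * out_dict['secondary_predict_op'].sum()
    return loss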
st176925
Hi everyone, I am in trouble since I don’t know how to manage the evaluation phase with a DistributedDataParallel model. In my evaluation loop I accumulate the correct predictions in order to compute the final accuracy per epoch. These predictions are stored inside a list of dictionaries. My model is wrapped in DistributedDataParallel, and so each process computes predictions on a separate portion of the dataset. Unfortunately, the predictions are not tensors, so I cannot use the utilities provided in torch.distributed. I tried to save all the lists on disk and concatenate the results in the main process (rank == 0), but this method will not work in a distributed scenario where I have multiple nodes. Do you know how to gather the lists from all the different processes in order to compute the final accuracy per epoch?
st176926
Solved by pritamdamania87 in post #2 @Seo You can probably use gather_object to gather objects on a single rank which are not tensors.
st176927
@Seo You can probably use gather_object to gather objects on a single rank which are not tensors.
st176928
Hi @pritamdamania87, thank you for your answer. I didn’t notice gather_object because it is a new feature included in the current 1.8.0 torch nightly. Unfortunately, gather_object doesn’t work with the NCCL backend, so I have used all_gather_object to broadcast the results to all the processes (and then use them only in the main one). I hope gather_object will be available for the NCCL backend in the next release. Thank you!
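A small sketch of that approach (my own illustration of all_gather_object for non-tensor predictions; the prediction-dict keys are hypothetical):

import torch.distributed as dist

def gather_predictions(local_predictions, rank, world_size):
    # local_predictions: e.g. a list of dicts like {"image_id": ..., "correct": ...}
    gathered = [None for _ in range(world_size)]
    dist.all_gather_object(gathered, local_predictions)  # works with NCCL
    if rank == 0:
        # flatten the per-rank lists and compute the epoch accuracy
        all_preds = [p for rank_preds in gathered for p in rank_preds]
        accuracy = sum(p["correct"] for p in all_preds) / len(all_preds)
        return accuracy
    return None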
st176929
Looks like gather is not supported in NCCL as a native API. Although I think we can support gather in PyTorch for the NCCL backend by using ncclSend/ncclRecv under the hood. cc @rvarm1 @Yanli_Zhao
st176930
That would be great, for the moment (1.8.0) gather_object still doesn’t work with NCCL backend.
st176931
Hi, I have trouble using multiple workers with DistributedDataParallel. If I set num_workers=0 + DDP everything works. If I set num_workers > 0 without DDP everything works. If I set num_workers > 0 with DDP I have the following error: Traceback (most recent call last): File "train_new.py", line 170, in <module> trainer.train() File "/home/matte/PhD/LV-LAB/ELVIS/elvis/trainers/distributed.py", line 38, in train mp.spawn(self._distributed_training, File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes while not context.join(): File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join raise ProcessRaisedException(msg, error_index, failed_process.pid) torch.multiprocessing.spawn.ProcessRaisedException: -- Process 0 terminated with the following error: Traceback (most recent call last): File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap fn(i, *args) File "/home/matte/PhD/LV-LAB/ELVIS/elvis/trainers/distributed.py", line 71, in _distributed_training self.train_loop() File "/home/matte/PhD/LV-LAB/ELVIS/elvis/trainers/base.py", line 144, in train_loop for batch in self._trloader: File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 355, in __iter__ return self._get_iterator() File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 301, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 914, in __init__ w.start() File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 240, in reduce_tensor event_sync_required) = storage._share_cuda_() RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending. I tried to debug it without success. The only thing I know is that the error is caused when I do the first iteration on the dataloader. Anyway, the code crashes before entering the mydataset.__getitem(). Does someone of you have any idea how to understand what is going on?
st176932
Hi, could you give an example that reproduces this issue? Seeing "RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending." makes me think that you are mixing multiprocessing and DDP somehow? Maybe also see: https://pytorch.org/docs/stable/notes/cuda.html#cuda-nn-ddp-instead
st176933
After several hours of debugging I have found the likely problem. In my setup I initialized my model and moved it to the GPU inside the master process, and then re-used it in all the processes composing the DDP group. Moving the creation of the model inside each single process (instead of doing it in the master one) solved the problem. I think the major problem was moving it to the GPU before the creation of the multiple processes. My supposition is that PyTorch cannot move the parameters from one process to another if they are CUDA tensors. So for all future readers: better to create your model after the mp.spawn command.
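For future readers, a rough sketch of that fix (illustrative names and port; assumes a standard NCCL setup): build the model inside the spawned worker so no CUDA tensor ever has to cross a process boundary.

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    dist.init_process_group("nccl", init_method="tcp://localhost:29502",
                            rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    # create (and move) the model *inside* the spawned process
    model = torch.nn.Linear(10, 10).to(rank)
    ddp_model = DDP(model, device_ids=[rank])
    dist.destroy_process_group()

if __name__ == "__main__":
    # anti-pattern: building the model and calling .cuda() here, then passing
    # it through mp.spawn, is what triggered the "Attempted to send CUDA
    # tensor received from another process" error above
    mp.spawn(worker, args=(2,), nprocs=2)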
st176934
Hi everyone! I just have a question regarding how can I make sure that the tensors will stay in the same device. A bit of context: I am training a YOLOv3-based detector and the code runs perfectly on one GPU. But I want to change it in order to use 2+ GPUs in the same machine using the nn.Dataparallel module in order to shorten training time. Wrapping the model for parallel GPUs and doing inference on it seems to be working well, but the problem arises when building the targets to calculate the loss. I use the following code to build the targets: def build_targets(pred_boxes, pred_cls, target, anchors, ignore_thres): BoolTensor = torch.cuda.BoolTensor if pred_boxes.is_cuda else torch.BoolTensor FloatTensor = torch.cuda.FloatTensor if pred_boxes.is_cuda else torch.FloatTensor nB = pred_boxes.size(0) nA = pred_boxes.size(1) nC = pred_cls.size(-1) nG = pred_boxes.size(2) # Output tensors obj_mask = BoolTensor(nB, nA, nG, nG).fill_(0) noobj_mask = BoolTensor(nB, nA, nG, nG).fill_(1) class_mask = FloatTensor(nB, nA, nG, nG).fill_(0) iou_scores = FloatTensor(nB, nA, nG, nG).fill_(0) tx = FloatTensor(nB, nA, nG, nG).fill_(0) ty = FloatTensor(nB, nA, nG, nG).fill_(0) tw = FloatTensor(nB, nA, nG, nG).fill_(0) th = FloatTensor(nB, nA, nG, nG).fill_(0) tcls = FloatTensor(nB, nA, nG, nG, nC).fill_(0) # Convert to position relative to box target_boxes = target[:, 2:6] * nG gxy = target_boxes[:, :2] gwh = target_boxes[:, 2:] # Get anchors with best iou ious = torch.stack([bbox_wh_iou(anchor, gwh) for anchor in anchors]) best_ious, best_n = ious.max(0) # Separate target values b, target_labels = target[:, :2].long().t() gx, gy = gxy.t() gw, gh = gwh.t() gi, gj = gxy.long().t() ############################################ # Problems from here in nn.Dataparallel ############################################ # Set masks obj_mask[b, best_n, gj, gi] = 1 noobj_mask[b, best_n, gj, gi] = 0 # Set noobj mask to zero where iou exceeds ignore threshold for i, anchor_ious in enumerate(ious.t()): noobj_mask[b[i], anchor_ious > ignore_thres, gj[i], gi[i]] = 0 # Coordinates tx[b, best_n, gj, gi] = gx - gx.floor() ty[b, best_n, gj, gi] = gy - gy.floor() # Width and height tw[b, best_n, gj, gi] = torch.log(gw / anchors[best_n][:, 0] + 1e-16) th[b, best_n, gj, gi] = torch.log(gh / anchors[best_n][:, 1] + 1e-16) # One-hot encoding of label tcls[b, best_n, gj, gi, target_labels] = 1 # Compute label correctness and iou at best anchor class_mask[b, best_n, gj, gi] = (pred_cls[b, best_n, gj, gi].argmax(-1) == target_labels).float() iou_scores[b, best_n, gj, gi] = bbox_iou(pred_boxes[b, best_n, gj, gi], target_boxes, x1y1x2y2=False) tconf = obj_mask.float() return iou_scores, class_mask, obj_mask, noobj_mask, tx, ty, tw, th, tcls, tconf When trying to run the training on dual GPUs, a device-side assertion is triggered when performing the line: obj_mask[b, best_n, gj, gi] = 1 This error seems to originate here and not before, I tested line by line the whole function, and executing this line always triggers the following device-side assertion error: /opt/conda/conda-bld/pytorch_1603729096996/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [15,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1603729096996/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [16,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. 
(the same index-out-of-bounds assertion is repeated for threads [17,0,0] through [30,0,0]) Doing some debugging I found that in said line tensors with different device IDs are sometimes used, which I think is what triggers the assertion error. My question here is: is there a way of making sure that the tensors will all stay on the same device somehow? Any help is appreciated. Thank you!
st176935
Solved by ptrblck in post #7 You could check the internal implementation and probably use the functional distributed API to send the appropriate splits to the desired device. From a general point of view: nn.DataParallel will split the tensors in dim0, so would it be possible to make sure the data and target tensors are constr…
st176936
The error is not pointing to a device mismatch, but to invalid indices. Make sure that all index tensors used in obj_mask[b, best_n, gj, gi] contain valid values for the shape of obj_mask. You could e.g. print their min. and max. values and compare them to obj_mask.shape.
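A tiny sketch of that debugging check (my own helper, reusing the tensor names from the post above):

def check_indices(obj_mask, b, best_n, gj, gi):
    # each index tensor must stay within the corresponding obj_mask dimension
    for name, idx, dim in [("b", b, 0), ("best_n", best_n, 1), ("gj", gj, 2), ("gi", gi, 3)]:
        print(f"{name}: min={idx.min().item()} max={idx.max().item()} "
              f"allowed=[0, {obj_mask.size(dim) - 1}]")
        assert idx.min() >= 0 and idx.max() < obj_mask.size(dim), f"{name} out of bounds"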
st176937
Hello @ptrblck, thanks for the response! Could you please help me with clues on how to debug this? What I don’t understand is why this error is triggered only when I wrap the model in nn.DataParallel and not when I run inference on a single GPU. If it is of any help, my only call to set the model up for multi-GPU usage is a simple conditional:

if torch.cuda.device_count() > 1:  # Multi-GPU Training
    model = nn.DataParallel(model)

Thank you for any help that can be provided.
st176938
To debug this, I would try to check the obj_mask as well as the indices passed to it as described before. Based on the raised error, one (or multiple) indices seem to contain invalid values. I don’t know how e.g. b is created, but if it’s the “global” batch size, it would create an error, since each model replica in nn.DataParallel will use a split of the original input tensor.
st176939
EDIT: I added a clearer question as a reply to this comment. I decided to leave this to give some context for the previously shared code, and to explain why the question I asked here would not help solve the original problem.

Looking at the code, b refers to the batch index to which each target belongs. I think I found the problem, but I don't have a clear idea of how to solve it. After more debugging, I understand that since the targets are being split, I need to modify them somehow. The targets are built as follows, where image_number refers to the batch index of each box:

[image_number, class_label, x, y, w, h]

Since there can be a different number of objects in each image, the target tensor size varies. In the targets, b is an array whose values can lie in [0, gB-1], where gB is the global batch size, while the first dimension of obj_mask has size nB, which is the batch size on the current GPU. In single-GPU mode gB equals nB and everything works fine. But when using multiple GPUs, the values of b can still lie in [0, gB-1] while the first dimension of obj_mask only has size gB/2. This is where the assertion error is triggered.

As a first solution, I thought to just check on which device the code is running. If it is running on the second GPU, shift the values of b by the batch size (here nB is the split batch size):

if obj_mask.device.index > 0:
    for i in range(len(b)):
        if b[i] > nB - 1:
            b[i] = target[i, 0] - nB

The problem I encountered with this approach is that there is no way of knowing whether one image has many more objects than the others. When target is split, this imbalance may result in b on the second GPU still covering the range [0, gB-1], so I could not simply apply the offset, because then different batch elements could end up with the same index in the targets. Here is a pseudo-code example of one of the cases where this happens:

#These are the global values
gB = 16
target[:,0] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 5, 5, 6, 7, 8, 8, 8, 8, 9, 9, 10, 10, 10, 11, 11, 11, 12, 12, 13, 13, 14, 14, 14, 15]
#len(target[:,0]) = 67

#In GPU 0, after the split
nB = 8
obj_mask = BoolTensor(nB, 3, 22, 22).fill_(0)
b = target[:,0][ : len(target[:,0])//2]
b = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#len(b) = 33
obj_mask[b, 0, 0, 0] = 1

#In GPU 1, after the split
nB = 8
obj_mask = BoolTensor(nB, 3, 22, 22).fill_(0)
b = target[:,0][len(target[:,0])//2 : ]
b = [0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 5, 5, 6, 7, 8, 8, 8, 8, 9, 9, 10, 10, 10, 11, 11, 11, 12, 12, 13, 13, 14, 14, 14, 15]
#len(b) = 34
obj_mask[b, 0, 0, 0] = 1

Here it can be seen that, since image zero has the most boxes, the values of b on GPU 1 after the split will trigger the assertion. And if I apply the solution I came up with, the following happens:

#In GPU 1
b = [0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 5, 5, 6, 7, 8, 8, 8, 8, 9, 9, 10, 10, 10, 11, 11, 11, 12, 12, 13, 13, 14, 14, 14, 15]
if obj_mask.device.index > 0:
    for i in range(len(b)):
        if b[i] > nB - 1:
            b[i] = target[i, 0] - nB
b = [0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 5, 5, 6, 7, 0, 0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 6, 6, 7]

My new question now is: how could I avoid this problem when the targets do not always have a specific shape?
My main idea would be to modify the way the targets are built, creating them with a shape of [gB, max_Boxes, 6], where I would have to check the maximum number of boxes (max_Boxes) in a batch and pad the other images with zeros so that the tensors can be stacked to form the targets. A rough sketch of what I mean is below. I am not a fan of this approach, since it would require me to modify a large amount of code, but I will do it if I have to. So if anyone can come up with a different strategy, it would be most appreciated! Thanks for taking the time to read such a long post.
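Concretely, the padding idea would look roughly like this (a hypothetical helper; the function and variable names are made up, and I assume each image i comes with a [num_boxes_i, 5] tensor of [class_label, x, y, w, h], reusing column 0 as a validity flag since the image index becomes the position along dim 0):

import torch

def build_padded_targets(per_image_boxes, max_boxes):
    # per_image_boxes: list of length gB; entry i is a [num_boxes_i, 5] tensor
    gB = len(per_image_boxes)
    targets = torch.zeros(gB, max_boxes, 6)
    for i, boxes in enumerate(per_image_boxes):
        n = boxes.shape[0]
        targets[i, :n, 0] = 1        # column 0 now just flags real (non-padded) boxes
        targets[i, :n, 1:] = boxes
    return targets

# e.g. max_boxes = max(b.shape[0] for b in per_image_boxes); with a fixed
# [gB, max_boxes, 6] shape, nn.DataParallel could split the targets along
# dim 0 together with the images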
st176940
Actually, after some thinking I realized that the problem is not about the size of the target tensor. What I think would help me solve this problem is to know whether there is any way I could override the split of some tensors done by nn.DataParallel. @ptrblck, is there a way to make sure that the targets corresponding to each image are sent to the specific GPU that is handling that input image?
st176941
You could check the internal implementation and probably use the functional distributed API to send the appropriate splits to the desired device. From a general point of view: nn.DataParallel will split the tensors in dim0, so would it be possible to make sure the data and target tensors are constructed in this way? Also, you might take a look at DistributedDataParallel using one process per GPU, which should be faster, and could even simplify your use case, as each process would load its data.
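A rough sketch of the per-process data loading with DistributedDataParallel (assuming a single node with one process per GPU; dataset, model and batch size are placeholders, and MASTER_ADDR/MASTER_PORT are expected to be set in the environment):

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def setup_ddp(rank, world_size, dataset, model, batch_size=8):
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    # each process loads its own shard, so the targets never have to be re-split
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
    ddp_model = DDP(model.cuda(rank), device_ids=[rank])
    return ddp_model, loader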
st176942
DistributedDataParallel was the solution, thank you so much for all of your help! I found this amazing tutorial that explains step by step how to use DistributedDataParallel: https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html 1 I hope it can help others. @ptrblck Is there any way I could add the tag #distributed to the original post, so it would be easier for people to find?
st176943
Broadcasting one tensor to all processes in PyTorch is straightforward, but I have a tuple/list of tensors. How can I broadcast them?
Broadcasting the tuple/list directly fails.
Looping through the tuple/list and broadcasting one tensor at a time works fine for the first tensor, then it fails.
Any idea how to broadcast a tuple/list of tensors in PyTorch?
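For reference, the loop-per-tensor attempt looks roughly like this (a minimal sketch, assuming the process group is already initialized and every rank holds tensors of matching shapes and dtypes at each list position):

import torch.distributed as dist

def broadcast_tensor_list(tensor_list, src=0):
    # dist.broadcast works in place, one tensor at a time; every rank must
    # already hold a tensor of the right shape and dtype at each position
    for t in tensor_list:
        dist.broadcast(t, src=src)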
st176944
I have been trying to run the MoCo code from Facebook Research on a machine with 4 GPUs but have been consistently receiving SIGKILLs. If I run this command (which uses the nccl backend by default):

python main_moco.py -a resnet50 --lr 0.015 --batch-size 128 --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --mlp --moco-t 0.2 --aug-plus --cos

I get a SIGKILL as follows:

File "main_moco.py", line 406, in <module>
    main()
File "main_moco.py", line 130, in main
    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
File "my_path/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "my_path/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
    while not context.join():
File "my_path/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 107, in join
    (error_index, name)
Exception: process 2 terminated with signal SIGKILL

With some simple print statements, I have pinpointed this issue to the init_process_group call here 3. I also tried to run the code with the gloo backend. Now the init_process_group call succeeds, but my code fails with a SIGKILL just a little while later, in the call to model.cuda() 2. To narrow in on the issue a little more, I tried running the example code in the Setup section of the PyTorch distributed applications tutorial. This code runs perfectly with the gloo backend, but when I replace it with the nccl backend, the code either hangs on the call to init_process_group or crashes with the following stack trace:

File "multi_proc_test.py", line 17, in init_process
    dist.init_process_group(backend, rank=rank, world_size=size)
File "my_path/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group
    barrier()
File "my_path/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier
    work = _default_pg.barrier()
RuntimeError: Broken pipe

If it helps, I have confirmed that torch.distributed.is_nccl_available() returns True. Any ideas why the code is failing in these places when I use the gloo/nccl backends? Thank you in advance for your help!
st176945
The broken pipe error could be raised if one process died unexpectedly. You could rerun the minimal example with NCCL_DEBUG=INFO and check if NCCL detects any errors.
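For example, the flag can be set in the parent process before spawning, or directly from the shell (a minimal sketch; the script name is taken from your traceback):

import os
os.environ['NCCL_DEBUG'] = 'INFO'   # set before the process group / NCCL is initialized
# or from the shell: NCCL_DEBUG=INFO python multi_proc_test.py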
st176946
Hi, I was running the ImageNet example in PyTorch. Do I need to add code or set an option to use tensor cores, or does PyTorch use tensor cores by default? Thanks
st176947
Solved by Dwight_Foster in post #2 No, you need to use the PyTorch automatic mixed precision library here.
st176948
Besides what @Dwight_Foster mentioned, on Ampere GPUs the tensor cores will be used through TF32 11.
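For reference, the automatic mixed precision pattern mentioned above looks roughly like this (a minimal sketch with a placeholder model and dummy data; available since PyTorch 1.6):

import torch

model = torch.nn.Linear(1024, 1024).cuda()                  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(8, 1024), torch.randn(8, 1024)) for _ in range(10)]  # dummy data
scaler = torch.cuda.amp.GradScaler()

for data, target in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                         # matmuls/convs run in FP16 on tensor cores
        output = model(data.cuda())
        loss = torch.nn.functional.mse_loss(output, target.cuda())
    scaler.scale(loss).backward()                           # scale the loss to avoid FP16 underflow
    scaler.step(optimizer)
    scaler.update()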
st176949
Hi all, I have encountered a few strange problems when getting started with DDP in PyTorch.

Experimental setting: I used 2 NVIDIA GPUs on ONE machine as well as 14 threads on 4 CPUs. Each program spawns 2 processes, one per GPU, and each program only allocates a little over 1 GB of GPU memory.

Questions for help:

1. Although I have taken care to use map_location when loading the state_dict and so on, and the GPU memory usage is nicely even across the GPUs, the somewhat strange thing is that there are occasionally large fluctuations in GPU utilization, e.g. one GPU at 90+% while the other is at ~20%, or the reverse. I thought DDP would lead to a more balanced load. Maybe it is because the CPUs are the bottleneck, so the GPUs are far from busy?

2. When trying to make better use of the CPUs, I ran 3 such programs on these 2 GPUs. However, training got stuck partway through. It appeared that all 3 programs lost one of their spawned processes and only kept the other one, on the same GPU. The CPUs were not in use, but the GPU utilization shown by nvidia-smi was 100%, even though the processes were obviously stuck. I am wondering whether this is a deadlock, and whether running only 2 programs would have been the best practice in this case. (I suspect the hang is caused by the following code?)

val_loss, val_acc = self._run_epoch(...)
dist.barrier()
val_acc = torch.tensor(val_acc).unsqueeze(0).cuda()
out_val_acc = [torch.zeros_like(val_acc) for _ in range(dist.get_world_size())]
dist.all_gather(out_val_acc, val_acc)
val_acc = torch.cat(out_val_acc).mean().item()  # based on the exact division

Thanks & Regards
st176950
Regarding the first question (the fluctuating GPU utilization): What are the CPUs used for in your program? It could be possible that if the CPUs are busy doing something and not pushing compute to the GPUs, the GPU utilization might drop.

Regarding the second question (the hang with 3 programs on 2 GPUs): I'm not sure I followed this completely, could you elaborate? What are these 3 programs running? Are you running DDP in 3 processes but across 2 GPUs? In general, if GPUs are stuck at 100% utilization, it probably indicates an issue where a NCCL collective op is stuck.
st176951
To the first question ("What are the CPUs used for in your program?"): I used the taskset command to specify the 14 CPU threads to use.

To the second question ("Are you running DDP in 3 processes but across 2 GPUs?"): every program uses mp.spawn to spawn 2 processes on the 2 GPUs, and I ran 3 such programs in parallel. The commands can be thought of as:

CUDA_VISIBLE_DEVICES=3,5 taskset -c 21-27,35-41 python xxx.py --world_size 2 --port 12355
CUDA_VISIBLE_DEVICES=3,5 taskset -c 21-27,35-41 python xxx.py --world_size 2 --port 12356
CUDA_VISIBLE_DEVICES=3,5 taskset -c 21-27,35-41 python xxx.py --world_size 2 --port 12357

Thanks.
st176952
They collapse with each other. I do not know why one of the 2 spawned processes of each of the 2 running programs (30960, 29415) disappears. It happens every time I run 2 parallel programs, each launched with:

mp.spawn(launch, args=(args.world_size, args), nprocs=args.world_size, join=True)
st176953
For the second problem, maybe it is because of the reason said in Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.7.1 documentation 4. DDP processes can be placed on the same machine or across machines, but GPU devices cannot be shared across processes. and Distributed communication package - torch.distributed — PyTorch 1.7.1 documentation 2, If using multiple processes per machine with nccl backend, each process must have exclusive access to every GPU it uses, as sharing GPUs between processes can result in deadlocks.
st176954
I am just reporting what I figured out, for anyone who encounters a similar problem. In my case, it was an inconsistency caused by the lr_scheduler: one process exits normally (within the logic of the program), leaving the other hanging and waiting, so it just seems to get stuck… However, I would still like to use just one process per GPU with the NCCL backend, as is the suggested practice.
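In case it helps others, here is a minimal sketch of how the exit decision could be synchronized across ranks so that no process leaves the training loop alone (the stop criterion itself is a placeholder):

import torch
import torch.distributed as dist

def should_stop_everywhere(local_stop, device):
    # every rank contributes its local decision; if any rank wants to stop,
    # all ranks stop together, so no process is left waiting at a collective
    flag = torch.tensor([1.0 if local_stop else 0.0], device=device)
    dist.all_reduce(flag, op=dist.ReduceOp.SUM)
    return flag.item() > 0

# e.g. local_stop could be the scheduler-driven early exit on this rank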
st176955
Glory_Chen: For the second problem, maybe it is because of the reason said in Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.7.1 documentation. DDP processes can be placed on the same machine or across machines, but GPU devices cannot be shared across processes. Yes, the deadlock is most likely due to this. You should only use a single process per GPU.
st176956
Thanks bro. And if you have any insights or suggestions on how to stably balance/improve the GPU utilization to ~80-90% on each GPU, I would appreciate it.
st176957
Apart from running DDP, are the GPUs doing any other processing? If not, what sort of processing are the 14 CPU threads doing? Are the CPU threads responsible for loading data onto the GPU? Tools like nvidia's visual profiler might help in getting a better understanding of what exactly is happening on the GPU: Profiler :: CUDA Toolkit Documentation 6.
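If you prefer staying in Python, the built-in autograd profiler can give a similar first look (a minimal sketch; the linear layer stands in for one training iteration of your real model):

import torch
from torch.autograd import profiler

model = torch.nn.Linear(1024, 1024).cuda()     # placeholder for the real model
x = torch.randn(64, 1024, device='cuda')

with profiler.profile(use_cuda=True) as prof:
    y = model(x)                               # placeholder for one training step
    torch.cuda.synchronize()
print(prof.key_averages().table(sort_by='cuda_time_total', row_limit=10))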
st176958
No, they do not. I think the CPUs were mainly doing image I/O and pre-processing (including data augmentation). But I found that even when the CPUs were not fully occupying all of the 14 threads (some were not at 100% according to htop), it often happened that one GPU's utilization was ~90% while the other's was ~20%, and then they reversed within a few seconds. I will take a look at the visual profiler. Thanks for the recommendation.
st176959
Hello, I am experiencing slower distributed training with the new PyTorch 1.7 built with CUDA 11.0 compared to 10.2! Has anyone benchmarked anything yet? I use the same script in two different environments one with CUDA 11.0 and the other with CUDA 10.2. The same script that takes 21 hrs for one epoch on CUDA 10.2 takes 24 hours on CUDA 11.0.
st176960
I guess the potential slowdown is not coming from distributed training (and thus NCCL), nor from CUDA 11, but might be coming from e.g. cudnn (which also depends on the device you are using). Are you only seeing the slowdown using DDP, or also on a single device? The latter case would point towards my assumption. Could you give more information about your setup (GPU, model architecture) and also profile the training on a single device?
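As a quick first check on the cudnn side (a minimal sketch; whether benchmark mode helps depends on the model and input shapes):

import torch

print(torch.version.cuda)                  # CUDA version the installed binaries were built with
print(torch.backends.cudnn.version())      # cudnn version, which differs between the two builds
torch.backends.cudnn.benchmark = True      # let cudnn search for the fastest conv algorithms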
st176961
I'm having a similar issue with PyTorch 1.7 w/ CUDA 11.0 compared to CUDA 10.1. I'm using a 2080Ti as GPU. A simple example that demonstrates this (only conv2d):

import torch
import torch.nn.functional as F

x = torch.randn(10, 64, 128, 128).cuda()
w = torch.randn(64, 64, 5, 5).cuda()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

# warmup
y = []
for _ in range(10):
    y.append(F.conv2d(torch.randn_like(x), w))

# measure
start.record()
y = []
for _ in range(10):
    y.append(F.conv2d(torch.randn_like(x), w))
end.record()
torch.cuda.synchronize()
print('time = %.2f' % (start.elapsed_time(end),))

Results:

# pytorch 1.7 w/ cuda 10.1
# time = 21.05 +/- 0.05
# pytorch 1.7 w/ cuda 11.0
# time = 25.40 +/- 0.05
st176962
Could you update to PyTorch 1.7.1 with CUDA11.0 and cudnn8.0.5 and recheck the performance, please?
st176963
Thanks for the quick response! I used 1.7.1 in my tests. Sorry for not providing patch-version in my previous comment. I’ve uploaded my collect-env logs to a relevant github issue: https://github.com/pytorch/pytorch/issues/47908#issuecomment-745140208 71
st176964
Interesting. Is the problem still present with Cuda 11.2? And have others been able to see the same problem?
st176965
I was using DP for training my models and later switched to DDP; however, I noticed a significant performance drop after switching to DDP. I've double-checked and made sure that the data batches (size, sampling, random seeds, etc.) are consistent in the two scenarios, and have modified the learning rate according to the "proportional to batch size" guideline from the "Train ImageNet in 1 hour" paper. However, I still got the performance drop with DDP. Is this expected? My understanding is that if we make sure the model sees the same data and learning rate (and of course starts from the same initialization), DP and DDP training should produce the same model. Am I missing anything? Are there other factors that will likely lead to differences, say, the loss function or batch norms? Thanks!
st176966
Could you provide a script that can reproduce this issue? Are you sure that you used the same learning rate, loss function, optimizer, number of epochs, etc.? Both DP and DDP should be able to produce a model that can also be produced by PyTorch w/o using DP or DDP, as long as the gradients are synced at every step. What about the performance w/o using DP or DDP?
st176967
What's the loss function? If the loss function is not commutative, then it may result in a difference.
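A small numeric illustration of one way this can happen (an assumed scenario, not necessarily the poster's setup): with a mean-reduced loss and unevenly sized shards, averaging the per-replica means does not equal the global mean.

# hypothetical per-sample losses split unevenly across two replicas
shard_a = [1.0, 1.0, 1.0]          # 3 samples on GPU 0
shard_b = [4.0]                    # 1 sample on GPU 1

global_mean = sum(shard_a + shard_b) / 4                   # 1.75
mean_of_means = (sum(shard_a) / 3 + sum(shard_b) / 1) / 2  # 2.50
print(global_mean, mean_of_means)  # the two reductions disagree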