st176668 | prophet_zhan:
I want to confirm whether there will be a total of 48GB of memory when I use NVLink to connect two 3090s.
No, the devices should not show up as a single GPU with 48GB.
You can connect them via nvlink and use a data or model parallel approach. |
st176669 | Actually, I understand I won't see a single 48GB GPU in my server, but can I get an effect equal to 48GB during training? For example, can I set a higher batch_size without any other change? |
st176670 | prophet_zhan:
For example, can I set a higher batch_size without any other change?
I’m not sure what “without any other change” means, but you would either have to use a data parallel approach (which needs code changes) or model parallel (also known as model sharding), which also needs code changes.
But yes, you can use multiple devices in PyTorch. |
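For reference, a minimal sketch of the data parallel option (not from the thread; the tiny linear model and sizes are placeholders):
import torch
import torch.nn as nn

# Hedged sketch: nn.DataParallel splits each input batch across the visible GPUs,
# so a larger effective batch size fits even though the cards never appear as one 48GB device.
model = nn.DataParallel(nn.Linear(1024, 10).cuda(), device_ids=[0, 1])
inputs = torch.randn(256, 1024).cuda()   # the 256-sample batch is scattered across both GPUs
outputs = model(inputs)                  # per-GPU outputs are gathered back on GPU 0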
st176671 | Hi, sorry for my questions, but I have some doubts about data parallel. Can I use data parallel on two NVIDIA RTX 3060 GPUs, which are not compatible with NVLink? Or, to use data parallel, do I necessarily need GPUs with NVLink or SLI? |
st176672 | You do not need NVLink to use data parallel; NVLink just makes the interconnect bandwidth between the GPUs faster, it is not required. |
st176673 | Yes, you should be able to use nn.DataParallel with any GPUs and with your 3060s also.
As @spacecraft1013 explained you won’t necessarily need nvlink, but it would speed up the p2p communication.
Also note that we generally recommend using DistributedDataParallel with a single process per device for the best performance. |
st176674 | Hi guys,
I wonder whether there is a way to disable multi-GPU Peer2Peer Access in Pytorch.
Thanks! |
st176675 | What is your use case? Are you using UVA with cudaMemcpyDeviceToDevice?
If you don’t want to use p2p access, you could copy the data to the host and from the host to another device. |
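As a minimal sketch of that host-staged copy (assuming two visible GPUs; this is just an illustration, not the only approach):
import torch

# Copy a tensor between GPUs through host memory instead of a direct peer-to-peer copy.
x = torch.randn(4, 4, device="cuda:0")
x_cpu = x.to("cpu")        # device -> host
y = x_cpu.to("cuda:1")     # host -> the other device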
st176676 | I have two GPU cards, both 12 G.
During my training, batchsize is 1, the memory consumption of my model is about 7 G when on a single GPU card.
Now I use DDP to run my model on two GPU cards, I expect the memory consumption on 2 GPUs both are 3.5G, but in the fact, both of them are 7 G.
it’s normal? How should I do to achieve that?
Thanks |
st176677 | If a single sample uses approx. 7GB on one GPU, it would be the min. memory usage you could get using DDP, since the GPUs cannot process “half samples”.
Given that both devices use 7GB, I assume that both are processing a sample in your script. |
st176678 | Hi, All
I want to build a Pytorch operator by using NVSHMEM.
Is there any way that I can do that? When we build a standalone NVSHMEM application written in pure C++ and CUDA C, we need to use nvshmrun -n 2 to run it. How could we achieve the same goal in PyTorch?
Thanks! |
st176679 | Has anyone encountered a similar problem?
The Python script cannot exit by itself without Ctrl+C. I am trying to train a model on 2 GPUs on an Ubuntu server.
Training and evaluation are fine, but after “main()” finishes, the Python process won’t terminate by itself. I checked the GPU (nvidia-smi) and CPU usage (htop); the process still occupied those resources, which means the process group could not be killed by itself. And if I train with only one GPU, it’s fine.
Here are my thoughts:
The process group could not be terminated for some reason? Apart from initialization via torch.distributed.init_process_group(), I didn’t find anything in the torch documentation to terminate the group.
Deleting variables like the model does not help.
Is this a bug in torch or Python?
Did I make a mistake in using multi-GPU? |
st176680 | Solved by Wenhao-Yang in post #4
Thanks for your advice!
I tried destroy_process_group() but it didn’t work.
In the end, I fixed it by deleting the following potential conflict line:
torch.multiprocessing.set_sharing_strategy('file_system')
It seems that this line is compatible with
torch.distributed.init_process_group(backen… |
st176681 | From what I remember you have to destroy the distributed groups, so at the end of your code:
import torch.distributed as dist
dist.destroy_process_group() |
st176682 | Thanks for your advice!
I tried destroy_process_group() but it didn’t work.
In the end, I fixed it by deleting the following potential conflict line:
torch.multiprocessing.set_sharing_strategy('file_system')
It seems that this line is compatible with
torch.distributed.init_process_group(backend="nccl", init_method='file:///home/xxx//sharedfile', rank=0, world_size=1)
instead of
torch.distributed.init_process_group(backend="nccl", init_method='tcp://localhost:32546', rank=0, world_size=1)
and it may cause the problems. |
st176683 | Sorry for the missing code snippets; this is the first time I have created a topic in the community. Here is my code, and I have already found the potential conflict line:
# import ...
os.environ['CUDA_VISIBLE_DEVICES'] = "0,1"

def main():
    train_loader = torch.utils.data.DataLoader(train_dir,
                                               batch_size=args.batch_size,
                                               shuffle=args.shuffle,
                                               **kwargs)
    # --------------------------- conflict line ---------------------------
    torch.distributed.init_process_group(backend="nccl",
                                         init_method='tcp://localhost:32546',
                                         rank=0,
                                         world_size=1)
    # ---------------------- line without the problem ----------------------
    torch.distributed.init_process_group(backend="nccl",
                                         init_method='file:///home/xxx/sharedfile',
                                         rank=0,
                                         world_size=1)
    model = DistributedDataParallel(model.cuda())
    # train()...
    # eval()...

if __name__ == '__main__':
    main()
Thanks, anyway! |
st176684 | Hello,
I want to create a function to overwrite the forward and backward pass of a nn.Module. E.g. I load a ResNet or any other network, and I automatically change the forward and backward pass of all the layers by a custom Autograd function.
The code I have now works on CPU and on one GPU, but not when I extend it to DataParallel.
It distributes the data and the model across different GPUs (exactly like here: Issue for DataParallel · Issue #8637 · pytorch/pytorch · GitHub 1) and (DataParallel on modules with dynamically overwritten forwards 1)
Also I don’t know if that is the best way of overwriting the forward/backward pass.
I define a function to be applied to every model layer:
def override_backward(layer):
    if isinstance(layer, nn.Conv2d) or isinstance(layer, nn.ConvTranspose2d):
        def forward_conv(x):
            if layer.bias is None:
                return Conv2dFA.apply(x,
                                      layer.weight,
                                      layer.weight_fa,
                                      None,
                                      None,
                                      layer.stride,
                                      layer.padding,
                                      layer.dilation,
                                      layer.groups)
            else:
                return Conv2dFA.apply(x,
                                      layer.weight,
                                      layer.weight_fa,
                                      layer.bias,
                                      layer.bias_fa,
                                      layer.stride,
                                      layer.padding,
                                      layer.dilation,
                                      layer.groups)
        layer.forward = forward_conv
The function Conv2dFA is:
class Conv2dFA(autograd.Function):
    @staticmethod
    def forward(context, input, kernels, kernels_fa, bias, bias_fa, stride, padding, dilation, groups):
        context.stride, context.padding, context.dilation, context.groups = stride, padding, dilation, groups
        context.save_for_backward(input, kernels, kernels_fa, bias, bias_fa)
        output = torch.nn.functional.conv2d(input,
                                            kernels,
                                            bias=bias,
                                            stride=stride,
                                            padding=padding,
                                            dilation=dilation,
                                            groups=groups)
        return output

    @staticmethod
    def backward(context, grad_output):
        input, kernels, kernels_fa, bias, bias_fa = context.saved_tensors
        grad_input = grad_kernels = grad_kernels_fa = grad_bias = grad_bias_fa = None
        if context.needs_input_grad[0]:
            grad_input = torch.nn.grad.conv2d_input(input_size=input.shape,
                                                    weight=kernels_fa,
                                                    grad_output=grad_output,
                                                    stride=context.stride,
                                                    padding=context.padding,
                                                    dilation=context.dilation,
                                                    groups=context.groups)
        if context.needs_input_grad[1]:
            grad_kernels = torch.nn.grad.conv2d_weight(input=input,
                                                       weight_size=kernels_fa.shape,
                                                       grad_output=grad_output,
                                                       stride=context.stride,
                                                       padding=context.padding,
                                                       dilation=context.dilation,
                                                       groups=context.groups)
        if bias is not None and context.needs_input_grad[3]:
            grad_bias = grad_output.sum(0).sum(2).sum(1)
        # add the input in the stride gradient which is useless
        # return grad_input, grad_kernels, grad_kernels_fa, grad_bias, grad_bias_fa, stride, padding, dilation, groups
        return grad_input, grad_kernels, grad_kernels_fa, grad_bias, grad_bias_fa, None, None, None, None
And I apply it like this:
model_fa = resnet.resnet18()
model_fa.apply(override_backward)
Is there a way of doing this dynamic forward overwrite that will work with DataParallel? I don’t want to create custom classes, as I want this method to be applicable to every neural net.
Thanks in advance. |
st176685 | this post showed how to overwrite forward in layers How can I replace the forward method of a predefined torchvision model with my customized forward function? - #6 by Philipp_Friebertshau 4
hope it helps |
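One possible workaround, sketched below, is to patch forward at the class level instead of on each instance, so that DataParallel replicas (which do not carry instance-level forward overrides) still reach the custom autograd function. This is only a sketch under the assumption that the Conv2dFA function and the weight_fa/bias_fa attributes from the post exist on every patched layer; ConvTranspose2d would need its own variant, and this is not the linked post's exact method:
import torch.nn as nn

# Keep a handle to the stock forward so unpatched layers keep working.
_original_conv2d_forward = nn.Conv2d.forward

def patched_conv2d_forward(self, x):
    # `self` is whichever replica is running, so its weights live on the right device.
    if not hasattr(self, "weight_fa"):
        return _original_conv2d_forward(self, x)
    return Conv2dFA.apply(x, self.weight, self.weight_fa,
                          self.bias, getattr(self, "bias_fa", None),
                          self.stride, self.padding, self.dilation, self.groups)

nn.Conv2d.forward = patched_conv2d_forward  # affects every nn.Conv2d instance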
st176686 | I am currently studying distributed RPC for hybrid parallelism. From the documentation, I figured out that RPC supports the TensorPipe backend, which is point-to-point communication. But for hybrid parallelism, I need all-to-all collective communication. Is there any way to implement hybrid parallelism with collective communication using distributed RPC?
I kindly request anyone to provide a solution for this issue. |
st176687 | cc Luca @lcw
I think the main use case of TensorPipe is not collectives; you can use other solutions, e.g. NCCL, Gloo, UCC.
Luca, are there plans for tensorpipe to be a backend for such collectives? |
st176688 | Yes, correct, we currently don’t provide a way to do collectives on top of RPC/TensorPipe. The rationale is that the “native” collective libraries (NCCL, Gloo, MPI) are already doing a much better job at this, hence we’re not optimizing TensorPipe and RPC for that use case. However you should be able to combine RPC with the collective libraries very easily. Here is a tutorial showing how to do so with DDP, but if you prefer to use the “lower-level” API that should work too: Combining Distributed DataParallel with Distributed RPC Framework — PyTorch Tutorials 1.8.1+cu102 documentation |
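A minimal sketch of that combination (placeholder addresses and ports; collectives go through a process group, point-to-point calls through RPC):
import os
import torch.distributed as dist
import torch.distributed.rpc as rpc

def init_hybrid(rank: int, world_size: int):
    # RPC (TensorPipe backend) rendezvous via env vars.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    # Collective process group (all_reduce, all_to_all, ...) on a separate port.
    dist.init_process_group("gloo", init_method="tcp://localhost:29501",
                            rank=rank, world_size=world_size)

def shutdown_hybrid():
    dist.destroy_process_group()
    rpc.shutdown()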
st176689 | I am having a problem running training on multiple GPUs on multiple nodes using DistributedDataParallel. I get RuntimeError: connect() timed out on node 2. The code works fine when I am using just one node with multiple GPUs. I am running my code in a docker image. I have pasted my code below, along with the steps I use to run the training. |
batch_loader.py:
from torch.utils import data
import random
import os
import numpy as np
import torch

class TrainFolder(data.Dataset):
    def __init__(self, file):
        super(TrainFolder, self).__init__()
        self.images = []
        fid = file
        for x in fid:
            if x == '':
                continue
            labelfile = x.replace("nonPR", "PR")
            info = (x, labelfile)
            self.images.append(info)
        random.shuffle(self.images)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        image_file, label_file = self.images[index]
        img = np.load(image_file)
        lab = np.load(label_file)
        img = np.rollaxis(img, 2, 0)
        lab = np.rollaxis(lab, 2, 0)
        im = img.copy()
        lb = lab.copy()
        img = torch.from_numpy(im[:, :, :])
        lab = torch.from_numpy(lb[:, :, :])
        return img, lab |
gan_network.py:
import math
import torch
import torch.nn as nn

def gen_initialization(m):
    if type(m) == nn.Conv2d:
        sh = m.weight.shape
        nn.init.normal_(m.weight, std=math.sqrt(2.0 / (sh[0]*sh[2]*sh[3])))
        nn.init.constant_(m.bias, 0)
    elif type(m) == nn.BatchNorm2d:
        nn.init.normal_(m.weight)
        nn.init.normal_(m.bias)

class TripleConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(TripleConv, self).__init__()
        mid_ch = (in_ch + out_ch) // 2
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=3, stride=1, padding=1, bias=True),
            nn.ReLU(),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=1, padding=1, bias=True),
            nn.ReLU(),
            nn.Conv2d(mid_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
            nn.ReLU()
        )
        self.conv.apply(gen_initialization)

    def forward(self, x):
        return self.conv(x)

class Down(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(Down, self).__init__()
        self.triple_conv = TripleConv(in_ch, out_ch)
        self.avg_pool_conv = nn.AvgPool2d(2, 2)
        self.in_ch = in_ch
        self.out_ch = out_ch

    def forward(self, x):
        self.cache = self.triple_conv(x)
        pad = torch.zeros(x.shape[0], self.out_ch - self.in_ch, x.shape[2], x.shape[3], device=x.device)
        x = torch.cat((x, pad), dim=1)
        self.cache += x
        return self.avg_pool_conv(self.cache)

class Center(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(Center, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
            nn.ReLU()
        )
        self.conv.apply(gen_initialization)

    def forward(self, x):
        return self.conv(x)

class Up(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(Up, self).__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear',
                                    align_corners=True)
        self.triple_conv = TripleConv(in_ch, out_ch)

    def forward(self, x, cache):
        x = self.upsample(x)
        x = torch.cat((x, cache), dim=1)
        x = self.triple_conv(x)
        return x

class UNet(nn.Module):
    def __init__(self, in_ch, first_ch=None):
        super(UNet, self).__init__()
        if not first_ch:
            first_ch = 32
        self.down1 = Down(in_ch, first_ch)
        self.down2 = Down(first_ch, first_ch*2)
        self.down3 = Down(first_ch*2, first_ch*4)
        self.down4 = Down(first_ch*4, first_ch*8)
        self.center = Center(first_ch*8, first_ch*8)
        self.up4 = Up(first_ch*8*2, first_ch*4)
        self.up3 = Up(first_ch*4*2, first_ch*2)
        self.up2 = Up(first_ch*2*2, first_ch)
        self.up1 = Up(first_ch*2, first_ch)
        self.output = nn.Conv2d(first_ch, in_ch, kernel_size=3, stride=1,
                                padding=1, bias=True)
        self.output.apply(gen_initialization)

    def forward(self, x):
        x = self.down1(x)
        x = self.down2(x)
        x = self.down3(x)
        x = self.down4(x)
        x = self.center(x)
        x = self.up4(x, self.down4.cache)
        x = self.up3(x, self.down3.cache)
        x = self.up2(x, self.down2.cache)
        x = self.up1(x, self.down1.cache)
        x = self.output(x)
        return x |
pr_train_mp.py:
from configobj import ConfigObj
from tqdm import tqdm
import os
import gan_network
import glob
import random
import torch
from torch.utils.data import DataLoader
from torch.utils.data.dataset import random_split
from tensorboardX import SummaryWriter
from batch_loader import TrainFolder
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
import argparse
import torch.distributed as dist

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

def init_parameters():
    tc, vc = ConfigObj(), ConfigObj()
    tc.batch_size, vc.batch_size = 1, 1
    tc.n_channels, vc.n_channels = 6, 6
    tc.image_size, vc.image_size = 1024, 1024
    return tc, vc

def train(gpu, args):
    ############################################################
    rank = args.nr * args.gpus + gpu
    dist.init_process_group(
        backend='nccl',
        init_method='env://',
        world_size=args.world_size,
        rank=rank
    )
    ############################################################
    torch.manual_seed(47)
    torch.backends.cudnn.benchmark = True
    netG = gan_network.UNet(6, first_ch=32)
    torch.cuda.set_device(gpu)
    netG.cuda(gpu)
    optimizerG = optim.Adam(netG.parameters(), lr=1e-4, betas=(0.9, 0.999))
    # Initialize BCELoss function
    criterion = nn.MSELoss().cuda(gpu)
    ###############################################################
    # Wrap the model
    netG = nn.parallel.DistributedDataParallel(netG, device_ids=[gpu])
    ###############################################################
    # Data loading code
    train_samples = glob.glob('/home/data/nas/Processed_Data/training_data/phase_recovery/110920/npyfiles/size_1024/train/*_nonPR.npy')
    valid_samples = glob.glob('/home/data/nas/Processed_Data/training_data/phase_recovery/110920/npyfiles/size_1024/valid/*_nonPR.npy')
    random.shuffle(train_samples)
    trainData = TrainFolder(train_samples)
    validData = TrainFolder(valid_samples)
    ################################################################
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        trainData, num_replicas=args.world_size, rank=rank)
    valid_sampler = torch.utils.data.distributed.DistributedSampler(
        validData, num_replicas=args.world_size, rank=rank)
    ################################################################
    train_config, valid_config = init_parameters()
    train_data_loader = DataLoader(dataset=trainData, num_workers=0, batch_size=train_config.batch_size,
                                   drop_last=False, pin_memory=True, sampler=train_sampler)
    valid_data_loader = DataLoader(dataset=validData, num_workers=0, batch_size=valid_config.batch_size,
                                   drop_last=False, pin_memory=True, sampler=valid_sampler)
    niter = 10000
    for epoch in range(niter):
        train_sampler.set_epoch(epoch)
        valid_sampler.set_epoch(epoch)
        netG.train()
        for i, (images, labels) in enumerate(tqdm(train_data_loader)):
            images = images.cuda(non_blocking=True)
            labels = labels.cuda(non_blocking=True)
            images = images.float()
            labels = labels.float()
            netG.zero_grad()
            optimizerG.zero_grad()
            output = netG(images)
            errG_mse = criterion(output, labels)
            errG_mse.backward()
            optimizerG.step()
        netG.eval()
        with torch.no_grad():
            for i, (images, labels) in enumerate(tqdm(valid_data_loader)):
                images = images.cuda(non_blocking=True)
                labels = labels.cuda(non_blocking=True)
                images = images.float()
                labels = labels.float()
                G_output = netG(images)
                valid_errG_mse = criterion(G_output, labels)
        if epoch % 3 == 0 and gpu == 0:
            torch.save(netG.state_dict(), f'model/network_epoch{epoch}.pth')

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N')
    parser.add_argument('-g', '--gpus', default=1, type=int,
                        help='number of gpus per node')
    parser.add_argument('-nr', '--nr', default=0, type=int,
                        help='ranking within the nodes')
    args = parser.parse_args()
    #########################################################
    args.world_size = args.gpus * args.nodes                #
    os.environ['MASTER_ADDR'] = 'a.b.c.d'                   #
    os.environ['MASTER_PORT'] = '110'                       #
    mp.spawn(train, nprocs=args.gpus, args=(args,))         #
    #########################################################

if __name__ == '__main__':
    main()
Steps followed to train:
Launch my docker image on Node 1 and Node 2 using this command: (docker image name is aqusens-train
sudo docker run --network=host -it --shm-size=32G --rm --runtime=nvidia -v /home:/home aqusens-train
On node 1 run the following command:
python3 pr_train_mp.py -n 2 -g 1 -nr 0
On node 2 run the following command:
python3 pr_train_mp.py -n 2 -g 1 -nr 1
Node 1 is the root machine. I can run the code individually on each node.
Input Image SIze: (NCHW) = (1,6,1024,1024)
Node 1 Environment::
Pytorch: 1.7.1 Ubuntu: ‘18.04’ cudnn: 7605 cuda: 10.2
Node 2 Environment::
Pytorch: 1.7.0 Ubuntu: ‘18.04’ cudnn: 7605 cuda: 10.2 |
st176690 | Solved by dnaik in post #7
This turned out to be a firewall issue. I had to allow to and from rules of the node 2 in the firewall settings of the node 1 (master node).
sudo ufw allow from ..*.0/24
sudo ufw allow to ..*.0/24 |
st176691 | Just to make sure how you initialize the process group: do you set
MASTER_ADDR to be the address of your node 1 on both nodes?
(so node 2 needs to know address of node 1 before you launch it)
https://pytorch.org/docs/stable/distributed.html#environment-variable-initialization 2 |
st176692 | I have copied pr_train_mp.py file on both the nodes. MASTER_ADDR is same ip address on both nodes. Only -nr argument is different on both nodes. |
st176693 | OK, your MASTER_ADDR points to node 1.
Could you check whether node 2 can talk to node 1 and can reach your specified port 110 (e.g. check for firewalls)? |
st176694 | I tried this method (running the commands in a terminal window on node 2):
serverfault.com: "How to check a route exist between two hosts for a particular port?" (networking, port, route — asked by ohho on 07 Aug 13 UTC)
$ ip route get 192.168.1.183 gave output
192.168.1.183 dev enp6s0 src 192.168.1.237 uid 1004
cache
$ telnet 192.168.1.183 110
Trying 192.168.1.183…
telnet: Unable to connect to remote host: Connection timed out
This means node 2 cannot connect to node 1. Is there a way to fix this? |
st176695 | I think this should be configured in your cluster settings (e.g. AWS) and also docker containers should be given access to proper NIC interfaces.
I am not a docker expert, maybe check something like this
Docker Documentation – 12 Apr 21: Container networking 1 — How networking works from the container's point of view |
st176696 | This turned out to be a firewall issue. I had to allow to and from rules of the node 2 in the firewall settings of the node 1 (master node).
sudo ufw allow from ..*.0/24
sudo ufw allow to ..*.0/24 |
st176697 | On LambdaLabs, I spin up a two-GPU machine. I run the simple example code from the pytorch docs 1
However, I can’t even get the simple example to run:
---------------------------------------------------------------------------
ProcessExitedException Traceback (most recent call last)
<ipython-input-1-e1523c2c83af> in <module>
34
35 if __name__=="__main__":
---> 36 main()
<ipython-input-1-e1523c2c83af> in main()
28 def main():
29 world_size = 2
---> 30 mp.spawn(example,
31 args=(world_size,),
32 nprocs=world_size,
~/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py in spawn(fn, args, nprocs, join, daemon, start_method)
228 ' torch.multiprocessing.start_process(...)' % start_method)
229 warnings.warn(msg)
--> 230 return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
~/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
186
187 # Loop on join until it returns True or raises an exception.
--> 188 while not context.join():
189 pass
190
~/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
137 )
138 else:
--> 139 raise ProcessExitedException(
140 "process %d terminated with exit code %d" %
141 (error_index, exitcode),
ProcessExitedException: process 0 terminated with exit code 1
How can I get a simple DDP example to run? |
st176698 | This is the sample code:
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP
def example(rank, world_size):
    # create default process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    # create local model
    model = nn.Linear(10, 10).to(rank)
    # construct DDP model
    ddp_model = DDP(model, device_ids=[rank])
    # define loss function and optimizer
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
    # forward pass
    outputs = ddp_model(torch.randn(20, 10).to(rank))
    labels = torch.randn(20, 10).to(rank)
    # backward pass
    loss_fn(outputs, labels).backward()
    # update parameters
    optimizer.step()

def main():
    world_size = 2
    mp.spawn(example,
             args=(world_size,),
             nprocs=world_size,
             join=True)

if __name__=="__main__":
    main()
I will note that it works if join=False. But why does this simple pytorch doc example not work as written? |
st176699 | turian:
mp.spawn(example,
args=(world_size,),
nprocs=world_size,
join=True)
Hi,
This works ok for me with join=True. Seems like your process 0 is dying for some reason; can you add logging to the example function and see where the problem is?
(also seems like you aren’t using GPUs here) |
st176700 | @agolynski could you suggest how/where I should add logging?
Yes, I’m on a 2 GPU machine from LambdaLabs 1 if you want to try and replicate. (I upgrade torch to the latest release when I create the instance.)
Do I need to add anything extra to use the GPUs? |
st176701 | The code you have doesn’t use GPUs, it’s CPU only tensors.
I suggest add some print statements in your model before and after critical sections of the code, i.e.
dist.init_process_group(“gloo”, rank=rank, world_size=world_size)
forward pass
backward pass
optimizer.step()
and see which line causes your error. |
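For instance, a lightly instrumented variant of the sample code above (reusing its imports and moving tensors to each rank's GPU; the exact print placement is only a suggestion):
def example(rank, world_size):
    print(f"[rank {rank}] init_process_group ...", flush=True)
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = nn.Linear(10, 10).to(rank)                 # put the model on this rank's GPU
    ddp_model = DDP(model, device_ids=[rank])
    print(f"[rank {rank}] model wrapped", flush=True)
    outputs = ddp_model(torch.randn(20, 10).to(rank))  # forward pass
    print(f"[rank {rank}] forward done", flush=True)
    loss_fn = nn.MSELoss()
    loss_fn(outputs, torch.randn(20, 10).to(rank)).backward()
    print(f"[rank {rank}] backward done", flush=True)
    optim.SGD(ddp_model.parameters(), lr=0.001).step()
    print(f"[rank {rank}] step done", flush=True)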
st176702 | I have about 10 GPU host indexes to be run in distributed mode. I need to use all the GPU machines available, but the problem is that torch.cuda.device_count() returns 1. I verified that the environment variable does have the proper values (1,2,3,4,5,6,7,8,9,10 → indicating all 10 device indexes).
Can anyone tell me what's going wrong here? Really appreciate your time. |
st176703 | Hi,
Just to clarify: you have 10 nodes and each node has a few GPUs (how many)? (torch.cuda.device_count returns number of GPU devices on a given machine) |
st176704 | There are 4 nodes and 10 GPU indices in total.
Two nodes have 3 GPU indices each.
Two nodes have 2 GPU indices each. |
st176705 | if everything is setup properly torch.cuda.device_count() should return 2 or 3 respectively, not 1 or 10.
What environment variables do you mean here? |
st176706 | I’m referring to environment variable: CUDA_VISIBLE_DEVICES
This is set to 1,2,3,4,5,6,7,8,9,10 and verified the same while debugging.
So, I’m not sure what is going wrong here. Device count is proper if I print out from console, but getting 1 on code execution. There’s nothing on the code that could mess up the device count.
Is there any way by which the device count gets modified? (For example, with the use of CUDA_VISIBLE_DEVICES)
if everything is setup properly torch.cuda.device_count() should return 2 or 3 respectively
Can you elaborate more on the setup? I’m trying to figure out what is causing the issue. |
st176707 | A couple of things: I think CUDA_VISIBLE_DEVICES is 0-based, so it should be set to something like “0, 1, …”
You have 4 machines with 3, 3, 2, 2 GPUs respectively, so CUDA_VISIBLE_DEVICES should be set on each machine independently, or you may just omit setting CUDA_VISIBLE_DEVICES and it should work as well (you'll get all available devices by default). |
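A quick per-node sanity check (the device lists below are examples; check_gpus.py is a hypothetical file name):
# CUDA_VISIBLE_DEVICES=0,1,2 python check_gpus.py   # on a 3-GPU node
# CUDA_VISIBLE_DEVICES=0,1   python check_gpus.py   # on a 2-GPU node
import torch

print(torch.cuda.device_count())            # expect 3 (or 2), not 1 or 10
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))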
st176708 | Hi everyone,
Is it possible to use the NCCL backend for training a DDP model and a new group with the Gloo backend to do a gather operation on CPU tensors?
I’ll try to illustrate my use case since there might be a cleaner/easier solution for it:
I have a DDP model, training it on N GPUs with nccl backend.
I have attached some gradient hooks on the weight param of some layers, and I am storing these gradients in all processes.
After some time, I would like to gather stored gradients from all processes in the main process to do some computation with it.
Since gather is not supported in nccl backend, I’ve tried to create a new group with gloo backend but for some reason the process hangs when it arrives at the: torch.distributed.gather(..., group=my_gloo_group).
Note: using all_gather in nccl is not an option because gradients are stored as cpu tensors. Using all_gather with gloo is not an option since storing world_size times this large “storage of gradients” is impossible. |
st176709 | Hi @agolynski ,
I think I’ve managed to solve the issue. I didn’t know that I have to call torch.distributed.gather(...) in non-master processes. So the fix for the issue was basically changing this code snippet:
if torch.distributed.get_rank() == 0:
    gathered_data = [...]
    torch.distributed.gather(tensor=my_tensor, gather_list=gathered_data, group=gloo_group_handle)
to:
if torch.distributed.get_rank() == 0:
    gathered_data = [...]
    torch.distributed.gather(tensor=my_tensor, gather_list=gathered_data, group=gloo_group_handle)
else:
    torch.distributed.gather(tensor=my_tensor, group=gloo_group_handle)
I thought that non-master processes will somehow magically be pinged by the master process to send their tensors. |
st176710 | Thanks for the update!
The non-master processes need to know which tensors to send to the master, hence you need to call gather there too. |
st176711 | Using PyTorch, what is the difference between the following two methods in sending a tensor to GPU:
Method 1:
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()
Method 2:
X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X)
device = torch.device("cuda:0")
X = X.to(device)
Similarly, is there any difference in the same two methods above when applied to sending a model to GPU:
Method A:
gpumodel = model.cuda()
Method B:
device = torch.device("cuda:0")
gpumodel = model.to(device)
Many thanks in advance! |
st176712 | Solved by iacob in post #7
Their syntax varies slightly, but they are equivalent:
              .to(name)      .to(device)                  .cuda()
CPU           to('cpu')      to(torch.device('cpu'))      cpu()
Current GPU   to('cuda')     to(torch.device('cuda'))     cuda()
Specific GPU  to('cuda:1')   to(torch.device('cuda:1'))   cuda(device=1)
Note: the current cuda device … |
st176713 | There might be a difference, if you were resetting the default CUDA device via torch.cuda.set_device() as seen in this code snippet:
torch.cuda.set_device('cuda:1')
x = torch.randn(1).cuda()
print(x)
> tensor([0.9038], device='cuda:1') # uses the default device now
y = torch.randn(1).to('cuda:0')
print(y)
> tensor([-0.7296], device='cuda:0') # explicitly specify cuda:0 |
st176714 | Ok many thanks @ptrblck for the more detailed answer where the 2nd method is specifying which GPU device to use and the 1st method is just using the default GPU device. |
st176715 | Their syntax varies slightly, but they are equivalent 8:
⠀
.to(name)
.to(device)
.cuda()
CPU
to('cpu')
to(torch.device('cpu'))
cpu()
Current GPU
to('cuda')
to(torch.device('cuda'))
cuda()
Specific GPU
to('cuda:1')
to(torch.device('cuda:1'))
cuda(device=1)
Note: the current cuda device is 0 by default, but this can be set with torch.cuda.set_device(). |
st176716 | Hi Everyone.
I have an autoencoder which I am training using DDP. I wanted to try to improve performance by using MKL-DNN. I tried to convert the model to MKL-DNN using the following lines, but at runtime I get an assertion error. Is MKL-DNN not supported for DDP, or am I doing something wrong? Any help would be highly appreciated.
autoencoder = AutoEncoder(layers=layers)
autoencoderMKL = mkldnn_utils.to_mkldnn(autoencoder)
ddp_model = DDP(autoencoderMKL)
Error ###
  File "/N/u2/p/pulasthiiu/git/deepLearning_MDS/nnprojects/Mnist/AutoEncodertDDPDataGenMKL.py", line 156, in <module>
    main()
  File "/N/u2/p/pulasthiiu/git/deepLearning_MDS/nnprojects/Mnist/AutoEncodertDDPDataGenMKL.py", line 105, in main
    ddp_model = DDP(autoencoderMKL)
  File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 344, in __init__
    assert any((p.requires_grad for p in module.parameters())), (
AssertionError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.
Traceback (most recent call last):
  File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/site-packages/torch/distributed/launch.py", line 260, in <module>
    main()
  File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/site-packages/torch/distributed/launch.py", line 255, in main
    raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/N/u2/p/pulasthiiu/python3.8/bin/python3', '-u', '/N/u2/p/pulasthiiu/git/deepLearning_MDS/nnprojects/Mnist/AutoEncodertDDPDataGenMKL.py', '-w', '40', '-ep', '10', '-bs', '8000', '-rc', '1024', '-ds', '640000', '-l', '768x576x432x324']' returned non-zero exit status 1.
Best Regards,
Pulasthi |
st176717 | I don’t believe that to_mkldnn() modifies the underlying model, just the memory format of the tensors, please let me know if I am wrong. We need more info on the AutoEncoder model and what that looks like. Could you also include the code to the model? Does that have any parameters?
As a reference here is the line that is erroring out: pytorch/distributed.py at master · pytorch/pytorch · GitHub |
st176718 | Hi Huang,
Sorry about the late reply. It is a simple autoencoder; it just has some logic to add layers when I specify the number of layers in the autoencoder (the code is below). Am I using the to_mkldnn function incorrectly?
Link to complete code:
Without MKL https://github.com/pulasthi/deepLearning_MDS/blob/master/nnprojects/Mnist/AutoEncodertDDPDataGen.py
With MKL: https://github.com/pulasthi/deepLearning_MDS/blob/master/nnprojects/Mnist/AutoEncodertDDPDataGenMKL.py 1
class AutoEncoder(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        inner_layers = kwargs["layers"]
        encoder_layers = []
        decoder_layers = []
        num_layers = len(inner_layers) - 1
        print(f"numlayers {num_layers}")
        for x in range(num_layers):
            encoder_layers.append(nn.Linear(in_features=inner_layers[x], out_features=inner_layers[x + 1]))
            decoder_layers.append(
                nn.Linear(in_features=inner_layers[num_layers - x], out_features=inner_layers[num_layers - x - 1]))
            decoder_layers.append(nn.ReLU(True))
            if (x == num_layers - 1):
                encoder_layers.append(nn.ReLU(True))
            else:
                encoder_layers.append(nn.ReLU(True))
        self.encoder = nn.Sequential(*encoder_layers)
        self.decoder = nn.Sequential(*decoder_layers)

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x |
st176719 | Thanks for the model. I just verified that it is failing and mkldnn does change the model layers. I don’t have a lot of context on mkl dnn, but I created an issue on github to track this and loop in the right people, Support for mkldnn + ddp · Issue #56024 · pytorch/pytorch · GitHub. |
st176720 | @Pulasthi does it work if you try to convert model to mkl_dnn model and run local training without DDP? |
st176721 | I’m trying to parallelize a projected gradient descent attack (on a single node).
The model's parameters and buffers do not change, but the input images do.
So there is no overhead of parameter/gradient synchronization between instances of the model running on different GPUs.
This should provide a good opportunity for a great deal of speed up, since there is no need to move around parameters/gradients.
What is the best way to implement this, so I can take full advantage of the opportunity (provided by not needing to sync model instances)?
If I were to use DataParallel, is there a way to tell it not to sync the model after a forward/backward iteration? Or is it smart enough not to sync if nothing has changed?
Thanks in advance |
st176722 | if you do not want to sync gradients for the models, and you are using DistributedDataParallel(), you can try this:
ddp_model = DistributedDataParallel(model)
with ddp_model.no_sync():
training loop |
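A slightly fuller sketch of that idea for the PGD use case (the loop body, step count and step size are only illustrative; the model, images and targets come from the caller):
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def pgd_steps(ddp_model: DDP, images: torch.Tensor, targets: torch.Tensor,
              steps: int = 10, step_size: float = 0.01) -> torch.Tensor:
    loss_fn = nn.MSELoss()
    # Inside no_sync() the backward pass skips the gradient all-reduce, which fits
    # the case where only the inputs are updated and the parameters never change.
    with ddp_model.no_sync():
        for _ in range(steps):
            images = images.detach().requires_grad_(True)
            loss = loss_fn(ddp_model(images), targets)
            loss.backward()                      # no cross-GPU gradient sync here
            with torch.no_grad():
                images = images + step_size * images.grad.sign()
            ddp_model.zero_grad()                # discard the unused parameter grads
    return images.detach()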
st176723 | When running the basic DDP (distributed data parallel) example from the tutorial here 2, GPU 0 gets an extra 10 GB of memory on this line:
ddp_model = DDP(model, device_ids=[rank])
What I’ve tried:
Setting the ‘CUDA_VISIBLE_DEVICES’ environment variable so that each subprocess can only see the GPU of its rank. Then I set rank = 0.
os.environ['CUDA_VISIBLE_DEVICES'] = str(rank)
rank = 0
setup(rank, world_size)
# etc.
This results in the error
File "/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/code/test.py", line 49, in demo_basic
setup(rank, world_size)
File "/code/test.py", line 29, in setup
dist.init_process_group("gloo", rank=rank, world_size=world_size)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 500, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/rendezvous.py", line 190, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: Address already in use
I also tried rewriting this to use the scripting version of DDP, and saw the same problem.
There is also nothing here involving a profiler, as described here, loading a model from disk, as described here, or calls to torch.cuda.empty_cache, as described here. My machine is a DGX-1 with 8 Tesla V100 GPUs. I tried this on another DGX-1 with the same result.
Any ideas what is consuming so much memory on GPU 0, and how to resolve it?
Screenshot of the problem:
(screenshot: nvidia-smi output showing the problem)
Below is the full MWE, copied from the tutorial:
import os
import sys
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp

from torch.nn.parallel import DistributedDataParallel as DDP

def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    # initialize the process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))

def demo_basic(rank, world_size):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)
    # create model and move it to GPU with id rank
    model = ToyModel().to(rank)
    ddp_model = DDP(model, device_ids=[rank])
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10))
    labels = torch.randn(20, 5).to(rank)
    loss_fn(outputs, labels).backward()
    optimizer.step()
    cleanup()

def run_demo(demo_fn, world_size):
    mp.spawn(demo_fn,
             args=(world_size,),
             nprocs=world_size,
             join=True)

if __name__ == "__main__":
    run_demo(demo_basic, torch.cuda.device_count()) |
st176724 | Adding these two lines above the model initialization line solved the problem for me:
torch.cuda.set_device(rank)
torch.cuda.empty_cache()
(screenshot: nvidia-smi output after the fix)
st176725 | When hard example mining, it is important to keep track of the data indices to be able to set the proper weights in either the loss function or the sampler. For this purpose, my Dataset outputs a dictionary including the index of the sample, e.g., {'image_index': idx, 'image': image, 'target': target}.
The collate function then merges the indexes into a Double tensor, so far so good. When I am now evaluating the training set in a multi-GPU setting, I store the loss and the indices in two dictionaries (as the data types are different) and attempt to merge these dictionaries across the GPUs onto GPU 0, which I can then use to compute proper weights for the next epoch.
However, NCCL does not seem to support gather; I get RuntimeError: ProcessGroupNCCL does not support gather. I could copy the data to the CPU before gathering and use a different process group with gloo, but preferably I would want to keep these tensors on the GPU and only copy to the CPU when the complete evaluation is done. Is there a way around this so I can gather anyway (or another approach)? |
st176726 | Could you accomplish that with all_gather? All GPUs will receive it, but you can process it only on rank-0 GPU. |
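A minimal sketch of that (NCCL all_gather on CUDA tensors of equal size on every rank; only rank 0 consumes the result):
import torch
import torch.distributed as dist

def gather_losses(local_losses: torch.Tensor):
    world_size = dist.get_world_size()
    gathered = [torch.zeros_like(local_losses) for _ in range(world_size)]
    dist.all_gather(gathered, local_losses)      # every rank receives all tensors
    if dist.get_rank() == 0:
        return torch.cat(gathered)               # e.g. turn this into sampler weights
    return None
Note that all_gather expects the per-rank tensors to have the same shape; padding or a fixed-size buffer may be needed if they differ.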
st176727 | Hi everyone,
I’d appreciate some help/suggestions on this issue:
For simplicity, let's say I have a model with only a Conv2d layer in it. I would like to store the gradient of the weight param of the Conv2d layer over time, so I've registered a hook on it. When one batch of data is processed and the loss computed, the backward pass will start gradient computation and my hook will be triggered. In the hook function, I can store the gradient (for example in a list).
Now my problem is to do the same thing but with a DataParallel model. If my hook looks like this:
def my_gradient_hook(grad):
    print(grad.device)
the only print that I see is: cuda:0. What happened to prints/hooks on other GPUs, because each GPU should run a backward pass in parallel and compute gradients?
If I try the same thing with DDP, I get the desired behavior. Prints that I see are: cuda:0, cuda:1, cuda:2…
I guess it has something to do with multiprocessing, since DDP spawns multiple processes and DataParallel only one process with multiple threads. I guess I should somehow gather gradients from different threads in the hook function? |
st176728 | Hi,
Seems like this question had come up before. Seems like this limitation is intrinsic to DataParallel, e.g. see Yanli_Zhao answer here:
When I run model on multiple GPUs,register_hook is invalid distributed
I want to save gradients of internal variables through register_hook() or retain_grad().
When I run model on single GPU, it works.
But when I run model on multiple GPUs through wrapping model into nn.DataParallel, I find that it doesn’t work.
Can anyone help me?
Also see warnings here
github.com
pytorch/pytorch/blob/master/torch/nn/parallel/data_parallel.py#L67
Arbitrary positional and keyword inputs are allowed to be passed into
DataParallel but some types are specially handled. tensors will be
**scattered** on dim specified (default 0). tuple, list and dict types will
be shallow copied. The other types will be shared among different threads
and can be corrupted if written to in the model's forward pass.
The parallelized :attr:`module` must have its parameters and buffers on
``device_ids[0]`` before running this :class:`~torch.nn.DataParallel`
module.
.. warning::
In each forward, :attr:`module` is **replicated** on each device, so any
updates to the running module in ``forward`` will be lost. For example,
if :attr:`module` has a counter attribute that is incremented in each
``forward``, it will always stay at the initial value because the update
is done on the replicas which are destroyed after ``forward``. However,
:class:`~torch.nn.DataParallel` guarantees that the replica on
``device[0]`` will have its parameters and buffers sharing storage with
the base parallelized :attr:`module`. So **in-place** updates to the
parameters or buffers on ``device[0]`` will be recorded. E.g.,
:class:`~torch.nn.BatchNorm2d` and :func:`~torch.nn.utils.spectral_norm` |
st176729 | Hi, thanks a lot for the link. I have already seen this warning and discussion, and (if I understood correctly) they are focused on updating parameters/buffers on non-master GPU. I understand that all modifications will be discarded by destroying these replicas and I’m fine with that. My goal is not to update any states, it’s just to print gradients at each device. |
st176730 | I am implementing a model and my model + data does not fit on a single GPU. I am using DistributedDataParallel because the documentation recommends it over DataParallel.
My model appears to work now (it can overfit), but I am unsure about how I should use ReduceLROnPlateau.
Is it safe to simply call scheduler.step(validation_accuracy) regardless of rank? Or should I only call it on rank 0 and broadcast the resulting learning rate to the other processes (and how)? |
st176731 | I’ve currently implemented something like this, does this look correct?
dist.barrier() # Synchronize, making sure all processes have reached the end of this epoch.
acc_tensor = torch.tensor(val_acc)
dist.all_reduce(acc_tensor, op=ReduceOp.SUM)
scheduler.step(acc_tensor.item() / world_size)
In my case world size is the number of processes and the number of GPUs, so this averages the accuracy along all processes and uses it to update the scheduler.
I think this is correct, I am unsure if the barrier is needed. I assume so because otherwise some processes may be lagging behind and not have updated val_acc yet. |
st176732 | Hello
You’d need to initialize process group for DDP and it will synchronize workers for the first iteration.
Not sure why you are trying to call dist.all_reduce() here, normally DDP will handle model parameters for you.
Your scheduler is similar to optimizer in this context?
I think this is correct, I am unsure if the barrier is needed. I assume so because otherwise some processes may be lagging behind and not have updated val_acc yet.
You generally don’t need to synchronize your workers, e.g.
https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
(barriers are only used to synchronize while checkpointing) |
st176733 | The scheduler is ReduceLROnPlateau 2, it is used to update the learning rate based on a metric (in my case validation accuracy).
Because val_acc is not a model parameter, I would assume it to be different on every process (because every process has its own mini-batch). Therefore, I need to synchronise it so every process changes the learning rate at the same time.
When I find some time, I will try and work out a minimum example. |
st176734 | Sorry, I think I misunderstood your question initially.
(thanks @rvarm1 for suggestion!)
Do you do validation on rank 0? If so, you can compute val_acc on rank 0 and broadcast it to all ranks, at which point you can run scheduler.step independently, so all ranks are consistent. Note that the broadcast operation will synchronize the ranks for you.
(by @wayi)
maybe you can also consider using something like https://arxiv.org/pdf/2007.05105.pdf 4
and
GitHub - facebookresearch/fairscale: PyTorch extensions for high performance and large scale training. 4 |
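A minimal sketch of that broadcast (assuming validation ran on rank 0 and a CUDA tensor for the NCCL backend):
import torch
import torch.distributed as dist

def broadcast_val_acc(val_acc: float, device) -> float:
    # Rank 0 fills in its validation accuracy, every other rank receives it,
    # so all ranks can call scheduler.step() with an identical value.
    acc = torch.tensor([val_acc if dist.get_rank() == 0 else 0.0], device=device)
    dist.broadcast(acc, src=0)
    return acc.item()

# usage on every rank: scheduler.step(broadcast_val_acc(val_acc, device))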
st176735 | RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1614378083779/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
/opt/conda/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:52: UserWarning: MASTER_ADDR environment variable is not defined. Set as localhost
warnings.warn(*args, **kwargs)
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/4
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/4
initializing ddp: GLOBAL_RANK: 2, MEMBER: 3/4
initializing ddp: GLOBAL_RANK: 3, MEMBER: 4/4
Traceback (most recent call last):
File "/ghome/luoxin/projects/liif-lightning-hydra/run.py", line 34, in main
return train(config)
File "/ghome/luoxin/projects/liif-lightning-hydra/src/train.py", line 78, in train
trainer.fit(model=model, datamodule=datamodule)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 499, in fit
self.dispatch()
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 546, in dispatch
self.accelerator.start_training(self)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 73, in start_training
self.training_type_plugin.start_training(trainer)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 108, in start_training
mp.spawn(self.new_process, **self.mp_spawn_kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 2 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 157, in new_process
self.configure_ddp()
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 195, in configure_ddp
self._model = DistributedDataParallel(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 446, in __init__
self._sync_params_and_buffers(authoritative_rank=0)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
self._distributed_broadcast_coalesced(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1155, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1614378083779/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
I use the official PyTorch image pytorch/pytorch:1.8.0-cuda11.1-cudnn8-runtime, and on top of that installed pytorch-lightning to use multi-GPU. It seems to be a PyTorch problem; how can I tackle this?
Full environment:
PyTorch version: 1.8.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 3090
GPU 1: GeForce RTX 3090
GPU 2: GeForce RTX 3090
GPU 3: GeForce RTX 3090
Nvidia driver version: 460.67
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] pytorch-lightning==1.2.5
[pip3] torch==1.8.0
[pip3] torchelastic==0.2.2
[pip3] torchmetrics==0.2.0
[pip3] torchtext==0.9.0
[pip3] torchvision==0.9.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.3.0 py38h54f3939_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] pytorch 1.8.0 py3.8_cuda11.1_cudnn8.0.5_0 pytorch
[conda] pytorch-lightning 1.2.5 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchmetrics 0.2.0 pypi_0 pypi
[conda] torchtext 0.9.0 py38 pytorch
[conda] torchvision 0.9.0 py38_cu111 pytorch |
st176736 | Solved by LuoXin-s in post #3
Yes, I did that and solved this issue simply by using --ipc=host in my docker command. |
st176737 | You could run the script with NCCL_DEBUG=INFO python script.py args to get more debug information from NCCL, which should also contain the root cause of this issue. |
st176738 | I'm trying to run
torch.distributed.init_process_group('nccl', world_size=2, rank=0, init_method='file://' + os.path.abspath('./dummy'))
inside a docker container that's started with the command:
docker run --runtime=nvidia --network="host" --shm-size 1g -v [from]:[to] -i -t [image] /bin/bash
But I get a RuntimeError:
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1
Can you help me please? |
st176739 | Hello! The timeout is during initialization so it seems that not all workers are joining the group. Two follow up questions:
You are using world_size=2 and you specify rank=0, are you also initializing another worker with rank=1? You can do this by creating multiple processes in a docker container, or running two docker containers and changing the code to use different ranks for each container.
If you are using multiple docker containers, since you are using the file init method, you should ensure that both containers can read/write to the same file. You can do this by using a shared volume between containers. |
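For example, with the file init method both workers would run something like this, pointing at the same shared path (the path and the RANK env var here are placeholders):
import os
import torch.distributed as dist

# rank 0 in one process/container, rank 1 in the other; '/shared/ddp_init' must be
# readable and writable by both sides and should not exist before the first run.
rank = int(os.environ.get("RANK", "0"))
dist.init_process_group(backend="nccl",
                        init_method="file:///shared/ddp_init",
                        world_size=2,
                        rank=rank)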
st176740 | H-Huang:
You are using world_size=2 and you specify rank=0, are you also initializing another worker with rank=1?
Yes, I'm trying to do it with denoiser/executor.py at master · facebookresearch/denoiser · GitHub 4 and denoiser/distrib.py at master · facebookresearch/denoiser · GitHub 2
H-Huang:
If you are using multiple docker containers, since you are using the file init method, you should ensure that both containers can read/write to the same file. You can do this by using a shared volume between containers.
No, it's a simple case with only one docker launch. |
st176741 | Could you add logging and make sure that both your processes reach the same state before torch.distributed.init_process_group produces an error?
Also, could you make sure that both processes has w access to the directory that contains rendezvous_file and that the file does not exist before you run torch.distributed.init_process_group
https://pytorch.org/docs/stable/distributed.html#shared-file-system-initialization 11 |
st176742 | Hello
I'm a new server user, so first let me introduce my server spec:
Power : 1600W x3
CPU : 2 x Intel® Xeon® Silver 4210R CPU @ 2.40GHz
GPU : TITAN RTX x5 + RTX 3090 x3
Now I'm trying to use my server for deep learning (object detection).
I use the EfficientDet PyTorch version code 4
At first I had only the 5 TITAN RTX cards
and I could train with all of the graphics cards,
but I bought another 3 RTX 3090s and mounted them in the server yesterday.
So I just changed num_gpu=8 in ./projects/proeject.yml,
and then it did not train.
I checked using nvidia-smi,
but nothing changed after about 1.5GB was allocated on each GPU.
Also, PyTorch recognizes an RTX 3090 as cuda:0 even though the RTX 3090 is not PCI bus number 0.
RTX 3090 PCI bus numbers = 1, 2, 3
TITAN RTX PCI bus numbers = 0, 4, 5, 6, 7
So I suspected 2 things:
CUDA 11.1 does not recognize 2 types of graphics card (3090, TITAN)
==> but CUDA 11.1 can recognize 2 types of graphics card according to the NVIDIA Developer Forum
The DataParallel module is not compatible with 2 types of graphics card
How can I use all the GPUs for one task |
st176743 | CUDA11.1 does recognize Turing and Ampere GPUs and it seems that they are indeed used, but your code seems to hang?
nn.DataParallel should also be compatible with different architectures, but note that the slower GPUs would most likely be the bottleneck and the faster ones would have to wait.
jaejun:
Also, PyTorch recognizes an RTX 3090 as cuda:0 even though the RTX 3090 is not PCI bus number 0.
That’s expected and you can change it via export CUDA_DEVICE_ORDER=PCI_BUS_ID. |
st176744 | Thanks for reply
Yes, my situation is hang
os.environ[“CUDA_DEVICE_ORDER”] = “PCI_BUS_ID”
os.environ[“CUDA_VISIBLE_DEVICES”] = “0,1,2,3,4,5,6,7”
I used upper code in my script
i waited more than 10m but it did not work
so if i wait more time, does it work??? |
st176745 | Could you try to see where it hangs, maybe add logging or see which GPUs are busy and idle during training?
You can also do an experiment where you run on 1 TITAN and 1 RTX3090 only and see if such training completes.
If the issue is that some GPUs are faster and it leads to work imbalance as a result, maybe try to decrease the batch size and see if it helps. |
st176746 | I am using a custom dataset and a custom data.Dataset class for loading it.
class MyDataset(data.Dataset):
    def __init__(self, datasets, transform=None, target_transform=None):
        self.datasets = datasets
        self.transform = transform

    def __len__(self):
        return len(self.datasets)

    def __getitem__(self, index):
        image = Image.open(os.path.join(self.datasets[index][0]))
        if self.transform:
            image = self.transform(image)
        return image, torch.tensor(self.datasets[index][1], dtype=torch.long)
When I started training on my 4 GPU machine, unlike what is mentioned in the PyTorch documentation, I found that DistributedDataParallel is slower than DataParallel!
I reviewed my code carefully and tried different configurations and batch sizes (especially the DataLoader num_workers) to see what would make DistributedDataParallel run faster than DataParallel as expected, but nothing worked.
The only change that made DistributedDataParallel faster was loading the whole dataset into memory during initialization!
class Inmemory_Dataset(data.Dataset):
    def __init__(self, datasets, transform=None, target_transform=None):
        self.datasets = datasets
        transform = transform
        image_list = []
        target_list = []
        for i, data in enumerate(datasets):
            image = Image.open(os.path.join(data[0]))
            if transform:
                image = transform(image)
            image_list.append(image.numpy())
            target_list.append(data[1])
        self.images = torch.tensor(image_list)
        self.targets = torch.tensor(target_list, dtype=torch.long)

    def __len__(self):
        return len(self.datasets)

    def __getitem__(self, index):
        return self.images[index], self.targets[index]
After this change, DistributedDataParallel became 30% faster, but I do not think this is how it should be, because what if my dataset does not fit into memory?
Below I highlight the main parts where I set up both DataParallel and DistributedDataParallel. Notice that the overall effective batch size is the same in both cases.
DataParallel:
batch_size = 100
if torch.cuda.device_count() > 1:
    print("Using DataParallel...")
    model = nn.DataParallel(model)
    batch_size = batch_size * torch.cuda.device_count()
DistributedDataParallel:
def train(gpu, args):
    # print(args)
    rank = args.nr * args.gpus + gpu
    dist.init_process_group(backend='nccl', init_method='env://', world_size=args.world_size, rank=rank)
    batch_size = 100
    model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
    train_sampler = torch.utils.data.distributed.DistributedSampler(training_dataset,
                                                                    num_replicas=args.world_size,
                                                                    rank=rank)
    training_dataloader = torch.utils.data.DataLoader(
        dataset=training_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=1,
        sampler=train_sampler,
        pin_memory=True)
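(Not part of the original post, but for context, here is a minimal sketch of how such a train function is usually launched with one process per GPU. The argument names nodes, gpus, and nr are assumptions based on the snippet above.)
import os
import argparse
import torch.multiprocessing as mp

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--nodes", type=int, default=1)
    parser.add_argument("--gpus", type=int, default=4)   # GPUs per node
    parser.add_argument("--nr", type=int, default=0)     # rank of this node
    args = parser.parse_args()
    args.world_size = args.nodes * args.gpus

    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    # Starts args.gpus processes on this node; each one calls train(gpu, args)
    mp.spawn(train, nprocs=args.gpus, args=(args,))

if __name__ == "__main__":
    main()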
st176747 | Hey @ammary-mo, how did you measure the delay? Since DataParallel and DistributedDataParallel are only involved in the forward and backward passes, could you please try using elapsed_time to measure the data loading, forward, and backward delay breakdowns? See the following discussion. It’s possible that if multiple DDP processes try to read from the same file, contention might lead to a data-loading perf regression. If that’s the case, the solution would be implementing a more performant data loader.
Issue with using DataParallel (includes minimal code) distributed
Can we profile how much of the 434s are spent in the forward pass when DP is not present? And how much of that is spent on GPU? This can be measured using elapsed_time . See this discussion.
Note that multi-thread cannot parallelize normal Python ops due to Python GIL, and the parallelism only kicks in when the execution does not require GIL (e.g., CPU/GPU ops that explicitly drops GIL).
cc @VitalyFedyunin @glaringlee for DataLoader and DataSampler. |
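For reference (not from the original reply), a minimal sketch of the elapsed_time measurement being suggested, assuming model, images, labels, and loss_fn already live on the GPU:
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
outputs = model(images)              # forward pass only
end.record()
torch.cuda.synchronize()             # wait for the GPU work to finish before reading the timer
print(f"forward: {start.elapsed_time(end):.2f} ms")

start.record()
loss_fn(outputs, labels).backward()
end.record()
torch.cuda.synchronize()
print(f"backward: {start.elapsed_time(end):.2f} ms")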
st176748 | Hey @mrshenli
Thanks for tuning in. I understand why you are asking me to measure the forward pass for both DP and DDP. But think about it from the end-user perspective: I care more about the overall training performance (which involves both model sync and data-batch loading). If DDP is faster only in the forward pass but (somehow) wastes the time it saved on data loading, then overall DP will be the better option! Don't you agree? |
st176749 | Is it possible that, if your data is small enough to fit entirely into memory, the DDP setup overhead just adds time to the task without any performance improvement? In other words: GPU utilization is low enough that you just can't see the gains of using multiple GPUs |
st176750 | Why my DistributedDataParallel is slower than DataParallel if my Dataset is not loaded fully in memory distributed
Is it possible, that if your data is small enough to entirely fit into the memory, the DDP setup overhead is just increasing time on the task without any performance improvement? In other words: GPU utilization is small enough, you just can’t see the gains of using multiple GPUs
Good question @Alexey_Demyanchuk.
But my answer is no. My data is too large to fit in memory, which is why I started by loading it from disk. However, when I found that DDP was slower than DP, as I mentioned in the question, I started comparing both with different configurations to see what works. Eventually, the only change that made DDP faster was reducing my data size and loading it into memory. I hope this clarifies the situation. |
st176751 | My suggestion would be to profile the pipeline. Is GPU utilization near 100% throughout training? If it is not, you could have some sort of preprocessing bottleneck (I/O or CPU bound). Does it make sense in your case? |
st176752 | I'm not sure why I should check GPU utilization? GPU utilization depends on multiple factors, like batch size and even image size.
Anyhow, I checked GPU utilization and it is low in all cases, but this is not due to a data-loading bottleneck; rather, it is because I am using a small model and a small dataset (MNIST-like) just to compare performance. |
st176753 | @mrshenli, it seems you are right. It looks like a data-loading issue to me.
In DDP, with only two workers (num_workers=2), it is clear that data-loading time is a bottleneck, as one worker/batch constantly takes more time to load than the other:
The solution is to increase num_workers enough to hide the data-loading delay, as in this post.
But it seems that this solution works well only with DataParallel, but not with DistributedDataParallel.
DataParallel num_workers=16
It works fine because only the first batch takes 3 seconds and all consecutive batches take almost no time.
DistributedDataParallel num_workers=16
The first batch takes 13 seconds to load, which is too much, and as I add more workers it takes even more.
My explanation is that the data sampler is adding more overhead in DistributedDataParallel which is not the case in DataParallel.
Below again is my data sampler code to check if there is any issue with it or potential enhancement:
batch_size = 100
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
train_sampler = torch.utils.data.distributed.DistributedSampler(training_dataset,
                                                                num_replicas=args.world_size,
                                                                rank=rank)
training_dataloader = torch.utils.data.DataLoader(
    dataset=training_dataset,
    batch_size=batch_size,
    shuffle=False,
    num_workers=16,
    sampler=train_sampler,
    pin_memory=True)
I hope @VitalyFedyunin and @glaringlee can get back to us for advice about DataLoader and DataSampler. |
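Not from the original post, but one way to confirm where the time goes is to time how long each next() call on the DataLoader blocks. A rough sketch using the training_dataloader defined above:
import time

loader_iter = iter(training_dataloader)
for step in range(20):
    t0 = time.time()
    images, targets = next(loader_iter)   # time spent waiting for the DataLoader workers
    print(f"batch {step}: waited {time.time() - t0:.3f}s for data")
If your PyTorch version supports it, passing persistent_workers=True to the DataLoader also avoids re-spawning the 16 worker processes at the start of every epoch, which is one common reason the first batches look so slow.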
st176754 | hi community,
how can we access the nproc_per_node parameter inside the training script code?
for example, running the following:
python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE \
    --nnodes=2 --node_rank=0 --master_addr="192.168.1.1" \
    --master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
    and all other arguments of your training script)
How can YOUR_TRAINING_SCRIPT.py know the values of nproc_per_node and master_addr?
Thank you! |
st176755 | Hello! You would need to duplicate them and pass those in as your own arguments. Example:
command (single node, 2 GPUs):
python -m torch.distributed.launch --nproc_per_node=2 \
    train_script.py --master_addr=localhost --nproc_per_node=2
train_script.py:
import argparse
parser = argparse.ArgumentParser()
# This is always passed in by default
parser.add_argument("--local_rank", type=int)
# These are your own arguments
parser.add_argument("--master_addr", type=str)
parser.add_argument("--nproc_per_node", type=int)
args = parser.parse_args()
print(args)
output:
Namespace(local_rank=0, master_addr='localhost', nproc_per_node=2)
Namespace(local_rank=1, master_addr='localhost', nproc_per_node=2)
We would be interested in hearing why you need the nproc_per_node and master_addr in your training script, generally just the rank is sufficient? |
st176756 | Hi Howard,
Thank you very much!
I want to implement some new asynchronous averaging algorithms for federated learning using PyTorch, but I am also new to the area, so maybe my implementation is not optimal. |
st176757 | I see! That makes sense, thank you. The launcher is not absolutely necessary, but it could be useful; here is the source code (it's short) to glean some insight into what it is doing: pytorch/launch.py at master · pytorch/pytorch · GitHub
For your use case, I would recommend looking into the RPC framework (Distributed RPC Framework — PyTorch 1.8.1 documentation) if you haven't already. |
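(Not from the original reply, but a rough illustration of how the RPC framework can be used for asynchronous parameter averaging. Everything below, including the worker names, the mixing weight, and the toy model, is made up for the sketch; a real implementation would also need locking around the server-side update.)
import os
import torch
import torch.nn as nn
import torch.distributed.rpc as rpc

# A toy "global" model; in this sketch only the copy on rank 0 (the server) matters
global_model = nn.Linear(10, 5)

def push_update(client_params, weight=0.1):
    # Executed on the server whenever a client calls in; mixes that client's
    # parameters into the global model without waiting for the other clients
    with torch.no_grad():
        for p, cp in zip(global_model.parameters(), client_params):
            p.mul_(1 - weight).add_(cp, alpha=weight)

def run(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    name = "server" if rank == 0 else f"client{rank}"
    rpc.init_rpc(name, rank=rank, world_size=world_size)

    if rank != 0:
        # ... train a local copy here, then push it whenever this client is ready
        local_params = [p.detach() for p in nn.Linear(10, 5).parameters()]
        rpc.rpc_sync("server", push_update, args=(local_params,))

    rpc.shutdown()  # blocks until all outstanding RPCs have finished
Each rank would call run(rank, world_size) in its own process, e.g. via torch.multiprocessing.spawn.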
st176758 | Yes. I have noticed this document. thank you very much?
BTW, is Pytorch team working on a general federated learning framework that supports flexible control on each client (processors, GPUs) and the way of aggregating their gradients or model parameters. |
st176759 | To my knowledge, there isn’t a project for a general federated learning framework. Feel free to start a new thread regarding this as others may have insight, it will also be useful for feature tracking purposes. |
st176760 | If I take the output of an MLP network as logits, construct a Categorical distribution from them, and then call log_prob, what computation is performed and which values are involved in the log_prob calculation? |
st176761 | This question is not related to the distributed package so please change the category.
But I think this is the logic for log_prob that you are looking for: pytorch/categorical.py at master · pytorch/pytorch · GitHub |
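In short, Categorical(logits=...) normalizes the logits with log_softmax, and log_prob(value) returns the normalized log-probability at the given index. A small self-contained check of that (not from the original reply):
import torch
from torch.distributions import Categorical

logits = torch.randn(4, 6)                 # e.g. MLP output: batch of 4, 6 actions
m = Categorical(logits=logits)
actions = m.sample()                       # shape (4,)

log_probs = m.log_prob(actions)

# Equivalent manual computation: normalize the logits, then pick the chosen index
manual = torch.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(-1)).squeeze(-1)
print(torch.allclose(log_probs, manual))   # True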
st176762 | I am trying to get NCCL backend working on my Ubuntu 20.04 system that has two Nvidia 2070S GPUs and runs Pytorch 1.8.
My test script is based on the Pytorch docs, but with the backend changed from "gloo" to "nccl".
When the backend is "gloo", the script finishes running in less than a minute.
$ time python test_ddp.py
Running basic DDP example on rank 0.
Running basic DDP example on rank 1.
real 0m4.839s
user 0m4.980s
sys 0m1.942s
However, when the backend is set to "nccl", the script gets stuck with the below output and never returns to the bash prompt.
$ python test_ddp.py
Running basic DDP example on rank 1.
Running basic DDP example on rank 0.
Same problem when disabling IB
$ NCCL_IB_DISABLE=1 python test_ddp.py
Running basic DDP example on rank 1.
Running basic DDP example on rank 0.
I’m using the packages:
pytorch 1.8.1
cudatoolkit 11.1.1
python 3.8.8
How can we fix the problem when using NCCL? Thank you!
Python code used for testing NCCL:
import os
import sys
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp

from torch.nn.parallel import DistributedDataParallel as DDP


def setup(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"

    # gloo: works
    # dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # nccl: hangs forever
    dist.init_process_group(
        "nccl", init_method="tcp://10.1.1.20:23456", rank=rank, world_size=world_size
    )


def cleanup():
    dist.destroy_process_group()


class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))


def demo_basic(rank, world_size):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)

    # create model and move it to GPU with id rank
    model = ToyModel().to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10))
    labels = torch.randn(20, 5).to(rank)
    loss_fn(outputs, labels).backward()
    optimizer.step()

    cleanup()


def run_demo(demo_fn, world_size):
    mp.spawn(demo_fn, args=(world_size,), nprocs=world_size, join=True)


if __name__ == "__main__":
    run_demo(demo_basic, 2)
st176763 | Solved by H-Huang in post #2 |
st176764 | Thanks for posting your code and details!
I see that when you init the process group with NCCL you specify the init_method as “tcp” and provide a local IP and port. Can you ensure that those are reachable? Alternatively, since you are already setting the address and port as environment variables and it works for gloo, you can remove the “init_method” parameter from init_process_group and it will default to use “env://” and that should work as well.
dist.init_process_group("nccl", rank=rank, world_size=world_size)
Here is the documentation for init_process_group: Distributed communication package - torch.distributed — PyTorch 1.8.1 documentation
Please let me know if this works. |
st176765 | Yes, you are right! I’ve got it running with NCCL by changing setup function as suggested
def setup(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    dist.init_process_group("nccl", rank=rank, world_size=world_size) |
st176766 | Yes, you are right! I’ve got it running with NCCL by changing setup function as suggested
def setup(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
However, is this now using TCP initialization? |
st176767 | Yes, the underlying communication uses TCP. During initialization it uses the variables you defined for addr, port, rank, and world_size to create TCPStore instances on all workers.
https://pytorch.org/docs/master/distributed.html#torch.distributed.Store |
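As a rough sketch of what that env:// initialization does under the hood (the host, port, world size, and the rank variable below are just example values, not from the original thread):
from datetime import timedelta

import torch.distributed as dist

# rank 0 hosts the TCPStore; the other ranks connect to it as clients
store = dist.TCPStore("127.0.0.1", 12355, 2, rank == 0, timedelta(seconds=30))
dist.init_process_group("nccl", store=store, rank=rank, world_size=2)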