st175868 | If I'm spawning 2 processes on 1 machine with 2 GPUs. AFAIK, in each process there will be a model replica during DDP construction, and the all_reduce synchronization happens during loss.backward(). So I'm wondering which of the 2 processes is actually doing this reduction work? I can see that during all_reduce we need to somehow communicate between the 2 processes. But is there a 'major' process doing more work, like computing the average of the gradients?
These are the Gloo allreduce algorithms gloo/allreduce.h at c22a5cfba94edf8ea4f53a174d38aa0c629d070f · facebookincubator/gloo · GitHub
I am using GLOO currently. I just checked in Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation, and GLOO on GPU has limited functionality: just 2 supported functions, broadcast and all_reduce. Are these 2 functions enough for a basic DDP example on GPU to work fine?
Yes, those two functions are enough to implement a DDP algorithm. If you are doing distributed GPU training, it is recommended to use the NCCL backend.
More information on the distributed communication packages Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation.
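As a rough illustration of the backend choice described above (not code from this thread), initialization might look like the following; the helper name and the localhost rendezvous settings are just placeholders:
import os
import torch
import torch.distributed as dist

def init_distributed(rank, world_size):
    # Placeholder rendezvous settings; adjust for your cluster.
    os.environ.setdefault('MASTER_ADDR', 'localhost')
    os.environ.setdefault('MASTER_PORT', '29500')
    # NCCL is recommended for GPU training; Gloo covers CPU training
    # (and, on GPU, only broadcast and all_reduce as noted above).
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    dist.init_process_group(backend, rank=rank, world_size=world_size)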
Also, how could I check if my all_reduce is done on GPU, not on CPU?
For clarification, do you mean that the all_reduce algorithm is run on GPU?
And besides, could I use GLOO backend to launch 2 processes on CPU to do DDP?
Yes, initialize a process group before creating your DDP model.
A tutorial on DDP: Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.9.0+cu102 documentation. |
st175869 | Thanks for the quick reply.
gcramer23:
Yes, initialize a process group before creating your DDP model.
I have indeed followed the link you mentioned. And to initialize a process group, I used this code:
def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'

    # initialize the process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
But I can't tell which part of this initialization is related to using CPU GLOO instead of GPU GLOO. Do you mean that when creating the model, just like what we do without DDP, we should not call model.to(device), so that the model runs on CPU?
For example:
setup(rank, world_size)
# create model and move it to CPU? with id rank
model = ToyModel()
ddp_model = DDP(model)
It seems like the device_ids argument is also not needed in DDP initialization, right? |
st175870 | But I can’t tell where is the part related to using CPU GLOO instead of GPU GLOO in this initialization. Do you mean when creating the model, just like what we do without DDP, don’t use model.to(device) to use make the model run on CPU?
https://pytorch.org/docs/stable/distributed.html
" torch.distributed supports three built-in backends, each with different capabilities. The table below shows which functions are available for use with CPU / CUDA tensors."
You simply initialize the backend. The model is on CPU unless moved to GPU.
It seems like the device_ids argument is also not needed in DDP initialization, right?
https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html
" * device_ids (list of python:int or torch.device) –CUDA devices. 1) For single-device modules, device_ids can contain exactly one device id, which represents the only CUDA device where the input module corresponding to this process resides. Alternatively, device_ids can also be None. 2) For multi-device modules and CPU modules, device_ids must be None.When device_ids is None for both cases, both the input data for the forward pass and the actual module must be placed on the correct device. (default: None)" |
st175871 | I see. Another question is regarding this piece of code:
# create model and move it to GPU with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
The model has been designated to a GPU and also wrapped by DDP. But when we feed in data as in this line
outputs = ddp_model(torch.randn(20, 10))
Shouldn’t we use torch.randn(20, 10).to(rank) instead? |
st175872 | So, which all_reduce algorithm does Gloo use? I saw there are ring, bcube, halving_doubling, etc. In addition, does the host do the all_reduce, or is it done directly on the devices? |
st175873 | By default, Gloo uses the ring algorithm gloo/allreduce.cc at master · facebookincubator/gloo · GitHub. Gloo does the all_reduce on the host, while NCCL does it directly on the devices. |
st175874 | Hmm, but it seems that GLOO does support the all_reduce op on CUDA tensors, right? So my question is: why bother running the all_reduce on the host (you have to cudaMemcpy the CUDA tensors to the CPU, do the all_reduce there, and then transfer the result back to the GPU devices again)? Why not do the all_reduce directly on the GPU? It seems that doing it on the GPU would be faster, am I right? |
st175875 | My PyTorch model's input contains a dict. That is:
class MyModel(nn.Module):
    def __init__(self):
        ...

    def forward(self, input1: Tensor, input2: dict):
        ...
I know in DataParallel or DistributedDataParallel, a function called scatter_kwargs will split inputs of type Tensor and replicate other types of data.
However, the keys of input2 are associated with the batch index, e.g. b[0] contains the data of the first dimension, which results in the original batch index no longer corresponding to the keys in input2 after calling scatter_kwargs.
So is there an easy way to split the data in the dictionary along with the tensor inputs? |
st175876 | Hey @Randool, as of PyTorch v1.9, DistributedDataParallel only supports Single-Program Single-Device mode, where each process exclusively works on a single model replica. In this case, DistributedDataParallel will not divide forward inputs, and instead will directly pass it to the wrapped local model. Will this be sufficient for your use case? |
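To make that concrete, here is a rough sketch (the loader name is made up) of the single-process single-device pattern: each rank loads its own shard of the data, so the dict is passed straight through to the wrapped module and never split:
model = MyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])

for input1, input2 in per_rank_loader:        # input2 is a plain dict
    # DDP forwards the arguments unchanged to the local replica,
    # so the dict keys still line up with this rank's batch.
    out = ddp_model(input1.to(rank), input2)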
st175877 | I have some buffers defined using self.register_buffer('x', ...) and I need it to be not broadcasted. However, I do not want to set broadcast_buffers=False because I want other behaviours to remain the same (e.g. running stats in BN being broadcasted).
Or should I define x in other ways? |
st175878 | Could you please elaborate on why you need to register buffer but not broadcast it? |
st175879 | I want to accumulate results (in different processes) every n forward passes, aggregate, do something and repeat.
Let's say I store the result in self.x. If I broadcast self.x at the end of a forward pass, then in the second pass, when I broadcast again, there will be double counting. |
st175880 | Suppose I have 8 GPUs, each running some operations that produce different outputs. After aggregating, I would like to modify a variable self.x with this aggregated result. From what I understand, if I write self.x = aggregate, all 8 processes will run this line once, which means 8 times in total. Is that right? If it is, how do I control this so that it is run only in the slowest process, so that it is only run once?
*It probably doesn’t affect the result, but it just feels “ugly” to me. I am not sure if it is normal for this kind of code to be written when writing distributed code. Hopefully someone can advise. |
st175881 | You can synchronize the processes and then perform the variable update on your desired rank process.
Then you will need to distribute the aggregated value to the other ranks.
import torch
import torch.distributed as dist
RANK = 0 # your target rank
...
dist.barrier() # synchronize (wait for the slowest process)
if dist.get_rank() == RANK:
    self.x = aggregate() |
st175882 | How to distribute to other ranks? I presume all ranks will see the same self.x? Or will all processes duplicate and keep their own version of self variables? |
st175883 | Since all processes are independent from each other and do not communicate implicitly, self.x must be explicitly handled for all ranks to end up with the same value.
You can refer to the collective functions in pytorch distributed docs.
You may choose to broadcast or all_reduce, etc. |
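For example, a sketch of the broadcast option (assuming self.x already exists with the same shape on every rank, so the collective has a tensor to write into):
import torch.distributed as dist

RANK = 0                                # rank that computed the aggregate
dist.barrier()                          # wait for all processes
if dist.get_rank() == RANK:
    self.x = aggregate()                # as in the snippet above
dist.broadcast(self.x, src=RANK)        # every rank now holds the same value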
st175884 | Ok. A side question: do you know if all_gather gathers tensors in the same order? I need to all_gather 2 tensors and they have to be gathered in the same order, so that item i in the first list can be matched to item i in the second list. |
st175885 | Hi, I was wondering what is the order of the list returned by torch.distributed.all_gather.
Is the tensor in position i coming from the torch.distributed.get_rank() = i? Thanks! |
st175886 | Is it stated anywhere in the doc? How can I be sure of this? I need to all_gather 2 lists and they have to be gathered in the same order, so that item i in the first list can be matched to item i in the second list. |
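If, as the docs describe, all_gather fills the output list in rank order, then two calls stay aligned; a sketch (the tensor-producing helpers are placeholders):
import torch
import torch.distributed as dist

world_size = dist.get_world_size()
a = compute_first_tensor()    # placeholder for this rank's first tensor
b = compute_second_tensor()   # placeholder for this rank's second tensor

gathered_a = [torch.empty_like(a) for _ in range(world_size)]
gathered_b = [torch.empty_like(b) for _ in range(world_size)]
dist.all_gather(gathered_a, a)
dist.all_gather(gathered_b, b)
# gathered_a[i] and gathered_b[i] both come from rank i,
# so item i of the first list matches item i of the second list.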
st175887 | While training the MNIST example dataset in PyTorch, I met this RuntimeError on the master node:
File "./torch-dist/mnist-dist.py", line 201, in <module>
init_processes(args.rank, args.world_size, run, args.batch_size, backend=args.backend)
File "./torch-dist/mnist-dist.py", line 196, in init_processes
dist.init_process_group(backend=backend, world_size=world_size, rank=rank, init_method="env://")
File "/home/dl/anaconda2/envs/torch-dist-py3.6/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 354, in init_process_group
store, rank, world_size = next(rendezvous(url))
File "/home/dl/anaconda2/envs/torch-dist-py3.6/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 143, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, start_daemon)
RuntimeError: Address already in use
It seemed that multiprocessing with the launch utility had a problem.
I ran the code by invoking the launch utility as the documentation suggested:
python -m torch.distributed.launch --nproc_per_node=2 --nnode=2 --node_rank=0 --master_addr='10.0.3.29' --master_port=9901 ./torch-dist/mnist-dist.py |
st175888 | Solved by teng-li in post #7
@leo-mao, you should not set world_size and rank in torch.distributed.init_process_group, they are automatically set by torch.distributed.launch.
So please change that to dist.init_process_group(backend=backend, init_method="env://")
Also, you should not set WORLD_SIZE, RANK env variables in your … |
st175889 | Here is my code, hope it helps:
import argparse
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.utils.data
import torch.utils.data.distributed
import torch.optim as optim
import torch.distributed
from torchvision import datasets, transforms
from torch.autograd import Variable
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=256, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=5, metavar='N', help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR', help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M', help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False, help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S', help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--backend', type=str, default='nccl')
parser.add_argument('--rank', type=int, default=0)
parser.add_argument('--world-size', type=int, default=1)
parser.add_argument('--local_rank', type=int)
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=0)
def average_gradients(model):
    """ Gradient averaging"""
    size = float(dist.get_world_size())
    for param in model.parameters():
        dist.all_reduce_multigpu(param.grad.data, op=dist.ReduceOp.SUM)
        param.grad.data /= size

def summary_print(rank, loss, accuracy, average_epoch_time, tot_time):
    import logging
    size = float(dist.get_world_size())
    summaries = torch.tensor([loss, accuracy, average_epoch_time, tot_time], requires_grad=False, device='cuda')
    dist.reduce_multigpu(summaries, 0, op=dist.ReduceOp.SUM)
    if rank == 0:
        summaries /= size
        logging.critical('\n[Summary]System : Average epoch time(ex. 1.): {:.2f}s, Average total time : {:.2f}s '
                         'Average loss: {:.4f}\n, Average accuracy: {:.2f}%'
                         .format(summaries[2], summaries[3], summaries[0], summaries[1] * 100))
def train(model, optimizer, train_loader, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if args.world_size > 1:
            average_gradients(model)
        if batch_idx % args.log_interval == 0:
            print('Train Epoch {} - {} / {:3.0f} \tLoss {:.6f}'.format(
                epoch, batch_idx, 1.0 * len(train_loader.dataset) / len(data), loss))

def test(test_loader, model):
    model.eval()
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        # Varibale(data, volatile=True)
        data, target = Variable(data, requires_grad=False), Variable(target)
        output = model(data)
        test_loss += F.nll_loss(output, target, reduction='sum')
        pred = output.data.max(1, keepdim=True)[1]
        correct += pred.eq(target.data.view_as(pred)).cpu().sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'
          .format(test_loss, correct, len(test_loader.dataset),
                  100. * correct / len(test_loader.dataset)))
    return test_loss, float(correct) / len(test_loader.dataset)

def config_print(rank, batch_size, world_size):
    print('----Torch Config----')
    print('rank : {}'.format(rank))
    print('mini batch-size : {}'.format(batch_size))
    print('world-size : {}'.format(world_size))
    print('backend : {}'.format(args.backend))
    print('--------------------')
def run(rank, batch_size, world_size):
    """ Distributed Synchronous SGD Example """
    config_print(rank, batch_size, world_size)
    train_dataset = datasets.MNIST('../MNIST_data/', train=True,
                                   transform=transforms.Compose([transforms.ToTensor(),
                                                                 transforms.Normalize((0.1307,), (0.3081,))]))
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset, num_replicas=world_size,
                                                                    rank=rank)
    kwargs = {'num_workers': args.world_size, 'pin_memory': True} if args.cuda else {}
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
    test_loader = torch.utils.data.DataLoader(datasets.MNIST('../MNIST_data/', train=False,
                                                             transform=transforms.Compose(
                                                                 [transforms.ToTensor(),
                                                                  transforms.Normalize((0.1307,), (0.3081,))])),
                                              batch_size=args.test_batch_size, shuffle=True, **kwargs)
    model = Net()
    if args.cuda:
        torch.cuda.manual_seed(args.seed)
        torch.cuda.set_device(args.local_rank)
        device = torch.device('cuda', args.local_rank)
        model.cuda(device=device)
        model = nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank)
        cudnn.benchmark = True
    else:
        device = torch.device('cpu')
    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
    torch.manual_seed(args.seed)
    tot_time = 0
    first_epoch = 0
    for epoch in range(1, args.epochs + 1):
        train_sampler.set_epoch(epoch)
        start_cpu_secs = time.time()
        train(model, optimizer, train_loader, epoch)
        end_cpu_secs = time.time()
        # print('start_cpu_secs {}'.format())
        print("Epoch {} of took {:.3f}s".format(
            epoch, end_cpu_secs - start_cpu_secs))
        tot_time += end_cpu_secs - start_cpu_secs
        print('Current Total time : {:.3f}s'.format(tot_time))
        if epoch == 1:
            first_epoch = tot_time
    test_loss, accuracy = test(test_loader, model)
    if args.epochs > 1:
        average_epoch_time = float(tot_time - first_epoch) / (args.epochs - 1)
        print('Average epoch time(ex. 1.) : {:.3f}s'.format(average_epoch_time))
    print("Total time : {:.3f}s".format(tot_time))
    if args.world_size > 1:
        summary_print(rank, test_loss, accuracy, average_epoch_time, tot_time)
def init_processes(rank, world_size, fn, batch_size, backend='gloo'):
    import os
    os.environ['MASTER_ADDR'] = '10.0.3.29'
    os.environ['MASTER_PORT'] = '9901'
    os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
    os.environ['NCCL_DEBUG'] = 'INFO'
    os.environ['GLOO_SOCKET_IFNAME'] = 'enp0s31f6'
    dist.init_process_group(backend=backend, world_size=world_size, rank=rank, init_method="env://")
    fn(rank, batch_size, world_size)

if __name__ == '__main__':
    init_processes(args.rank, args.world_size, run, args.batch_size, backend=args.backend)
    torch.multiprocessing.set_start_method('spawn')
    # processes = []
    # for rank in range(1):
    #     p = Process(target=init_processes,
    #                 args=(rank, args.world_size, run, args.batch_size, args.backend))
    #     p.start()
    #     processes.append(p)
    #
    # for p in processes:
    #     p.join() |
st175890 | check ps -elf | grep python, and see if you have any processes from previous runs that still have not been killed. Maybe they are occupying that port and are still alive. |
st175891 | Thanks for the reply; no background process was found and the port was always available.
I have fixed a typo in my command, where --master_addr ='10.0.3.29' --master_port=9901 had been typed as --master_addr ='10.0.3.29' --master_port='10.0.3.29'.
However, the error has remained, and at least one process has launched with the following output:
Traceback (most recent call last):
File "mnist-dist.py", line 203, in <module>
init_processes(args.rank, args.world_size, run, args.batch_size, backend=args.backend)
File "mnist-dist.py", line 198, in init_processes
dist.init_process_group(backend=backend, world_size=world_size, rank=rank, init_method="env://")
File "/home/dl/anaconda2/envs/torch-dist-py3.6/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 354, in init_process_group
store, rank, world_size = next(rendezvous(url))
File "/home/dl/anaconda2/envs/torch-dist-py3.6/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 143, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, start_daemon)
RuntimeError: Address already in use
dl-Server:10648:10648 [1] NCCL INFO NET : Using interface enp0s31f6:10.0.13.29<0>
dl-Server:10648:10648 [1] NCCL INFO NET/IB : Using interface enp0s31f6 for sideband communication
dl-Server:10648:10648 [1] NCCL INFO Using internal Network Socket
dl-Server:10648:10648 [1] NCCL INFO NET : Using interface enp0s31f6:10.0.13.29<0>
dl-Server:10648:10648 [1] NCCL INFO NET/Socket : 1 interfaces found
NCCL version 2.3.7+cuda9.0
dl-Server:10648:10648 [1] NCCL INFO rank 0 nranks 1
dl-Server:10648:10675 [1] NCCL INFO comm 0x7fd38c00d3f0 rank 0 nranks 1
dl-Server:10648:10675 [1] NCCL INFO CUDA Dev 1, IP Interfaces : enp0s31f6(PHB)
dl-Server:10648:10675 [1] NCCL INFO Using 256 threads
dl-Server:10648:10675 [1] NCCL INFO Min Comp Cap 6
dl-Server:10648:10675 [1] NCCL INFO comm 0x7fd38c00d3f0 rank 0 nranks 1 - COMPLETE
Train Epoch 1 - 0 / 234 Loss 5.576234
Train Epoch 1 - 10 / 234 Loss 5.541703
Train Epoch 1 - 20 / 234 Loss 5.515630
Train Epoch 1 - 30 / 234 Loss 5.514045
Train Epoch 1 - 40 / 234 Loss 5.485974
Train Epoch 1 - 50 / 234 Loss 5.462833
Train Epoch 1 - 60 / 234 Loss 5.422739
Train Epoch 1 - 70 / 234 Loss 5.374931
Train Epoch 1 - 80 / 234 Loss 5.342307
Train Epoch 1 - 90 / 234 Loss 5.291063
Train Epoch 1 - 100 / 234 Loss 5.220443
Train Epoch 1 - 110 / 234 Loss 5.083968
Train Epoch 1 - 120 / 234 Loss 5.002171
Train Epoch 1 - 130 / 234 Loss 4.953607
Train Epoch 1 - 140 / 234 Loss 4.894170
Train Epoch 1 - 150 / 234 Loss 4.805832
Train Epoch 1 - 160 / 234 Loss 4.792961
Train Epoch 1 - 170 / 234 Loss 4.732522
Train Epoch 1 - 180 / 234 Loss 4.770869
Train Epoch 1 - 190 / 234 Loss 4.688779
Train Epoch 1 - 200 / 234 Loss 4.725927
Train Epoch 1 - 210 / 234 Loss 4.620460
Train Epoch 1 - 220 / 234 Loss 4.605740
Train Epoch 1 - 230 / 234 Loss 4.563363
Epoch 1 of took 5.263s
Current Total time : 5.263s
Train Epoch 2 - 0 / 234 Loss 4.555817
Train Epoch 2 - 10 / 234 Loss 4.603082
Train Epoch 2 - 20 / 234 Loss 4.618500
Train Epoch 2 - 30 / 234 Loss 4.520389
Train Epoch 2 - 40 / 234 Loss 4.531864
Train Epoch 2 - 50 / 234 Loss 4.467782
Train Epoch 2 - 60 / 234 Loss 4.447100
Train Epoch 2 - 70 / 234 Loss 4.424728
Train Epoch 2 - 80 / 234 Loss 4.433639
Train Epoch 2 - 90 / 234 Loss 4.372109
Train Epoch 2 - 100 / 234 Loss 4.435561
Train Epoch 2 - 110 / 234 Loss 4.351253
Train Epoch 2 - 120 / 234 Loss 4.306677
Train Epoch 2 - 130 / 234 Loss 4.343150
Train Epoch 2 - 140 / 234 Loss 4.243150
Train Epoch 2 - 150 / 234 Loss 4.347620
Train Epoch 2 - 160 / 234 Loss 4.217095
Train Epoch 2 - 170 / 234 Loss 4.255800
Train Epoch 2 - 180 / 234 Loss 4.282191
Train Epoch 2 - 190 / 234 Loss 4.249407
Train Epoch 2 - 200 / 234 Loss 4.209113
Train Epoch 2 - 210 / 234 Loss 4.194527
Train Epoch 2 - 220 / 234 Loss 4.220213
Train Epoch 2 - 230 / 234 Loss 4.201759
Epoch 2 of took 5.524s
Current Total time : 10.787s
Train Epoch 3 - 0 / 234 Loss 4.158279
Train Epoch 3 - 10 / 234 Loss 4.111032
Train Epoch 3 - 20 / 234 Loss 4.147989
Train Epoch 3 - 30 / 234 Loss 4.255434
Train Epoch 3 - 40 / 234 Loss 4.111946
Train Epoch 3 - 50 / 234 Loss 4.111733
Train Epoch 3 - 60 / 234 Loss 4.176547
Train Epoch 3 - 70 / 234 Loss 4.063233
Train Epoch 3 - 80 / 234 Loss 4.079793
Train Epoch 3 - 90 / 234 Loss 4.042555
Train Epoch 3 - 100 / 234 Loss 4.050662
Train Epoch 3 - 110 / 234 Loss 4.066662
Train Epoch 3 - 120 / 234 Loss 4.090621
Train Epoch 3 - 130 / 234 Loss 4.015823
Train Epoch 3 - 140 / 234 Loss 4.092526
Train Epoch 3 - 150 / 234 Loss 4.045942
Train Epoch 3 - 160 / 234 Loss 4.048071
Train Epoch 3 - 170 / 234 Loss 3.984233
Train Epoch 3 - 180 / 234 Loss 3.942847
Train Epoch 3 - 190 / 234 Loss 3.943717
Train Epoch 3 - 200 / 234 Loss 3.959996
Train Epoch 3 - 210 / 234 Loss 4.059554
Train Epoch 3 - 220 / 234 Loss 3.918130
Train Epoch 3 - 230 / 234 Loss 4.074725
Epoch 3 of took 5.308s
Current Total time : 16.095s
Train Epoch 4 - 0 / 234 Loss 3.944645
Train Epoch 4 - 10 / 234 Loss 3.923414
Train Epoch 4 - 20 / 234 Loss 3.944232
Train Epoch 4 - 30 / 234 Loss 3.978234
Train Epoch 4 - 40 / 234 Loss 3.950741
Train Epoch 4 - 50 / 234 Loss 3.913695
Train Epoch 4 - 60 / 234 Loss 3.907088
Train Epoch 4 - 70 / 234 Loss 4.026055
Train Epoch 4 - 80 / 234 Loss 3.854659
Train Epoch 4 - 90 / 234 Loss 3.954557
Train Epoch 4 - 100 / 234 Loss 3.880200
Train Epoch 4 - 110 / 234 Loss 3.911777
Train Epoch 4 - 120 / 234 Loss 3.866536
Train Epoch 4 - 130 / 234 Loss 3.957554
Train Epoch 4 - 140 / 234 Loss 3.930515
Train Epoch 4 - 150 / 234 Loss 3.950871
Train Epoch 4 - 160 / 234 Loss 3.845739
Train Epoch 4 - 170 / 234 Loss 3.905876
Train Epoch 4 - 180 / 234 Loss 3.884211
Train Epoch 4 - 190 / 234 Loss 4.034623
Train Epoch 4 - 200 / 234 Loss 3.863284
Train Epoch 4 - 210 / 234 Loss 3.899471
Train Epoch 4 - 220 / 234 Loss 3.837218
Train Epoch 4 - 230 / 234 Loss 3.862398
Epoch 4 of took 5.271s
Current Total time : 21.366s
Train Epoch 5 - 0 / 234 Loss 3.878444
Train Epoch 5 - 10 / 234 Loss 3.919256
Train Epoch 5 - 20 / 234 Loss 3.872842
Train Epoch 5 - 30 / 234 Loss 3.926296
Train Epoch 5 - 40 / 234 Loss 3.787506
Train Epoch 5 - 50 / 234 Loss 3.959824
Train Epoch 5 - 60 / 234 Loss 3.830777
Train Epoch 5 - 70 / 234 Loss 3.883856
Train Epoch 5 - 80 / 234 Loss 3.877614
Train Epoch 5 - 90 / 234 Loss 3.846863
Train Epoch 5 - 100 / 234 Loss 3.908530
Train Epoch 5 - 110 / 234 Loss 3.819784
Train Epoch 5 - 120 / 234 Loss 3.798816
Train Epoch 5 - 130 / 234 Loss 3.757388
Train Epoch 5 - 140 / 234 Loss 3.837136
Train Epoch 5 - 150 / 234 Loss 3.855000
Train Epoch 5 - 160 / 234 Loss 3.821057
Train Epoch 5 - 170 / 234 Loss 3.777124
Train Epoch 5 - 180 / 234 Loss 3.714392
Train Epoch 5 - 190 / 234 Loss 3.776406
Train Epoch 5 - 200 / 234 Loss 3.886733
Train Epoch 5 - 210 / 234 Loss 3.927509
Train Epoch 5 - 220 / 234 Loss 3.719052
Train Epoch 5 - 230 / 234 Loss 3.785564
Epoch 5 of took 5.216s
Current Total time : 26.582s
Test set : Average loss: 4.8523, Accuracy: 9453/10000 (94%)
Average epoch time(ex. 1.) : 5.330s
Total time : 26.582s
Would you like to have a look ? |
st175892 | Maybe the way I was using the launch utility was to blame, do you have any idea? Was I supposed to type the same command on the master node and on the other worker nodes?
Any suggestion would be welcome, since I’ve been stuck for too long. |
st175893 | @leo-mao, you should not set world_size and rank in torch.distributed.init_process_group, they are automatically set by torch.distributed.launch.
So please change that to dist.init_process_group(backend=backend, init_method="env://")
Also, you should not set WORLD_SIZE, RANK env variables in your code either since they will be set by launch utility. |
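A sketch of what the corrected setup might look like when using torch.distributed.launch (the --local_rank handling is the usual pattern; adapt the names to your script):
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)   # set by launch
args = parser.parse_args()

# rank and world_size come from the env variables set by the launch utility
dist.init_process_group(backend='nccl', init_method='env://')
torch.cuda.set_device(args.local_rank)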
st175894 | You are right, it works after I deleted the rank and world_size parameters in torch.distributed.init_process_group, thanks a lot |
st175895 | Hi, I have met a situation which was a little different from @leo-mao's. I want to train my model using one machine (node) which has multiple GPUs with torch.distributed. Since each time I just use 2 GPUs, I want to run several models at the same time. The problem is that when I have started one model running with torch.distributed, the others will get the error "RuntimeError: Address already in use". I set up initialization the way @teng-li says. Maybe I ignored something like the port setting? I am confused about it. I would be very appreciative if someone could give me some suggestions. |
st175896 | @memray Hi, I have solved my question. The reason why this bug happened is that two programs used the same port. So my solution is using a random port in your command line.
For example, you can write your sh command as "python -m torch.distributed.launch --nproc_per_node=$NGPUS --master_port=$RANDOM train.py". Just use a random number to occupy the port. Hope my finding can solve your problem. |
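Another option (not from this thread, just a common pattern) is to let the OS pick a free port and pass that to the launcher:
import socket

def find_free_port():
    # Bind to port 0 so the OS assigns an unused port, then hand that
    # number to --master_port for this particular job.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(('', 0))
        return s.getsockname()[1]

print(find_free_port())   # e.g. use the printed value as --master_port=<port>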
st175897 | Hi, I have been working with the distributed.launch module recently, and I have some questions.
1. I think with launch and DistributedDataParallel(model), you don't need to average grads manually.
2. During your training, does your gpu0 have more memory usage than the other GPUs? I found that the other GPUs' processes have extra memory usage on gpu0; it's annoying. |
st175898 | @zeal Regarding 1, yes you don’t need to manually average gradients. Regarding 2, this is possible if you have some code that somehow uses GPU 0 at some time during execution of your program. This is not an invariant of using the distributed data parallel module. |
st175899 | It's weird. I found out it seems like a GPU cache release problem in PyTorch. I added torch.cuda.empty_cache() somewhere in my code and every GPU had the same memory usage. But the program runs rather slower since empty_cache() was added inside a for loop.
I still cannot find out what's wrong. In theory, if you use distributed training, should every GPU have the same memory usage? I know that if you use the DataParallel module, gpu0 will have more memory consumption. |
st175900 | I have tried not setting the rank and world_size, but it shows "ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable RANK expected, but not set"
os.environ['MASTER_ADDR'] = '171.65.34.137'
os.environ['MASTER_PORT'] = '2000901'
#dist.init_process_group(backend, rank=rank, world_size=size)
dist.init_process_group(backend)
Do you have any idea where this comes from? |
st175901 | Could you please provide an example script to reproduce this, and the arguments that you’re passing in to DDP? thanks! |
st175902 | I’m interested in parallel training of multiple instances of a neural network model, on a single GPU. Of course this is only relevant for small models which on their own, don’t utilize the GPU well enough.
According to this, PyTorch's multiprocessing package allows parallelizing CUDA code.
I think that with a slight modification of this example code, I managed to do what I wanted (train several models) instead of training a single model using the Hogwild algorithm, and it worked pretty well. I managed to train 10 models in less than 150% of the time it took to train a single one.
Is this functionality related in any way to Nvidia's MPS capability? I suspect not, because it doesn't seem to involve anything related to starting an MPS server from the system, setting the GPU to work in EXCLUSIVE mode, and so on.
I understand that MPS is the way in which Nvidia supports CUDA multithreading/multiprocessing. Otherwise, it’s not really possible to tell the GPU to do things in parallel.
Is this true? If it is, then what exactly does multiprocessing do, and how does it work so well on the GPU without using MPS?
Can anyone point me to an example which demonstrates the use of MPS with PyTorch? I couldn't really find one. And strangely enough, the above 28-page Nvidia guide on MPS doesn't include any example in PyTorch or any other leading framework.
Thanks! |
st175903 | I understand that MPS is the way in which Nvidia supports CUDA multithreading/multiprocessing.
hmm we need to be more specific. Each process receives its own cuda context on each device used by the process. Per-device contexts are shared by all CPU threads within the process. Any CPU thread in the process may submit work to any cuda stream (the kernel launch and stream API are thread safe), and the work may run concurrently with work submitted from other CPU threads. And of course, each kernel may use thousands of GPU threads.
By default (without MPS) each device runs kernels from only one context (process) at a time. If several processes target the same device, their kernels can’t run concurrently and GPU context switches between processes will occur. MPS multiplexes kernels from different processes so kernels from any thread of any process targeting that device CAN run concurrently (not sure how MPS works at a low level, but it works).
MPS is application-agnostic. After starting the MPS daemon in your shell:
nvidia-cuda-mps-control -d
all processes (Python or otherwise) that use the device have their cuda calls multiplexed so they can run concurrently. You shouldn’t need to do anything pytorch-specific: start the MPS daemon in the background, then launch your pytorch processes targeting the same device.
One thing I don’t know is whether nccl allreduces in Pytorch can handle if data from all processes is actually on one GPU. I’ve never seen it tried. Sounds like your case doesn’t need inter-process nccl comms though.
MPS has been around for years, and works on any recent generation. It is NOT the same thing as “multi-instance GPU” or MIG, which is Ampere-specific. (I think MIG sandboxes client processes more aggressively than MPS, providing better per-process fault isolation among other things. MPS should be fine for your case.) |
st175904 | Thanks for your reply!
mcarilli:
Any CPU thread in the process may submit work to any cuda stream (the kernel launch and stream API are thread safe), and the work may run concurrently with work submitted from other CPU threads.
One thing I don't understand then, is how come in practice this example can parallelize training (on single GPU), without MPS, and yet it doesn't even use different cuda streams?
Does it mean it just happens on high level due to better utilization of “idle” times, where the GPU is not busy, and these times now can be filled by commands from other threads/processes? If that’s the case it’s surprising that the improvement is almost linear in the number of processes.
mcarilli:
By default (without MPS) each device runs kernels from only one context (process) at a time. If several processes target the same device, their kernels can’t run concurrently and GPU context switches between processes will occur.
So it means that without MPS, work submitted from different CPU processes cannot run concurrently on the same GPU, but work submitted from different threads can (via streams)?
If I understand correctly, the example code above uses processes? |
st175905 | alexgo:
One thing I don’t understand then, is how come in practice this example can parallelize training (on single GPU), without MPS, and yet it doesn’t even use different cuda streams?
Without MPS, I don't see how this example could run work from different processes concurrently. It's possible the model is so small, and each process's individual utilization is so low, that you observe a speedup with multiple processes even if they aren't running concurrently. Hard to tell without profiling.
work submitted from different CPU processes cannot run concurrently on the same GPU, but work submitted from different threads can (via streams)?
Streams are free-floating from threads. Any thread may submit work to any stream.
1. If all threads submit work to the same stream, those kernels will be serialized, even if the threads are in the same process. However, no context switches will be required.
2. If threads in the same process submit work to different streams, the kernels may run with true concurrency (overlapping).
3. If threads in different processes submit work to the same device, without MPS the kernels may not run concurrently regardless of what streams are used, because they are in separate contexts. MPS allows processes to submit work to the device without a context switch. I believe (but I'm not sure) kernels may run with true concurrency (i.e., overlapping) even if the processes each use their default stream. But even if the kernels from different processes can't truly overlap, MPS increasing the density of work submission by avoiding expensive context switches is beneficial.
PyTorch does not compile to use per-thread default streams. By default, all threads submit work to a shared default stream (called the "legacy" default stream in some CUDA docs). Therefore, unless you manually set stream contexts in each thread, case 1 applies. |
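A small sketch of case 2 above, with each thread submitting to its own non-default stream (models and batches are placeholders):
import threading
import torch

def worker(model, data):
    stream = torch.cuda.Stream()         # per-thread, non-default stream
    with torch.cuda.stream(stream):      # work below is enqueued on it
        out = model(data)
    stream.synchronize()                 # wait for this thread's kernels

threads = [threading.Thread(target=worker, args=(m, d))
           for m, d in zip(models, batches)]
for t in threads:
    t.start()
for t in threads:
    t.join()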
st175906 | mcarilli:
Without MPS, I don’t see how this example could run work from different processes concurrently
Not only that it works concurrently (in the sense that running the example with 10 processes vs. 1 results in much less than 10x slowdown), but in this case, MPS almost doesn’t help further.
I think I’ve convinced myself by now that I know how to start and stop an MPS server. I modified this code to use a simpler network (MLP) and not use a dataloader but instead just have all data in memory, and in this case MPS showed some improvement, but still far from linear. That’s how I knew that MPS works.
But okay, I guess this doesn’t explain much without profiling like you said. Maybe this example is too complex. Maybe there are idle GPU times implied in it which allow the speedup without MPS, and don’t allow MPS to help because the Cuda computations aren’t actually possible to parallelize due to low utilization.
That’s why it would be very helpful to see some very minimal Pytorch example that can demonstrate good MPS usage, i.e. if you set num_processes=1, then the run takes X seconds, and then you change it to num_processes=10, and the runtime stays about X seconds (plus a small overhead), but without MPS it would be 10X.
Maybe you could provide a clue on how to write such minimal example? i.e, should I use torch.multiprocessing as in the example above? or something else for dispatching processes to the GPU? |
st175907 | mcarilli:
If all threads submit work to the same stream, those kernels will be serialized, even if the threads are in the same process. However, no context switches will be required.
If threads in the same process submit work to different streams, the kernels may run with true concurrency (overlapping).
If threads in different processes submit work to the same device, without MPS the kernels may not run concurrently regardless what streams are used, because they are in separate contexts. MPS allows processes to submit work to the device without a context switch. I believe (but i’m not sure) kernels may run with true concurrrency (ie, overlapping) even if processes each use their default stream. But even if the kernels from different processes can’t truly overlap, MPS increasing the density of work submission by avoiding expensive context switches is beneficial.
That’s interesting. So according to this (case 3) I understand that this example may not be optimal for execution with MPS. It might be beneficial to try and further modify this example, and make sure that each process does it’s Cuda work on a different stream? because I don’t think this is the case now. I will try it out. Thanks! |
st175908 | I'm interested in the training of multiple neural network models on a single GPU as well. For example, running MobileNet and ResNet on a single GPU at the same time. Is there any example code showing how I could implement that in PyTorch, with MPS and without MPS (naive running)?
I'd appreciate any help. |
st175909 | I have some simple model code, transfer-learned from ResNet, and when I run it without distributed, everything works fine. However, when I try it in distributed mode, I get this weird error:
Error detected in CudnnBatchNormBackward.
and then:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2048]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
The model in question is a vanilla resnet, which is loaded like so:
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        backbone = models.resnet50(pretrained=True)
        self.backbone = torch.nn.SyncBatchNorm.convert_sync_batchnorm(backbone)

    def forward(self, x):
        y1 = self.backbone(x)
        y_n = F.normalize(y1, dim=-1)
        return y_n
and the training loop looks like so:
for batch_idx, ((img1, img2), _) in enumerate(train_loader):
    if args.gpu is not None:
        img1 = img1.cuda(args.gpu, non_blocking=True)
        img2 = img2.cuda(args.gpu, non_blocking=True)
    optimizer.zero_grad()
    out_1 = model(img1)
    out_2 = model(img2)
    loss = contrastive_loss_fn(out_1, out_2)
    loss.backward()
    optimizer.step()
and it errors out at loss.backward in essence. I suspect it is because I run the model twice, but I am unsure of this.
Any pointers would be great… Haven't been able to troubleshoot this for days now!
PS: I have tried cloning the outputs and using SyncedBatchnorm… neither seems to help! |
st175910 | Solved by pbelevich in post #2
I can suggest to set broadcast_buffers=False in the DDP module constructor as was mentioned here |
st175911 | I can suggest setting broadcast_buffers=False in the DDP module constructor, as was mentioned here. |
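In other words, something along these lines (the rest of the construction stays as in your code):
ddp_model = DDP(
    model,
    device_ids=[rank],
    broadcast_buffers=False,   # skip re-broadcasting buffers each forward
)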
st175912 | I spent about 6 days hunting for this solution
Thank you VERY much @pbelevich genius stuff!
I just have a follow-up question (it all works fine now with the buffer flag): what does this do to performance? Are there any gotchas I need to be aware of?
Thank you. |
st175913 | I have a problem running the spawn function from mp on Slurm on multiple GPUs.
Instructions To Reproduce the Issue:
Full runnable code:
import torch, os

def test_nccl_ops():
    num_gpu = 2
    print("NCCL init before spawn")
    import torch.multiprocessing as mp
    dist_url = "file:///tmp/nccl_tmp_file"
    mp.spawn(_test_nccl_worker, nprocs=num_gpu, args=(num_gpu, dist_url), daemon=False)
    print("NCCL init succeeded.")

def _test_nccl_worker(rank, num_gpu, dist_url):
    import torch.distributed as dist
    dist.init_process_group(backend="NCCL", init_method=dist_url, rank=rank, world_size=num_gpu)
    dist.barrier()
    print("Worker after barrier")

if __name__ == "__main__":
    test_nccl_ops()
On the other hand, we implemented this Slurm script to run an experiment on 2 GPUs:
#!/bin/bash -l
#SBATCH --account=Account
#SBATCH --partition=gpu # gpu partition
#SBATCH --nodes=1 # 1 node, 4 GPUs per node
#SBATCH --time=24:00:00
#SBATCH --job-name=detectron2_demo4 # job name
module load Python/3.9.5-GCCcore-10.3.0
module load CUDA/11.1.1-GCC-10.2.0
cd /experiment_path
export NCCL_DEBUG=INFO
srun python main.py --num-gpus 2
When I ran this script I faced an error (cat slurm-xxx.out), and no error file:
The following have been reloaded with a version change:
1) GCCcore/10.3.0 => GCCcore/10.2.0
2) binutils/2.36.1-GCCcore-10.3.0 => binutils/2.35-GCCcore-10.2.0
3) zlib/1.2.11-GCCcore-10.3.0 => zlib/1.2.11-GCCcore-10.2.0
NCCL init before spawn
[W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1569] Rank 1 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
gpu04:9770:9770 [0] NCCL INFO Bootstrap : Using [0]bond0:10.10.1.4<0>
gpu04:9770:9770 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
gpu04:9770:9770 [0] NCCL INFO NET/IB : No device found.
gpu04:9770:9770 [0] NCCL INFO NET/Socket : Using [0]bond0:10.10.1.4<0>
gpu04:9770:9770 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.2
gpu04:9771:9771 [1] NCCL INFO Bootstrap : Using [0]bond0:10.10.1.4<0>
gpu04:9771:9771 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
gpu04:9771:9771 [1] NCCL INFO NET/IB : No device found.
gpu04:9771:9771 [1] NCCL INFO NET/Socket : Using [0]bond0:10.10.1.4<0>
gpu04:9771:9771 [1] NCCL INFO Using network Socket
gpu04:9771:9862 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
gpu04:9771:9862 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
gpu04:9771:9862 [1] NCCL INFO Setting affinity for GPU 1 to 3fff
gpu04:9770:9861 [0] NCCL INFO Channel 00/02 : 0 1
gpu04:9770:9861 [0] NCCL INFO Channel 01/02 : 0 1
gpu04:9770:9861 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
gpu04:9770:9861 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
gpu04:9770:9861 [0] NCCL INFO Setting affinity for GPU 0 to 3fff
gpu04:9771:9862 [1] NCCL INFO Channel 00 : 1[6000] -> 0[5000] via P2P/IPC
gpu04:9770:9861 [0] NCCL INFO Channel 00 : 0[5000] -> 1[6000] via P2P/IPC
gpu04:9771:9862 [1] NCCL INFO Channel 01 : 1[6000] -> 0[5000] via P2P/IPC
gpu04:9770:9861 [0] NCCL INFO Channel 01 : 0[5000] -> 1[6000] via P2P/IPC
gpu04:9771:9862 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
gpu04:9771:9862 [1] NCCL INFO comm 0x7f057c000e00 rank 1 nranks 2 cudaDev 1 busId 6000 - Init COMPLETE
gpu04:9770:9861 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
gpu04:9770:9861 [0] NCCL INFO comm 0x7f5210000e00 rank 0 nranks 2 cudaDev 0 busId 5000 - Init COMPLETE
gpu04:9770:9770 [0] NCCL INFO Launch mode Parallel
Expected behavior:
To run training on 2 GPUs and print more output than just "NCCL init before spawn" and the NCCL debug info.
Environment:
Paste the output of the following command:
No CUDA runtime is found, using CUDA_HOME='/usr/local/software/CUDAcore/11.1.1'
--------------------- --------------------------------------------------------------------------------
sys.platform linux
Python 3.9.5 (default, Jul 9 2021, 09:35:24) [GCC 10.3.0]
numpy 1.21.1
detectron2 0.5 @/home/users/aimhigh/detectron2/detectron2
Compiler GCC 10.2
CUDA compiler CUDA 11.1
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.9.0+cu102 @/home/users/aimhigh/.local/lib/python3.9/site-packages/torch
PyTorch debug build False
GPU available No: torch.cuda.is_available() == False
Pillow 8.3.1
torchvision 0.10.0+cu102 @/home/users/aimhigh/.local/lib/python3.9/site-packages/torchvision
fvcore 0.1.5.post20210727
iopath 0.1.9
cv2 4.5.3
--------------------- --------------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
Additional note: the first time I assumed it was a detectron2 problem, but it's not. You can find my previous discussion with the detectron2 developers: link. |
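As an aside, the warning in the log above ("using best-guess GPU … Specify device_ids in barrier()") hints at pinning each worker to its own GPU before the barrier; a tentative sketch of that change to the worker would be:
def _test_nccl_worker(rank, num_gpu, dist_url):
    import torch
    import torch.distributed as dist
    torch.cuda.set_device(rank)          # bind this process to its own GPU
    dist.init_process_group(backend="nccl", init_method=dist_url,
                            rank=rank, world_size=num_gpu)
    dist.barrier(device_ids=[rank])      # as suggested by the warning message
    print("Worker after barrier")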
st175914 | Hello,
In some weird cases (with scaling up and down), I get the following error:
{"name": "torchelastic.worker.status.FAILED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "elastic-imagenet-wxmlb", "global_rank": null, "group_rank": null, "worker_id": null, "role": "default", "hostname": "elastic-imagenet-wxmlb-worker-7", "state": "FAILED", "total_run_time": 0, "rdzv_backend": "etcd", "raw_error": "Traceback (most recent call last):\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 238, in launch_agent\n result = agent.run()\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py\", line 700, in run\n result = self._invoke_run(role)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py\", line 822, in _invoke_run\n self._initialize_workers(self._worker_group)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py\", line 670, in _initialize_workers\n self._rendezvous(worker_group)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py\", line 530, in _rendezvous\n store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 152, in next_rendezvous\n rdzv_version, rank, world_size = self._rdzv_impl.rendezvous_barrier()\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 284, in rendezvous_barrier\n return self.init_phase()\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 346, in init_phase\n raise RendezvousClosedError()\ntorch.distributed.elastic.rendezvous.api.RendezvousClosedError\n", "metadata": "{\"group_world_size\": null, \"entry_point\": \"python\"}", "agent_restarts": 0}}
It is not clear from the logs nor the error message what closed the Rendezvous backend. This error forces the whole task to fail and without enough understanding to what’s going on I cannot fix this issue.
Any help?
Thank you very much |
st175915 | How are you scaling up and scaling down? The RendezvousClosedError is raised when the whole gang is no longer accepting rendezvous (for example when a job is finished). Perhaps one of your nodes left the group and is now trying to rejoin after the job finished? It would be useful to have an example to reproduce what you are seeing.
cc @cbalioglu |
st175916 | Actually I realized now that this happens even without scaling. The output of the other node is:
INFO 2021-08-03 11:20:51,599 Rendezvous timeout occured in EtcdRendezvousHandler {"name": "torchelastic.worker.status.FAILED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "elastic-imagenet-dzq2v", "global_rank": null, "group_rank": null, "worker_id": null, "role": "default", "hostname": "elastic-imagenet-dzq2v-worker-0", "state": "FAILED", "total_run_time": 901, "rdzv_backend": "etcd", "raw_error": "Traceback (most recent call last):\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 238, in launch_agent\n result = agent.run()\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py\", line 700, in run\n result = self._invoke_run(role)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py\", line 822, in _invoke_run\n self._initialize_workers(self._worker_group)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py\", line 670, in _initialize_workers\n self._rendezvous(worker_group)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py\", line 530, in _rendezvous\n store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 152, in next_rendezvous\n rdzv_version, rank, world_size = self._rdzv_impl.rendezvous_barrier()\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 284, in rendezvous_barrier\n return self.init_phase()\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 349, in init_phase\n return self.join_phase(state[\"version\"])\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 390, in join_phase\n active_version = self.wait_for_peers(expected_version)\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 563, in wait_for_peers\n active_version, state = self.try_wait_for_state_change(\n File \"/job/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/etcd_rendezvous.py\", line 859, in try_wait_for_state_change\n raise RendezvousTimeoutError()\ntorch.distributed.elastic.rendezvous.api.RendezvousTimeoutError\n", "metadata": "{\"group_world_size\": null, \"entry_point\": \"python\"}", "agent_restarts": 0}}
Probably this node closed the Rendezvous? |
st175917 | Can you try using the new Rendezvous — PyTorch master documentation over the etcd rendezvous and see if you still run into the same error? |
st175918 | Would you please explain how I can do this?
It seems I need to replace --rdzv_backend=etcd in the python -m torchelastic.distributed.launch command with something, right? What should this thing be? |
st175919 | Correct, you replace --rdzv_backend=etcd with --rdzv_backend=c10d. The command will look something like:
python -m torch.distributed.run \
    --nnodes=1:4 \
    --nproc_per_node=$NUM_TRAINERS \
    --rdzv_id=$JOB_ID \
    --rdzv_backend=c10d \
    --rdzv_endpoint=$HOST_NODE_ADDR \
    YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)
docs: torch.distributed.run (Elastic Launch) — PyTorch master documentation |
st175920 | Thanks for the answer.
What should I run on $HOST_NODE_ADDR (as replacement of etcd)? |
st175921 | If you are doing single node training you can use localhost; otherwise you can select one of your machines, find its hostname, and use that. |
st175922 | I am using the Kubernetes controller for my experiments. Is this new backend also supported by this controller? If yes, is there any tutorial I could follow regarding this setup? |
st175923 | Hi everyone! I was using DDP to train DCGAN, which worked fine (with the same network as DCGAN Tutorial — PyTorch Tutorials 1.9.0+cu102 documentation).
However, when I removed part of the code (the update step for the generator), I got this error: Buckets with more than one variable cannot include variables that expect a sparse gradient. To find out the reason, I commented out the code inside with torch.no_grad() and the error disappeared, which confuses me. Maybe the possible cause is my use of torch.no_grad()? But how did it even work before I removed the generator update step?
Anyone can help me understand what is going on here? How could simply removing one update step possibly lead to an error?
The error I got
**** line 63, in subprocess_fn
real_pred = discriminator(real_data)
File "/home/azav/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/azav/.local/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 692, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Buckets with more than one variable cannot include variables that expect a sparse gradient.
I compared the code with error and code without error in the following two blocks.
Code with error:
import os
import torch
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import Adam
import torchvision.utils as vutil

from models.dcgan import Generator, Discriminator
from utils.parser import train_base

def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '7777'
    # initialize the process group
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

def subprocess_fn(rank, args):
    setup(rank, args.num_gpus)
    print(f'running on rank {rank}')

    generator = Generator().to(rank)
    discriminator = Discriminator().to(rank)
    if args.distributed:
        generator = DDP(generator, device_ids=[rank], broadcast_buffers=False)
        discriminator = DDP(discriminator, device_ids=[rank], broadcast_buffers=False)

    d_optim = Adam(discriminator.parameters(), lr=2e-4)
    g_optim = Adam(generator.parameters(), lr=2e-4)

    discriminator.train()
    generator.train()

    if rank == 0:
        fixed_z = torch.randn(64, 100, 1, 1).to(rank)

    pbar = range(args.iter)
    for e in pbar:
        real_data = torch.randn((args.batchsize, 3, 64, 64)).to(rank)
        real_pred = discriminator(real_data)

        latent = torch.randn((args.batchsize, 100, 1, 1)).to(rank)
        fake_data = generator(latent)
        fake_pred = discriminator(fake_data)

        d_loss = d_logistic_loss(real_pred, fake_pred)

        d_optim.zero_grad()
        d_loss.backward()
        d_optim.step()

        if rank == 0 and e % 100 == 0:
            print(f'Epoch D loss:{d_loss.item()};')
            with torch.no_grad():
                imgs = generator(fixed_z)
                vutil.save_image(imgs, f'{str(e).zfill(5)}.png', normalize=True)

    cleanup()
    print(f'Process {rank} exits...')

if __name__ == '__main__':
    parser = train_base()
    args = parser.parse_args()

    args.distributed = args.num_gpus > 1

    if args.distributed:
        mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
    else:
        subprocess_fn(0, args)

    print('Done!')
Code without error:
import os  # needed for os.environ in setup()
import torch
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import Adam
from torch.utils.data import DataLoader
import torchvision.utils as vutil

from models.dcgan import Generator, Discriminator
from utils.parser import train_base

def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '7777'
    # initialize the process group
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

def subprocess_fn(rank, args):
    setup(rank, args.num_gpus)

    generator = Generator().to(rank)
    discriminator = Discriminator().to(rank)
    if args.distributed:
        generator = DDP(generator, device_ids=[rank], broadcast_buffers=False)
        discriminator = DDP(discriminator, device_ids=[rank], broadcast_buffers=False)

    d_optim = Adam(discriminator.parameters(), lr=2e-4)
    g_optim = Adam(generator.parameters(), lr=2e-4)

    discriminator.train()
    generator.train()

    if rank == 0:
        fixed_z = torch.randn(64, 100, 1, 1).to(rank)

    pbar = range(args.iter)
    for e in pbar:
        real_data = torch.randn((args.batchsize, 3, 64, 64)).to(rank)
        real_pred = discriminator(real_data)

        latent = torch.randn((args.batchsize, 100, 1, 1)).to(rank)
        fake_data = generator(latent)
        fake_pred = discriminator(fake_data)

        d_loss = d_logistic_loss(real_pred, fake_pred)

        d_optim.zero_grad()
        d_loss.backward()
        d_optim.step()

        latent = torch.randn((args.batchsize, 100, 1, 1)).to(rank)
        fake_data = generator(latent)
        fake_pred = discriminator(fake_data)

        g_loss = g_nonsaturating_loss(fake_pred)

        g_optim.zero_grad()
        g_loss.backward()
        g_optim.step()

        if rank == 0 and e % 100 == 0:
            print(f'Epoch D loss:{d_loss.item()};')
            with torch.no_grad():
                imgs = generator(fixed_z)
                vutil.save_image(imgs, f'{str(e).zfill(5)}.png', normalize=True)

    cleanup()
    print(f'Process {rank} exits...')

if __name__ == '__main__':
    parser = train_base()
    args = parser.parse_args()

    args.distributed = args.num_gpus > 1

    if args.distributed:
        mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
    else:
        subprocess_fn(0, args)

    print('Done!') |
st175924 | Are you using any sparse tensors in your dataset? According to your stack trace the error is getting hit in the forward pass not the code under torch.no_grad(). I tried running your example code that does not work (with slight modifications to argparse and loss fn) but it is working for me. Do you have another smaller example to demonstrate?
import os
import torch
import torch.nn as nn
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import Adam
import torchvision.utils as vutil
import argparse
# ====== https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html ======
# Root directory for dataset
dataroot = "data/celeba"
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
# =======================================
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '7777'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
def subprocess_fn(rank, args):
setup(rank, args.num_gpus)
print(f'running on rank {rank}')
generator = Generator(args.num_gpus).to(rank)
discriminator = Discriminator(args.num_gpus).to(rank)
if args.distributed:
generator = DDP(generator, device_ids=[rank], broadcast_buffers=False)
discriminator = DDP(discriminator, device_ids=[rank], broadcast_buffers=False)
d_optim = Adam(discriminator.parameters(), lr=2e-4)
g_optim = Adam(generator.parameters(), lr=2e-4)
discriminator.train()
generator.train()
if rank == 0:
fixed_z = torch.randn(64, 100, 1, 1).to(rank)
pbar = range(args.iter)
for e in pbar:
real_data = torch.randn((args.batchsize, 3, 64, 64)).to(rank)
real_pred = discriminator(real_data)
latent = torch.randn((args.batchsize, 100, 1, 1)).to(rank)
fake_data = generator(latent)
fake_pred = discriminator(fake_data)
d_loss = real_pred - fake_pred
d_optim.zero_grad()
d_loss.backward()
d_optim.step()
if rank == 0 and e % 100 == 0:
print(f'Epoch D loss:{d_loss.item()};')
with torch.no_grad():
imgs = generator(fixed_z)
vutil.save_image(imgs, f'{str(e).zfill(5)}.png', normalize=True)
cleanup()
print(f'Process {rank} exits...')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
args = parser.parse_args()
args.num_gpus = 2
args.distributed = args.num_gpus > 1
if args.distributed:
mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
else:
subprocess_fn(0, args)
print('Done!') |
st175925 | Hi Howard, thank you for replying. I ran your code and got the same error (I moved save_image line and modified your loss to be real_pred.mean()- fake_pred.mean() to make the loss scalar). Here is the error I got. BTW, I don’t think this issue is related to my dataset cause I don’t use any dataset here. The ‘data’ is just random number generated by torch.randn().
running on rank 1
running on rank 0
Epoch D loss:-0.025493651628494263;
Traceback (most recent call last):
File "debug2.py", line 178, in <module>
mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
File "/home/azav/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/azav/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/azav/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/azav/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/azav/Documents/LargeGANs/debug2.py", line 147, in subprocess_fn
real_pred = discriminator(real_data)
File "/home/azav/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/azav/.local/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 692, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Buckets with more than one variable cannot include variables that expect a sparse gradient.
The code I ran is:
import os
import torch
import torch.nn as nn
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import Adam
import torchvision.utils as vutil
import argparse
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
# =======================================
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '7777'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
def subprocess_fn(rank, args):
setup(rank, args.num_gpus)
print(f'running on rank {rank}')
generator = Generator(args.num_gpus).to(rank)
discriminator = Discriminator(args.num_gpus).to(rank)
if args.distributed:
generator = DDP(generator, device_ids=[rank], broadcast_buffers=False)
discriminator = DDP(discriminator, device_ids=[rank], broadcast_buffers=False)
d_optim = Adam(discriminator.parameters(), lr=2e-4)
g_optim = Adam(generator.parameters(), lr=2e-4)
discriminator.train()
generator.train()
if rank == 0:
fixed_z = torch.randn(64, 100, 1, 1).to(rank)
pbar = range(args.iter)
for e in pbar:
real_data = torch.randn((args.batchsize, 3, 64, 64)).to(rank)
real_pred = discriminator(real_data)
latent = torch.randn((args.batchsize, 100, 1, 1)).to(rank)
fake_data = generator(latent)
fake_pred = discriminator(fake_data)
d_loss = real_pred.mean() - fake_pred.mean()
d_optim.zero_grad()
d_loss.backward()
d_optim.step()
if rank == 0 and e % 2 == 0:
print(f'Epoch D loss:{d_loss.item()};')
with torch.no_grad():
imgs = generator(fixed_z)
cleanup()
print(f'Process {rank} exits...')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--num_gpus', type=int, help='Number of GPUs', default=2)
parser.add_argument('--batchsize', type=int, help='Batch size for each GPU', default=2)
parser.add_argument('--iter', type=int, help='Total training iterations', default=10)
args = parser.parse_args()
args.distributed = args.num_gpus > 1
if args.distributed:
mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
else:
subprocess_fn(0, args)
print('Done!') |
st175926 | I think it’s related to code block inside torch.no_grad() even though stack trace said error is hit in the forward pass. The environment I’m using is ‘1.8.1+cu101’.
Following-up information on previous code by Howard:
If I change the save_img line to be vutil.save_image(imgs, '%e.png' % e, normalize=True), the error changes to
running on rank 0
running on rank 1
Epoch D loss:0.07447701692581177;
Traceback (most recent call last):
File "debug2.py", line 178, in <module>
mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
File "/home/azav/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/azav/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/azav/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/home/azav/.local/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/azav/Documents/LargeGANs/debug2.py", line 147, in subprocess_fn
real_pred = discriminator(real_data)
File "/home/azav/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/azav/.local/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 692, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Out of range variable index specified. |
st175927 | Hey @Hongkai_Zheng
I tried the code below, but couldn’t reproduce the error either. Which version of PyTorch are you using? I am on 1.10.0a0+git9730d91.
BTW, I also checked that there are no sparse gradients after the backward pass.
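(A quick way to run that check yourself, sketched below; it assumes it is placed right after d_loss.backward() inside subprocess_fn.)
for name, p in discriminator.named_parameters():
    # p.grad.is_sparse is True only if some layer produced a sparse gradient (e.g. nn.Embedding with sparse=True)
    if p.grad is not None and p.grad.is_sparse:
        print('sparse grad found for:', name)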
import os
import torch
import torch.nn as nn
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import Adam
import torchvision.utils as vutil
import argparse
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
# =======================================
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '7777'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
def subprocess_fn(rank, args):
setup(rank, args.num_gpus)
print(f'running on rank {rank}')
generator = Generator(args.num_gpus).to(rank)
discriminator = Discriminator(args.num_gpus).to(rank)
if args.distributed:
generator = DDP(generator, device_ids=[rank], broadcast_buffers=False)
discriminator = DDP(discriminator, device_ids=[rank], broadcast_buffers=False)
d_optim = Adam(discriminator.parameters(), lr=2e-4)
g_optim = Adam(generator.parameters(), lr=2e-4)
discriminator.train()
generator.train()
if rank == 0:
fixed_z = torch.randn(64, 100, 1, 1).to(rank)
pbar = range(args.iter)
for e in pbar:
real_data = torch.randn((args.batchsize, 3, 64, 64)).to(rank)
real_pred = discriminator(real_data)
latent = torch.randn((args.batchsize, 100, 1, 1)).to(rank)
fake_data = generator(latent)
fake_pred = discriminator(fake_data)
d_loss = real_pred.mean() - fake_pred.mean()
d_optim.zero_grad()
d_loss.backward()
d_optim.step()
if rank == 0 and e % 2 == 0:
print(f'Epoch D loss:{d_loss.item()};')
with torch.no_grad():
imgs = generator(fixed_z)
cleanup()
print(f'Process {rank} exits...')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--num_gpus', type=int, help='Number of GPUs', default=2)
parser.add_argument('--batchsize', type=int, help='Batch size for each GPU', default=2)
parser.add_argument('--iter', type=int, help='Total training iterations', default=10)
args = parser.parse_args()
args.distributed = args.num_gpus > 1
if args.distributed:
mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
else:
subprocess_fn(0, args)
print('Done!') |
st175928 | Hi @mrshenli , thank you for replying. I’m using Pytorch 1.8.1 cuda 101. Can you also try the following code in your environment (for which I got RuntimeError: Buckets with more than one variable cannot include variables that expect a sparse gradient. )? Is this an issue related to pytorch or cuda version?
import os
import torch
import torch.nn as nn
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import Adam
import torchvision.utils as vutil
import argparse
# ====== https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html ======
# Root directory for dataset
dataroot = "data/celeba"
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
class Generator(nn.Module):
def __init__(self, ngpu):
super(Generator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
nn.Tanh()
# state size. (nc) x 64 x 64
)
def forward(self, input):
return self.main(input)
class Discriminator(nn.Module):
def __init__(self, ngpu):
super(Discriminator, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
nn.Sigmoid()
)
def forward(self, input):
return self.main(input)
# =======================================
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '7777'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
def subprocess_fn(rank, args):
setup(rank, args.num_gpus)
print(f'running on rank {rank}')
generator = Generator(args.num_gpus).to(rank)
discriminator = Discriminator(args.num_gpus).to(rank)
if args.distributed:
generator = DDP(generator, device_ids=[rank], broadcast_buffers=False)
discriminator = DDP(discriminator, device_ids=[rank], broadcast_buffers=False)
d_optim = Adam(discriminator.parameters(), lr=2e-4)
g_optim = Adam(generator.parameters(), lr=2e-4)
discriminator.train()
generator.train()
if rank == 0:
fixed_z = torch.randn(64, 100, 1, 1).to(rank)
pbar = range(args.iter)
for e in pbar:
real_data = torch.randn((args.batchsize, 3, 64, 64)).to(rank)
real_pred = discriminator(real_data)
latent = torch.randn((args.batchsize, 100, 1, 1)).to(rank)
fake_data = generator(latent)
fake_pred = discriminator(fake_data)
d_loss = real_pred.mean() - fake_pred.mean()
d_optim.zero_grad()
d_loss.backward()
d_optim.step()
if rank == 0 and e % 2 == 0:
print(f'Epoch D loss:{d_loss.item()};')
with torch.no_grad():
imgs = generator(fixed_z)
vutil.save_image(imgs, f'{e}.png', normalize=True)
cleanup()
print(f'Process {rank} exits...')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--num_gpus', type=int, help='Number of GPUs', default=2)
parser.add_argument('--batchsize', type=int, help='Batch size for each GPU', default=2)
parser.add_argument('--iter', type=int, help='Total training iterations', default=10)
args = parser.parse_args()
args.distributed = args.num_gpus > 1
if args.distributed:
mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
else:
subprocess_fn(0, args)
print('Done!') |
st175929 | Hi everyone! I have a problem understanding what happens to state_dicts, when using DataParallel. Once I apply DataParallel to a model with requires_grad = True, then the state_dict().keys() are empty.
My code has these two classes:
import torch
import torch.nn as nn
import torchvision
from torchvision import models
class LTE(torch.nn.Module):
def __init__(self, requires_grad=True):
super(LTE, self).__init__()
vgg_pretrained_features = models.vgg19(pretrained=True).features
self.slice1 = torch.nn.Sequential()
for x in range(2):
self.slice1.add_module(str(x), vgg_pretrained_features[x])
if not requires_grad:
for param in self.slice1.parameters():
param.requires_grad = requires_grad
class TTSR(nn.Module):
def __init__(self):
super(TTSR, self).__init__()
self.LTE = LTE(requires_grad=True)
self.LTE_copy = LTE(requires_grad=False)
def forward(self, sr=None):
print("LTE "+str(torch.cuda.current_device())+" "+str(self.LTE.state_dict().keys()))
print("LTE_copy "+str(torch.cuda.current_device())+" "+str(self.LTE_copy.state_dict().keys()))
If I now initialize a TTSR model and run the forward function:
device = torch.device('cuda')
_model = TTSR().to(device)
a = _model(sr=1)
Then, this is my output:
LTE 0 odict_keys(['slice1.0.weight', 'slice1.0.bias'])
LTE_copy 0 odict_keys(['slice1.0.weight', 'slice1.0.bias'])
This is fine so far! But if I use DataParallel now (with two GPUs),
model = nn.DataParallel(_model, list(range(2)))
a = model(sr=1)
my output is as follows:
LTE 0 odict_keys([ ])
LTE_copy 0 odict_keys(['slice1.0.weight', 'slice1.0.bias'])
LTE 1 odict_keys([ ])
LTE_copy 1 odict_keys([ ])
Why does requires_grad = True lead to the keys [‘slice1.0.weight’, ‘slice1.0.bias’] missing from the state_dict? This only happens once I send that model to DataParallel.
Did I understand correctly that DataParallel always has one main GPU where the model is stored (DataParallel imbalanced memory usage), and that this is why the state_dicts of LTE 1 & LTE_copy 1 are empty? |
st175930 | I suspect that you don’t see the keys because of this line 2 which is called from DataParallel.forward. Basically you should not try to access replicas on different GPUs directly, because they are handled in different way than regular modules. Also, we recommend to use DistributedDataParallel, instead of this class, to do multi-GPU training, even if there is only a single node. See: Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel and Distributed Data Parallel. |
st175931 | On a single GPU, I’m able to train a multi-task model like this:
model = MultiTaskBERT()
model.to(device)
data_task_A = iter(get_data_loader())
data_task_B = iter(get_data_loader())
max_steps = max(len(data_task_A), len(data_task_B))
task_probs = [0.5, 0.5]
for epoch in range(num_epochs):
for step in range(max_steps):
#Choose one of the tasks with a probability
data_current = random.choice([data_task_A, data_task_B], p=task_probs)
if step < len(data_current):
batch = next(data_current)
train(model, current_batch)
The block above converts DataLoaders to iterators so that I can alternate between batches of the different tasks.
I tried to modify the script so that I could use DDP (single node, multi-GPU)
from transformers.data.data_collator import default_data_collator
model = MultiTaskBERT()
model.to(device)
model = DDP(model, device_ids=[gpu_idx], find_unused_parameters=True)
ds_A = get_dataset()
ds_B = get_dataset()
sampler_A = DistributedSampler(ds_A, num_replicas=world_size, rank=rank)
sampler_B = DistributedSampler(ds_B, num_replicas=world_size, rank=rank)
data_task_A = iter(DataLoader(
ds_A,
batch_size=batch_size,
collate_fn=default_data_collator,
sampler=sampler_A
))
data_task_B = iter(DataLoader(
ds_B,
batch_size=batch_size,
collate_fn=default_data_collator,
sampler=sampler_B
))
max_steps = max(len(data_task_A), len(data_task_B))
task_probs = [0.5, 0.5]
for epoch in range(num_epochs):
for step in range(max_steps):
#Choose one of the tasks with a probability
data_current = random.choice([data_task_A, data_task_B], p=task_probs)
if step < len(data_current):
batch = next(data_current)
train(model, current_batch)
I was able to train error free, but noticed that my accuracy scores were much worse than single GPU. I began debugging by trying to train one task at a time, and comparing to single GPU. On a single GPU I am able to use the code from the first block to train a single task by simply setting the probabilities for all tasks to 0, except for the one I want to train (which I set to 1).
The code below more-or-less matches my single task, single GPU results. The problem disappears when I enumerate my DataLoader, instead of converting it to an iterator.
from transformers.data.data_collator import default_data_collator
model = MultiTaskBERT()
model.to(device)
model = DDP(model, device_ids=[gpu_idx], find_unused_parameters=True)
ds_A = get_dataset()
sampler_A = DistributedSampler(ds_A, num_replicas=world_size, rank=rank)
data_task_A = DataLoader(
ds_A,
batch_size=batch_size,
collate_fn=default_data_collator,
sampler=sampler_A
)
for epoch in range(num_epochs):
for i, current_batch in enumerate(data_task_A):
train(model, current_batch)
My question is: How can I alternate between batches from the different tasks, in a DDP setting? It appears that converting my DataLoader to an iterator messes with the distributed sampling…should that be happening?
Thanks! |
st175932 | If the dataset that you are using is an IterableDataset then I don’t believe that converting the DataLoader for that dataset should be messing up distributed sampling. Since it looks like it is working when using one dataloader for you, isn’t a possible workaround to try combining the two different tasks into one custom data loader and implement your own __next__() with the task probabilities. |
st175933 | I used an IterableDataset and got
ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.distributed.DistributedSampler object at 0x7fab6f18e898>
That workaround idea sounds good, I will try that and post back. |
st175934 | Hi everyone, what is the best practice to share a massive CPU tensor over multiple processes (read-only + single machine + DDP)?
I think torch.Storage — PyTorch 1.9.0 documentation 7 with share=True suited my needs, but I can’t find a way to save storage and read it as a tensor. (same open issue 2 on Oct 29, 2019 )
I also tried to copy training data to /dev/shm (reference) and run DDP with 8 GPUs, but nothing is different. The memory usage when running with 8 GPUs is the same as before, but I tested with a single process, loading the dataset may occupy about 1 GB of memory. Am I missing something here? |
st175935 | Solved by siahuat0727 in post #5
Finally, I found a way to utilize torch.Storage.from_file.
For detail, see here. |
st175936 | Thanks for posting the question @siahuat0727, did you try torch.multiprocessing.Queue to pass the tensor objects between the processes? you can take a look at the `torch.multiprocessing doc 8 and see if this works for you. |
st175937 | you can also use tensor.shared_memory() for sharing a big tensor across processes. |
st175938 | Hi @wanchaol, thank you very much for your reply!
I think tensor.share_memory_() should work well in a pure multiprocessing program.
I will look at how to pass the reference in pytorch-lightning DDP mode. |
st175939 | Finally, I found a way to utilize torch.Storage.from_file.
For detail, see here 10. |
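For readers landing here, a minimal sketch of what that can look like, assuming a float32 tensor of known size and a hypothetical file path under /dev/shm (with shared=True the file is created if needed, and every process mapping it shares the same memory):
import torch

nelem = 1000 * 1000  # number of float32 elements, known in advance

# writer (run once): back a tensor by a shared file mapping and fill it
storage = torch.FloatStorage.from_file('/dev/shm/features.bin', shared=True, size=nelem)
t = torch.FloatTensor(storage).view(1000, 1000)
t.copy_(torch.randn(1000, 1000))

# readers (each DDP process): map the same file; no per-process copy of the data
storage = torch.FloatStorage.from_file('/dev/shm/features.bin', shared=True, size=nelem)
features = torch.FloatTensor(storage).view(1000, 1000)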
st175940 | Hi All,
I am trying to run DINO 1 on multiple nodes with facebookincubator/submitit repo. We have a slurm server and I am able to train DINO on the slurm server using a single node (8gpus) [WITHOUT USING submitit] but when I try to run with multiple nodes, I am getting the below error:
submitit ERROR (2021-07-30 01:10:30,581) - Submitted job triggered an exception
Traceback (most recent call last):
File “/home/user/skanaconda3/envs/url/lib/python3.8/runpy.py”, line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File “/home/user/skanaconda3/envs/url/lib/python3.8/runpy.py”, line 87, in _run_code
exec(code, run_globals)
File “/home/user/skanaconda3/envs/url/lib/python3.8/site-packages/submitit/core/_submit.py”, line 11, in
submitit_main()
File “/home/user/skanaconda3/envs/url/lib/python3.8/site-packages/submitit/core/submission.py”, line 71, in submitit_main
process_job(args.folder)
File “/home/user/skanaconda3/envs/url/lib/python3.8/site-packages/submitit/core/submission.py”, line 64, in process_job
raise error
File “/home/user/skanaconda3/envs/url/lib/python3.8/site-packages/submitit/core/submission.py”, line 53, in process_job
result = delayed.result()
File “/home/user/skanaconda3/envs/url/lib/python3.8/site-packages/submitit/core/utils.py”, line 128, in result
self._result = self.function(*self.args, **self.kwargs)
File “run_with_submitit.py”, line 67, in call
main_dino_initialize_all.train_dino(self.args)
File “/home/user/code/dino/main_dino_initialize_all.py”, line 143, in train_dino
utils.init_distributed_mode(args)
File “/home/user/code/dino/utils.py”, line 468, in init_distributed_mode
dist.init_process_group(
File “/home/user/skanaconda3/envs/url/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 439, in init_process_group
_default_pg = _new_process_group_helper(
File “/home/user/skanaconda3/envs/url/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 528, in _new_process_group_helper
pg = ProcessGroupNCCL(
RuntimeError: ProcessGroupNCCL is only supported with GPUs, no GPUs found!
From logs, I see that the job initially gets assigned to two nodes [with 8 gpus in each node] and then stops with the above error. I think the code crashes at this line 4 . Why does at::cuda::getNumGPUs() returns 0 when there are gpus available?
Thanks in advance! |
st175941 | I am not familiar with submitit so I am unsure of how to validate the number of GPUs that is using. Before init_process_group can you also try printing the value of torch.cuda.device_count()? (torch.cuda.device_count — PyTorch master documentation 3). This may help to narrow down why there are any GPUs detected and whether this is an issue in the distributed package. |
st175942 | Hi
I init 2 processes on 2 GPU with the followin code
command is:
CUDA_VISIBLE_DEVICES=4,5 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 44744 train.py
after executing the above command, 2 processes will not exit.
if I comment out the for loop to enumerate loader_eval in the following code, 2 processes could successfully exit.
if I un-comment out the for loop to enumerate loader_eval in the following code, 2 processes could not exit.
is there any way to check where the process is pending? or how to check where the process is pending?
is there anything wrong with my code?
how to make those processes exit after the code finalized?
if args.distributed:
torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(backend='nccl', init_method='env://')
args.world_size = torch.distributed.get_world_size()
args.rank = torch.distributed.get_rank()
logger_trainer_pim.info('----------> world_size == {}, rank=={}, local_rank == {}, gpu index =={}'.format(args.world_size, args.rank, args.local_rank, args.device.index))
logger_trainer_pim.info('Training in distributed mode with multiple processes, 1 GPU per process. Process %d, total %d.' % (args.rank, args.world_size))
else:
logger_trainer_pim.info('Training with a single process on 1 GPU.')
assert args.rank >= 0
dataset_eval = create_dataset(
args.dataset, root=args.data_validate_dir, split=args.val_split, is_training=False, batch_size=args.batch_size)
loader_eval = create_loader(
dataset_eval,
input_size=data_config['input_size'],
batch_size=args.validation_batch_size_multiplier * args.batch_size,
is_training=False,
use_prefetcher=args.prefetcher,
interpolation=data_config['interpolation'],
mean=data_config['mean'],
std=data_config['std'],
num_workers=args.num_workers,
distributed=args.distributed,
crop_pct=data_config['crop_pct'],
pin_memory=args.pin_mem,
)
print('------------------------------------------>')
print(args)
for batch_idx, (input, target) in enumerate(loader_eval):
if batch_idx>=1: break
print('-------------------------> enumerate loader_eval done.')
return {"init": 0.888, "final": 1} |
st175943 | You are setting world_size=2 and have nproc_per_node=2 so you actually have 4 processes total. For each of the nodes, it looks like you are setting the device to local_rank then spawning 2 subprocesses to load the data. I don’t see anything off about what you are doing but just wanted to make sure that was what was intended.
I am not able to run the example you provided due to missing arg parsing, imports, etc. Do you have a sample script I can run locally to reproduce the error? |
st175944 | Hi, I’m trying to adapt some code to run on multiple GPUs. To do so, I’ve followed this example 6. It seemed quite simple and seamless. I was able to make it work on a simple dummy example.
However, I’m unable to make it work on my actual code. I keep getting the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
Here’s a simplified version of my code.
The Model:
class VGAN_generator(nn.Module):
def __init__(self, z_dim, hidden_dim, x_dim, layers, col_type, col_ind ,condition=False,c_dim=0):
super(VGAN_generator, self).__init__()
self.input = nn.Linear(z_dim+c_dim, hidden_dim)
self.inputbn = nn.BatchNorm1d(hidden_dim)
self.hidden = []
self.BN = []
self.col_type = col_type
self.col_ind = col_ind
self.x_dim = x_dim
self.c_dim = c_dim
self.condition = condition
for i in range(layers):
fc = nn.Linear(hidden_dim, hidden_dim)
setattr(self, "fc%d"%i, fc)
self.hidden.append(fc)
bn = nn.BatchNorm1d(hidden_dim)
setattr(self, "bn%d"%i, bn)
self.BN.append(bn)
self.output = nn.Linear(hidden_dim, x_dim)
self.outputbn = nn.BatchNorm1d(x_dim)
def forward(self, z):
z = self.input(z)
z = self.inputbn(z)
z = torch.relu(z)
for i in range(len(self.hidden)):
z = self.hidden[i](z)
z = self.BN[i](z)
z = torch.relu(z)
x = self.output(z)
x = self.outputbn(x)
output = []
for i in range(len(self.col_type)):
sta = self.col_ind[i][0]
end = self.col_ind[i][1]
if self.col_type[i] == 'binary':
temp = torch.sigmoid(x[:,sta:end])
elif self.col_type[i] == 'normalize':
temp = torch.tanh(x[:,sta:end])
elif self.col_type[i] == 'one-hot':
temp = torch.softmax(x[:,sta:end], dim=1)
elif self.col_type[i] == 'gmm':
temp1 = torch.tanh(x[:,sta:sta+1])
temp2 = torch.softmax(x[:,sta+1:end], dim=1)
temp = torch.cat((temp1,temp2),dim=1)
elif self.col_type[i] == 'ordinal':
temp = torch.sigmoid(x[:,sta:end])
output.append(temp)
output = torch.cat(output, dim = 1)
return output
The Training
def V_Train(G, epochs, lr, dataloader, z_dim, device, steps_per_epoch = None):
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.cuda.manual_seed_all(0)
np.random.seed(0)
print("Let's use", torch.cuda.device_count(), "GPUs!")
if torch.cuda.device_count() > 1:
G = nn.DataParallel(G, device_ids=[0,1])
G.to(device)
G_optim = optim.Adam(G.parameters(), lr=lr, weight_decay=0.00001)
# the default # of steps is the # of batches.
if steps_per_epoch is None:
steps_per_epoch = len(dataloader)
for epoch in range(epochs):
it = 0
while it < steps_per_epoch:
for x_real in dataloader:
x_real = x_real.to(device)
z = torch.randn(x_real.shape[0], z_dim)
z = z.to(device)
x_fake = G(z) # ERROR HAPPENS HERE
# MORE CODE BELOW
# [...]
return G
Finally, here’s how the training is called:
device = torch.device("cuda:0" if GPU else "cpu")
V_Train(G, epochs, lr, dataloader, z_dim, device, steps_per_epoch)
I’ve had a hard time finding someone online with the same issue. I’ve seen mentions of tensor operations in the model __init__ causing issues. I don’t seem to have any of that here. My guess is that the batch norm is causing synchronization issues between devices. I wouldn’t know why or how to fix it.
I can also note that my code runs without issues on a single GPU
Thanks for your help! |
st175945 | Solved by ptrblck in post #2
You are creating self.hidden and self.BN as plain Python lists, which won’t work.
To properly register modules you would need to use nn.ModuleList instead and it’ll work. |
st175946 | You are creating self.hidden and self.BN as plain Python lists, which won’t work.
To properly register modules you would need to use nn.ModuleList instead and it’ll work. |
st175947 | This is exactly it! Thank you.
I was also storing lists of Modules inside Python dictionaries. I fixed it by using nn.ModuleDict to store nn.ModuleList objects.
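In case it helps someone else, roughly what the registration fix looks like on a trimmed-down version of the generator (a sketch only, not the full class):
import torch
import torch.nn as nn

class VGAN_generator(nn.Module):
    def __init__(self, z_dim, hidden_dim, layers):
        super().__init__()
        self.input = nn.Linear(z_dim, hidden_dim)
        # nn.ModuleList registers the children, so .to(device), .parameters(),
        # state_dict() and DataParallel/DDP all see them (plain lists/dicts do not)
        self.hidden = nn.ModuleList(nn.Linear(hidden_dim, hidden_dim) for _ in range(layers))
        self.BN = nn.ModuleList(nn.BatchNorm1d(hidden_dim) for _ in range(layers))

    def forward(self, z):
        z = torch.relu(self.input(z))
        for fc, bn in zip(self.hidden, self.BN):
            z = torch.relu(bn(fc(z)))
        return z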
Cheers! |
st175948 | I’m running a distributed pytorch training 2. Everything works like charm. I am fully utilizing all GPUs, all processes are in sync, everything is fine.
At the end of each epoch, I want to run some elaborate evaluation in a new process (not to block the training):
if args.rank == 0:
# only for the "main" rank
subprocess.run(['python3', 'my_eval_code.py', '--chk', 'checkpoint'])
At this point, execution stops, the new process is not started and everything just halts.
Is there some interdependence between pytorch’s DDP and subprocess module?
How can I start a new shell script (subprocess.run/subprocess.call/subprocess.popen) from inside a DDP process?
I also posted this question on SO 3.
Update (July 29th, 2021)
I changed my code to:
proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
print(f'\t{proc}={proc.poll()}')
try:
proc_o, proc_e = proc.communicate(timeout=120)
print(f'successfully communicated o={proc_o} e={proc_e} poll={proc.poll()}')
except subprocess.TimeoutExpired:
proc.kill()
proc_o, proc_e = proc.communicate()
print(f'time out o={proc_o} e={proc_e} poll={proc.poll()}')
No good: the Popen command is blocking, the print of the poll command is never executed, let alone the communicate.
When I check on the job with top, I see:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
37924 bagon 39 19 23640 2796 880 S 15.8 0.1 0:15.34 python3
Looking at the process that actually runs: I see this:
UID PID PPID C STIME TTY STAT TIME CMD
bagon 37924 37065 1 08:00 ? SNl 0:15 /home/bagon/.conda/envs/my_env/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=50, pipe_handle=54) --multiprocessing-fork
It seems like there is some underlying mechanism preventing subprocess module from starting new processes.
Any help? |
st175949 | Thanks for posting @shaibagon. For your questions:
1. Yes, I think there is some connection, because PyTorch uses multiprocessing and it has some interdependencies with subprocess.
2. If you don’t want to block training, did you try whether subprocess.Popen works? Could you also try multiprocessing.Process and see if that works? |
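For point 2, a rough sketch of the multiprocessing.Process variant (run_eval and the checkpoint path are placeholders for your own evaluation entry point):
import torch.multiprocessing as mp

def run_eval(checkpoint_path):
    # placeholder: load the checkpoint and run the elaborate evaluation here
    print(f'evaluating {checkpoint_path}')

if args.rank == 0:
    ctx = mp.get_context('spawn')                  # avoid forking a process that holds CUDA state
    p = ctx.Process(target=run_eval, args=('checkpoint',))
    p.start()                                      # do not join() here if training should continue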
st175950 | I tried subprocess.popen as well I did not see any process starts and it is unclear what exactly is the problem - I don’t see any errors/feedback only that the execution halts. |
st175951 | I opened a bug report describing this issue along with a minimal script that preproduces this deadlock.
Fingers crossed. |
st175952 | I answered your issue on github torch.distributed and subprocess do not work together? · Issue #62381 · pytorch/pytorch · GitHub 2. Hopefully that clarifies how to use init_process_group. But I suspect this may not address the original issue brought up in this post? It is hard to say why my_eval_code.py is blocking and what exactly that code is doing. If you can find a locally reproducible example where it blocks that would help! |
st175953 | Hi,
in torch.distributed:
node means the machine(computer) id in the network.
rank means the global rank, i.e., the process id across the whole network.
local_rank means the process id within the local machine (computer).
Is my understanding above correct?
in my code,
local_rank can be read from args.local_rank if torch.distributed.launch is used; otherwise local_rank doesn’t exist in args, right?
how could I get node id and rank in my code? |
st175954 | Hi, your understanding is correct. Here are the definitions we also refer to in documentation (torch.distributed.run (Elastic Launch) — PyTorch master documentation 22)
I assume you are using torch.distributed.launch which is why you are reading from args.local_rank. If you don’t use this launcher then the local_rank will not exist in args.
As of torch 1.9 we have an improved and updated launcher (torch.distributed.run (Elastic Launch) — PyTorch master documentation 11) in which you can read local_rank and rank from the environment variables. If you need the id of the node your code is on, you can identify it through its hostname. |
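A small sketch of reading those values when launching with torch.distributed.run / torchrun (the hostname lookup is just one way to identify the node):
import os
import socket
import torch.distributed as dist

local_rank = int(os.environ['LOCAL_RANK'])   # process id on this machine
rank = int(os.environ['RANK'])               # global process id
world_size = int(os.environ['WORLD_SIZE'])   # total number of processes

dist.init_process_group('nccl', rank=rank, world_size=world_size)
print(f'rank {rank}/{world_size}, local_rank {local_rank}, node {socket.gethostname()}')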
st175955 | @H-Huang ,
Thank you!
Yes, I am using torch.distributed.launch. Thank you for your reply! |
st175956 | nn.DataParallel(model) efficiently parallelizes batches for me. However, when looking at memory, I see that device0 is almost full, while other devices have some memory to spare. Is there a way to balance memory load (e.g. split batches non-equally across devices)? |
st175957 | Thanks for posting question @dyukha Yeah DDP supports uneven inputs starting from pytorch 1.8.1, you can take a look at the details in the doc DistributedDataParallel — PyTorch 1.9.0 documentation |
st175958 | @dyukha please also use DDP instead of Data Parallel, DDP is better to use even in a single process, and we are trying to deprecate Data Parallel in long term as well. see DataParallel — PyTorch 1.9.0 documentation |
st175959 | Thanks for the reply! I have to use DataParallel because of issues with DDP: Distributed Data Parallel example - "process 0 terminated with exit code 1" - #3 by dyukha 1 |
st175960 | Hi,
My example code is as follows,
the logs in log file includes ‘XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n’.
however, some GPU memory is still in use and GPU util is 0%.
it seems that process doesn’t exit, or process exist but memory is not cleaned.
is there any solution for this problem?
I used the following code to start 2 processes:
CUDA_VISIBLE_DEVICES=6,7 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 44744 train.py
def runner_full(ctp):
logs = open('log.txt', 'a')
train(ctp) # in train we will use GPU to train network
torch.cuda.empty_cache()
logs.write('XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n')
logs.flush()
logs.close()
return
if __name__ == "__main__":
ctp = {'a':0, 'b':1, 'c':2}
runner_full(ctp) |
st175961 | Based on the high memory usage I assume that you are either not deleting all references to CUDATensors or other processes might be using the GPU memory.
PyTorch itself will create the CUDA context, which won’t be released by calling torch.cuda.empty_cache(), and which will use approx. 600-1000MB of device memory. |
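One way to see whether any tensor references are still alive (as opposed to the memory being just the CUDA context) is to check the allocator statistics, e.g.:
import torch

torch.cuda.empty_cache()
print(torch.cuda.memory_allocated() / 1024**2, 'MiB still held by live tensors')
print(torch.cuda.memory_reserved() / 1024**2, 'MiB held by the caching allocator')
# nvidia-smi additionally shows the CUDA context itself, which PyTorch never releases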
st175962 | Hi,
@ptrblck ,
Thank you!
I did many experiments and found out that the process won’t exit if I enumerate dataload in my train function.
the interesting thing is:
if I use torch.distributed.launch to launch 2 processes, the loader will result in the process won’t exit.
If I run 1 process directly(python train.py), the process will exit correctly.
Furthermore, I confirm no process use the GPU as I am doing experiment on the machine myself.
I suspect there might be 2 reasons:
as you said some Tensor or memory is not released. is there a way to check/confirm this?
generator(like yeild in dataloader) might not exit.
is there any way to check/confirm above 2 issues?
or do you have any idea about what I explained? |
st175963 | I am using a2-megagpu-16g in GCP which has 16 A100.
CUDA 11.1
NCCL 2.8.4
Pytorch 1.8.0 (installed via pip)
I am testing DDP based on Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.9.0+cu102 documentation 7
Backend with “Gloo” works but with “NCCL”, it fails
Running basic DDP example on rank 0.
Running basic DDP example on rank 1.
Traceback (most recent call last):
File "quick_tutorial3.py", line 66, in <module>
run_demo(demo_basic, 2)
File "quick_tutorial3.py", line 57, in run_demo
join=True)
File "/home/wonkyum/venv/espnet/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/wonkyum/venv/espnet/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/wonkyum/venv/espnet/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/wonkyum/venv/espnet/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/wonkyum/quick_tutorial3.py", line 39, in demo_basic
ddp_model = DDP(model, device_ids=[rank])
File "/home/wonkyum/venv/espnet/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 446, in __init__
self._sync_params_and_buffers(authoritative_rank=0)
File "/home/wonkyum/venv/espnet/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 460, in _sync_params_and_buffers
authoritative_rank)
File "/home/wonkyum/venv/espnet/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 1156, in _distributed_broadcast_coalesced
self.process_group, tensors, buffer_size, authoritative_rank
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:825, unhandled cuda error, NCCL version 2.7.8
ncclUnhandledCudaError: Call to CUDA function failed.
my code is like below:
import os
import sys
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
setup(rank, world_size)
# create model and move it to GPU with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_fn, world_size):
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__ == "__main__":
n_gpus = torch.cuda.device_count()
if n_gpus < 8:
print(f"Requires at least 8 GPUs to run, but got {n_gpus}.")
else:
run_demo(demo_basic, 2)
Is there a way to resolve this issue? |
st175964 | Could you post more information about the system, please?
It would be interesting to see which NVIDIA driver is used and how you are executing the command, i.e. are you in a docker or bare metal? |
st175965 | ptrblck:
It would be interesting to see which NVIDIA driver is used and how you are executing the command, i.e. are you in a docker or bare metal?
I was running PyTorch example on custom container built over nvidia/cuda:11.1.1 docker. ( Docker Hub 2 )
CUDA, CUDNN, NCCL came with docker. |
st175966 | Thanks for the information. Are you seeing the same issue with the 1.9.0 or the nightly wheels? I also assume you are selecting the CUDA11.1 wheel during the installation? |
st175967 | I figured out the reason. GKE COS has CUDA driver installed 450.51.06. For minor compatibility of CUDA 11.1, driver should be at least 450.80.02. CUDA Compatibility :: GPU Deployment and Management Documentation 70 |