st178968 | Hi,
I tried to work with the MPI backend but it did not work.
I changed to the Gloo backend.
Thanks, |
st178969 | Hi, I am trying to use DistributedDataParallel for multi-node data parallelism.
I want to know how I can measure the time breakdown for data loading, forward, backward, and communication.
Also, for calculating FLOPs, I am going to use the repository [Calculating flops of a given pytorch model 4] on GitHub. Does anyone know a good way to calculate FLOPs? |
st178970 | If the program uses GPUs, you can use elapsed_time 22 to measure the time spent on the forward, backward, and optimizer steps. It is harder to break down computation and communication within the backward pass, as DDP tries to overlap the two and conducts communication on dedicated CUDA streams that are not visible from the application side. Besides, communications are launched as soon as a gradient bucket is ready, meaning that they may or may not always saturate the bandwidth. To get around this, you can run a local forward-backward pass and then explicitly call allreduce from the application side to synchronize gradients after the backward pass. This exposes the communication so it can be measured from the application. |
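A minimal sketch of the elapsed_time-based measurement described above (model, inputs, criterion, target, and optimizer are assumed to already exist; this is an illustration, not code from the original reply):
start_fwd = torch.cuda.Event(enable_timing=True)
end_fwd = torch.cuda.Event(enable_timing=True)
end_bwd = torch.cuda.Event(enable_timing=True)
end_opt = torch.cuda.Event(enable_timing=True)
start_fwd.record()
output = model(inputs)                  # forward
loss = criterion(output, target)
end_fwd.record()
loss.backward()                         # backward (includes DDP communication)
end_bwd.record()
optimizer.step()
end_opt.record()
torch.cuda.synchronize()                # wait until all recorded events have completed
print('forward   ms:', start_fwd.elapsed_time(end_fwd))
print('backward  ms:', end_fwd.elapsed_time(end_bwd))
print('optimizer ms:', end_bwd.elapsed_time(end_opt))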
st178971 | Hi, I tried DistributedDataParallel with nccl backend.
It works when I use 1 or 2 nodes (each with 4 V100).
However, an error happens when further scaling to 3 or 4 nodes, and there is always one node with the following error (the other nodes report differently and look correct).
gpu45:169732:170179 [0] transport/net_ib.cc:789 NCCL WARN NET/IB : Got completion with error 12, opcode 1, len 11155, vendor err 129
I tried PyTorch versions 1.2-cuda10.0, 1.4-cuda10.1, and 1.5-cuda10.1.
The NCCL version is 2.4.8.
The NCCL_DEBUG info of the 4 nodes is listed below: Node0 1, Node1 2, Node2 1, [Node3] (http://49.234.107.127:81/index.php/s/5B2wEFHSFCWSHfm 2).
When I try 4 nodes, the [NCCL WARN NET/IB] always happens on the third node.
If I exclude that node (gpu45) and only run on the other three nodes, the [NCCL WARN NET/IB] still happens. |
st178972 | This might be relevant to this issue 50 in the NCCL repo, and this comment 55 seems to fix it. |
st178973 | Thanks a lot!
I will give up on NCCL temporarily; it seems to be related to the system settings. |
st178974 | I want to gather tensors from specific ranks on each rank (for example, gather ranks=[0,1] on rank 0 & rank 1, and gather ranks=[2,3] on rank 2 & rank 3). I implemented this by initializing a new group:
import os
import random
import torch
import torch.nn as nn
import torch.multiprocessing as mp
import torch.distributed as dist
import torch.utils.data
import torch.utils.data.distributed
from torch.multiprocessing import Process
from absl import flags
from absl import app
FLAGS = flags.FLAGS
flags.DEFINE_integer('nodes_num', 1, 'machine num')
flags.DEFINE_integer('ngpu', 4, 'ngpu per node')
flags.DEFINE_integer('world_size', 4, 'FLAGS.nodes_num*FLAGS.ngpu')
flags.DEFINE_integer('node_rank', 0, 'rank of machine, 0 to nodes_num-1')
flags.DEFINE_integer('rank', 0, 'rank of total threads, 0 to FLAGS.world_size-1, will be re-compute in main_worker func')
@torch.no_grad()
def group_gather(tensor, rank, ngpu_per_node):
    #ranks = [0,1]
    if rank == 0 or rank == 1:
        ranks = [0,1]
    if rank == 2 or rank == 3:
        ranks = [2,3]
    print('ranks: ', ranks)
    group = dist.new_group(ranks = ranks)
    tensors_gather = [torch.ones_like(tensor) for _ in range(2)]
    torch.distributed.all_gather(tensors_gather, tensor, group, async_op=False)
    output = torch.cat(tensors_gather, dim=0)
    print('gather out shape: ', output.shape)
    return output
class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.fc = nn.Linear(3,2)
    def forward(self, x, rank, ngpu_per_node):
        x_gather = group_gather(x, rank, ngpu_per_node)
        out = self.fc(x_gather)
        return out
def main(argv):
    del argv
    FLAGS.append_flags_into_file('tmp.cfg')
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = str(random.randint(1,100000))
    mp.spawn(main_worker, nprocs=FLAGS.ngpu, args=())
def main_worker(gpu_rank):
    FLAGS._parse_args(FLAGS.read_flags_from_files(['--flagfile=./tmp.cfg']), True)
    FLAGS.mark_as_parsed()
    FLAGS.rank = FLAGS.node_rank * FLAGS.ngpu + gpu_rank # rank among FLAGS.world_size
    dist.init_process_group(
        backend='nccl',
        init_method='env://',
        world_size=FLAGS.world_size,
        rank=FLAGS.rank)
    model = ToyModel()
    torch.cuda.set_device(gpu_rank)
    model.cuda()
    model = torch.nn.parallel.DistributedDataParallel(
        model, device_ids=[gpu_rank])
    x = torch.randn(4,3).cuda()
    model(x, FLAGS.rank, FLAGS.ngpu)
if __name__ == '__main__':
    app.run(main)
In group_gather(…), I init a new group according to the process’s rank.
But this script does not always work well; it sometimes crashes and raises this error:
Traceback (most recent call last):
File "/root/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/root/test_distcomm/test_group_gather.py", line 78, in main_worker
model(x, FLAGS.rank, FLAGS.ngpu)
File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 447, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/root/test_distcomm/test_group_gather.py", line 48, in forward
x_gather = group_gather(x, gpu_rank, ngpu_per_node)
File "/root/anaconda3/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "/root/test_distcomm/test_group_gather.py", line 35, in group_gather
torch.distributed.all_gather(tensors_gather, tensor, group, async_op=False)
File "/root/anaconda3/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1153, in all_gather
work = group.allgather([tensor_list], [tensor])
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:410, unhandled system error, NCCL version 2.4.8
I think the logic in the code is correct, and I cannot figure out what is wrong.
I run this code with 4 NVIDIA T4 GPUs with CUDA 10.1; my PyTorch version is 1.4.0.
You can simply run this code with 'python main.py' (you may need to pip install absl-py). |
st178975 | Solved by mrshenli in post #3
The new_group API requires all processes to call it with the same ranks argument, even if they do not participate in the new group. See the API doc here: https://pytorch.org/docs/stable/distributed.html#torch.distributed.new_group
In the code above, the following code breaks the above assumption.
… |
st178976 | If I set ranks in the group_gather function to [0,1] consistently, this code works well all the time. |
st178977 | The new_group API requires all processes to call it with the same ranks argument, even if they do not participate in the new group. See the API doc here: https://pytorch.org/docs/stable/distributed.html#torch.distributed.new_group 8
In the code above, the following code breaks the above assumption.
if rank == 0 or rank == 1:
    ranks = [0,1]
if rank == 2 or rank == 3:
    ranks = [2,3]
print('ranks: ', ranks)
group = dist.new_group(ranks = ranks)
It needs to be modified to the following:
g1 = dist.new_group(ranks = [0, 1])
g2 = dist.new_group(ranks = [2, 3])
# check rank to see if the current process should use g1 or g2 |
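A minimal sketch of how that fix could look when applied to the group_gather example above (the groups are created once, after dist.init_process_group, and only the group selection depends on the rank; this is an illustration, not code from the original reply):
# after dist.init_process_group(...), executed by every process:
g1 = dist.new_group(ranks=[0, 1])       # both groups are created on all processes
g2 = dist.new_group(ranks=[2, 3])

@torch.no_grad()
def group_gather(tensor, rank):
    group = g1 if rank in (0, 1) else g2    # only the selection depends on the rank
    tensors_gather = [torch.ones_like(tensor) for _ in range(2)]
    dist.all_gather(tensors_gather, tensor, group=group, async_op=False)
    return torch.cat(tensors_gather, dim=0)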
st178978 | mrshenli:
In the code above, the following code breaks the above assumption.
Yes, it works well now! Thanks very much |
st178979 | I used DistributedDataParallel, but I find that the speed with 4 nodes (4 GPUs per node) is sometimes much slower. Sometimes the job runs at 4300 images per second, which is normal; if it is at 4300 at the beginning, the job keeps running at this fast speed. But sometimes the job runs at 1000 images per second and stays at that speed for the whole run. The jobs run in a cluster, on different physical machines but of the same machine type.
For the problematic jobs, GPU utilization is always 98%~100%. PyTorch version = 1.4; CUDA = 10.1; Ubuntu 16.04 docker image. NCCL is definitely using InfiniBand, per the following logs. The time cost of data loading is also very small (less than 1%).
Are there any ideas for how to debug this?
41de615398b349e78486287e94d4883b000000:1573:1573 [0] NCCL INFO NET/IB : Using [0]mlx4_0:1/IB ; OOB eth0:10.0.0.4<0> |
st178980 | Do the jobs always run on the exactly same set of machines?
If not, can there be any stragglers in the cluster? Or can different network layouts (same rack, different rack, etc.) play a role here?
For debugging, it will be helpful to identify which step (forward, backward, opt.step) takes longer (on all nodes) when the throughput drops to 1000 images/s. elapsed_time 3 should be able to tell that. All communication in DDP occurs in the backward pass. If all steps are the same but the backward pass takes longer for all processes, then it might be caused by network issues. If some processes suffer from slower data loading, slower forward, or a slower optimizer, it looks more like a straggler problem. |
st178981 | Thanks for your reply. I checked again: they are not running on the same machines. For the problematic job, I killed it and re-ran it; the job was scheduled on those 4 machines again and the speed was still 1000. Then I submitted the same job again, which was scheduled on another 4 machines, and the new job runs at 4k. So the problem might be an issue with the machines or the rack, as you suggested.
One more question: if the network has issues on those machines, or there are straggler issues, would it be possible for GPU utilization to still be 98%~100%? The GPUs are fully utilized, so I was thinking there is no network issue. |
st178982 | amsword:
One more question: if the network has issues on those machines, or there are straggler issues, would it be possible for GPU utilization to still be 98%~100%? The GPUs are fully utilized, so I was thinking there is no network issue.
Not 100% sure, but if GPU reports block waiting for AllReduce as busy, then slow network or straggler could lead to 100% utilization for the entire group. This can be measured by submitting a job that only runs allreduce. BTW, based one past observations, GPU would (sometimes?) report 100% utilization even if DDP hangs. So I think this is possible.
cc @ngimel in case you know how CUDA would behave here |
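A rough sketch of such an allreduce-only job (assuming one process per GPU and an already-initialized NCCL process group; the tensor size and iteration count are arbitrary):
import time
import torch
import torch.distributed as dist

def benchmark_allreduce(rank, num_iters=50, numel=25_000_000):
    x = torch.randn(numel, device='cuda:{}'.format(rank))
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(num_iters):
        dist.all_reduce(x)
    torch.cuda.synchronize()
    print('rank {}: {:.1f} ms per allreduce'.format(rank, (time.time() - start) / num_iters * 1000))
Running this on both the fast and the slow set of machines should show whether the interconnect is the bottleneck.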
st178983 | @mrshenli is correct, if there are straggler GPUs, other GPUs waiting for them would report 100% utilization with AllReduce. |
st178984 | Thanks very much @mrshenli @ngimel for the explanation of AllReduce leading to 100% utilization. One more question: is there any way to detect which GPU is the straggler (among 16 GPUs)? |
st178985 | Is it expected for nn.parallel.DistributedDataParallel in a 1-GPU:1-process setup to use a little extra memory on one of the GPUs? The usage isn’t exorbitant (3614 MiB vs. 4189 MiB). If so, what is this extra memory used for? Is it the all_reduce call on the gradients? If not, what would this be attributed to?
1 Gpu per 1 process spun up with: mp.spawn(run, nprocs=args.num_replicas, args=(args.num_replicas,))
Entire module wrapped with nn.parallel.DistributedDataParallel
loss function is a member of above module.
pytorch 1.4
Multiprocessing init created via:
def handle_multiprocessing_logic(rank, num_replicas):
    """Sets the appropriate flags for multi-process jobs."""
    args.gpu = rank # Set the GPU device to use
    if num_replicas > 1:
        torch.distributed.init_process_group(
            backend='nccl', init_method='env://',
            world_size=args.num_replicas, rank=rank
        )
        # Update batch size appropriately
        args.batch_size = args.batch_size // num_replicas
        # Set the cuda device
        torch.cuda.set_device(rank)
    print("Replica {} / {} using GPU: {}".format(
        rank + 1, num_replicas, torch.cuda.get_device_name(rank))) |
st178986 | Solved by mrshenli in post #7
Is the DDP process the only process using that GPU? The extra size ~500MB looks like an extra cuda context. Does this behavior still persist if you set CUDA_VISIBLE_DEVICES env var properly (instead of using torch.cuda.set_device(rank)) before launching each process? |
st178987 | Yes, this is expected currently, because DDP creates buckets to consolidate gradient communication. Check out this 28 and this 22. We could potentially mitigate this problem by setting param.grad to point to different offsets in the bucket so that we don’t need two copies of the grads. |
st178988 | Thanks for the response; just to clarify: you mean it is expected that 1 of the GPUs in say a 2 GPU (single-process-single-gpu) DDP setup will use more memory because of bucketing? Wouldn’t the buckets be of the same size on both devices? |
st178989 | oh, sorry, I misread the question. I thought you meant DDP uses a little more memory than the local model.
you mean it is expected that 1 of the GPUs in say a 2 GPU (single-process-single-gpu) DDP setup will use more memory because of bucketing?
no, they should be the same I think |
st178990 | The general problem of more memory makes sense given the current way things are set up (from your source links); let me see if I can come up with a minimum viable example for this effect (more memory on one of the GPUs). |
st178991 | Is the DDP process the only process using that GPU? The extra size ~500MB looks like an extra cuda context. Does this behavior still persist if you set CUDA_VISIBLE_DEVICES env var properly (instead of using torch.cuda.set_device(rank)) before launching each process? |
st178992 | mrshenli:
Is the DDP process the only process using that GPU? The extra size ~500MB looks like an extra cuda context. Does this behavior still persist if you set CUDA_VISIBLE_DEVICES env var properly (instead of using torch.cuda.set_device(rank) ) before launching each process?
Yup, I don’t have X or anything else running:
I don’t set CUDA_VISIBLE_DEVICES but I do call torch.cuda.set_device(rank) before instantiating the model / optimizers, etc (see function in initial post).
Edit: not sure how I would be able to set the ENV var when using mp.spawn such that it doesn’t apply to both processes.
Edit2: didn’t realize children can modify their ENV independently of the parent and other processes: https://stackoverflow.com/questions/24642811/set-env-var-in-python-multiprocessing-process 12 |
st178993 | Using PyTorch 1.5, setting CUDA_VISIBLE_DEVICES appropriately per process, avoiding any CUDA-related work before the env var is set, and changing the DDP constructor with:
device_ids=[0], # set w/cuda environ var
output_device=0, # set w/cuda environ var
it seems to be working as expected. |
st178994 | It might be relevant to this post 61. If CUDA_VISIBLE_DEVICES is not set to one device per process, and the application program calls clear_cache somewhere without a device context, it could try to initialize the CUDA context on device 0. |
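A minimal sketch of the one-visible-device-per-process setup discussed above (assuming one process per GPU started with mp.spawn; this is an illustration, not the poster's actual code):
import os
import torch
import torch.multiprocessing as mp

def run(rank, world_size):
    # must happen before anything initializes CUDA in this child process
    os.environ['CUDA_VISIBLE_DEVICES'] = str(rank)
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    torch.distributed.init_process_group('nccl', rank=rank, world_size=world_size)
    model = torch.nn.Linear(10, 10).cuda(0)   # device 0 is the only visible device here
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[0], output_device=0)

if __name__ == '__main__':
    mp.spawn(run, nprocs=2, args=(2,))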
st178995 | Hi,
I’ve recently started using the distributed training framework for PyTorch and followed the imagenet 5 example. I’m using multi-node multi-GPU training. While running the code, during the 1st epoch itself, I see multiple processes starting on GPU 0 of both servers. They are not present initially when I start the training. From the GPU memory usage, it seems that the other processes are some copies of the model (they all have a fixed usage like 571M). Since running an epoch takes ~12 hours for my use case, debugging step by step is not exactly a feasible solution. I’ve ensured that I pass args.gpu as an argument whenever I do a .cuda() call. Also, the model loading/saving is done as suggested in the imagenet example.
Are there any pointers to the probable cause of the issue (or some intelligent ways to debug the code)? Thanks in advance. |
st178996 | Solved by Soumya_Sanyal in post #4
Found the bug. So we need to be careful with setting the right GPU context while calling clear_cache() function, otherwise it allocates fixed memory on GPU0 for the other GPUs. Relevant issue here. |
st178997 | Could you please share the cmd you used to launch the processes?
Does the problem disappear if you set CUDA_VISIBLE_DEVICES when launching the process and not passing in --gpu (let it use the default and only visible one)? |
st178998 | Hi Shen,
I always set the CUDA_VISIBLE_DEVICES for each run using export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 for example. The code runs on all the 8 GPUs with full utilization, so multiprocessing is surely working. The command I use is as follows on the two servers I’m using (with appropriate IP and port set):
python train_distributed.py --dist-url 'tcp://ip:port' --dist-backend 'gloo' --multiprocessing-distributed --world-size 2 --rank 0
python train_distributed.py --dist-url 'tcp://ip:port' --dist-backend 'gloo' --multiprocessing-distributed --world-size 2 --rank 1 |
st178999 | Found the bug. We need to be careful to set the right GPU context while calling the clear_cache() function; otherwise it allocates fixed memory on GPU0 for the other GPUs. Relevant issue here 68. |
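For illustration, a small sketch of the fix described above (the posts call the function clear_cache; the corresponding PyTorch call is torch.cuda.empty_cache()):
import torch

def clear_cache_on(rank):
    # without the device context (or a prior torch.cuda.set_device(rank)),
    # this call may create a CUDA context, and hence allocate memory, on GPU 0
    with torch.cuda.device(rank):
        torch.cuda.empty_cache()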
st179000 | Following the imagenet example: https://github.com/pytorch/examples/blob/master/imagenet/main.py 6, it seems that the seed is not set by default (the default is None):
parser.add_argument('--seed', default=None, type=int, help='seed for initializing training. ')
But when we use DistributedDataParallel mode, if the seed is not set, the initial parameters will differ across GPUs, resulting in different model parameters being kept on different GPUs during training (although we only save the checkpoint on the rank-0 GPU).
I am not sure whether this will cause unknown errors or lead to unstable results. Is it safe for me not to set the initialization seed? |
st179001 | This should be fine, because DistributedDataParallel broadcasts model states from rank 0 to all other ranks at construction time. See the code below:
github.com
pytorch/pytorch/blob/34284c127930dc12d612c47cab44cf09b432b522/torch/nn/parallel/distributed.py#L280-L285 18
# Sync params and buffers
module_states = list(self.module.state_dict().values())
if len(module_states) > 0:
    self._distributed_broadcast_coalesced(
        module_states,
        self.broadcast_bucket_size) |
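As a sanity check (not part of the original reply), one can verify after constructing DDP that the parameters really are identical across ranks, for example by all-gathering a per-parameter checksum:
import torch
import torch.distributed as dist

def check_params_in_sync(ddp_model, rank):
    local = torch.stack([p.detach().float().sum() for p in ddp_model.parameters()]).cuda(rank)
    gathered = [torch.zeros_like(local) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, local)
    if rank == 0:
        print('parameters in sync:', all(torch.allclose(gathered[0], g) for g in gathered))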
st179002 | import concurrent.futures
import torch
import torch.nn as nn
import torch.optim as optimizer
from torch.distributions import Categorical
class mymodel1(nn.Module):
    def __init__(self):
        super(mymodel1,self).__init__()
        self.weight = nn.Linear(3,2)
    def forward(self, X):
        out = self.weight(X)
        out = nn.Softmax(dim = 0)(out)
        return out
class mymodel2(nn.Module):
    def __init__(self):
        super(mymodel2,self).__init__()
        self.weight = nn.Linear(3,2)
    def forward(self, X):
        out = self.weight(X)
        out = nn.Softmax(dim = 0)(out)
        return out
class mymodel3(nn.Module):
    def __init__(self):
        super(mymodel3,self).__init__()
        self.weight = nn.Linear(3,2)
    def forward(self, X):
        out = self.weight(X)
        out = nn.Softmax(dim = 0)(out)
        return out
def doTrain(model, X):
    a1 = model()
    return list(a1.parameters())
X = torch.randn(12,3)
updatedParams = []
results = []
with concurrent.futures.ProcessPoolExecutor() as executor:
    f1 = executor.submit(doTrain, mymodel1, X[0*4:(0+1)*4])
    f2 = executor.submit(doTrain, mymodel2, X[1*4:(1+1)*4])
    f3 = executor.submit(doTrain, mymodel3, X[2*4:(2+1)*4])
    print(f1.result())
    print(f2.result())
    print(f3.result())
Output
[Parameter containing:
tensor([[-0.3413, -0.4291, 0.0850],
[-0.4270, -0.4523, -0.3700]], requires_grad=True), Parameter containing:
tensor([0.5327, 0.2588], requires_grad=True)]
[Parameter containing:
tensor([[-0.3413, -0.4291, 0.0850],
[-0.4270, -0.4523, -0.3700]], requires_grad=True), Parameter containing:
tensor([0.5327, 0.2588], requires_grad=True)]
[Parameter containing:
tensor([[-0.3413, -0.4291, 0.0850],
[-0.4270, -0.4523, -0.3700]], requires_grad=True), Parameter containing:
tensor([0.5327, 0.2588], requires_grad=True)]
Can somebody tell me why I am getting the same parameters returned from the different processes although the models are different? |
st179003 | This might be because the per-process RNGs are initialized with the same seed by default. Can you check whether manually changing the seed using manual_seed 1 solves the problem? |
st179004 | Thanks,
I used
torch.manual_seed(num)
with a different value for num in the init method of each of the models and that did it. |
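A minimal sketch of that fix in the same ProcessPoolExecutor pattern (the base seed value is arbitrary):
import concurrent.futures
import torch
import torch.nn as nn

def do_train(seed, X):
    torch.manual_seed(1234 + seed)      # a different seed per worker gives different initialization
    model = nn.Linear(3, 2)
    return list(model.parameters())

if __name__ == '__main__':
    X = torch.randn(12, 3)
    with concurrent.futures.ProcessPoolExecutor() as executor:
        futures = [executor.submit(do_train, i, X[i * 4:(i + 1) * 4]) for i in range(3)]
        for f in futures:
            print(f.result())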
st179005 | Hi,
what is the difference between
model = nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
and
model = nn.DataParallel(model, device_ids=[args.gpu])
? |
st179006 | Solved by mrshenli in post #2
DistributedDataParallel is multi-process parallelism, where those processes can live on different machines. So, for model = nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]), this creates one DDP instance on one process, there could be other DDP instances from other processes in the … |
st179007 | DistributedDataParallel is multi-process parallelism, where those processes can live on different machines. So, for model = nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]), this creates one DDP instance on one process, there could be other DDP instances from other processes in the same group working together with this DDP instance. Check out this https://pytorch.org/docs/master/notes/ddp.html 190
DataParallel is single-process multi-thread parallelism. It’s basically a wrapper of scatter + paralllel_apply + gather. For model = nn.DataParallel(model, device_ids=[args.gpu]), since it only works on a single device, it’s the same as just using the original model on GPU with id args.gpu. See https://github.com/pytorch/pytorch/blob/df8d6eeb19423848b20cd727bc4a728337b73829/torch/nn/parallel/data_parallel.py#L153 74
DataParallel is easier to use, as you don’t need additional code to setup process groups, and a one-line change should be sufficient to enable it.
DistributedDataParallel is faster and scalable. If you have multiple GPUs or machines and care about training speed, DistributedDataParallel should be the way to go. |
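For illustration (a sketch, with model creation, data loading, and process-group setup details omitted), the two APIs differ roughly like this:
# DataParallel: single process, multiple threads, one line:
model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3]).cuda()

# DistributedDataParallel: one process per GPU; each process runs something like:
torch.distributed.init_process_group('nccl', rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
model = torch.nn.parallel.DistributedDataParallel(model.cuda(rank), device_ids=[rank])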
st179008 | Yes, but DataParallel cannot scale beyond one machine. It is slower than DistributedDataParallel even in a single machine with multiple GPUs due to GIL contention across multiple threads and the extra overhead introduced by scatter and gather and per-iteration model replication. |
st179009 | I would like to ask some questions regarding the DDP code used in torchvision's reference example on classification 6. An example of using this script on a machine with 8 GPUs is given as follows:
python -m torch.distributed.launch --nproc_per_node=8 --use_env train.py --model resnext50_32x4d --epochs 100
My first question concerns the saving and loading of checkpoints.
This is how a checkpoint is saved in the script:
checkpoint = {
    'model': model_without_ddp.state_dict(),
    'optimizer': optimizer.state_dict(),
    'lr_scheduler': lr_scheduler.state_dict(),
    'epoch': epoch,
    'args': args}
utils.save_on_master(
    checkpoint,
    os.path.join(args.output_dir, 'model_{}.pth'.format(epoch)))
utils.save_on_master(
    checkpoint,
    os.path.join(args.output_dir, 'checkpoint.pth'))
But in the DDP tutorial 6, it seems necessary that torch.distributed.barrier() is called somewhere:
# Use a barrier() to make sure that process 1 loads the model after process 0 saves it.
dist.barrier()
...
# Use a barrier() to make sure that all processes have finished reading the checkpoint
dist.barrier()
Why is dist.barrier() not necessary in the above reference example?
My second question is about the validation stage.
This 1 is how it’s done in the script:
for epoch in range(args.start_epoch, args.epochs):
    if args.distributed:
        train_sampler.set_epoch(epoch)
    train_one_epoch(model, criterion, optimizer, data_loader, device, epoch, args.print_freq, args.apex)
    lr_scheduler.step()
    evaluate(model, criterion, data_loader_test, device=device)
Doesn’t this mean that the evaluate() function is called on all the processes (i.e. all the GPUs in this case)? Shouldn’t we rather do something like this:
for epoch in range(args.start_epoch, args.epochs):
    if args.distributed:
        train_sampler.set_epoch(epoch)
    train_one_epoch(model, criterion, optimizer, data_loader, device, epoch, args.print_freq, args.apex)
    lr_scheduler.step()
    if torch.distributed.get_rank() == 0: # master
        evaluate(model, criterion, data_loader_test, device=device)
        # save checkpoint here as well
But then, again, shouldn’t we wait, using dist.barrier(), for all the processes to finish the computations and for the master to gather the gradients, before evaluating the model?
Thank you very much in advance for your help! |
st179010 | Why is dist.barrier() not necessary in the above reference example?
IIUC, the torchvision example only saves the checkpoint to a file each epoch but does not read from it unless it is recovering from a crash? In that case it is not a hard requirement to perform the barrier, because
there is no need to guard against non-master processes reading a stale checkpoint.
non-master processes will just block on the DDP backward (AllReduce), waiting for the master to join.
Doesn’t this mean that the evaluate() function is called on all the processes (i.e. all the GPUs in this case)?
cc @fmassa for this implementation
But then, again, shouldn’t we wait, using dist.barrier() , for all the processes to finish the computations and for the master to gather the gradients, before evaluating the model?
The gradients are synchronized in the DDP backward pass using AllReduce operations, so there is no need to add another barrier here to do that. As soon as loss.backward() returns, the local gradients should be representing the global average. However, it might need a barrier here for a different reason. If the evaluate() step takes too long, non-master processes could time out on AllReduce. If that happens, a barrier might help (see the sketch after this post). |
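For illustration only: if one did go the rank-0-only evaluation route proposed in the question, the barrier could be placed so that the other ranks wait for the master instead of racing into the next epoch (note that the torchvision script instead evaluates on all ranks, as discussed later in this thread):
for epoch in range(args.start_epoch, args.epochs):
    if args.distributed:
        train_sampler.set_epoch(epoch)
    train_one_epoch(model, criterion, optimizer, data_loader, device, epoch, args.print_freq, args.apex)
    lr_scheduler.step()
    if torch.distributed.get_rank() == 0:
        evaluate(model, criterion, data_loader_test, device=device)
    torch.distributed.barrier()    # all ranks wait here until rank 0 has finished evaluating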
st179011 | @mrshenli Thanks for your prompt reply!
IIUC, the torchvision example only saves the checkpoint to file in epoch but is not reading from it unless it is recovering from a crash?
Well, the script has a resume code as follows:
if args.resume:
    checkpoint = torch.load(args.resume, map_location='cpu')
    model_without_ddp.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
    args.start_epoch = checkpoint['epoch'] + 1
so I guess it does what you’ve described, which is a usual scenario. But then are you saying that the demo_checkpoint() example given in the tutorial 1 handles a different scenario (other than resuming a training)?
The gradients are synchronized in DDP backward using AllReduce operations. So, there is no need to add another barrier here to do that. As soon as loss.backward() returns, the local gradients should be representing the global average.
Thanks! This has cleared up a lot of things for me. I guess the gradients are averaged? In this case, it seems that the learning rate should be scaled up by the number of GPUs.
Besides, the tutorial also notes that
if training starts from random parameters, you might want to make sure that all DDP processes use the same initial values. Otherwise, global gradient synchronizes will not make sense.
but I don’t see this being taken into account anywhere in the reference script. |
st179012 | Well, the script has a resume code as follows:
Not 100% sure about how this example would be used. @fmassa would know more. Given the code, it looks like the resume mode is designed for starting from pre-trained models or resuming from a crash. In these cases, the checkpoint file is ready before launching the script, so it should be fine.
you saying that the demo_checkpoint() example given in the tutorial 4 handles a different scenario
It is not targeting any specific use case; I just wanted to make sure the example code can run as is. The main information it tries to convey is that applications need to make sure checkpoints are ready before loading them. We previously saw users running into weird errors caused by reading too soon.
if training starts from random parameters, you might want to make sure that all DDP processes use the same initial values. Otherwise, global gradient synchronizes will not make sense.
DDP handles this by broadcasting model weights from rank 0 to others at construction time. However, if the application modified model weights after constructing DDP and if that resulted in inconsistent weights across processes, DDP won’t be able to recover, as the broadcast only happens once in ctor. |
st179013 | @mrshenli Great. Thank you so much for the explanations!
I hope @fmassa could join the discussion and clarify the points for which you mentioned him earlier, especially the one related to evaluate(). I tested the code and it seems that this function is called only once across the processes.
The training loop looks like this:
for epoch in range(args.start_epoch, args.epochs):
    if args.distributed:
        train_sampler.set_epoch(epoch)
    train_one_epoch(...)
    lr_scheduler.step()
    evaluate(model, criterion, data_loader_test, device=device)
where the evaluation function looks like:
def evaluate(model, criterion, data_loader, device, print_freq=100):
    model.eval()
    metric_logger = utils.MetricLogger(delimiter=" ")
    header = 'Test:'
    with torch.no_grad():
        for image, target in metric_logger.log_every(data_loader, print_freq, header):
            ...
    # gather the stats from all processes
    metric_logger.synchronize_between_processes()
    print(' * Acc@1 {top1.global_avg:.3f} Acc@5 {top5.global_avg:.3f}'
          .format(top1=metric_logger.acc1, top5=metric_logger.acc5))
    return metric_logger.acc1.global_avg
This function has some print() to display the accuracy. In my experiment, that string is only displayed once, which means the function is called only once. Why? There is no is_main_process() check. Why isn’t this function called on all processes? I’m confused… |
st179014 | I’ve just realized that we shouldn’t wrap the evaluation phase inside if torch.distributed.get_rank() == 0:, because data_loader_test also splits the data across all the processes.
And for the print() part, this code 10 explains why the message is displayed only once.
So now I understand what @fmassa did. Thanks. |
st179015 | I am running inference using mmdetection (https://github.com/open-mmlab/mmdetection 1) and I get the error below for this piece of code:
model = init_detector("faster_rcnn_r50_fpn_1x.py", "faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth", device='cuda:0')
img = 'intersection_unlabeled/frame0.jpg'
result = inference_detector(model, img)
show_result_pyplot(img, result, model.CLASSES)
And the full error log is:
Traceback (most recent call last):
File "C:/Users/sarim/PycharmProjects/thesis/pytorch_learning.py", line 355, in <module>
test()
File "C:/Users/sarim/PycharmProjects/thesis/pytorch_learning.py", line 338, in test
model = init_detector("faster_rcnn_r50_fpn_1x.py", "faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth", device='cuda:0')
File "C:\Users\sarim\PycharmProjects\thesis\mmdetection\mmdet\apis\inference.py", line 36, in init_detector
checkpoint = load_checkpoint(model, checkpoint)
File "c:\users\sarim\appdata\local\programs\python\python37\lib\site-packages\mmcv\runner\checkpoint.py", line 188, in load_checkpoint
load_state_dict(model, state_dict, strict, logger)
File "c:\users\sarim\appdata\local\programs\python\python37\lib\site-packages\mmcv\runner\checkpoint.py", line 96, in load_state_dict
rank, _ = get_dist_info()
File "c:\users\sarim\appdata\local\programs\python\python37\lib\site-packages\mmcv\runner\utils.py", line 21, in get_dist_info
initialized = dist.is_initialized()
AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
My pytorch version is 1.1.0 |
st179016 | Could you check your PyTorch version again (print(torch.__version__)), as torch.distributed.is_initialized() 30 is in Pytorch 1.1.0. |
st179017 | Of course, I did that before. Here is proof:
(screenshot: cmd.PNG, 1103×646, 21 KB)
Anyway, I got it to work on Windows by commenting out the lines that use distributed training. I read here that PyTorch doesn’t support distributed training on Windows but does on Linux:
github.com/facebookresearch/maskrcnn-benchmark
Issue: AttributeError: module 'torch.distributed' has no attribute 'is_initialized' 11 (opened and closed by shuxp on 2018-12-10)
And this makes sense because I encountered no such issue when running the same code on Google Colab |
st179018 | Thanks for checking, and good to hear you’ve figured it out. My second guess would also be that you’re using a platform which does not support distributed training. |
st179019 | I wish dist.is_initialized() just always returned False instead of bombing out. That way the code would be cleaner across different platforms for non-distributed use. BTW, it seems the same thing happens for methods like is_gloo_available(), etc. |
st179020 | There is a torch.distributed.is_available() API to check whether the distributed package is available. APIs from the distributed package are only available when is_available returns true. Let me add that to our docs. |
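For example, distributed-only code paths can be guarded so the same script also runs on builds without the distributed package (a small sketch):
import torch.distributed as dist

def get_rank_or_zero():
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank()
    return 0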
st179021 | https://github.com/pytorch/pytorch/pull/37021 51 has landed. The API doc for torch.distributed.is_available has been added to master. |
st179022 | hello,
is there any way to run PyTorch distributed on Windows?
I see on the PyTorch main page that there is a version for Windows, but when I tried to use it,
I found that torch.distributed.is_available() is False.
Thanks! |
st179023 | Solved by mrshenli in post #3
Currently, torch.distributed does not support Windows yet. Just created a poll/issue to track how many people would need this feature: https://github.com/pytorch/pytorch/issues/37068 |
st179024 | Currently, torch.distributed does not support Windows yet. Just created a poll/issue to track how many people would need this feature: https://github.com/pytorch/pytorch/issues/37068 161 |
st179025 | I want to train a large model on a TPU V3 Pod with 5 TPU devices. I am very new to TPUs. I have already coded a model which I train on multiple GPUs (4 V100) using DataParallel, and I found DataParallel very easy to incorporate. I have a couple of concerns about training this model on a Google Cloud TPU:
Can I train the same model with DataParallel on a Cloud TPU V3 device with 5 TPUs, or do I need to make any modifications other than changing the library to XLA?
Should I use DataParallel or DistributedDataParallel to train the model on a TPU Pod?
Does anyone have any experience with pytorch-lightning on a multi-device TPU Pod?
Sorry for the novice-level questions. Any resources or suggestions would be a great help. |
st179026 | Should I use DataParallel or DistributedDataParallel to train the model on TPU Pod?
General guidance for DataParallel vs DistributedDataParallel: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#comparison-between-dataparallel-and-distributeddataparallel 23 |
st179027 | I am running into an issue with multi-process DDP where it is loading my extra kernels on every run. Normally, this only happens at the beginning when I start the job. Since switching to DDP, it happens on every epoch.
This seems really wasteful. Am I doing something wrong s.t. it is doing that or is this expected behavior?
The following happens before every epoch:
Using /tmp/torch_extensions as PyTorch extensions root…
Detected CUDA files, patching ldflags
Emitting ninja build file /tmp/torch_extensions/fused/build.ninja…
Building extension module fused…
ninja: no work to do.
Loading extension module fused…
Using /tmp/torch_extensions as PyTorch extensions root…
Detected CUDA files, patching ldflags
Emitting ninja build file /tmp/torch_extensions/upfirdn2d/build.ninja…
Building extension module upfirdn2d…
ninja: no work to do.
Loading extension module upfirdn2d…
Using /tmp/torch_extensions as PyTorch extensions root…
Detected CUDA files, patching ldflags
Emitting ninja build file /tmp/torch_extensions/fused/build.ninja…
Using /tmp/torch_extensions as PyTorch extensions root…
Using /tmp/torch_extensions as PyTorch extensions root…
Building extension module fused…
ninja: no work to do.
Loading extension module fused…
Loading extension module fused…
Using /tmp/torch_extensions as PyTorch extensions root…
Using /tmp/torch_extensions as PyTorch extensions root…
Detected CUDA files, patching ldflags
Emitting ninja build file /tmp/torch_extensions/upfirdn2d/build.ninja…
Building extension module upfirdn2d…
ninja: no work to do.
Loading extension module upfirdn2d…
Loading extension module fused…
Using /tmp/torch_extensions as PyTorch extensions root…
The below is a stripped down version of my code.
def train(epoch, step, model, optimizer, scheduler, loader, args, gpu):
model.train()
averages = {'total_loss': Averager()}
starting_step = step
t = time.time()
optimizer.zero_grad()
for batch_idx, images in enumerate(loader):
step += len(images)
images = images.cuda(gpu)
outputs = model(images)
kl_zs, ll_losses, latents, generations = outputs[:4]
prior_variances, posterior_variances = outputs[4:6]
avg_kl_loss = torch.stack(kl_zs).mean()
avg_ll_loss = torch.stack(ll_losses).mean()
avg_kl_loss_penalized = avg_kl_loss * args.kl_lambda
if args.kl_anneal:
anneal_scale = max(0, min(step / args.kl_anneal_end, 1))
avg_kl_loss_penalized *= anneal_scale
total_loss = avg_ll_loss + avg_kl_loss_penalized
averages['total_loss'].add(total_loss.item())
total_loss.backward()
optimizer.step()
if scheduler:
scheduler.step()
optimizer.zero_grad()
if step - starting_step >= args.max_epoch_steps:
break
return averages['total_loss'].item(), step
def main(gpu, args):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
dist.init_process_group(backend='nccl', rank=gpu, world_size=args.num_gpus)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
train_loader, test_loader, image_shape = get_loaders_and_shape(args, rank=gpu)
model, optimizer = get_model(args, image_shape, gpu)
torch.cuda.set_device(gpu)
model.cuda(gpu)
model = DDP(model, device_ids=[gpu], find_unused_parameters=True)
step = 0
start_epoch = 0 # start from epoch 0 or last checkpoint epoch
total_epochs = args.num_epochs
for epoch in range(start_epoch, start_epoch + total_epochs):
train_loss, step = train(epoch,
step,
model,
optimizer,
scheduler,
train_loader,
args,
gpu)
results['train_loss'].append(train_loss)
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description='Independent variational objects.')
parser.add_argument('--dataset',
default='gymnasticsRgb',
type=str)
parser.add_argument('--num_workers',
default=4,
type=int,
help='number of data workers')
parser.add_argument('--num_gpus',
default=1,
type=int,
help='number of gpus.')
parser.add_argument('--debug',
action='store_true',
help='use debug mode (without saving to a directory)')
parser.add_argument('--lr',
default=3e-4,
type=float,
help='learning rate assuming adam.')
parser.add_argument('--weight_decay',
default=0,
type=float,
help='weight decay')
parser.add_argument('--seed', default=0, type=int, help='random seed')
parser.add_argument('--max_epoch_steps', default=200000, type=int)
parser.add_argument('--max_test_steps', default=50000, type=int)
parser.add_argument('--num_epochs', default=250, type=int,
help='the number of epochs to train for. at 200000 ' \
'max_epoch steps, this would go for 2500 epochs to ' \
'reach 5e8 steps.')
parser.add_argument(
'--batch_size',
default=100,
type=int)
parser.add_argument('--optimizer',
default='adam',
type=str,
help='adam or sgd.')
parser.add_argument('--num_transition_layers', type=int, default=4)
parser.add_argument('--num_latents', type=int, default=2)
parser.add_argument('--latent_dim', type=int, default=32)
parser.add_argument('--translation_layer_dim', type=int, default=128)
parser.add_argument(
'--output_variance',
type=float,
default=.25)
args = parser.parse_args()
mp.spawn(main, nprocs=args.num_gpus, args=(args,)) |
st179028 | Could you share the code snippet where you actually load the extensions? I’m assuming you’re using load_inline 5 to load your extra kernels? If so, is this happening for every epoch? |
st179029 | Hi, I am new to PyTorch DistributedDataParallel. For object detection, my model can achieve 79 mAP with nn.DataParallel, but with DistributedDataParallel it only reaches 50+ mAP.
In both setups, every parameter except the batch size is the same. For nn.DataParallel the batch size is 32 on 4 GPUs; for DistributedDataParallel the batch size is 8 per GPU on 4 GPUs, so the total batch size is the same. Is my approach correct? Why does DistributedDataParallel perform worse?
Here is .py with nn.DataParallel:
#train.py
import os
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2,3'
import torch
from model_dla34 import get_pose_net
from loss import CtdetLoss,AverageMeter
from torch.utils.data import DataLoader
from datasets import My_certernet_datasets
import time
datasets = 'pascal'
batch_size = 32
start_epoch = 0
end_epoch = 70
init_lr = 1.25e-4
def main():
train_data = My_certernet_datasets(mode = 'train',datasets = datasets)
print('there are {} train images'.format(len(train_data)))
train_data_loader = DataLoader(dataset=train_data,
num_workers=16,
batch_size=batch_size,
shuffle=True,
pin_memory=True
)
model = get_pose_net()
if torch.cuda.device_count() > 1:
model = torch.nn.DataParallel(model)
model = model.to(device)
criterion = CtdetLoss()
optimizer = torch.optim.Adam(model.parameters(), init_lr)
for epoch in range(start_epoch+1,end_epoch+1):
adjust_lr(optimizer,epoch,init_lr)
train(train_data_loader,model,criterion,optimizer,device,epoch,end_epoch)
if epoch % 10 == 0:
save_model(save_dir + '/model_last.pth',epoch,model,optimizer = None)
def train(train_data_loader,model,criterion,optimizer,device,epoch,end_epoch):
losses = AverageMeter()
model = model.train()
for i ,batch in enumerate(train_data_loader):
start_time = time.time()
for k in batch:
if k != 'meta':
batch[k] = batch[k].to(device)
output = model(batch['input'])
loss_stats = criterion(output,batch)
loss = loss_stats['loss']
optimizer.zero_grad()
loss.backward()
optimizer.step()
end_time = time.time()
ELA_time = (end_time - start_time)*(end_epoch - epoch)*len(train_data_loader)
ELA_time = time.strftime('%H:%M:%S',time.gmtime(ELA_time))
losses.update(loss.item())
print('[epoch:{},{}/{}]'.format(epoch,i,len(train_data_loader)),'current_loss:%.4f'% losses.current,\
'average_loss:%.4f' % losses.avg,'ELA_time:',ELA_time)
def adjust_lr(optimizer,epoch,init_lr,lr_step=[45,60]):
if epoch in lr_step:
lr = init_lr*(0.1**(lr_step.index(epoch) + 1))
print('Drop LR to',lr)
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def save_model(path,epoch,model,optimizer = None):
if isinstance(model,torch.nn.DataParallel):
state_dict = model.module.state_dict()
else:
state_dict = model.state_dict()
data = {'epoch':epoch,'state_dict':state_dict}
if not (optimizer is None):
data['optimizer'] = optimizer.state_dict()
torch.save(data,path)
if __name__ == '__main__':
main()
Here is .py with DistributedDataParallel:
#train_DDP.py
import os
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2,3'
import torch
from model_dla34 import get_pose_net
from loss import CtdetLoss,AverageMeter
from torch.utils.data import DataLoader
from datasets import My_certernet_datasets
import time
from torch.utils.data.distributed import DistributedSampler
from torch.nn.parallel import DistributedDataParallel
torch.distributed.init_process_group(backend='nccl')
datasets = 'pascal'
batch_size = 8
start_epoch = 0
end_epoch = 70
init_lr = 1.25e-4
local_rank = torch.distributed.get_rank()
torch.cuda.set_device(local_rank)
device = torch.device('cuda',local_rank)
def main():
model = get_pose_net()
model.to(device)
model_val = model
if torch.cuda.device_count() > 1:
print("let's use GPU{}!!!".format(local_rank))
model = DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank,find_unused_parameters=True)
train_data = My_certernet_datasets(mode = 'train',datasets = datasets)
print('there are {} train images'.format(len(train_data)))
train_data_loader = DataLoader(dataset=train_data,
num_workers=16,
batch_size=batch_size,
sampler=DistributedSampler(train_data)
)
criterion = CtdetLoss()
optimizer = torch.optim.Adam(model.parameters(), init_lr)
for epoch in range(start_epoch+1,end_epoch+1):
DistributedSampler(train_data).set_epoch(epoch)
adjust_lr(optimizer,epoch,init_lr)
train(train_data_loader,model,criterion,optimizer,device,epoch,end_epoch)
if epoch % 10 == 0 and local_rank==0:
save_model(save_dir + '/model_last_{}.pth'.format(epoch),epoch,model,optimizer = None)
def train(train_data_loader,model,criterion,optimizer,device,epoch,end_epoch):
losses = AverageMeter()
model.train()
for i ,batch in enumerate(train_data_loader):
start_time = time.time()
for k in batch:
if k != 'meta':
batch[k] = batch[k].to(device)
output = model(batch['input'])
loss_stats = criterion(output,batch)
loss = loss_stats['loss']
optimizer.zero_grad()
loss.backward()
optimizer.step()
end_time = time.time()
ELA_time = (end_time - start_time)*(end_epoch - epoch)*len(train_data_loader)
ELA_time = time.strftime('%H:%M:%S',time.gmtime(ELA_time))
losses.update(loss.item())
if local_rank==0:
print('[epoch:{},{}/{}]'.format(epoch,i,len(train_data_loader)),'current_loss:%.4f'% losses.current,\
'average_loss:%.4f' % losses.avg,'ELA_time:',ELA_time)
def adjust_lr(optimizer,epoch,init_lr,lr_step=[45,60]):
if epoch in lr_step:
lr = init_lr*(0.1**(lr_step.index(epoch) + 1))
print('Drop LR to',lr)
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def save_model(path,epoch,model,optimizer = None):
if isinstance(model,torch.nn.DataParallel):
state_dict = model.module.state_dict()
else:
state_dict = model.state_dict()
data = {'epoch':epoch,'state_dict':state_dict}
if not (optimizer is None):
data['optimizer'] = optimizer.state_dict()
torch.save(data,path)
if __name__ == '__main__':
main()
Can someone help me? This has been bothering me for a few days. Thank you very much!!! |
st179030 | Gradients are averaged (divided by the number of processes) when using nn.DistributedDataParallel. This is not the case when using nn.DataParallel. You can multiply them by the number of processes after the call to backward to make them equivalent to the output of nn.DataParallel. |
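A sketch of that "multiply after backward" suggestion (whether it is needed depends on how your loss is reduced; the model and optimizer are assumed to come from the surrounding training code):
loss.backward()
world_size = torch.distributed.get_world_size()
for p in model.parameters():
    if p.grad is not None:
        p.grad.data.mul_(world_size)   # undo DDP's division by the number of processes
optimizer.step()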
st179031 | Thank you for your reply.
I am a bit confused. I think the loss is independent of the batch size, and a larger batch size makes the gradient more robust.
In my case, for nn.DataParallel the loss is divided by 32 (the total batch size over 4 GPUs is 32); for nn.DistributedDataParallel the loss of a single process is divided by 8 (the per-GPU batch size is 8). At that point their gradients are the same, and even when the average of the gradients is computed later (divided by the number of processes), the gradient is almost the same as the former.
st179032 | Hi everyone,
If all_gather is used more than once, the next cuda() call deadlocks, and GPU utilization is at 100%.
Details:
I wrapped all_gather in collect:
def collect(x):
    x = x.contiguous()
    out_list = [torch.zeros_like(x, device=x.device, dtype=x.dtype)
                for _ in range(dist.get_world_size())]
    dist.all_gather(out_list, x)
    return torch.cat(out_list, dim=0)
Next:
a_all = collect(a)
b_all = collect(b)
c = torch.rand(10,10).cuda() # deadlock! No error report.
The deadlock doesn’t happen if I don’t collect b_all or don’t use cuda().
My code runs on 4 GPUs using DDP.
Pytorch version: 1.4.
This problem has been confusing me for a couple of days. I’ll appreciate any help! |
st179033 | Could you provide a small repro for the problem that you are seeing? I ran the following script locally on my GPU machine and didn’t notice any deadlocks:
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
def collect(x):
    x = x.contiguous()
    out_list = [torch.zeros_like(x, device=x.device, dtype=x.dtype)
                for _ in range(dist.get_world_size())]
    dist.all_gather(out_list, x)
    return torch.cat(out_list, dim=0)
def run(rank, size):
    """ Distributed function to be implemented later. """
    print ('START: {}'.format(rank))
    a = torch.rand(10, 10).cuda(rank)
    b = torch.rand(10, 10).cuda(rank)
    a_all = collect(a)
    b_all = collect(b)
    c = torch.rand(10,10).cuda(rank)
    print ('DONE : {}'.format(rank))
def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)
if __name__ == "__main__":
    size = 4
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
Running the script:
$ python /tmp/test_allgather.py
START: 0
START: 1
START: 2
START: 3
DONE : 2
DONE : 3
DONE : 1
DONE : 0 |
st179034 | Hi All,
I have a question on combining model weights accurately.
I made one NN and trained the model separately on two datasets. Now I am trying to obtain a single model out of these two models by combining the weights.
What are the different ways to combine the weights, in anyone’s opinion?
Thanks in advance for answering this question |
st179035 | Hi Jees,
let me try to make sure I understand your use case completely.
You have one model architecture and you initialized two models using it.
These two model were trained on two different datasets and converged.
Now you would like to “combine” all parameters of both models and create a single one.
Using this new single model you would now want to predict both datasets or just one of them?
Do both datasets contain the same classes or are the targets completely different? |
st179036 | Thanks a lot @ptrblck for the queries.
You have one model architecture and you initialized two models using it.
Yes that is very true
These two model were trained on two different datasets and converged.
that is also correct
Now you would like to “combine” all parameters of both models and create a single one.
that is the idea, yes
Using this new single model you would now want to predict both datasets or just one of them?
I want it to predict both of them with reasonable accuracy
Do both datasets contain the same classes or are the targets completely different?
completely different (we are not sure but let me consider that case)
Thanks for the queries. I hope this helps you give me some pointers. |
st179037 | Hi,
Can we combine two weight files into one weight file without catastrophic forgetting on data A and data B?
In our case, we trained on some initial data, and after some time I got some more new data. So I train only on the new data and combine both weight files into one.
The targets are the same classes and both data files are in the same domain.
I combined both models, but I found that catastrophic forgetting happens.
Please suggest what the best way to do this is. |
st179038 | I don’t think combining different trained parameters will automatically result in a good new model, as both models might (and most likely) have converged to different local minima.
The mean (I assume you are taking the average of all parameters) will not necessarily yield another minimum on the loss surface.
You could try to fine tune your model by retraining with the new samples and a low learning rate and check the validation accuracy again on the updated dataset. |
st179039 | dist.get_rank() returns either 0 or 1 even though I launch the training with
python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --nnodes=2 \
    --node_rank=0 \
    --master_addr="$MASTER_ADDR" \
    --master_port="$MASTER_PORT" \
    train_dist.py &
I am using 2 nodes each with 4 GPUs. |
st179040 | Solved by mrshenli in post #4
If you have 8 processes (4 processes per node with 2 node), world_size should be 8 for init_process_group. But you don’t need to set that, as the launcher script will set the env vars for you properly.
I am not sure what I should set as RANK but I set it as 0 and 1 for the two nodes
There are… |
st179041 | Hey @ankahira
Two questions:
What parameters did you pass to init_process_group invocation in train_dist.py?
Can you check if RANK and WORLD_SZIE are set properly for each process?
Sudarshan wrote a great example 137 of how to use launcher.py, which might be helpful to you. |
st179042 | Actually I am a bit confused about this. I understand that I should set WORLD_SIZE to the number of nodes, i.e. 2. I am not sure what I should set as RANK, but I set it as 0 and 1 for the two nodes. For init_process_group I pass each of the GPUs, as in 0, 1, 2, 3. Something like this:
def init_processes(rank, size, fn, backend='nccl'):
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    size = 4
    processes = []
    for rank in range(size):
        p = Process(target=init_processes, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
st179043 | ankahira:
I should set WORLD_SIZE to the number of nodes, i.e. 2. I am not sure
If you have 8 processes (4 processes per node with 2 nodes), world_size should be 8 for init_process_group. But you don’t need to set that, as the launcher script will set the env vars for you properly.
I am not sure what I should set as RANK but I set it as 0 and 1 for the two nodes
There are two ranks here:
node rank: this is what you provide for --node_rank to the launcher script, and it is correct to set it to 0 and 1 for the two nodes.
process rank: this rank should be --node_rank X --nproc_per_node + local GPU id, which should be 0~3 for the four processes in the first node, and 4~7 for the four processes in the second node. But you don’t need to set this for init_process_group either, as the launcher script should have set the env var for you.
With the params you provided to the launcher script, the following should be sufficient to init the process group.
dist.init_process_group(backend)
If you also need the local rank for DDP, you will need to parse it from the arg, and then pass it to the DDP constructor. Sth like:
model = torch.nn.parallel.DistributedDataParallel(model,
device_ids=[arg.local_rank],
output_device=arg.local_rank)
Check out the readme 78 in the launcher script. |
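Putting those pieces together, a minimal per-process skeleton for use with the launcher could look like this (a sketch; MyModel is a placeholder for the real model):
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)   # filled in by the launcher
args = parser.parse_args()

dist.init_process_group(backend='nccl')     # RANK and WORLD_SIZE come from the env vars
torch.cuda.set_device(args.local_rank)

model = MyModel().cuda(args.local_rank)     # MyModel is a placeholder
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[args.local_rank], output_device=args.local_rank)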
st179044 | @ankahira Can you please share your train_dist.py? I still have problems using the launcher! |
st179045 | Hi, I’ve tried to set CUDA_VISIBLE_DEVICES = '1' in the main function, but when I move the model to CUDA, it does not move to GPU1 but to GPU0 instead (resulting in OOM because GPU0 is in use). Please tell me if I’m wrong.
here is my code:
in train.py:
def main(config_file_path):
config = SspmYamlConfig(config_file_path)
dataloader_cfg = config.get_dataloader_cfg()
trainer_cfg = config.get_trainer_cfg()
logger_cfg = config.get_logger_cfg()
model_cfg = config.get_model_cfg()
pose_dataset_cfg = config.get_pose_dataset_cfg()
data_augmentation_cfg = config.get_augmentation_cfg()
target_generator_cfg = config.get_target_generator_cfg()
learning_rate = trainer_cfg['optimizer']['learning_rate']
# parsing device = [1] by config
device = ','.join(list(map(str, trainer_cfg['device'])))
os.environ['CUDA_DEVICE_ORDER']= 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = device
model = getModel(model_cfg)
train_loader = DataLoader(train_dataset, **dataloader_cfg['train'])
val_loader = DataLoader(val_dataset, **dataloader_cfg['val'])
trainer = Trainer(
model, optimizer, logger,
writer, config, train_loader, val_loader
)
Trainer class is inherited from BaseTrainer where the model was transferd to cuda
class BaseTrainer(ABC):
def __init__(self, model, optimizer, logger, writer, config):
self.config = config
self.logger = logger
self.writer = writer
self.optimizer = optimizer
self.trainer_config = config.get_trainer_cfg()
self.device_list = self.trainer_config['device'] #device list is [1]
self.device_type = self._check_gpu(self.device_list)
self.device = torch.device(self.device_type)
self.model = model
self.model = self.model.to(self.device)
self.model = torch.nn.DataParallel(self.model)
def _check_gpu(self, gpus):
if len(gpus) > 0 and torch.cuda.is_available():
pynvml.nvmlInit()
for i in gpus:
handle = pynvml.nvmlDeviceGetHandleByIndex(i)
meminfo = pynvml.nvmlDeviceGetMemoryInfo(handle)
memused = meminfo.used / 1024 / 1024
self.logger.info('GPU{} used: {}M'.format(i, memused))
if memused > 1000:
pynvml.nvmlShutdown()
raise ValueError('GPU{} is occupied!'.format(i))
pynvml.nvmlShutdown()
return 'cuda'
else:
self.logger.info('Using CPU!')
return 'cpu' |
st179046 | Solved by ptrblck in post #4
If all devices are the same, use
CUDA_VISIBLE_DEVICES=1,2 python script.py args
to run the script and inside the script use cuda:0 and cuda:1 (or the equivalent .cuda(0), .cuda(1) commands).
However, if the mapping is not what you expect via nvidia-smi, you could force the PCI bus order order via… |
st179047 | If you are masking devices via CUDA_VISIBLE_DEVICES all visible devices will be mapped to device ids in the range [0, nb_visible_devices].
E.g. if your system has two GPUs and you are using CUDA_VISIBLE_DEVICES=1, you would have to access it inside the script as cuda:0. |
st179048 | Thank you for your quick reply, but I have a question:
I have 3 GPUs. When I want to use only GPU1 and GPU2 (GPU0 is in use), what should I do? |
st179049 | If all devices are the same, use
CUDA_VISIBLE_DEVICES=1,2 python script.py args
to run the script and inside the script use cuda:0 and cuda:1 (or the equivalent .cuda(0), .cuda(1) commands).
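As a tiny illustration (the tensors are placeholders), after masking with CUDA_VISIBLE_DEVICES=1,2 the script only sees two devices, and cuda:0/cuda:1 map to physical GPU1/GPU2:
import torch

# Launched as: CUDA_VISIBLE_DEVICES=1,2 python script.py
print(torch.cuda.device_count())     # prints 2: only the masked-in devices are visible

x = torch.randn(4, 4).to('cuda:0')   # lands on physical GPU1
y = torch.randn(4, 4).to('cuda:1')   # lands on physical GPU2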
However, if the mapping is not what you expect via nvidia-smi, you could force the PCI bus ordering by putting CUDA_DEVICE_ORDER=PCI_BUS_ID in front of the aforementioned command. |
st179050 | I’ve found that I need to set CUDA_VISIBLE_DEVICES at the beginning of my script. It’s my mistake, thank you for your help. I will close this topic. |
st179051 | Hi all,
I am trying to run PyTorch in a distributed setup.
I run test1.py as below:
import torch
import torch.distributed as dist

def main(rank, world):
    if rank == 0:
        x = torch.tensor([1., -1.])  # Tensor of interest
        dist.send(x, dst=1)
        print('Rank-0 has sent the following tensor to Rank-1')
        print(x)
    else:
        z = torch.tensor([0., 0.])  # A holder for receiving the tensor
        dist.recv(z, src=0)
        print('Rank-1 has received the following tensor from Rank-0')
        print(z)

if __name__ == '__main__':
    dist.init_process_group(backend='mpi')
    main(dist.get_rank(), dist.get_world_size())
Then I run it on a single machine:
mpiexec -n 2 python test1.py
Finally, the error is
Traceback (most recent call last):
File "test1.py", line 17, in <module>
dist.init_process_group(backend='mpi')
File "/cluster/home/cnphuong/my_environment/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 392, in init_process_group
timeout=timeout)
File "/cluster/home/cnphuong/my_environment/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 452, in _new_process_group_helper
raise RuntimeError("Distributed package doesn't have MPI built in")
RuntimeError: Distributed package doesn't have MPI built in
Traceback (most recent call last):
File "test1.py", line 17, in <module>
dist.init_process_group(backend='mpi')
File "/cluster/home/cnphuong/my_environment/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 392, in init_process_group
timeout=timeout)
File "/cluster/home/cnphuong/my_environment/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 452, in _new_process_group_helper
raise RuntimeError("Distributed package doesn't have MPI built in")
RuntimeError: Distributed package doesn't have MPI built in
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[31741,1],1]
Exit code: 1
--------------------------------------------------------------------------
I also installed pytorch with
pip install torch torchvision
Please help me.
Thanks, |
st179052 | You need to build pytorch from source to enable MPI: https://pytorch.org/docs/stable/distributed.html#backends-that-come-with-pytorch 237 |
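Once a source build with MPI support finishes, a quick sanity check before launching with mpiexec could be (a generic check, not from the linked docs):
import torch.distributed as dist

# True only if PyTorch was compiled against an MPI installation
print(dist.is_mpi_available())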
st179053 | Hi,
I followed the instructions from the GitHub repo.
clone code from git.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
install dependencies.
pip install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi
run setup and install. The errors were here.
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install
Building wheel torch-1.6.0a0+32bbf12
-- Building version 1.6.0a0+32bbf12
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/cluster/home/cnphuong/pytorch/torch -DCMAKE_PREFIX_PATH=/cluster/software/Anaconda3/2019.03/bin/../ -DNUMPY_INCLUDE_DIR=/cluster/home/cnphuong/my_environment/lib/python3.6/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/cluster/home/cnphuong/my_environment/bin/python -DPYTHON_INCLUDE_DIR=/cluster/software/Python/3.6.6-foss-2018b/include/python3.6m -DPYTHON_LIBRARY=/cluster/software/Python/3.6.6-foss-2018b/lib/libpython3.6m.so.1.0 -DTORCH_BUILD_VERSION=1.6.0a0+32bbf12 -DUSE_NUMPY=True /cluster/home/cnphuong/pytorch
CMake Error: The source directory "/cluster/home/cnphuong/pytorch" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
Traceback (most recent call last):
File "setup.py", line 738, in <module>
build_deps()
File "setup.py", line 320, in build_deps
cmake=cmake)
File "/cluster/home/cnphuong/pytorch/tools/build_pytorch_libs.py", line 59, in build_caffe2
rerun_cmake)
File "/cluster/home/cnphuong/pytorch/tools/setup_helpers/cmake.py", line 324, in generate
self.run(args, env=my_env)
File "/cluster/home/cnphuong/pytorch/tools/setup_helpers/cmake.py", line 141, in run
check_call(command, cwd=self.build_dir, env=env)
File "/cluster/software/Python/3.6.6-foss-2018b/lib/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '-GNinja', '-DBUILD_PYTHON=True', '-DBUILD_TEST=True', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_INSTALL_PREFIX=/cluster/home/cnphuong/pytorch/torch', '-DCMAKE_PREFIX_PATH=/cluster/software/Anaconda3/2019.03/bin/../', '-DNUMPY_INCLUDE_DIR=/cluster/home/cnphuong/my_environment/lib/python3.6/site-packages/numpy/core/include', '-DPYTHON_EXECUTABLE=/cluster/home/cnphuong/my_environment/bin/python', '-DPYTHON_INCLUDE_DIR=/cluster/software/Python/3.6.6-foss-2018b/include/python3.6m', '-DPYTHON_LIBRARY=/cluster/software/Python/3.6.6-foss-2018b/lib/libpython3.6m.so.1.0', '-DTORCH_BUILD_VERSION=1.6.0a0+32bbf12', '-DUSE_NUMPY=True', '/cluster/home/cnphuong/pytorch']' returned non-zero exit status 1.
I only want to run with multiple CPUs. Are these steps correct?
Thanks, |
st179054 | ph0123:
CMake Error: The source directory “/cluster/home/cnphuong/pytorch” does not appear to contain CMakeLists.txt.
Can you check if that directory has a CMakeLists.txt file? Usually there should be a CMakeLists.txt file in the top level directory when you clone pytorch. |
st179055 | pritamdamania87:
…that directory has a CMakeLists.txt file? Usually there should be a CMakeLists.txt file in the top level directory when…
Oh. I did not see CMakeLists.txt. I will try to clone again.
Thanks, |
st179056 | Hi,
I am working with a script in which each training instance has a different number of points, so when I apply DDP with SyncBN, how are the BN stats on different GPUs synced? I have traced the source code to here “https://github.com/pytorch/pytorch/blob/a4a5b6fcaae26fe241d32a7c4b2091ee69b600bb/torch/nn/modules/_functions.py 2”, L33-L43:
# calcualte global mean & invstd
mean, invstd = torch.batch_norm_gather_stats_with_counts(
input,
mean_all,
invstd_all,
running_mean,
running_var,
momentum,
eps,
count_all.view(-1).long().tolist()
)
In my case the “mean_all” and “invstd_all” should be a weighted average according to the different “counts” on the GPUs; is that what actually happens?
BTW, the SyncBN in NVIDIA Apex simply averages “mean_all” and “invstd_all”, which does not support different counts across GPUs.
Thanks very much |
st179057 | In my case the “mean_all” and “invstd_all” should be weighted average according to different “counts” in GPUs, is it the actual situation?
I think you’re right.
torch.batch_norm_gather_stats_with_counts leads to aten/src/ATen/native/cuda/Normalization.cuh.
The function you’re looking for is batch_norm_reduce_statistics_kernel.
In the loop that starts at L405, you can see that all statistics are combined w.r.t. their own counts:
for (int j = 0; j < world_size; j++) {
scalar_t count = counts[j];
accscalar_t m = vec_mean[j][i];
accscalar_t v = accscalar_t(1.0) / (vec_invstd[j][i]);
v = (v * v - epsilon) * count;
accscalar_t factor = 1.0 / (n + count);
var_n += v + (avg - m) * (avg - m) * n * count * factor;
avg = n * factor * avg + count * factor * m;
n += count;
} |
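To see the same count-weighted merge outside the kernel, here is a small numerical sketch in plain PyTorch (not the actual SyncBN code path, just a check that merging per-replica statistics by their counts reproduces the global statistics when the per-GPU batch sizes differ):
import torch

# Fake per-GPU batches with different numbers of samples
chunks = [torch.randn(8, 1, dtype=torch.float64), torch.randn(24, 1, dtype=torch.float64)]

# Statistics each replica would compute locally
counts = [c.shape[0] for c in chunks]
means = [c.mean(dim=0) for c in chunks]
vars_ = [c.var(dim=0, unbiased=False) for c in chunks]

# Count-weighted merge, conceptually what batch_norm_gather_stats_with_counts does
n = sum(counts)
mean = sum(m * k for m, k in zip(means, counts)) / n
var = sum((v + (m - mean) ** 2) * k for v, k, m in zip(vars_, counts, means)) / n

full = torch.cat(chunks, dim=0)
print(torch.allclose(mean, full.mean(dim=0)))                # True
print(torch.allclose(var, full.var(dim=0, unbiased=False)))  # True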
st179058 | Hi. I have a machine with multiple GPUs.
And I wrote training code with Single-Process Multi-GPU according to these docs 27.
Single-Process Multi-GPU
In this case, a single process will be spawned on each host/node and each process will operate on all the GPUs of the node where it’s running. To use DistributedDataParallel in this way, you can simply construct the model as the following:
But I found that it is slower than just using a single GPU. There must be something wrong in my code.
code
# file: main.py
torch.distributed.init_process_group(backend="nccl")
# dataset
class RandomDataset(Dataset):
def __getitem__(self, index):
return torch.randn(3,255,255),0
def __len__(self):
return 100
datasets = RandomDataset()
sampler = DistributedSampler(datasets)
dataloader = DataLoader(datasets,16,sampler=sampler)
# model
model = torch.nn.Sequential(
torchvision.models.resnet101(False),
torch.nn.Linear(1000,2)
).cuda()
model = DistributedDataParallel(model)
begin_time = time.time()
# training loop
for i in range(10):
for x, y in dataloader:
x = x.cuda()
y = y.reshape(-1).cuda()
optimizer.zero_grad()
output = model(x)
loss = critertion(output,y)
loss.backward()
optimizer.step()
print('Cost:',time.time()-begin_time)
launch with
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=1 main.py
Time cost:
- DistributedDataParallel with single-process and 2-gpu: 22s
- Single-gpu: 19s
GPU memory cost:
- DistributedDataParallel with single-process and 2-gpu: 3101MiB (GPU 0) / 2895MiB (GPU 1)
- Single-gpu: 4207MiB
I’ve been debugging and looking through the docs for hours. I’d appreciate it if somebody could have a look.
Thanks. |
st179059 | Solved by mrshenli in post #2
Hi @Jack5358
DistributedDataParallel’s single-process-multi-gpu mode is not recommended, because it does parameter replication, input split, output gather, etc. in every iteration, and Python GIL might get in the way. If you just have one machine, with one process per machine, then it will be very … |
st179060 | Hi @Jack5358
DistributedDataParallel’s single-process-multi-gpu mode is not recommended, because it does parameter replication, input split, output gather, etc. in every iteration, and Python GIL might get in the way. If you just have one machine, with one process per machine, then it will be very similar to DataParallel.
The recommended solution is to use single-process-single-gpu, which means, in your use case with two GPUs, you can spawn two processes, and each process exclusively works on one GPU. This should be faster than the current setup. |
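For reference, a minimal single-process-single-GPU sketch with mp.spawn could look like the following (the model, batch, learning rate and master address/port are placeholders for a single machine):
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(10, 10).cuda(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(10):
        opt.zero_grad()
        out = model(torch.randn(16, 10, device=f'cuda:{rank}'))
        out.sum().backward()   # DDP overlaps gradient all-reduce with this backward
        opt.step()

    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = torch.cuda.device_count()   # one process per GPU
    mp.spawn(worker, args=(world_size,), nprocs=world_size, join=True)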
st179061 | Hi @mrshenli
Do you have a code reference for this recommended solution you are proposing, single-process-single-gpu?
Thanks
gmondaut |
st179062 | You can find some examples under the “Multi-Process Single-GPU” section in https://pytorch.org/docs/stable/nn.html#distributeddataparallel 252. |
st179063 | Hello, I am hitting the same slowness issue. I use multiple nodes and multiple GPUs, also with spawn. After checking the code, I find that most of the time is spent in optimizer.step(). Any solutions? |
st179064 | Hey @xiefeiwhu
optimizer.step() is not part of the DDP forward-backward. Which optimizer are you using? And do you observe the same slowness in local training?
BTW, how did you measure the delay? You might need to use CUDA events 10 to get accurate timing measures, as there could be pending ops in the CUDA stream so that time.time() cannot faithfully represent the time cost. |
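For reference, a small sketch of timing a region with CUDA events rather than time.time() (the work between the two events is a placeholder):
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
# ... forward / backward / optimizer.step() of your model here ...
end.record()

torch.cuda.synchronize()               # wait until all queued kernels have finished
print(start.elapsed_time(end), 'ms')   # elapsed_time returns milliseconds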
st179065 | Sorry, I missed this. Yes, please check out this example 57 that uses device_ids=[rank] to specify which device DDP should use. |
st179066 | I use Adam as the optimizer. After a careful check of the code, the slowness seems to come from the communication time between the nodes, as my model has about 180M params, i.e. 760MB. The computation time is shorter than the communication time. Then I expanded from 2 to 4 nodes, and the communication time grew, but not by 2x, which accelerates the training procedure to some extent. |
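As a rough back-of-the-envelope check (the 180M parameter count comes from the post above; the ring all-reduce factor is a generic estimate, not a measurement), the per-rank traffic grows only mildly with the number of ranks, which is consistent with the communication time not doubling when going from 2 to 4 nodes:
num_params = 180_000_000        # ~180M parameters, as mentioned above
grad_bytes = num_params * 4     # fp32 gradients, roughly the ~760MB quoted

for n in (2, 4, 8):
    per_rank_mb = 2 * (n - 1) / n * grad_bytes / 1e6
    print(f'{n} ranks: ~{per_rank_mb:.0f} MB moved per rank per all-reduce')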
st179067 | You can try ProcessGroupRoundRobin and see if it helps in reducing the communication time. Example usage: https://github.com/pytorch/pytorch/blob/master/test/distributed/test_c10d.py#L1511 55. Note that this API is not officially supported yet. |