st175568
When trying to train a model in DDP which has a sync_batch_norm layer with uneven inputs, the suggestion is to set the parameter throw_on_early_termination to True within the join() context manager of DDP. I wanted to know what the best way is to handle the exception raised by the join() context manager in case one rank exhausts all its inputs and we just want the training to continue normally for the next iterations. Should we just swallow the exception and move forward? Should we put a CUDA synchronize call as part of the exception handling? Anything else?
st175569
Solved by rvarm1 in post #2
st175570
we just want the training to continue normally for next iterations

Assuming you mean continue training with DDP, this exception basically indicates that this is not possible - one rank has finished all its inputs while others have not. As a result, if you continue training, DDP will hang (probably at the SyncBN step), and the exception is designed to indicate this. The main use case of the exception is to use it as a “signal” to finish the training process, i.e. all processes will raise this exception, and application code can catch it and terminate the main training loop, saving/evaluating the trained model appropriately. Usually inputs are only off by a few examples across ranks, so it should be fine to terminate training at this stage - if this isn’t true then you may want to look into how to better balance the dataset across ranks.
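To make that concrete, here is a minimal sketch of treating the exception as an end-of-epoch signal. The names ddp_model, optimizer, loss_fn and loader are placeholders for objects the training script already has, and the concrete exception type (a RuntimeError, as far as I can tell) may vary between releases:

try:
    with ddp_model.join(throw_on_early_termination=True):
        for inputs, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(ddp_model(inputs), labels)
            loss.backward()
            optimizer.step()
except RuntimeError:
    # every rank raises once any rank has exhausted its inputs; treat this as the
    # "epoch finished" signal rather than a fatal error
    pass

# all ranks have left the loop at this point; save / evaluate the model here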
st175571
I am using 2 Nvidia GPUs for image training with DistributedDataParallel, but I am getting an unexpected error:

dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: flock: Input/output error
Aborted (core dumped)

If I use one GPU it works fine, but I get the error above when I use 2 GPUs in parallel. Here are some functions used for initialization:

def create_grids(self, img_size=416, gridsize=(13, 13), device='cpu', type=torch.float32):
    """
    calculate a grid, with the defined gridsize, over the input-image
    img_size: (width and height) of the input-image
    gridsize: size of the grid projected over the input-image
    device: cpu or gpu-index where to run the calculation on
    type: type of (float) nvidia or cpu to use on device
    """
    nx, ny = gridsize  # x and y grid size
    try:
        self.img_size = max(img_size)  # take the biggest dimension out of width or height, to calculate stride
    except TypeError:  # if only one dimension is given, take that
        self.img_size = int(img_size)
    self.stride = self.img_size / max(gridsize)

    # build xy offsets
    yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
    self.grid_xy = torch.stack((xv, yv), 2).to(device).type(type).view((1, 1, ny, nx, 2))

    # build wh gains
    self.anchor_vec = self.anchors.to(device) / self.stride
    self.anchor_wh = self.anchor_vec.view(1, self.number_of_anchers, 1, 1, 2).to(device).type(type)
    self.gridsize = torch.Tensor(gridsize).to(device)
    self.nx = nx
    self.ny = ny

Precisely, the error comes from the last line of the section below, where I initialize my distributed training:

# Initialize distributed training
if len(gpu_list) > 1:
    # generate the path of a (non-existing) file, used to set up the distributed learning
    assert distributed_folder, "The distributed-training folder isn't set"  # check it is not the default 0
    distributed_learning_filename = str(Nnet_name) + "_distlearn_setup"  # remove this file when the program is stopped!!
    distributed_init_filepath = os.path.join(distributed_folder, distributed_learning_filename)
    # there is more than 1 GPU-index, so use distributed training
    dist.init_process_group(backend='nccl',  # use distributed backend 'nccl'
                            init_method='file://' + str(distributed_init_filepath),  # file used to set up the distributed learning
                            world_size=distributed_world_size,  # number of nodes for distributed training
                            rank=distributed_node_rank)  # distributed training node rank
    model = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)
    model.yolo_layers = model.module.Get_YOLO_layers_list()  # move yolo layer indices to top level

Could you give me some idea about the error above, or any suggestion as to why I'm getting it? Please let me know if you want more info. Looking forward to hearing any suggestions.
st175572
Solved by lucastononrodrigues in post #2
st175573
Are you using multiple machines? If not, why don’t you try nn.DataParallel instead?
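For anyone landing here, a minimal, runnable sketch of the nn.DataParallel approach (with a toy stand-in for the actual YOLO model):

import torch
import torch.nn as nn

# toy stand-in for the actual model discussed above
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 5))

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicates the model and splits each batch across the visible GPUs
model = model.cuda()

x = torch.randn(32, 10).cuda()  # dummy batch; DataParallel scatters it along dim 0
out = model(x)                  # outputs are gathered back on the default device
print(out.shape)                # torch.Size([32, 5])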
st175574
Thanks @lucastononrodrigues for your quick and efficient feedback. I tried nn.DataParallel. It worked for a second and then threw another error:

AttributeError: 'YOLOLayer' object has no attribute 'gridsize'

The function related to this is given below:

def create_grids(self, img_size=416, gridsize=(13, 13), device='cpu', type=torch.float32):
    """
    calculate a grid, with the defined gridsize, over the input-image
    img_size: (width and height) of the input-image
    gridsize: size of the grid projected over the input-image
    device: cpu or gpu-index where to run the calculation on
    type: type of (float) nvidia or cpu to use on device
    """
    nx, ny = gridsize  # x and y grid size
    try:
        self.img_size = max(img_size)  # take the biggest dimension out of width or height, to calculate stride
    except TypeError:  # if only one dimension is given, take that
        self.img_size = int(img_size)
    self.stride = self.img_size / max(gridsize)

    # build xy offsets
    yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
    self.grid_xy = torch.stack((xv, yv), 2).to(device).type(type).view((1, 1, ny, nx, 2))

    # build wh gains
    self.anchor_vec = self.anchors.to(device) / self.stride
    self.anchor_wh = self.anchor_vec.view(1, self.number_of_anchers, 1, 1, 2).to(device).type(type)
    self.gridsize = torch.Tensor(gridsize).to(device)
    self.nx = nx
    self.ny = ny

The error now comes from these lines:

for yololayer in yolo_layers_list:
    # get number of grid points and anchor vec for this yolo layer
    ng, anchor_vec = yololayer.gridsize, yololayer.anchor_vec

The complete function is given below:

def build_targets(model, targets):
    # targets = [image, class, x, y, w, h]
    number_of_targets = len(targets)
    tcls, tbox, indices, av = [], [], [], []

    # get the yolo-layers list and the number_of_classes
    multi_gpu = type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)  # check if this instance of the model runs distributed
    if multi_gpu:
        yolo_layers_list = model.module.Get_YOLO_layers_list()  # if the model runs distributed, Get_YOLO_layers_list() is stored under module
        number_of_classes = model.module.num_classes  # if the model runs distributed, num_classes is stored under module
    else:
        yolo_layers_list = model.Get_YOLO_layers_list()
        number_of_classes = model.num_classes

    # go over the yolo-detector layers in the model
    for yololayer in yolo_layers_list:
        # get number of grid points and anchor vec for this yolo layer
        ng, anchor_vec = yololayer.gridsize, yololayer.anchor_vec

        # iou of targets-anchors
        t, a = targets, []
        gwh = t[:, 4:6] * ng
        if number_of_targets:
            # use all anchors
            iou = torch.stack([wh_iou(x, gwh) for x in anchor_vec], 0)
            number_of_anchors = len(anchor_vec)
            a = torch.arange(number_of_anchors).view((-1, 1)).repeat([1, number_of_targets]).view(-1)
            t = targets.repeat([number_of_anchors, 1])
            gwh = gwh.repeat([number_of_anchors, 1])
            iou = iou.view(-1)  # use all ious

            # reject anchors below iou_thres (OPTIONAL, increases P, lowers R)
            j = iou > model.hyp['iou_t']
            t, a, gwh = t[j], a[j], gwh[j]

        # Indices
        b, c = t[:, :2].long().t()  # target image, class
        gxy = t[:, 2:4] * ng  # grid x, y
        gi, gj = gxy.long().t()  # grid x, y indices
        indices.append((b, a, gj, gi))

        # GIoU
        gxy -= gxy.floor()  # xy
        tbox.append(torch.cat((gxy, gwh), 1))  # xywh (grids)
        av.append(anchor_vec[a])  # anchor vec

        # Class
        tcls.append(c)
        if c.shape[0]:  # if any targets
            assert c.max() <= number_of_classes, 'Target classes exceed model classes'

    return tcls, tbox, indices, av
st175575
I could be wrong, but it seems to me that you are not calling create_grids when you initialize the model; you'd need each layer to have self.gridsize defined. Is create_grids defined inside each layer?
st175576
Here is my complete YOLO model; could you please point out where, if possible?

class YOLOLayer(nn.Module):
    def __init__(self, anchors, number_of_classes, img_size):
        """
        anchors: list of anchors to be used by this detection-layer
        number_of_classes: total number of classes that the network can detect
        img_size: (width, height) of the detection layer (input-image)
        """
        super(MyYoloModel.YOLOLayer, self).__init__()
        self.anchors = torch.FloatTensor(anchors)
        self.number_of_anchers = len(anchors)  # number of anchors (3) per Yolo-detection-layer
        self.number_of_classes = number_of_classes  # number of classes
        self.nx = 0  # initialize number of x gridpoints
        self.ny = 0  # initialize number of y gridpoints

    def forward(self, p, img_size):
        batch_size, ny, nx = p.shape[0], p.shape[-2], p.shape[-1]  # get information about the input matrix
        # check if we need to calculate the grid
        if (self.nx, self.ny) != (nx, ny):
            NNtools.create_grids(self, img_size, (nx, ny), p.device, p.dtype)

        # p.view(batch_size, 255, 13, 13) --> (batch_size, 3, 13, 13, 85)
        # (batch_size, anchors, grid, grid, classes + xywh)
        p = p.view(batch_size, self.number_of_anchers, self.number_of_classes + 5, self.ny, self.nx).permute(0, 1, 3, 4, 2).contiguous()  # prediction

        # check if we are training or running inference on the network
        if self.training:
            return p
        else:  # inference
            io = p.clone()  # inference output
            io[..., 0:2] = torch.sigmoid(io[..., 0:2]) + self.grid_xy  # xy
            io[..., 2:4] = torch.exp(io[..., 2:4]) * self.anchor_wh  # wh yolo method
            io[..., :4] *= self.stride
            torch.sigmoid_(io[..., 4:])
            if self.number_of_classes == 1:
                io[..., 5] = 1  # single-class model
            # reshape from [1, 3, 13, 13, 85] to [1, 507, 85]
            return io.view(batch_size, -1, 5 + self.number_of_classes), p

def Get_YOLO_layers_list(self):
    """
    Returns a list with the layers of the model that are of the 'YOLOLayer'-class
    """
    detector_list = []
    for layer in self.children():
        if type(layer) is self.YOLOLayer:
            detector_list.append(layer)  # add the instance of this 'YOLOLayer' to the list
    return detector_list
st175577
Inside YOLOLayer you should define gridsize if you want to be able to read it in:

for yololayer in yolo_layers_list:
    # get number of grid points and anchor vec for this yolo layer
    ng, anchor_vec = yololayer.gridsize, yololayer.anchor_vec
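One way to do that (just a sketch of the idea, with an arbitrary default grid size; the real fix would also need anchor_vec, stride, etc. initialized, e.g. by calling create_grids from __init__) is to give the attribute a value in __init__ so it exists before the first forward pass:

import torch
import torch.nn as nn

class YOLOLayer(nn.Module):
    def __init__(self, anchors, number_of_classes, img_size, default_grid=(13, 13)):
        super().__init__()
        self.anchors = torch.FloatTensor(anchors)
        self.number_of_anchers = len(anchors)
        self.number_of_classes = number_of_classes
        self.nx, self.ny = 0, 0
        # define gridsize up front so code such as build_targets can read it
        # even if forward() (and therefore create_grids) has not run yet
        self.gridsize = torch.Tensor(default_grid)

layer = YOLOLayer(anchors=[(10, 13), (16, 30), (33, 23)], number_of_classes=80, img_size=416)
print(layer.gridsize)  # tensor([13., 13.])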
st175578
I’m using torch in WSL (Windows Subsystem for Linux). I installed torch and torchvision, but still got an error when I tried to use torch.distributed:

/usr/bin/python: No module named torch.distributed

Maybe it doesn’t support WSL?
st175579
WSL1 doesn’t support GPU, but I think if you install the PyTorch CPU version, torch.distributed should be available too.
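A quick way to check what the installed build actually provides (assuming torch itself imports):

import torch

print(torch.__version__)
try:
    import torch.distributed as dist
    print("torch.distributed importable, is_available() =", dist.is_available())
except ImportError as err:
    print("torch.distributed is not present in this build:", err)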
st175580
Hi everyone! I use a piece of PyTorch code that runs in a single-machine distributed setting. The code contains all_gather and all_reduce operations to gather predictions from each GPU and calculate metrics, respectively, without any noticeable slowdown in training speed. I recently added an extra bit to the code for a new thing that I'm trying, where I need to broadcast a probability sampled from a uniform distribution (a single number) to all GPUs, so all of them have the same probability value. I added it within the training loop and it looks like this:

if dist.get_rank() == 0:
    prob = torch.rand(1).cuda()
else:
    prob = torch.zeros(1).cuda()
dist.broadcast(prob, 0)

After adding this, the code becomes significantly slower! Any ideas why? Thank you!
st175581
I think in your example the creation of prob when rank != 0 should be prob = torch.zeros(1, device=rank).

Your code:

def worker(gpu, *args):
    ...
    if gpu == 0:
        prob = torch.rand(1).cuda()  # .to(device=gpu)
        print(gpu, prob)
    else:
        prob = torch.zeros(1).cuda()  # .to(device=gpu)
        print(gpu, prob)

# output
0 tensor([0.1045], device='cuda:0')
1 tensor([0.], device='cuda:0')   <- same on cuda:0

Specifying the device:

def worker(gpu, *args):
    ...
    if gpu == 0:
        prob = torch.rand(1, device=gpu)
        print(gpu, prob)
    else:
        prob = torch.zeros(1, device=gpu)
        print(gpu, prob)

# output
0 tensor([0.8787], device='cuda:0')
1 tensor([0.], device='cuda:1')
st175582
Hi David! Thank you for your comment. I also thought of that, but the PyTorch documentation got me confused. In particular, in torch.Tensor.cuda — PyTorch 1.9.0 documentation, for the device argument they say:

device (torch.device) – The destination GPU device. Defaults to the current CUDA device.

So I thought that if you don’t specify it, the default would be the current CUDA device rather than always 'cuda:0'?
st175583
That’s up to whether you have set the current device.

def worker(gpu, *args):
    ...
    torch.cuda.set_device(gpu)  # <- This line makes the difference
    if gpu == 0:
        prob = torch.rand(1).cuda()
        print(gpu, prob)
    else:
        prob = torch.zeros(1).cuda()
        print(gpu, prob)

# output
0 tensor([0.6916], device='cuda:0')
1 tensor([0.], device='cuda:1')   # <- on the correct device
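Putting the pieces together for the original broadcast question, a self-contained sketch that spawns two processes on one machine; it uses the gloo backend and CPU tensors so it runs even without GPUs, and the master address/port are arbitrary placeholders (with NCCL and GPU tensors, the torch.cuda.set_device(rank) call above is the important part):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"  # arbitrary free port
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # with NCCL/GPU tensors you would also call torch.cuda.set_device(rank) here
    prob = torch.rand(1) if rank == 0 else torch.zeros(1)
    dist.broadcast(prob, src=0)  # every rank now holds rank 0's value
    print(rank, prob)

    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)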
st175584
This is super helpful! Thank you so much! I’ll keep you updated on the effect the fix has on the training speed.
st175585
I am wondering: is it reasonable that using multiple GPUs in parallel hurts the model's performance? (I am using DistributedDataParallel, btw.) For a single GPU, I use batch size 32, and to test on 2 GPUs I set the batch size to 16 for each GPU, so the overall batch size is 32, the same as for the single GPU (other hyperparameters stay the same). The performance of the model drops a lot when using 2 GPUs. I am wondering if there is any possible reason why this happens? Thanks!
st175586
Your model’s performance might depend on the batch size, e.g. if it’s using batchnorm layers. Since you are now using a smaller batch size on each rank, the performance could change, and you could check SyncBatchNorm if that’s indeed the root cause.
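For reference, a minimal sketch of converting a model's BatchNorm layers before wrapping it in DDP; the process group is assumed to be initialized already and rank is a placeholder for this process's local GPU index:

import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)  # BatchNorm2d -> SyncBatchNorm
model = model.to(rank)                                  # rank: this process's GPU index (placeholder)
ddp_model = DDP(model, device_ids=[rank])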
st175587
I am using the following code snippet to ensure each task/process (local_rank) generated by Slurm (using #SBATCH -n 4) is assigned to a specific local GPU:

device = torch.device("cuda", local_rank)
torch.cuda.set_device(device)

Still, when I look at nvidia-smi, in the "Process GPU" column I see two different GPUs (say 0 and 1) assigned to a single PID. If I tie a Slurm task to a GPU using torch.cuda.set_device(device) (where device here is the local_rank), I should not have multiple GPU allocations, right? Not sure if this is important, but I am using two workers in my DataLoader (see definition below); that should still keep the process on the same GPU (say 0), right?

trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, num_workers=2,
                                          sampler=torch.utils.data.distributed.DistributedSampler(dataset=trainset, num_replicas=world_size, rank=world_rank))

Details:

Slurm config:
#SBATCH --nodes 2
#SBATCH -n 4
#SBATCH --gres:gpus=4

Expectation: four tasks launched across two nodes, each task with one GPU allocated (because inside the program launched by each task/process I have set torch.cuda.set_device(device)).

Observed: PID 6173, for example, is allocated to two GPUs (0 and 1). The 4th task moved to the other node, so please ignore that.

GPU  PID   Type  Process name    Usage
0    6172  C     …bin/python     2657MiB
0    6173  C     …bin/python     673MiB
0    6174  C     …bin/python     673MiB
1    6173  C     …bin/python     2657MiB
2    6174  C     …bin/python     2657MiB
st175588
Arun_Kumar2:
Not sure if this is important but I am using two workers on my DataLoader (see definition below), but then that should allocate the process to the same GPU (say 0), right?

Are the dataloaders doing any GPU work? If not, that should not be relevant here.

Still, when I look at nvidia-smi on the "Process GPU" column I see two different GPUs (say 0 and 1), assigned to only a single PID.

My guess is that this might just be some bookkeeping allocations happening on GPU 0 from all the processes (e.g. initializing some CUDA contexts). Could you share a minimal script to repro this, and we can figure out what might be happening here.
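One way to rule out such bookkeeping allocations, assuming Slurm exports SLURM_LOCALID for each task, is to hide the other devices before anything touches CUDA; this is only a sketch of the idea, not a statement about what is happening in this particular job:

import os

# must run before the first CUDA call (i.e. before importing code that touches the GPU)
local_rank = int(os.environ.get("SLURM_LOCALID", 0))
os.environ.setdefault("CUDA_VISIBLE_DEVICES", str(local_rank))

import torch

device = torch.device("cuda", 0)  # the single visible device maps to this task's GPU
torch.cuda.set_device(device)
print(local_rank, torch.cuda.current_device(), torch.cuda.device_count())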
st175589
Thanks for this. Putting together a minimial script to repro and will get back soon.
st175590
I have two GPU nodes. One has two GPUs and the other has only one GPU. I want to use them for distributed training, and I run with this bash code:

Node 1:
python -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr="0.0.0.0" --master_port=3338 train_dist.py --restore 0 --config-file configs/vgg16_nddr_additive_4_unpool_aug_shortcut_sing_cosine_dist.yaml

Node 2:
python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr="0.0.0.0" --master_port=3338 train_dist.py --restore 0 --config-file configs/vgg16_nddr_additive_4_unpool_aug_shortcut_sing_cosine_dist.yaml

But the program gets stuck, and I have no idea why. When nproc_per_node is set to 1 on both of them, there is no problem. So, should the number of GPUs on each distributed node always be the same? Is there any other way to run this unbalanced distributed training?
st175591
Solved by pritamdamania87 in post #2
st175592
This is currently a limitation of torch.distributed.launch where it assumes all nodes are symmetric. Basically, on each node it assumes the world_size is nproc_per_node * nnodes and as a result you see the hang since this is not consistent across all nodes.
st175593
Thanks a lot! I successfully solved it with multiprocessing.spawn. Another way would be to re-implement launch.py.
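For readers hitting the same issue, here is a sketch of the mp.spawn approach for asymmetric nodes, where the global world size and each node's first global rank are supplied explicitly instead of being derived by the launcher; the master address/port and the RANK_OFFSET environment variable are placeholders invented for this example:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(local_rank, node_rank_offset, world_size):
    global_rank = node_rank_offset + local_rank
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://192.168.0.1:23456",  # placeholder master address/port
        rank=global_rank,
        world_size=world_size,
    )
    torch.cuda.set_device(local_rank)
    # ... build the model, wrap it in DDP with device_ids=[local_rank], train ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 3  # 2 GPUs on node 0 + 1 GPU on node 1
    # placeholder: set RANK_OFFSET=0 on the 2-GPU node and RANK_OFFSET=2 on the 1-GPU node
    node_rank_offset = int(os.environ.get("RANK_OFFSET", "0"))
    mp.spawn(worker, args=(node_rank_offset, world_size), nprocs=torch.cuda.device_count())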
st175594
Perhaps this answer should be revised? I read the source code and found that this has now been changed; the rank-assignment logic now lives in _assign_worker_ranks: github.com/pytorch/pytorch/blob/master/torch/distributed/elastic/agent/server/api.py#L580
st175595
this is the follow up of this. this is not urgent as it seems it is still in dev and not documented. pytorch 1.9.0 hi, log in ddp: when using torch.distributed.run instead of torch.distributed.launch my code freezes since i got this warning The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run. also, in the doc they talked about torchrun which we are supposed to use. probably it is not ready yet. because they didnt tell how to call torchrun. probably i need to change other options. i only used --nnodes=1 --node_rank=0 --nproc_per_node=2 . in order to log 1 into files, they mentioned --log_dir, -r, and -t. i tried: python -m torch.distributed.launch.py --log_dir logs -r 3 and also + -t 3. so the logger creates folders in logs with std and error for each process as mentioned in the doc but something else is still logging in terminal without logging in the files. the doc says that the log in files is for WORKERS. i think it is the launcher that is still logging into the terminal which is not considered as a worker i guess. warning which are important things are still go to terminal and not file. this is from the terminal which is not stored in file: xxx/lib/python3.7/site-packages/torch/distributed/launch.py:164: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead "The module torch.distributed.launch is deprecated " The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases. Please read local_rank from `os.environ('LOCAL_RANK')` instead. INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs: entrypoint : multi_g.py min_nodes : 1 max_nodes : 1 nproc_per_node : 2 run_id : none rdzv_backend : static rdzv_endpoint : 127.0.0.1:x rdzv_configs : {'rank': 0, 'timeout': 900} max_restarts : 3 monitor_interval : 5 log_dir : logs metrics_cfg : {} INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: logs/none_5s_gkwa5 INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group xxx/lib/python3.7/site-packages/torch/distributed/elastic/utils/store.py:53: FutureWarning: This is an experimental API and will be changed in future. "This is an experimental API and will be changed in future.", FutureWarning INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result: restart_count=0 master_addr=127.0.0.1 master_port=x group_rank=0 group_world_size=1 local_ranks=[0, 1] role_ranks=[0, 1] global_ranks=[0, 1] role_world_sizes=[2, 2] global_world_sizes=[2, 2] INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group INFO:torch.distributed.elastic.multiprocessing: Setting worker0 reply file to: logs/none_5s_gkwa5/attempt_0/0/error.json INFO:torch.distributed.elastic.multiprocessing: Setting worker1 reply file to: logs/none_5s_gkwa5/attempt_0/1/error.json INFO:torch.distributed.elastic.agent.server.api:[default] worker group successfully finished. 
Waiting 300 seconds for other agents to finish. INFO:torch.distributed.elastic.agent.server.api: Local worker group finished (SUCCEEDED). Waiting 300 seconds for other agents to finish xxx/lib/python3.7/site-packages/torch/distributed/elastic/ utils/store.py:71: FutureWarning: This is an experimental API and will be changed in future. "This is an experimental API and will be changed in future.", FutureWarning INFO:torch.distributed.elastic.agent.server.api: Done waiting for other agents. Elapsed: 0.000919342041015625 seconds {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 0, "group_rank": 0, "worker_id": "888", "role": "default", "hostname": "xxx", "state": "SUCCEEDED", "total_run_time": 10, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python\", \"local_rank\": [0], \"role_rank\": [0], \"role_world_size\": [2]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 1, "group_rank": 0, "worker_id": "889", "role": "default", "hostname": "xxx", "state": "SUCCEEDED", "total_run_time": 10, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python\", \"local_rank\": [1], \"role_rank\": [1], \"role_world_size\": [2]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": null, "group_rank": 0, "worker_id": null, "role": "default", "hostname": "xxx", "state": "SUCCEEDED", "total_run_time": 10, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python\"}", "agent_restarts": 0}} files logs/none_5s_gkwa5/attempt_0/0/error.json are not created!!! again, this is still under dev i guess. not ready yet but it is in master. minimal code multigp.py from this: import argparse import torch import torch.distributed as dist import torch.nn as nn import torch.optim as optim from torch.nn.parallel import DistributedDataParallel as DDP import numpy as np class ToyModel(nn.Module): def __init__(self): super(ToyModel, self).__init__() self.net1 = nn.Linear(10, 10) self.relu = nn.ReLU() self.net2 = nn.Linear(10, 5) def forward(self, x): return self.net2(self.relu(self.net1(x))) def spmd_main(local_world_size, local_rank): # These are the parameters used to initialize the process group np.random.seed(0) dist.init_process_group(backend="nccl") demo_basic(local_world_size, local_rank) # Tear down the process group dist.destroy_process_group() def demo_basic(local_world_size, local_rank): # setup devices for this process. For local_world_size = 2, num_gpus = 8, # rank 0 uses GPUs [0, 1, 2, 3] and # rank 1 uses GPUs [4, 5, 6, 7]. 
n = torch.cuda.device_count() // local_world_size device_ids = list(range(local_rank * n, (local_rank + 1) * n)) model = ToyModel().cuda(device_ids[0]) ddp_model = DDP(model, device_ids) loss_fn = nn.MSELoss() optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) optimizer.zero_grad() outputs = ddp_model(torch.randn(20, 10)) labels = torch.randn(20, 5).to(device_ids[0]) loss_fn(outputs, labels).backward() optimizer.step() if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int, default=0) parser.add_argument("--local_world_size", type=int, default=1) args = parser.parse_args() spmd_main(args.local_world_size, args.local_rank) bash to run it with 2 gpus: python -m torch.distributed.launch \ --log_dir logs \ -r 3 \ --nnodes=1 \ --node_rank=0 \ --nproc_per_node=2 \ multigp.py \ --local_world_size=2 thanks
st175596
Hey Soufiane, When you say “my code freezes” do you mean that your process hangs? Unfortunately I haven’t been able to reproduce this issue. And regarding the torchrun script; you are most likely reading the docs of the master branch. torchrun will be part of v1.10 and is simply an alias to python -m torch.distributed.run. There are some known issues with the launcher scripts in v1.9. I was able to reproduce your problem with stderr not showing up in the log_dir. In the meantime we have fixed most of the issues and a patch release v1.9.1 will be released very soon. Using the latest nightly build, I verified that your script works as expected with proper log output. If this issue is blocking you, I suggest temporarily using a nightly build and migrating to v1.9.1 in a few days once released. Cheers, Can
st175597
hi Can, cbalioglu: When you say “my code freezes” do you mean that your process hangs? Unfortunately I haven’t been able to reproduce this issue. And regarding the torchrun script; you are most likely reading the docs of the master branch. torchrun will be part of v1.10 and is simply an alias to python -m torch.distributed.run. yes, by simply replacing torch.distributed.launch, which works fine, with torch.distributed.run the code hangs. in the example i provided above, i removed torch.distributed.barrier that used to test something. i think it is the reason for hanging. because when i type ctrl+c to stop the process, the last line printed in the stack was sleeping. it is like the process was waiting for something. and my guess it is that the process was stuck in the barrier. the same code works fine with launch. i cant run anything right now due to a power outage in our servers. please, give me some time to provide a full example. for torchrun, that explains it why an error was thrown for not recognizing it because i am using torch 1.9.0. cbalioglu: There are some known issues with the launcher scripts in v1.9. I was able to reproduce your problem with stderr not showing up in the log_dir. In the meantime we have fixed most of the issues and a patch release v1.9.1 will be released very soon. Using the latest nightly build, I verified that your script works as expected with proper log output. If this issue is blocking you, I suggest temporarily using a nightly build and migrating to v1.9.1 in a few days once released. this is good news. thanks. it is not urgent for me right now. but, reading these logs was essential to find the cause of an issue 1 where hints were buried in the first logs that were printed in terminal. i’ll wait for the next release. probably, it could be helpful for others to add this aspect in the doc of 1.9.0. for example, that ddp will turn off multi-threading… unless OMP_NUM_THREADS is explicitly set > 1; this was one of the warnings that i missed because the printing on terminal was fast, and mixed with my own logger. thanks
st175598
so, i did run the code again. the observed hanging has nothing to do with torch.distributed.barrier but it seems the nccl usage based on the error log. because you succeeded to run the code above with 1.9.0 using run, it may have something to do with my system. the code above/below works fine when using torch.distributed.launch but does not work with torch.distributed.run. not sure if this has something to do with my installation. i provide below the code and requirements for my environment. i use 2 tesla p100 gpus for the test. i removed the logging arguments so i can copy all the logs as once. *.json log files are never created with our without explicit request to log into file using launch or run. it is weird because often the log says something like this: [INFO] 2021-09-09 17:06:24,434 __init__: Setting worker0 reply file to: /tmp/torchelastic_ji82kwgj/none_v3x8ezxd/attempt_0/0/error.json. but i dont know what the purpose of these log files. probably there were never needed so they were never created. this is the error with run about nccl: Traceback (most recent call last): File "multig.py", line 60, in <module> spmd_main(args.local_world_size, args.local_rank) File "multig.py", line 28, in spmd_main demo_basic(local_world_size, local_rank) File "multig.py", line 43, in demo_basic Traceback (most recent call last): File "multig.py", line 60, in <module> ddp_model = DDP(model, device_ids) File "xxx/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__ dist._verify_model_across_ranks(self.process_group, parameters) RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1623448265233/work/torch/lib/c10d/ProcessGroupNCCL.cpp:911, invalid usage, NCCL version 2.7.8 ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc). spmd_main(args.local_world_size, args.local_rank) File "multig.py", line 28, in spmd_main demo_basic(local_world_size, local_rank) File "multig.py", line 43, in demo_basic please check the run.sh if i used correctly distributed.run in term of arguments. code multig.py: import argparse import torch import torch.distributed as dist import torch.nn as nn import torch.optim as optim from torch.nn.parallel import DistributedDataParallel as DDP import numpy as np class ToyModel(nn.Module): def __init__(self): super(ToyModel, self).__init__() self.net1 = nn.Linear(10, 10) self.relu = nn.ReLU() self.net2 = nn.Linear(10, 5) def forward(self, x): return self.net2(self.relu(self.net1(x))) def spmd_main(local_world_size, local_rank): # These are the parameters used to initialize the process group np.random.seed(0) dist.init_process_group(backend="nccl") demo_basic(local_world_size, local_rank) # Tear down the process group dist.destroy_process_group() def demo_basic(local_world_size, local_rank): # setup devices for this process. For local_world_size = 2, num_gpus = 8, # rank 0 uses GPUs [0, 1, 2, 3] and # rank 1 uses GPUs [4, 5, 6, 7]. 
n = torch.cuda.device_count() // local_world_size device_ids = list(range(local_rank * n, (local_rank + 1) * n)) model = ToyModel().cuda(device_ids[0]) ddp_model = DDP(model, device_ids) loss_fn = nn.MSELoss() optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) optimizer.zero_grad() outputs = ddp_model(torch.randn(20, 10)) labels = torch.randn(20, 5).to(device_ids[0]) loss_fn(outputs, labels).backward() optimizer.step() if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int, default=0) parser.add_argument("--local_world_size", type=int, default=1) args = parser.parse_args() spmd_main(args.local_world_size, args.local_rank) bash run.sh: #!/usr/bin/env bash # activate conda env here. # ============================================================================== cudaid=$1 export CUDA_VISIBLE_DEVICES=$cudaid python -m torch.distributed.launch \ --nnodes=1 \ --node_rank=0 \ --nproc_per_node=2 \ multi_g2.py \ --local_world_size=2 output with torch.distributed.launch when using ./run.sh 0,1. the job finished properly: xxx/lib/python3.7/site-packages/torch/distributed/launch.py:164: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead "The module torch.distributed.launch is deprecated " The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases. Please read local_rank from `os.environ('LOCAL_RANK')` instead. INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs: entrypoint : multi_g2.py min_nodes : 1 max_nodes : 1 nproc_per_node : 2 run_id : none rdzv_backend : static rdzv_endpoint : 127.0.0.1:x rdzv_configs : {'rank': 0, 'timeout': 900} max_restarts : 3 monitor_interval : 5 log_dir : None metrics_cfg : {} INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_db1cckbb/none_y8sql577 INFO:torch.distributed.elastic.agent.server.api: [default] starting workers for entrypoint: python INFO:torch.distributed.elastic.agent.server.api: [default] Rendezvous'ing worker group xxx/lib/python3.7/site-packages/torch/distributed/elastic/utils/store.py:53: FutureWarning: This is an experimental API and will be changed in future. "This is an experimental API and will be changed in future.", FutureWarning INFO:torch.distributed.elastic.agent.server.api: [default] Rendezvous complete for workers. Result: restart_count=0 master_addr=127.0.0.1 master_port=xxx group_rank=0 group_world_size=1 local_ranks=[0, 1] role_ranks=[0, 1] global_ranks=[0, 1] role_world_sizes=[2, 2] global_world_sizes=[2, 2] INFO:torch.distributed.elastic.agent.server.api :[default] Starting worker group INFO:torch.distributed.elastic.multiprocessing: Setting worker0 reply file to: /tmp/torchelastic_db1cckbb/none_y8sql577/attempt_0/0/error.json INFO:torch.distributed.elastic.multiprocessing: Setting worker1 reply file to: /tmp/torchelastic_db1cckbb/none_y8sql577/attempt_0/1/error.json INFO:torch.distributed.elastic.agent.server.api: [default] worker group successfully finished. Waiting 300 seconds for other agents to finish. 
INFO:torch.distributed.elastic.agent.server.api: Local worker group finished (SUCCEEDED). Waiting 300 seconds for other agents to finish xxx/lib/python3.7/site-packages/torch/distributed/elastic/utils/store.py:71: FutureWarning: This is an experimental API and will be changed in future. "This is an experimental API and will be changed in future.", FutureWarning INFO:torch.distributed.elastic.agent.server.api:Done waiting for other agents. Elapsed: 0.0006513595581054688 seconds {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 0, "group_rank": 0, "worker_id": "21051", "role": "default", "hostname": "xxx", "state": "SUCCEEDED", "total_run_time": 10, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python\", \"local_rank\": [0], \"role_rank\": [0], \"role_world_size\": [2]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 1, "group_rank": 0, "worker_id": "21053", "role": "default", "hostname": "xxx", "state": "SUCCEEDED", "total_run_time": 10, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python\", \"local_rank\": [1], \"role_rank\": [1], \"role_world_size\": [2]}", "agent_restarts": 0}} {"name": "torchelastic.worker.status.SUCCEEDED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": null, "group_rank": 0, "worker_id": null, "role": "default", "hostname": "xxx", "state": "SUCCEEDED", "total_run_time": 10, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python\"}", "agent_restarts": 0}} now, output with torch.distributed.run when using ./run.sh 0,1. the job hangs: $ ./run.sh 0,1 xxx/lib/python3.7/site-packages/torch/distributed/launch.py [INFO] 2021-09-09 17:06:24,426 run: Running torch.distributed.run with args: ['xxx/lib/python3.7/site-packages/torch/distributed/run.py', '--nnodes=1', '--node_rank=0', '--nproc_per_node=2', 'multig.py', '--local_world_size=2'] [INFO] 2021-09-09 17:06:24,427 run: Using nproc_per_node=2. ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** [INFO] 2021-09-09 17:06:24,427 api: Starting elastic_operator with launch configs: entrypoint : multig.py min_nodes : 1 max_nodes : 1 nproc_per_node : 2 run_id : none rdzv_backend : static rdzv_endpoint : 127.0.0.1:xxx rdzv_configs : {'rank': 0, 'timeout': 900} max_restarts : 3 monitor_interval : 5 log_dir : None metrics_cfg : {} [INFO] 2021-09-09 17:06:24,429 local_elastic_agent: log directory set to: /tmp/torchelastic_ji82kwgj/none_v3x8ezxd [INFO] 2021-09-09 17:06:24,429 api: [default] starting workers for entrypoint: python [INFO] 2021-09-09 17:06:24,429 api: [default] Rendezvous'ing worker group [INFO] 2021-09-09 17:06:24,429 static_tcp_rendezvous: Creating TCPStore as the c10d::Store implementation xxx/lib/python3.7/site-packages/torch/distributed/elastic/utils/store.py:53: FutureWarning: This is an experimental API and will be changed in future. "This is an experimental API and will be changed in future.", FutureWarning [INFO] 2021-09-09 17:06:24,433 api: [default] Rendezvous complete for workers. 
Result: restart_count=0 master_addr=127.0.0.1 master_port=same_as_above group_rank=0 group_world_size=1 local_ranks=[0, 1] role_ranks=[0, 1] global_ranks=[0, 1] role_world_sizes=[2, 2] global_world_sizes=[2, 2] [INFO] 2021-09-09 17:06:24,433 api: [default] Starting worker group [INFO] 2021-09-09 17:06:24,434 __init__: Setting worker0 reply file to: /tmp/torchelastic_ji82kwgj/none_v3x8ezxd/attempt_0/0/error.json [INFO] 2021-09-09 17:06:24,434 __init__: Setting worker1 reply file to: /tmp/torchelastic_ji82kwgj/none_v3x8ezxd/attempt_0/1/error.json Traceback (most recent call last): File "multig.py", line 60, in <module> spmd_main(args.local_world_size, args.local_rank) File "multig.py", line 28, in spmd_main demo_basic(local_world_size, local_rank) File "multig.py", line 43, in demo_basic Traceback (most recent call last): File "multig.py", line 60, in <module> ddp_model = DDP(model, device_ids) File "xxx/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__ dist._verify_model_across_ranks(self.process_group, parameters) RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1623448265233/work/torch/lib/c10d/ProcessGroupNCCL.cpp:911, invalid usage, NCCL version 2.7.8 ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc). spmd_main(args.local_world_size, args.local_rank) File "multig.py", line 28, in spmd_main demo_basic(local_world_size, local_rank) File "multig.py", line 43, in demo_basic ddp_model = DDP(model, device_ids) File "xxx/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__ dist._verify_model_across_ranks(self.process_group, parameters) RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1623448265233/work/torch/lib/c10d/ProcessGroupNCCL.cpp:911, invalid usage, NCCL version 2.7.8 ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc). [ERROR] 2021-09-09 17:06:34,499 api: failed (exitcode: 1) local_rank: 0 (pid: 26925) of binary: xxx/bin/python [ERROR] 2021-09-09 17:06:34,499 local_elastic_agent: [default] Worker group failed [INFO] 2021-09-09 17:06:34,499 api: [default] Worker group FAILED. 3/3 attempts left; will restart worker group [INFO] 2021-09-09 17:06:34,499 api: [default] Stopping worker group [INFO] 2021-09-09 17:06:34,500 api: [default] Rendezvous'ing worker group [INFO] 2021-09-09 17:06:34,500 static_tcp_rendezvous: Creating TCPStore as the c10d::Store implementation [INFO] 2021-09-09 17:06:34,501 api: [default] Rendezvous complete for workers. 
Result: restart_count=1 master_addr=127.0.0.1 master_port=xx group_rank=0 group_world_size=1 local_ranks=[0, 1] role_ranks=[0, 1] global_ranks=[0, 1] role_world_sizes=[2, 2] global_world_sizes=[2, 2] [INFO] 2021-09-09 17:06:34,501 api: [default] Starting worker group [INFO] 2021-09-09 17:06:34,502 __init__: Setting worker0 reply file to: /tmp/torchelastic_ji82kwgj/none_v3x8ezxd/attempt_1/0/error.json [INFO] 2021-09-09 17:06:34,503 __init__: Setting worker1 reply file to: /tmp/torchelastic_ji82kwgj/none_v3x8ezxd/attempt_1/1/error.json <HANGS INDEFINITELY> types ctlr+c: ^CTraceback (most recent call last): File "multig.py", line 60, in <module> spmd_main(args.local_world_size, args.local_rank) File "multig.py", line 27, in spmd_main dist.init_process_group(backend="nccl") File "xxx/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group _store_based_barrier(rank, store, timeout) File "xxx/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 207, in _store_based_barrier time.sleep(0.01) KeyboardInterrupt Traceback (most recent call last): File "xxx/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "xxx/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "xxx/lib/python3.7/site-packages/torch/distributed/run.py", line 637, in <module> main() File "xxx/lib/python3.7/site-packages/torch/distributed/run.py", line 629, in main run(args) File "xxx/lib/python3.7/site-packages/torch/distributed/run.py", line 624, in run )(*cmd_args) File "xxx/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 116, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "xxx/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper return f(*args, **kwargs) File "xxx/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 238, in launch_agent result = agent.run() File "xxx/lib/python3.7/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper result = f(*args, **kwargs) File "xxx/lib/python3.7/site-packages/torch/distributed/elastic/agent/server/api.py", line 700, in run result = self._invoke_run(role) File "xxx/lib/python3.7/site-packages/torch/distributed/elastic/agent/server/api.py", line 828, in _invoke_run time.sleep(monitor_interval) KeyboardInterrupt environment: info using https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py: $ python collect_env.py Collecting environment information... 
PyTorch version: 1.9.0 Is debug build: False CUDA used to build PyTorch: 11.1 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.10 Python version: 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0] (64-bit runtime) Python platform: Linux-4.15.0-122-generic-x86_64-with-debian-buster-sid Is CUDA available: True CUDA runtime version: 11.1.105 GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB GPU 1: Tesla P100-PCIE-16GB Nvidia driver version: 455.32.00 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.2 HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] efficientnet-pytorch==0.7.0 [pip3] numpy==1.20.1 [pip3] torch==1.9.0 [pip3] torchvision==0.10.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia [conda] efficientnet-pytorch 0.7.0 pypi_0 pypi [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.3.0 h06a4308_520 [conda] numpy 1.20.1 pypi_0 pypi [conda] pytorch 1.9.0 py3.7_cuda11.1_cudnn8.0.5_0 pytorch [conda] torchvision 0.10.0 py37_cu111 pytorch conda virtual environment: $ pip freeze appdirs==1.4.4 attrs==20.3.0 backcall==0.1.0 bleach==2.1.3 certifi==2021.5.30 chardet==3.0.4 compress-pickle==1.1.0 cycler==0.10.0 Cython==0.29.2 decorator==4.3.2 efficientnet-pytorch==0.7.0 entrypoints==0.2.3 future==0.16.0 html5lib==1.0.1 idna==2.10 imageio==2.4.1 importlib-metadata==3.5.0 iniconfig==1.1.1 ipykernel==4.8.2 ipython==6.5.0 ipython-genutils==0.2.0 ipywidgets==7.4.2 jedi==0.12.1 Jinja2==2.10 jsonschema==2.6.0 jupyter==1.0.0 jupyter-client==5.2.4 jupyter-console==5.2.0 jupyter-core==4.4.0 kiwisolver==1.0.1 MarkupSafe==1.1.1 matplotlib==3.0.2 mistune==0.8.4 mock==4.0.3 more-itertools==8.8.0 munch==2.5.0 nbconvert==5.3.1 nbformat==4.4.0 networkx==2.5 notebook==5.7.4 numpy==1.20.1 olefile==0.46 opencv-python==4.1.2.30 packaging==20.9 pandocfilters==1.4.2 parso==0.3.1 pexpect==4.6.0 pickleshare==0.7.5 Pillow @ file:///tmp/build/80754af9/pillow_1625655818400/work (8.3.1) pluggy==0.13.1 pretrainedmodels==0.7.4 prometheus-client==0.3.1 prompt-toolkit==1.0.15 protobuf==3.7.1 ptyprocess==0.6.0 py==1.10.0 pygifsicle==1.0.1 Pygments==2.3.1 pyparsing==2.3.1 pytest==6.2.2 python-dateutil==2.8.0 PyWavelets==1.1.1 PyYAML==3.13 pyzmq==17.1.2 qtconsole==4.3.1 requests==2.24.0 scikit-image==0.17.2 scikit-learn==0.20.2 scipy==1.2.1 Send2Trash==1.5.0 simplegeneric==0.8.1 six==1.12.0 terminado==0.8.1 testpath==0.3.1 texttable==1.6.2 tifffile==2020.10.1 timm==0.4.12 toml==0.10.2 torch==1.9.0 torchvision==0.10.0 tornado==5.1.1 tqdm==4.31.1 traitlets==4.3.2 typing-extensions @ file:///tmp/build/80754af9/typing_extensions_1624965014186/work (3.10.0.0) urllib3==1.25.10 wcwidth==0.1.7 webencodings==0.5.1 widgetsnbextension==3.4.2 zipp==3.4.0 i removed some packages as they require compilation. creation of virtual envirnment and install pytroch: conda create -n env_test_issue_ddp python=3.7 conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c nvidia let me know if you need more info. thanks
st175599
ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc). This probably indicates something where we end up using NCCL incorrectly. One guess I have is that probably the environment variables for rank and world_size are not set up correctly when we use torch.distributed.run.
st175600
sbelharbi:
python -m torch.distributed.launch \
    --nnodes=1 \
    --node_rank=0 \
    --nproc_per_node=2 \
    multi_g2.py \
    --local_world_size=2

This is the config I used; instead of launch I used run. Any idea how to set up these variables in this case? I can try them. My env has 2 GPUs located in the same machine. Thanks.
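Not a confirmed fix, but for reference: torch.distributed.run exports RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT into each worker's environment, so the script can read them directly instead of relying on --local_rank; a minimal sketch:

import os
import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])

torch.cuda.set_device(local_rank)
# the default env:// init method reads MASTER_ADDR / MASTER_PORT from the environment
dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
print(f"rank {rank}/{world_size}, local_rank {local_rank}")
dist.destroy_process_group()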
st175601
Hi, I was planning to process some data on a GPU (let me call it GPU1) and then send a tensor from GPU1 to another GPU (GPU2) using torch.distributed.send and torch.distributed.recv. I was wondering: does the received tensor on GPU2 keep the gradient history from its previous life on GPU1? Is it possible to apply backpropagation? Thanks in advance for your help.
st175602
Your use case sounds like model parallel, so I’m unsure if you would really need to use send/recv or could use this simple example.
st175603
Thanks for your reply. Sorry for the incomplete explanation of the problem. What I want is not similar to your example. I want to code a parallel solver, so each GPU solves part of the problem in parallel, and after some operations some of them have to share their tensors in pairs. I was wondering if the gradients of the tensors being transferred are lost, and whether they can be backpropagated.
st175604
There is no backpropagation for send and recv. You can use the RPC framework (Distributed RPC Framework — PyTorch 1.9.0 documentation), which will allow you to backpropagate across RPC calls. Alternatively, you could do this yourself via autograd functions, ex: pytorch/functional.py at master · pytorch/pytorch · GitHub. You can find docs for autograd functions here: Automatic differentiation package - torch.autograd — PyTorch 1.9.0 documentation
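To illustrate the second option, here is a rough sketch of autograd functions that wrap send/recv, loosely following the approach in torch/distributed/nn/functional.py rather than reproducing it. It assumes exactly two ranks, a process group that is already initialized, and that both sides issue the matching calls in the same order; expected_shape, stage0 and stage1 in the usage comments are placeholders:

import torch
import torch.distributed as dist


class SendWithGrad(torch.autograd.Function):
    """Forward: send `tensor` to rank `dst`. Backward: receive its gradient from `dst`."""

    @staticmethod
    def forward(ctx, tensor, dst):
        ctx.dst = dst
        ctx.meta = (tensor.shape, tensor.dtype, tensor.device)
        dist.send(tensor, dst)
        # return a copy so the autograd graph on this rank has an output to hook backward on
        return tensor.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # grad_output is ignored; the real gradient comes back from the peer rank
        shape, dtype, device = ctx.meta
        grad = torch.empty(shape, dtype=dtype, device=device)
        dist.recv(grad, ctx.dst)
        return grad, None


class RecvWithGrad(torch.autograd.Function):
    """Forward: receive a tensor shaped like `buffer` from rank `src`. Backward: send the gradient back."""

    @staticmethod
    def forward(ctx, buffer, src):
        ctx.src = src
        out = torch.empty_like(buffer)
        dist.recv(out, src)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        dist.send(grad_output.contiguous(), ctx.src)
        return None, None


# Usage sketch (process group already initialized, two ranks):
#   rank 0:  y = stage0(x)
#            out = SendWithGrad.apply(y, 1)
#            out.backward(torch.zeros_like(out))   # blocks until rank 1 sends dL/dy back
#   rank 1:  buf = torch.empty(expected_shape, requires_grad=True)
#            h = RecvWithGrad.apply(buf, 0)
#            stage1(h).sum().backward()            # sends dL/dh back to rank 0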
st175605
Hello. I have trained a DDP model on one machine with two GPUs. The DDP model hangs in forward on gpu:1 at the second iteration. I debugged and it turned out to be caused by the self.reducer._rebuild_buckets() call in torch/nn/modules/module.py. Can anyone help? Thanks.
Torch versions 1.8+cu11 and 1.9+cu11 were both checked; neither works.
OS: 5.4.0-81-generic #91~18.04.1-Ubuntu
Below is my code and the printed log.

def run_train(train_fn, world_size, args):
    mp.spawn(train_fn, args=(args, ), nprocs=world_size, join=True)


def train_init(rank, train_info: TrainInfo):
    setup(rank, train_info.world_size)
    if rank == 0:
        tracking_server = "http://192.168.35.10:5010"
        mlflow.set_tracking_uri(tracking_server)
        mlflow.set_experiment('Drawing')
    now_time = datetime.now()
    output_path = train_info.output_path
    logger, _ = getLogger(f'drawing_{rank}th', out_dir=output_path)
    logger.info(f'pre-settings Initialized.. Rank: {rank}')

    logger.info(f'model loading.. Rank: {rank}')
    model = Drawing(train_info.backbone).to(rank)
    if train_info.model_weight_path is not None:
        model.load_state_dict(torch.load(train_info.model_weight_path))
    ddp_model = DDP(model, device_ids=[rank])

    num_workers = 0
    logger.info(f'data loading.. with {num_workers} workers Rank: {rank}')
    train_dataset = VerificationDataset(os.path.join(train_info.root_path, 'train'), input_size=train_info.input_size)
    val_dataset = VerificationDataset(os.path.join(train_info.root_path, 'val'), is_validation=True, input_size=train_info.input_size)
    item_dataset = ItemDataset(os.path.join(train_info.root_path, 'topN'), input_size=train_info.input_size)
    drawing_dataset = DrawingDataset(os.path.join(train_info.root_path, 'topN'), input_size=train_info.input_size)

    train_sampler = DistributedSampler(train_dataset, shuffle=True, seed=rank)
    val_sampler = DistributedSampler(val_dataset, shuffle=False)
    item_sampler = DistributedSampler(item_dataset, shuffle=False)
    distributed_samplers = [train_sampler, val_sampler, item_sampler]

    train_dataloader = DataLoader(train_dataset, batch_size=train_info.batch_size, num_workers=num_workers,
                                  pin_memory=True, collate_fn=verification_collate_fn, sampler=train_sampler)
    val_dataloader = DataLoader(val_dataset, batch_size=train_info.batch_size, num_workers=num_workers,
                                pin_memory=True, collate_fn=verification_collate_fn, sampler=val_sampler)
    item_dataloader = DataLoader(item_dataset, batch_size=train_info.batch_size, num_workers=num_workers,
                                 pin_memory=True, sampler=item_sampler)
    drawing_dataloader = DataLoader(drawing_dataset, batch_size=train_info.batch_size, num_workers=num_workers,
                                    pin_memory=True)

    learning_rate = 0.001
    # optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [100, 200], gamma=0.1)
    # lr_scheduler = None
    # device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # weight = torch.tensor([0.1, 0.9]).to(device)
    criterion = torch.nn.CrossEntropyLoss()

    params = {'run_name': train_info.run_name, 'backbone': train_info.backbone, 'data_root_path': train_info.root_path,
              'num_workers': num_workers, 'model_weight_path': train_info.model_weight_path, "init_lr": learning_rate,
              'lr_scheduler': lr_scheduler, 'weight': 'no-weight', 'device': rank, 'world_size': train_info.world_size,
              'number_of_drawings': len(set(drawing_dataset.all_drawing_reg_nums)), 'optim': f'{optimizer.__class__}',
              'train_data_count': len(train_dataset), 'val_data_count': len(val_dataset),
              'vectors_batch_size': train_info.vectors_batch_size, 'loss_func': f'{criterion.__class__}',
              'input_size': train_info.input_size, 'batch_size': train_info.batch_size, 'epochs': train_info.epochs}
    logger.info(f'params >> {params}')

    result_path = os.path.join('results', output_path, f'{rank}th')
    if rank == 0:
        with mlflow.start_run(run_name=train_info.run_name):
            mlflow.log_params(params)
            e, v_acc, t3_acc, t1_acc = train(ddp_model, train_dataloader, val_dataloader, item_dataloader,
                                             drawing_dataloader, distributed_samplers, optimizer, criterion,
                                             train_info.epochs, rank, logger,
                                             vectors_batch_size=train_info.vectors_batch_size,
                                             lr_scheduler=lr_scheduler, save_path=result_path,
                                             print_iter=train_info.print_iter)
            slack.postMessage(
                f"Start-time:{now_time} \t End-time:{datetime.now()} \t Elapsed time:{datetime.now() - now_time}s "
                f"\n {params} \n best_epoch:{e} \t 1/1 verification best_acc:{v_acc} \t top3 best_acc:{t3_acc} top1 best_acc:{t1_acc}")
    else:
        train(ddp_model, train_dataloader, val_dataloader, item_dataloader, drawing_dataloader, distributed_samplers,
              optimizer, criterion, train_info.epochs, rank, logger,
              vectors_batch_size=train_info.vectors_batch_size, lr_scheduler=lr_scheduler,
              save_path=result_path, print_iter=train_info.print_iter)
    cleanup()


def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = '192.168.35.1'
    os.environ['MASTER_PORT'] = '12355'
    dist.init_process_group("nccl", rank=rank, world_size=world_size)


def train(model, train_dataloader, val_dataloader, item_dataloader, drawing_dataloader, distributed_samplers,
          optim, criterion, epochs, device, logger, vectors_batch_size=1024, lr_scheduler=None, save_path='',
          print_iter=1):
    acc_max = 0.0
    top1_acc_max = 0
    top3_acc_max = 0
    best_model = None
    t_len = len(train_dataloader)
    v_len = len(val_dataloader)
    logger.info('<<<<<< Train Start >>>>>>>')
    save_path = Path(save_path)
    # model.to(device)
    if device == 0:
        writer = SummaryWriter(str(save_path / 'tensorboard'))
        writer.add_graph(model, [torch.zeros(1, 3, 512, 512), torch.zeros(1, 3, 512, 512)])
    if not save_path.exists():
        Path(save_path).mkdir(parents=True)

    epoch_at_best = 0
    for e in range(epochs):
        for d_s in distributed_samplers:
            d_s.set_epoch(e)
        running_loss = 0.0
        global_step = 0
        model.train()

        ## Train
        for i, data in enumerate(train_dataloader):
            print(f"gpu{device} 1")
            img_x = data["imgs"]["data"].to(device)
            drawing_x = data["drawings"]["data"].to(device)
            y = data["is_sames"].to(device)
            print(f"gpu{device} 2")
            optim.zero_grad()
            print(f"gpu{device} 3")
            logit = model(img_x, drawing_x)   ### Hangs here in second iteration with gpu 1
            print(f"gpu{device} 4")
            loss = criterion(logit, y)
            print(f"gpu{device} 5")
            loss.backward()
            print(f"gpu{device} 6")
            optim.step()
            if lr_scheduler is not None:
                lr_scheduler.step()
            running_loss += loss.item()
            print(f"gpu{device} 7")

            if (i % print_iter) == 0:
                global_step = e * t_len + (i + 1)
                logger.info(f'\n Train device:{device} Epoch:{e} step[{global_step}/{epochs * t_len}] \t Train_loss avg : {running_loss/(i+1)}')
                if device == 0:
                    mlflow.log_metric('Train Loss', running_loss / (i+1), step=global_step)
                    # log_weights(model, writer, global_step)


class Drawing(torch.nn.Module):
    def __init__(self, backbone='resnet18', cls=1):
        super().__init__()
        if backbone.lower() == 'resnet18':
            self.imageModel = torch.nn.Sequential(*list(models.resnet18(pretrained=True).children())[:-1])
            ml = models.resnet18(pretrained=True)
            self.classifier = torch.nn.Linear(512 * 2, 2)
        else:
            self.imageModel = torch.nn.Sequential(*list(models.resnet50(pretrained=True).children())[:-1])
            ml = models.resnet50(pretrained=False)
            # ml = torch.load('pre-train-models/drawings/drawing-pretrained_90.pt')
            # ml.requires_grad_(False)
            if cls == 1:
                self.classifier = nn.Sequential(nn.Linear(2048 * 2, 1024),
                                                nn.BatchNorm1d(1024),
                                                nn.ReLU(),
                                                nn.Linear(1024, 2))
            elif cls == 2:
                self.classifier = nn.Sequential(
                    nn.Dropout(p=0.5),
                    nn.Linear(2048 * 2, 1024),
                    nn.BatchNorm1d(1024),
                    nn.ReLU(),
                    nn.Dropout(p=0.5),
                    nn.Linear(1024, 512),
                    nn.BatchNorm1d(512),
                    nn.Sigmoid(),
                    nn.Dropout(p=0.5),
                    nn.Linear(512, 2)
                )
            else:
                raise ValueError(f'classification layer error: check cls value! cls:{cls} ')
        self.drawingModel = torch.nn.Sequential(*list(ml.children())[:-1])
        self.flatten = torch.nn.Flatten()
        # self.init_weights()

    def forward(self, img_x, drawing_x):
        img_x = self.imageModel(img_x)
        drawing_x = self.drawingModel(drawing_x)
        img_x = F.normalize(img_x, p=2, dim=1)
        drawing_x = F.normalize(drawing_x, p=2, dim=1)
        concat_x = torch.cat([img_x, drawing_x], dim=1)
        concat_x = self.flatten(concat_x)
        out = self.classifier(concat_x)
        return out

log output:

<<<<<< Train Start >>>>>>>
INFO:drawing_1th:<<<<<< Train Start >>>>>>>
<<<<<< Train Start >>>>>>>
INFO:drawing_0th:<<<<<< Train Start >>>>>>>
gpu1 1
gpu1 2
gpu1 3
gpu1 4
gpu1 5
gpu1 6
gpu0 1
gpu0 2
gpu0 3
gpu0 4
gpu0 5
gpu0 6
gpu1 7
gpu0 7
 Train device:1 Epoch:0 step[1/507600]   Train_loss avg : 0.7977195978164673
INFO:drawing_1th: Train device:1 Epoch:0 step[1/507600]   Train_loss avg : 0.7977195978164673
 Train device:0 Epoch:0 step[1/507600]   Train_loss avg : 0.8086593747138977
INFO:drawing_0th: Train device:0 Epoch:0 step[1/507600]   Train_loss avg : 0.8086593747138977
gpu1 1
gpu1 2
gpu1 3
gpu0 1
gpu0 2
gpu0 3
gpu0 4
gpu0 5
gpu0 6
st175606
Recently, I have been running training of a deep reinforcement learning policy network in a distributed manner with DDP. I am observing confusing behavior where the different ranks are at different learning updates at the same time. This does not make sense to me, because I would think that the first call to loss.backward would wait for all ranks to get through the first batch before returning and continuing on to the optimizer step. A code snippet is below.

with contextlib.ExitStack() as stack:
    if self.problem.agent_train:
        stack.enter_context(self.agent_model.join())
    if self.problem.detector_train:
        stack.enter_context(self.detection_model.join())
    if self.problem.attacker_train:
        stack.enter_context(self.attacker_model.join())

    # Main training loop
    train_iterations = [0 for _ in memories]
    last_checkpoints = [time.time() for _ in memories]
    error = torch.zeros(1)
    error_signal = None
    if self.world_size > 1:
        error_signal = dist.irecv(error, tag = 1)
    else:
        error_signal = None
    self.learning_steps_completed = 0
    while self.learning_steps_completed < self.n_learning_steps:
        # Check for timeout
        if (time.time() - start) > self.timeout - 120:
            print("User defined timeout reached... terminating training")
            break

        # Stop learning when an error occurs in an agent
        if self.error_event.is_set():
            had_error = True
            if self.verbose > 0:
                print(f"[Rank {self.rank}] Error in agent", flush = True)
            for i in range(self.world_size):
                if i != self.rank:
                    dist.send(error, dst = i, tag = 1)
            break
        if error_signal is not None and error_signal.is_completed():
            had_error = True
            print(f"[Rank {self.rank}] Ending early from error", flush = True)
            break

        # memories = [agent_memory, detection_memory, attacker_memory]
        # so this loop is just over the controller, detector, and attacker
        for i, memory in enumerate(memories):
            train_type, memory = memory
            if memory.has_batch():
                self.learning_steps_completed += 1
                # Sample the replay memory to get a batch of experiences (defined in child class of Memory)
                batch, indexes, weights = memory.sample()
                # Compute algorithm-specific loss (defined in child class)
                loss, priorities = self.compute_loss(batch, weights, train_type)
                # Do optimization step and increment the number of training iterations
                self.optimizers[train_type].zero_grad()
                loss.backward()
                # dist.barrier()
                self.optimizers[train_type].step()
                train_iterations[i] += 1

        # Some periodic collective operations to collect intermediate results

In the above code, loss.backward() is hit at different learning_steps_completed values without the barrier, but not with it.
st175607
Matthew_Landen: In the above code, the loss.backwards() is hit at different learning_steps_completed without the barrier but not with. Could you explain what you mean by this? Are you printing self.learning_steps_completed after loss.backward on each process and you see different values across trainers where some are running ahead of others? Note that if you are using GPUs here, gpu execution is async so essentially your training loops is just enqueuing work to the GPU and when loss.backward returns it means the work has been enqueued to the GPU but not necessarily finished. When you do dist.barrier() it forces a CUDA synchronization and blocks the host until all GPU work is done.
st175608
pritamdamania87: Could you explain what you mean by this? Are you printing self.learning_steps_completed after loss.backward on each process and you see different values across trainers where some are running ahead of others?

Yes, I'm seeing different learning steps on different ranks at once. A max difference is about 40. I am not using a GPU at all, just CPU. Do the non-DDP distributed calls cause problems for the join() context manager?
st175609
Matthew_Landen: Yes, I'm seeing different learning steps on different ranks at once. A max difference is about 40. I am not using a GPU at all, just CPU.

Could it be possible this is just an output buffering issue? Could you add a sys.stdout.flush() line after your print statements and see if the output makes sense?

Do the non-DDP distributed calls cause problems for the join() context manager?

Not sure I followed this, are you referring to lines like self.agent_model.join() in your code? If so, is this the ddp.join() method or is this join() method doing something else?
st175610
pritamdamania87: Could it be possible this is just an output buffering issue? Could you add a sys.stdout.flush() line after your print statements and see if the output makes sense?

No, it couldn't be. I already flush the output.

pritamdamania87: Not sure I followed this, are you referring to lines like self.agent_model.join() in your code? If so, is this the ddp.join() method or is this join() method doing something else?

Yes, it is the ddp.join(). Because there could be three models training together, I have to enter the model join contexts.
st175611
Matthew_Landen: Yes, it is the ddp.join(). Because there could be three models training together, I have to enter the model join contexts.

If you are using ddp.join(), I'm assuming this is because you have uneven data across your trainers. If so, isn't it expected to see a different number of training steps across different trainers? Could you also provide a complete repro script that we can try on our end, and also the output that you are seeing on your end? That would make it much easier for us to troubleshoot this issue.
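For reference, a minimal sketch of what a self-contained join() repro with uneven inputs could look like (CPU/gloo; ToyNet, the port number and the per-rank batch counts are made up for illustration):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

class ToyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(8, 1)
    def forward(self, x):
        return self.fc(x)

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = DDP(ToyNet())
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    num_batches = 5 if rank == 0 else 8      # deliberately uneven inputs across ranks
    with model.join():                       # shadows collectives once a rank runs out of data
        for step in range(num_batches):
            opt.zero_grad()
            model(torch.randn(4, 8)).sum().backward()
            opt.step()
            print(f"rank {rank} finished step {step}", flush=True)
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2, join=True)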
st175612
Hello! For some reason, I'm getting this warning:

[W ProcessGroupNCCL.cpp:1569] Rank 6 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.

It is triggered on model.cuda(device_id). The problem is that the code sometimes really hangs! I was wondering if there is an explanation of what's going on and, ideally, a solution to this issue. Thank you very much.
st175613
Solved by amirhf in post #2 I am assuming you are using a distributed launch. The warning message is self-explanatory. It seems like in each particular process other GPUs are still visible to the process. Ideally, on local_rank X, you want to only GPU X to be visible. There are some workarounds: Set CUDA_VISIBLE_DEVICES = in…
st175614
I am assuming you are using a distributed launch. The warning message is self-explanatory: it seems that in each particular process the other GPUs are still visible. Ideally, on local_rank X you want only GPU X to be visible. There are some workarounds:

Set CUDA_VISIBLE_DEVICES to os.environ["LOCAL_RANK"] in your main worker function
Use torch.distributed.barrier(device_ids=[int(os.environ["LOCAL_RANK"])])

In your case, rank 6 is using GPU 0 for the barrier, but it should use GPU 6. You also have to be careful about how you set the device you want to use in your script; the device you set should also be equal to the LOCAL_RANK.
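For reference, a minimal sketch of this in code (it assumes a launcher such as torch.distributed.run/launch with --use_env that sets RANK, WORLD_SIZE and LOCAL_RANK; MyModel is a placeholder for your own model):

import os
import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)          # pin this process to its own GPU before any collective
dist.init_process_group(backend="nccl")    # env:// rendezvous via RANK/WORLD_SIZE

model = MyModel().cuda(local_rank)         # MyModel is a placeholder

# Passing the device explicitly avoids the "best-guess GPU 0" warning:
dist.barrier(device_ids=[local_rank])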
st175615
recently, i also saw such issues with pytorch 1.9, but if it is pytorch 1.7.1, there is no such issue.
st175616
– EDIT: it seems to be a Python issue or something related; I don't know how. from multiprocessing.util import register_after_fork is a standard-library import done in torch/multiprocessing/reductions.py. I am not sure why this error is raised, because from multiprocessing.util import register_after_fork works fine in plain Python. I am almost certain it has something to do with the installation: I tried the code in a different virtual env and torch multiprocessing works fine… Creating a new virtual env and installing PyTorch in it solved the issue.

Hi, the code was working fine. I wrote a script where I used import torch.multiprocessing as mp. Running the script throws the error below. Now, running any script, even without multiprocessing, throws the same error; simply import torch throws the error. Doing import torch in a python or ipython interpreter is fine. Any idea?
pytorch 1.9.0
Python 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] :: Anaconda, Inc. on linux
Thanks.

error:

Traceback (most recent call last):
  File "multips.py", line 3, in <module>
    import torch
  File "x/lib/python3.7/site-packages/torch/__init__.py", line 688, in <module>
    from torch import multiprocessing as multiprocessing
  File "x/lib/python3.7/site-packages/torch/multiprocessing/__init__.py", line 18, in <module>
    from .reductions import init_reductions
  File "x/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 7, in <module>
    from multiprocessing.util import register_after_fork
ModuleNotFoundError: No module named 'multiprocessing.util'; 'multiprocessing' is not a package
st175617
sbelharbi: ModuleNotFoundError: No module named 'multiprocessing.util'; 'multiprocessing' is not a package @sbelharbi The module can not be found. I would try a clean install.
st175618
yeah, i made a new install… installing PyTorch in a fresh virtual env solved the issue.
st175619
I am running experiments using DDP and I observed that, when setting non_blocking=True to move the batch to the GPUs, memory is reserved at GPU 0, even if it is not being used by the experiment (in which case utilization remains at 0%). When non_blocking=False, this does not happen. Is this normal? What might be causing it?
st175620
I’m unable to reproduce the additional memory usage of GPU0 using the DDP example and adding non_blocking=True to the to() operation, so could you post a minimal, executable code snippet, please?
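In the meantime, a rough sketch of the kind of per-device check that would help narrow it down (batch and local_rank are placeholders for your own variables; note that non_blocking copies are only asynchronous when the host tensor is pinned):

import torch

def report(tag):
    # print allocated/reserved memory for every visible GPU
    for d in range(torch.cuda.device_count()):
        print(tag, d,
              torch.cuda.memory_allocated(d) / 2**20,
              torch.cuda.memory_reserved(d) / 2**20)

report("before copy")
batch = batch.to(f"cuda:{local_rank}", non_blocking=True)
report("after copy")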
st175621
Intro: Hello, I heard of the super simple API for data parallelism in PyTorch, so I decided to give it a try, but after profiling I found almost identical results between using & not using the parallelism feature (DESPITE seeing all 4 GPUs active during training). In each instance I get roughly: duration: 56.92420029640198, loss: 2.6403932571411133…
Code Comments: I'm using a Transformer XL pretrained model from Hugging Face & a custom training loop. I tried to keep the info as minimal as possible. The background regarding my training loop is that I was getting memory leaks unless I allocated a reusable mini_batch tensor. I'm concerned this could be related, as I'm not sure how/if I need to distribute this tensor manually to each GPU as well…
Code:

# ...
model = AutoModelWithLMHead.from_pretrained('xlnet-base-cased').to('cpu')
# ...

import torch as pt
from torch import optim

# NOTE: this is the 'memory efficient version' for CUDA
# pad_len := max number of tokens (input sequences padded to this length)
def train_loop(model, input_output_data, epochs=5, batch_size=256, pad_len=200):
    input, output = input_output_data
    n_examples = len(input)
    n_batches = int(n_examples/batch_size+0.99999)
    model.train()  # turn on training
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # helpers for tensor dict manipulation
    slice_inputs = lambda x, a, b: {k:x[k][a:b] for k in x}
    cast_inputs = lambda x, device='cuda': {k:x[k].to(device) for k in x}
    def assign_dict(a,b):
        for k in b:
            a[k][:] = b[k]

    # they must be padded to the same size for batching to work...
    all_inputs = tokenizer(input, return_tensors='pt', padding='max_length', truncation=True, max_length=pad_len)
    all_outputs = tokenizer(output, return_tensors='pt', padding='max_length', truncation=True, max_length=pad_len)
    all_inputs = cast_inputs(all_inputs, 'cpu')
    all_outputs = cast_inputs(all_outputs, 'cpu')

    # The idea was to have a reusable mini-batch tensor to avoid memory leaks...
    inputs = slice_inputs(all_inputs, 0, batch_size)
    outputs = slice_inputs(all_outputs, 0, batch_size)
    inputs = cast_inputs(inputs, 'cuda')
    outputs = cast_inputs(outputs, 'cuda')

    last_loss = None
    for i in range(epochs):
        print(f'epoch: {i+1}/{epochs}')
        for j in range(n_batches):
            optimizer.zero_grad()
            torch.cuda.empty_cache()
            try:
                a, b = (j*batch_size, (j+1)*batch_size)
                assign_dict(inputs, slice_inputs(all_inputs, a, b))
                assign_dict(outputs, slice_inputs(all_outputs, a, b))
                loss = model(**inputs, labels=outputs['input_ids'])[0].mean()
            except Exception as e:
                pdb.set_trace()
                raise
            print(f'batch: {j+1}/{n_batches}, loss: {loss}')
            loss.backward()
            optimizer.step()
            last_loss = loss.item()
    return last_loss

Here is the code I use to actually perform the training (I switch model=serial_model to model=fast_model for comparison):

# Train and Profile
import time
import torch as pt

fast_model = pt.nn.DataParallel(model.to('cuda'))
serial_model = model.to('cuda:0')
model = serial_model  # switch to fast_model for comparison

start = time.time()
final_loss = train_loop(model, (input, output), batch_size=10, epochs=1)
duration = time.time() - start
print(f'duration: {duration}, loss: {final_loss}')

P.S. Let me know if you want the full code, I tried to keep it minimal. Thanks for your help!
st175622
Solved by eqy in post #8 At this stage it might be useful to drill down on the distribution of how time is spent in the training loop by adding a bunch of time.time() statements and narrowing down what the relative cost of each operation is. Note that to do this you would want to add torch.cuda.synchronize before and after …
st175623
What is the amount of time spent loading data vs. doing computation on each of the GPUs? Additionally, the use of torch.cuda.empty_cache() can add overhead if it is done for every batch. Is there some kind of issue where other processes are using the same GPU? Otherwise, this seems like it would be unnecessary.
st175624
eqy: What is the amount of time spent loading data vs. doing computation on each of the GPUs? I’m not sure exactly, judging based on nvidia-smi I’d estimate 50% time spent loading and 50% doing computation (this estimate is based on % of time I saw each GPU being utilized). eqy: Additionally, the use of torch.cuda.empty_cache() can add overhead if it is done for every batch. You’re right it is probably not necessary it is just a remnant of a previous solution I tried to deal with a memory leak (which is now gone). I will retest without clear cache and get back to you.
st175625
In that case you might see more improvement if you parallelize the data loading time. Have you tried using the dataloaders: torch.utils.data — PyTorch 1.9.0 documentation 5 for this part?
st175626
No I haven't, I will try that next, thanks! That said, I have already tested preloading the entire dataset onto the GPUs. So aside from data loading, I know PyTorch recommends using DistributedDataParallel, but what else could be causing such small speed gains from 4x parallelism?
st175627
There may indeed be a bottleneck from using one process instead of one process per GPU, but seeing no speedup suggests that it is indeed something else that is bottlenecking the GPUs… Generally it is preferable to get close to 100% utilization on a single GPU before moving to multiple GPUs.
st175628
I am actually seeing some speedup, just not as much as I expected… I am currently testing preloading the data onto the GPUs (slightly faster than parallel data loading, I imagine, but less scalable), and I am getting a time reduction from 42 secs → 25 secs per epoch (thanks to your help!). That's almost 2x the speed with 4x GPUs; not bad, but I'm wondering if I should be seeing more? P.S. I am seeing 100% GPU utilization on each GPU in spurts, but not consistently.
st175629
At this stage it might be useful to drill down on the distribution of how time is spent in the training loop by adding a bunch of time.time() statements and narrowing down what the relative cost of each operation is. Note that to do this you would want to add torch.cuda.synchronize before and after starting timing for parts containing GPU operations to ensure that the timing information is accurate. Once you’ve optimized the bottlenecks, you would then want to remove these synchronize calls to reduce the overhead.
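For example, a rough sketch of such instrumentation (the stage markers are placeholders for the poster's own loop; remove the synchronize calls again once the bottleneck is found):

import time
import torch

torch.cuda.synchronize()
t0 = time.time()
# ... data preparation / host-to-device copies ...
torch.cuda.synchronize()
t1 = time.time()
# ... forward pass + loss ...
torch.cuda.synchronize()
t2 = time.time()
# ... backward pass + optimizer.step() ...
torch.cuda.synchronize()
t3 = time.time()
print(f"data: {t1 - t0:.3f}s  forward: {t2 - t1:.3f}s  backward/step: {t3 - t2:.3f}s")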
st175630
I have a not-that-complex model, but it outputs this error when wrapped with DDP:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel;

Then, to find out which param is causing the issue, I turned on find_unused_parameters=True, and the error went away, while I was expecting to see a list of parameters. This is PyTorch 1.8. Does anyone have any clues and an explanation of why this could happen? This may not be specific enough, but I'd like to see if anyone has faced the same issue.
st175631
find_unused_parameters=True can properly take care of unused parameters and sync them, so it fixes the error. In PT 1.9, if your application has unused parameters and you set find_unused_parameters=False, you will see the error message that includes the indices of the unused parameters
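For reference, a minimal toy reproduction of the behavior (single process, gloo backend; the module name is made up):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

class ModelWithUnusedParam(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.used = torch.nn.Linear(10, 10)
        self.unused = torch.nn.Linear(10, 10)   # never called in forward -> unused parameters
    def forward(self, x):
        return self.used(x)

if __name__ == "__main__":
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29502")
    dist.init_process_group("gloo", rank=0, world_size=1)
    ddp = DDP(ModelWithUnusedParam(), find_unused_parameters=True)
    for _ in range(2):
        ddp(torch.randn(4, 10)).sum().backward()
        ddp.zero_grad()
    # with find_unused_parameters=False, the second iteration would raise the
    # "Expected to have finished reduction in the prior iteration" error quoted above
    dist.destroy_process_group()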
st175632
hi, i have a large float cuda tensor v = (32, 6, 59536). applying torch.argsort(v, dim=1, descending=True) takes 14ms (argsort). it is called twice, so it weighs on the runtime. is there any way to speed up this op? the same question goes for torch.sort.
here is a weird behavior: the runtime of torch.sort depends on the sorting axis!!!

import numpy as np
import torch


def main(axis=-1):
    seed = 0
    torch.manual_seed(seed)
    np.random.seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

    cpu = np.random.rand(32, 6, 59536).astype(np.float32)
    gpu = torch.tensor(cpu).to(device='cuda:0')

    torch.cuda.synchronize()
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)
    start_event.record()
    np.sort(cpu, axis=axis)[1]
    torch.cuda.synchronize()
    end_event.record()
    torch.cuda.synchronize()
    elapsed_time_ms = start_event.elapsed_time(end_event)
    print('time cpu: {}'.format(elapsed_time_ms))

    torch.cuda.synchronize()
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)
    start_event.record()
    torch.sort(gpu, dim=axis, descending=True).values[1]
    torch.cuda.synchronize()
    end_event.record()
    torch.cuda.synchronize()
    elapsed_time_ms = start_event.elapsed_time(end_event)
    print('time gpu: {}'.format(elapsed_time_ms))


if __name__ == '__main__':
    for axis in [0, 1, 2]:
        print('*************** sorting axis={} *************'.format(axis))
        for i in range(4):
            print('run {}'.format(i))
            main(axis=axis)

output:

*************** sorting axis=0 *************
run 0
time cpu: 272.53204345703125
time gpu: 6.794528007507324
run 1
time cpu: 268.1492614746094
time gpu: 5.733503818511963
run 2
time cpu: 268.2332458496094
time gpu: 5.738399982452393
run 3
time cpu: 271.6305236816406
time gpu: 5.7454400062561035
*************** sorting axis=1 *************
run 0
time cpu: 163.71519470214844
time gpu: 15.351776123046875
run 1
time cpu: 163.94012451171875
time gpu: 15.353407859802246
run 2
time cpu: 165.7310791015625
time gpu: 15.347135543823242
run 3
time cpu: 163.39231872558594
time gpu: 15.351840019226074
*************** sorting axis=2 *************
run 0
time cpu: 817.253173828125
time gpu: 6.221983909606934
run 1
time cpu: 820.2147827148438
time gpu: 5.75600004196167
run 2
time cpu: 817.254638671875
time gpu: 5.731488227844238
run 3
time cpu: 814.9271850585938
time gpu: 5.727200031280518

sorting along the extreme axes seems way faster than sorting along the inner axes, at least for pytorch. numpy behaves differently. any explanation? thanks
st175633
sbelharbi: i have a large float cuda tensor v = (32, 6, 59536). applying torch.argsort(v, dim=1, descending=True) takes 14ms. (argsort) it is called twice. so, it weights in term of runtime. is there any way to speedup this op? You could split the tensor along the other dimension and sort chunks in parallel on different GPUs and then concatenate them together later on a single GPU. sorting using extreme axes seems way faster than sorting using inner axes, at least for pytorch. numpy behaves differently. any explanation? This might have to do with contiguous data layout in case of some sorting axes. For example if you are sorting on the last axis, all the data is contiguous.
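For reference, a hedged sketch of the layout idea: move the sort dimension to the innermost (contiguous) position, sort there, and map the indices back. Whether the extra copy actually pays off has to be benchmarked on the target GPU.

import torch

v = torch.rand(32, 6, 59536, device="cuda")

# direct sort along dim=1
idx_a = torch.argsort(v, dim=1, descending=True)

# same ordering, but the sort runs over a contiguous last dimension
vt = v.transpose(1, 2).contiguous()                               # (32, 59536, 6)
idx_b = torch.argsort(vt, dim=2, descending=True).transpose(1, 2)  # back to (32, 6, 59536)

# both index tensors select the same (descending) values
assert torch.equal(torch.gather(v, 1, idx_a), torch.gather(v, 1, idx_b))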
st175634
hi, i have a c++ loss wrapped in python. here are some stats. in all these cases, ddp is used, but we can choose to use one or two gpus. here we show the forward time in the loss — more specifically, part of the code in the forward. that part operates on cpu, so the gpu is not involved, since we convert the output gpu tensor from the previous computation to cpu().numpy(); then computations are carried out on cpu. time is measured using:

def forward(x):
    torch.cuda.synchronize()
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)
    start_event.record()

    # cpu region ---
    compute_on_cpu()
    # cpu region ---

    end_event.record()
    torch.cuda.synchronize()
    elapsed_time_ms = start_event.elapsed_time(end_event)
    print('time cpu: {}'.format(elapsed_time_ms))

1 gpu:
multi-threading is on: 70ms
multi-threading is off: 500ms
2 gpus:
multi-threading is on: 500ms
multi-threading is off: 500ms

the loss uses the maximum number of threads (openmp omp_get_max_threads()), which is 48 in this case. it looks like when using multiple processes (multi-gpus), multi-threading is not working… any idea why (gil bottleneck???)? and how to fix this? the whole point of using ddp is to speed up computations. thanks
st175635
OpenMP threads are mostly for CPU computation. For GPU computation, you need to make sure tensors and computation are on the GPUs; it will launch CUDA kernels and compute in parallel.
st175636
yes, but this still does not answer the question of why c++ multi-threading seems to be turned off when using ddp + multi-gpus. i don't know whether the threads of each process block each other or it is something else. the rest of the computation is done on gpu, where all tensors live. only compute_on_cpu() is done exclusively on cpu, because it is simply a c++ cpu implementation. i don't have a gpu implementation yet.
st175637
@ptrblck any idea why this is happening? this is the reason i was looking for a cuda extension in the other thread. this is cpu c++ code, and has nothing to do with the other thread. basically, the function compute_on_cpu() calls this c++ function:

github.com meng-tang/rloss/blob/1caa759e568db2c7209ab73e73ac039ea3d7101c/pytorch/wrapper/bilateralfilter/bilateralfilter.cpp#L42

    for(int i=0;i<W*H;i++)
        in_p[i] = in[i+k*W*H];
    lattice.compute(out_p, in_p, 1);
    for(int i=0;i<W*H;i++)
        out[i+k*W*H] = out_p[i];
    }
    delete [] out_p;
    delete [] in_p;
}

void bilateralfilter_batch(float * images, int len_images, float * ins, int len_ins, float * outs, int len_outs,
                           int N, int K, int H, int W, float sigmargb, float sigmaxy){
    const int maxNumThreads = omp_get_max_threads();
    //printf("Maximum number of threads for this machine: %i\n", maxNumThreads);
    omp_set_num_threads(std::min(maxNumThreads,N));
#pragma omp parallel for
    for(int n=0;n<N;n++){
        bilateralfilter(images+n*3*H*W, 3*H*W, ins+n*K*H*W, K*H*W, outs+n*K*H*W, K*H*W, H, W, sigmargb, sigmaxy);

i call this function in pytorch after wrapping it in python using swig. this c++ function creates a parallel region. i assume that each thread will deal with a sample of the minibatch in the loop. this is actually fast (70ms). when using ddp+2gpus, the multi-threading does not seem to work (the call takes 500ms == the time when using only 1 thread). the maximum number of threads on that machine is 48, batch size 32. any idea why? thank you very much for your help!
st175638
I'm not entirely sure if you are seeing a warning or are setting the OMP threads too late, but here omp_num_threads will be set to 1. Could you check if this could also be the root cause of your observation?
st175639
you are so right!!! distributed turned off the multithreading, and the c++ function asks for the max number of threads too late — by then it is 1.

– begin side question
side question: how do i ask ddp to log into a specific file? the ddp logging goes to the terminal, which makes catching warnings difficult; there are a lot of messages, but they are lost. is there a way to ask ddp to write logs to a specific file? or can we properly manipulate the instance of log here to do that? the first time i used ddp, it started throwing logs in the terminal, which is impractical for debugging. there should be a way to tell ddp to log into a file. doc1 and doc2 do not seem to cover this. i didn't investigate further as other things have more priority. thanks

github.com pytorch/pytorch/blob/65e6194aeb3269a182cfe2c05c122159da12770f/torch/distributed/run.py#L325

import torch
from torch.distributed.argparse_util import check_env, env
from torch.distributed.elastic.multiprocessing import Std
from torch.distributed.elastic.multiprocessing.errors import record
from torch.distributed.elastic.rendezvous.utils import _parse_rendezvous_config
from torch.distributed.elastic.utils import macros
from torch.distributed.elastic.utils.logging import get_logger
from torch.distributed.launcher.api import LaunchConfig, elastic_launch

log = get_logger()


def get_args_parser() -> ArgumentParser:
    """Helper function parsing the command line options."""
    parser = ArgumentParser(description="Torch Distributed Elastic Training Launcher")

    #
    # Worker/node size related arguments.
    #

– end side question

so, now i configure export OMP_NUM_THREADS=32 before running, and the runtime is back to 70ms with ddp+2gpus!!! this is cool! thank you very much! this is a life saver!
also, i was getting this warning, which is almost impossible to read because it is printed on the terminal:

*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************

which explains everything! also, i am using distributed.launch, which explains why i am getting this warning:

The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run

in the examples they provided, they use launch. i should probably switch to run, it seems more up to date. also, they used launch in this under Launch utility. probably the doc needs to be updated in the next release. i am using pytorch 1.9.0.
again, thank you so much! this was very helpful!
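for anyone hitting the same thing, a short sketch of the workaround (the thread count and script name are only examples): export the variable before the launcher starts the workers, e.g. OMP_NUM_THREADS=32 python -m torch.distributed.run --nproc_per_node=2 train.py, and optionally verify it inside each worker:

import os
import torch

# torch.distributed.run exports OMP_NUM_THREADS=1 when the user has not set it,
# which is what silently serialized the C++ loss here.
print("rank", os.environ.get("RANK"),
      "OMP_NUM_THREADS =", os.environ.get("OMP_NUM_THREADS"),
      "torch intra-op threads =", torch.get_num_threads())

# torch.set_num_threads() may also help, depending on how the extension queries
# OpenMP; setting the env variable before launch is the reliable fix.
torch.set_num_threads(int(os.environ.get("OMP_NUM_THREADS", "1")))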
st175640
The linked line of code sounds right and you could try to set the --log_dir as described in this argument. Good to hear you’ve solved the threading issue.
st175641
thanks. that seems to be the way to tell run to log to a file. i think i need to go through run.py; its docstring is well documented. i expect this to be released in the next version because it does not exist in the current doc. again, thank you very much for your help!!!
st175642
not urgent, but i tried logging into a file with ddp. it worked partially: something else is still logging to the terminal and not the file. here it is in a separate thread. this may still be under dev. thanks
st175643
Dear PyTorch Community,
When rewriting my library for training I stumbled upon the following problem: when I train with DistributedDataParallel (DDP) I get less accuracy than without. In particular, if I limit DDP to only one node and one GPU, I would expect DDP and non-DDP to give approximately the same result given the same code and hyperparameters. For me it does not.
I distilled the problem into two examples of the Hello World of deep learning: training a super simple feed-forward network on MNIST. It has exactly the same hyperparameters for distributed and non-distributed training and generally the same code, besides the adaptations needed to use the DDP library. Also, all seeds are fixed to make it reproducible. The most astonishing thing is: when I train the DDP model with only one node it still gives about 10% less accuracy. I tried this with different PyTorch versions (1.4, 1.8.1 and 1.9.0) on four different computers (a MacBook Pro on CPU and three different Ubuntu machines: one with a 1080 Ti, one with a 2080 Ti, and a cluster with P100s). So no matter the OS, GPU or CPU, when I limit DDP to one node only and compare it to plain training I get extremely worse accuracy. Also, if I for example use two nodes and double the learning rate with the same batch size, it gives me these worse results.
Now I wonder how that could be. I cannot imagine I am the only one to stumble upon this problem, but if it is a user error, then it would be cool to see what it is. I attached a link to a GitHub repo with both scripts.
This is a link to the repository with the example: GitHub - joergsimon/mnist-distributed-problem: This is a super small repository demonstrating a Problem with DistributedDataParallel. A three layer feed forward neural network is trained with MNIST with and without data parallel with the same hyper parameters. If you configure DistributedDataParalell to use only one node, the model is quite worse in accuracy. If you have any suggestions how to make them equal beside tuning the learning rate please commend or send PR!
Any suggestions would be awesome!
st175644
Someone answered the question on Stack Overflow. Basically, while trying things out, a different batch_size apparently slipped into the two versions. Correcting that gives the same result…
pytorch - Hello World aka. MNIST with feed forward gets less accuracy in comparison of plain with DistributedDataParallel (DDP) model with only one node - Stack Overflow
st175645
I am using DataParallel to train my model on 2 GPUs. However, when calculating the loss, it reports:

RuntimeError: The size of tensor a (8) must match the size of tensor b (16) at non-singleton dimension 0

Here is my model.py:

import torch
import torch.nn as nn
import math
import torch.nn.init as init
import os


class _ResBLockDB(nn.Module):
    def __init__(self, inchannel, outchannel, stride=1):
        super(_ResBLockDB, self).__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(inchannel, outchannel, 3, stride, 1, bias=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, 3, stride, 1, bias=True)
        ])
        for i in self.modules():
            if isinstance(i, nn.Conv2d):
                j = i.kernel_size[0] * i.kernel_size[1] * i.out_channels
                i.weight.data.normal_(0, math.sqrt(2 / j))
                if i.bias is not None:
                    i.bias.data.zero_()

    def forward(self, x):
        out = self.layers(x)
        residual = x
        out = torch.add(residual, out)
        return out


class _ResBlockSR(nn.Module):
    def __init__(self, inchannel, outchannel, stride=1):
        super(_ResBlockSR, self).__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(inchannel, outchannel, 3, stride, 1, bias=True),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(outchannel, outchannel, 3, stride, 1, bias=True)
        ])
        for i in self.modules():
            if isinstance(i, nn.Conv2d):
                j = i.kernel_size[0] * i.kernel_size[1] * i.out_channels
                i.weight.data.normal_(0, math.sqrt(2 / j))
                if i.bias is not None:
                    i.bias.data.zero_()

    def forward(self, x):
        out = self.layers(x)
        residual = x
        out = torch.add(residual, out)
        return out


class _DeblurringMoudle(nn.Module):
    def __init__(self):
        super(_DeblurringMoudle, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, (7, 7), 1, padding=3)
        self.relu = nn.LeakyReLU(0.2, inplace=True)
        layers = []
        for i in range(0, 6):
            layers.append(_ResBLockDB(64, 64))
        self.resBlock1 = nn.ModuleList(layers)
        # self.resBlock1 = self._makelayers(64, 64, 6)
        self.conv2 = nn.ModuleList([
            nn.Conv2d(64, 128, (3, 3), 2, 1),
            nn.ReLU(inplace=True)
        ])
        layers = []
        for i in range(0, 6):
            layers.append(_ResBLockDB(128, 128))
        self.resBlock2 = nn.ModuleList(layers)
        # self.resBlock2 = self._makelayers(128, 128, 6)
        self.conv3 = nn.ModuleList([
            nn.Conv2d(128, 256, (3, 3), 2, 1),
            nn.ReLU(inplace=True)
        ])
        layers = []
        for i in range(0, 6):
            layers.append(_ResBLockDB(256, 256))
        self.resBlock3 = nn.ModuleList(layers)
        # self.resBlock3 = self._makelayers(256, 256, 6)
        self.deconv1 = nn.ModuleList([
            nn.ConvTranspose2d(256, 128, (4, 4), 2, padding=1),
            nn.ReLU(inplace=True)
        ])
        self.deconv2 = nn.ModuleList([
            nn.ConvTranspose2d(128, 64, (4, 4), 2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, (7, 7), 1, padding=3)
        ])
        self.convout = nn.ModuleList([
            nn.Conv2d(64, 64, (3, 3), 1, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, (3, 3), 1, 1)
        ])
        for i in self.modules():
            if isinstance(i, nn.Conv2d):
                j = i.kernel_size[0] * i.kernel_size[1] * i.out_channels
                i.weight.data.normal_(0, math.sqrt(2 / j))
                if i.bias is not None:
                    i.bias.data.zero_()

    # def _makelayers(self, inchannel, outchannel, block_num, stride=1):
    #     layers = []
    #     for i in range(0, block_num):
    #         layers.append(_ResBLockDB(inchannel, outchannel))
    #     return nn.Sequential(*layers)

    def forward(self, x):
        con1 = self.relu(self.conv1(x))
        res1 = self.resBlock1(con1)
        res1 = torch.add(res1, con1)
        con2 = self.conv2(res1)
        res2 = self.resBlock2(con2)
        res2 = torch.add(res2, con2)
        con3 = self.conv3(res2)
        res3 = self.resBlock3(con3)
        res3 = torch.add(res3, con3)
        decon1 = self.deconv1(res3)
        deblur_feature = self.deconv2(decon1)
        deblur_out = self.convout(torch.add(deblur_feature, con1))
        return deblur_feature, deblur_out


class _SRMoudle(nn.Module):
    def __init__(self):
        super(_SRMoudle, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, (7, 7), 1, padding=3)
        self.relu = nn.LeakyReLU(0.2, inplace=True)
        layers = []
        for i in range(0, 8):
            layers.append(_ResBlockSR(64, 64))
        self.resBlock = nn.ModuleList(layers)
        # self.resBlock = self._makelayers(64, 64, 8, 1)
        self.conv2 = nn.Conv2d(64, 64, (3, 3), 1, 1)
        for i in self.modules():
            if isinstance(i, nn.Conv2d):
                j = i.kernel_size[0] * i.kernel_size[1] * i.out_channels
                i.weight.data.normal_(0, math.sqrt(2 / j))
                if i.bias is not None:
                    i.bias.data.zero_()

    # def _makelayers(self, inchannel, outchannel, block_num, stride=1):
    #     layers = []
    #     for i in range(0, block_num):
    #         layers.append(_ResBlockSR(inchannel, outchannel))
    #     return nn.Sequential(*layers)

    def forward(self, x):
        con1 = self.relu(self.conv1(x))
        res1 = self.resBlock(con1)
        con2 = self.conv2(res1)
        sr_feature = torch.add(con2, con1)
        return sr_feature


class _GateMoudle(nn.Module):
    def __init__(self):
        super(_GateMoudle, self).__init__()
        self.conv1 = nn.Conv2d(131, 64, (3, 3), 1, 1)
        self.relu = nn.LeakyReLU(0.2, inplace=True)
        self.conv2 = nn.Conv2d(64, 64, (1, 1), 1, padding=0)
        for i in self.modules():
            if isinstance(i, nn.Conv2d):
                j = i.kernel_size[0] * i.kernel_size[1] * i.out_channels
                i.weight.data.normal_(0, math.sqrt(2 / j))
                if i.bias is not None:
                    i.bias.data.zero_()

    def forward(self, x):
        con1 = self.relu(self.conv1(x))
        scoremap = self.conv2(con1)
        return scoremap


class _ReconstructMoudle(nn.Module):
    def __init__(self):
        super(_ReconstructMoudle, self).__init__()
        layers = []
        for i in range(0, 8):
            layers.append(_ResBLockDB(64, 64))
        self.resBlock = nn.ModuleList(layers)
        # self.resBlock = self._makelayers(64, 64, 8)
        self.conv1 = nn.Conv2d(64, 256, (3, 3), 1, 1)
        self.pixelShuffle1 = nn.PixelShuffle(2)
        self.relu1 = nn.LeakyReLU(0.1, inplace=True)
        self.conv2 = nn.Conv2d(64, 256, (3, 3), 1, 1)
        self.pixelShuffle2 = nn.PixelShuffle(2)
        self.relu2 = nn.LeakyReLU(0.2, inplace=True)
        self.conv3 = nn.Conv2d(64, 64, (3, 3), 1, 1)
        self.relu3 = nn.LeakyReLU(0.2, inplace=True)
        self.conv4 = nn.Conv2d(64, 3, (3, 3), 1, 1)
        for i in self.modules():
            if isinstance(i, nn.Conv2d):
                j = i.kernel_size[0] * i.kernel_size[1] * i.out_channels
                i.weight.data.normal_(0, math.sqrt(2 / j))
                if i.bias is not None:
                    i.bias.data.zero_()

    # def _makelayers(self, inchannel, outchannel, block_num, stride=1):
    #     layers = []
    #     for i in range(0, block_num):
    #         layers.append(_ResBLockDB(inchannel, outchannel))
    #     return nn.Sequential(*layers)

    def forward(self, x):
        res1 = self.resBlock(x)
        con1 = self.conv1(res1)
        pixelshuffle1 = self.relu1(self.pixelShuffle1(con1))
        con2 = self.conv2(pixelshuffle1)
        pixelshuffle2 = self.relu2(self.pixelShuffle2(con2))
        con3 = self.relu3(self.conv3(pixelshuffle2))
        sr_deblur = self.conv4(con3)
        return sr_deblur


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.deblurMoudle = _DeblurringMoudle()
        self.srMoudle = _SRMoudle()
        self.geteMoudle = _GateMoudle()
        self.reconstructMoudle = _ReconstructMoudle()
        # self.deblurMoudle = self._make_net(_DeblurringMoudle)
        # self.srMoudle = self._make_net(_SRMoudle)
        # self.geteMoudle = self._make_net(_GateMoudle)
        # self.reconstructMoudle = self._make_net(_ReconstructMoudle)

    def forward(self, x, gated, isTest):
        if isTest == True:
            origin_size = x.size()
            input_size = (math.ceil(origin_size[2]/4)*4, math.ceil(origin_size[3]/4)*4)
            out_size = (origin_size[2]*4, origin_size[3]*4)
            x = nn.functional.upsample(x, size=input_size, mode='bilinear')
        deblur_feature, deblur_out = self.deblurMoudle(x)
        sr_feature = self.srMoudle(x)
        if gated == True:
            scoremap = self.geteMoudle(torch.cat((deblur_feature, x, sr_feature), 1))
        else:
            scoremap = torch.cuda.FloatTensor().resize_(sr_feature.shape).zero_()+1
        repair_feature = torch.mul(scoremap, deblur_feature)
        fusion_feature = torch.add(sr_feature, repair_feature)
        recon_out = self.reconstructMoudle(fusion_feature)
        if isTest == True:
            recon_out = nn.functional.upsample(recon_out, size=out_size, mode='bilinear')
        return deblur_out, recon_out

It seems that I did not use any view(-1, xxx) function as in https://stackoverflow.com/questions/56719867/pytorch-expected-input-batch-size-12-to-match-target-batch-size-64 and in https://discuss.pytorch.org/t/valueerror-expected-input-batch-size-324-to-match-target-batch-size-4/24498/3. I am really confused. Can anyone help? Thanks in advance.
st175646
Here is my training procedure:

def train(train_gen, model, criterion, optimizer, epoch, contrast_w=0):
    epoch_loss = 0
    for iteration, batch in enumerate(train_gen, 1):
        torch.cuda.empty_cache()
        #input, targetdeblur, targetsr
        LR_Blur = batch[0]
        LR_Deblur = batch[1]
        HR = batch[2]
        LR_Blur = LR_Blur.to(device)
        LR_Deblur = LR_Deblur.to(device)
        HR = HR.to(device)

        if opt.isTest == True:
            test_Tensor = torch.cuda.FloatTensor().resize_(1).zero_()+1
        else:
            test_Tensor = torch.cuda.FloatTensor().resize_(1).zero_()
        if opt.gated == True:
            gated_Tensor = torch.cuda.FloatTensor().resize_(1).zero_()+1
        else:
            gated_Tensor = torch.cuda.FloatTensor().resize_(1).zero_()

        print("Before", iteration, LR_Blur.shape, HR.shape, LR_Deblur.shape)
        [lr_deblur, sr] = model(LR_Blur, gated_Tensor, test_Tensor)
        print("after", lr_deblur.shape, sr.shape, LR_Blur.shape, gated_Tensor.shape, test_Tensor.shape)

        loss1 = criterion(lr_deblur, LR_Deblur)
        loss2 = criterion(sr, HR)
        mse = loss2 + opt.lambda_db * loss1

I print the shape of the data before and after entering the model. Also, I print the shape at the beginning and the end of the forward function of Net:

def forward(self, x, gated, isTest):
    print("IN model", x.shape, gated.shape)
    ......
    ......
    ......
    print("[return]", deblur_out.shape, recon_out.shape)
    return deblur_out, recon_out

Here are the outputs:

Before 1 torch.Size([16, 3, 24, 24]) torch.Size([16, 3, 96, 96]) torch.Size([16, 3, 24, 24])
IN model torch.Size([8, 3, 24, 24]) torch.Size([1])
[return] torch.Size([8, 3, 24, 24]) torch.Size([8, 3, 96, 96])
after torch.Size([8, 3, 24, 24]) torch.Size([8, 3, 96, 96]) torch.Size([16, 3, 24, 24]) torch.Size([1]) torch.Size([1])
st175647
wyboooo: Size([8, 3, 24, 24]) torch.Size([8, 3, 96, 96])

Here is the complete error message:

Traceback (most recent call last):
  File "train_GFN_4x.py", line 191, in <module>
    train(trainloader, model, criterion, optimizer, epoch, opt.contrast_w)
  File "train_GFN_4x.py", line 118, in train
    loss1 = criterion(lr_deblur, LR_Deblur)
  File "/usr/local/conda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/conda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 445, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "/usr/local/conda/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 2647, in mse_loss
    expanded_input, expanded_target = torch.broadcast_tensors(input, target)
  File "/usr/local/conda/envs/py36/lib/python3.6/site-packages/torch/functional.py", line 65, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)
RuntimeError: The size of tensor a (8) must match the size of tensor b (16) at non-singleton dimension 0
st175648
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if opt.resume:
    if os.path.isfile(opt.resume):
        print("Loading from checkpoint {}".format(opt.resume))
        model = torch.load(opt.resume)
        model.load_state_dict(model.state_dict())
        opt.start_training_step, opt.start_epoch = which_trainingstep_epoch(opt.resume)
else:
    model = Net()
    mkdir_steptraing(opt.model_name)

model = model.to(device)
model = nn.DataParallel(model, device_ids=range(opt.n_GPUs))
st175649
No. When running my code without DataParallel, I do not have this error. My code can execute correctly.
st175650
Also, if I pass a different batch size and run on a single GPU, it is still correct. So I think I did not implicitly specified a specific batch size in my code.
st175651
I tried to find out whether this error is caused by my model.py, so I changed my model to a very simple CNN network. Here is my code:

import torch
import torch.nn as nn


class DemoModel(nn.Module):
    def __init__(self):
        super(DemoModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 7, 2, padding=1)
        self.conv2 = nn.Conv2d(64, 64, 3, 2, padding=1)
        self.conv3 = nn.Conv2d(64, 3, 1, padding=1)

    def forward(self, x, not_used1, not_used2):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.conv3(out)
        return x, out


model = DemoModel()
model = model.to(device)
model = nn.DataParallel(model, device_ids=range(opt.n_GPUs))

However, it also causes the same error when using two GPUs:

RuntimeError: The size of tensor a (8) must match the size of tensor b (16) at non-singleton dimension 0

Could this error be related to h5 files? Because I read my data from a .h5 file.
st175652
I modified my dataset and read data from folders but it still failed. I think there is nothing wrong with my device because when using DataParallel in other projects, it worked well.
st175653
I finally found the problem. It is due to this line: test_Tensor = torch.cuda.FloatTensor().resize_(1).zero_()+1
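For context: nn.DataParallel scatters every tensor argument along dim 0, and a tensor with only one element along that dim yields a single chunk, which appears to restrict the forward pass to a single replica — matching the single "IN model" print and the batch of 8 instead of 16. A sketch of one workaround is to pass plain Python flags, which are replicated to every replica instead of scattered (opt, model and LR_Blur as in the training code above):

gated_flag = bool(opt.gated)    # plain bools/ints are copied to every replica, not scattered
test_flag = bool(opt.isTest)
[lr_deblur, sr] = model(LR_Blur, gated_flag, test_flag)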
st175654
hi, what is the proper way to troubleshoot the gain in speed with ddp (DistributedDataParallel) in order to find bottlenecks?
i have 2 different models: model 1 (resnet) and model 2 (unet). size of model 2 (23 million) > size of model 1 (32 million).
in terms of data, model 2 does an additional disk access to load more sample-related data, while model 1 only loads the basic data.
we consider using either a single gpu or 2 gpus. when using 2 gpus, model 1 gains 50% in speed compared to a single gpu. on the other hand, model 2 gains only 15%.
i am using a dataloader with distributedsampler and 4 workers per process. every process uses only one gpu. both gpus are in the same machine. nccl backend.
thanks
st175655
did some timing. using multi-gpus makes one type of loss act up. the loss runs on cpu, so i am not sure why that is. when using a single gpu, the loss takes 70ms to forward. when using multi-gpus, it takes 600ms. looking into why this is happening. the observed gain in time of 15% came from the validation, which is done on multi-gpu. thanks
st175656
it is a custom c++ cpu-loss provided in another work. currently looking for a gpu 1 version. from initial run, gpu loss does not seem to be faster than the multi-threaded c++ cpu implementation. still looking to find why and how to further speed up the gpu version. thanks
st175657
I'm converting a DataParallel (DP) model into DistributedDataParallel (DDP). However, DDP degrades the testing performance and leads to zigzag loss curves during training. Below is the loss curve during training (blue is DDP and orange is DP):
企业微信截图_163056778822342131×414 24.9 KB
In general, the two behave similarly. But when zooming in, the DDP curve presents periodic zigzags:
企业微信截图_163056796453662150×402 33.5 KB
My training configurations are:
DP: batch size 48, 3 GPUs
DDP: batch size per GPU 16, 3 GPUs
Random seeds are fixed, and the learning rate and all other configurations are the same.
st175658
DP's loss is aggregated over the full batch of 48, while DDP's loss is calculated independently on each rank's batch of 16, so they are expected to be different if you are looking at the loss on rank 0.
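For reference, a small sketch of how the DDP run could log a loss that is comparable to the DP one (step is a placeholder for your own counter):

import torch.distributed as dist

loss_for_log = loss.detach().clone()
dist.all_reduce(loss_for_log, op=dist.ReduceOp.SUM)   # sum the per-rank losses
loss_for_log /= dist.get_world_size()                 # mean over all 3 ranks ~ DP's batch-48 loss
if dist.get_rank() == 0:
    print(f"step {step}: mean loss over all ranks = {loss_for_log.item():.4f}")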
st175659
Would it be possible to do DDP multi-node, multi-GPU training if, e.g., the 1st node has 4 GPUs, the 2nd node has 4 GPUs, and the 3rd has 2 GPUs?
st175660
Hi. I'm trying to use DistributedDataParallel on a single-GPU node, for practice. I checked my model's size right after initializing it with sum([p.numel()*4 for p in model.parameters()])/1024/1024, and it says 1140.xxx. I then wrapped the model with DistributedDataParallel via dmodel = DistributedDataParallel(module=model, ...), checked the memory allocated with torch.cuda.memory_allocated()/1024/1024, and it says 2281.xx, which is almost double my model size. I thought it was due to keeping the separate variable dmodel in addition to model, so I retried wrapping the model without the extra variable, model = DistributedDataParallel(module=model, ...), but torch.cuda.memory_allocated()/1024/1024 still says 2281.xx. Is it expected behavior that the memory consumption for the model is doubled when using DDP?
st175661
Solved by ptrblck in post #5 That’s great debugging! I’ve checked the behavior with @mcarilli and he confirms that the Reducer will create gradient buckets for each parameter, so that the memory usage after wrapping the model into DDP will be 2 x model_parameter_size. Note that the parameter size of a model is often much smal…
st175662
Could you show how you are using DDP on a single device, please? Based on the memory usage you are seeing, I would guess you might be creating two processes on the same device, which would then also initialize two models.
st175663
Thank you for your endless help, @ptrblck. I'm referencing the way of using DDP from this repository. And just in case, let me tell you that my model contains a BiLSTM, and I'm also using a quantizer (but it is skipped in the abstract code below).

Abstract of my work's __main__.py. The order of instance initialization is identical to my source code.

import torch
from torch import distributed
from torch.nn.parallel import DistributedDataParallel


def train(args):
    device = torch.device(f'cuda:{args.rank}')
    torch.cuda.set_device(device=device)
    distributed.init_process_group(
        backend='nccl',
        init_method=f'tcp://{args.master_url}',
        world_size=args.world_size,
        rank=args.rank,
    )

    # Instantiate my torch.utils.data.Dataset object
    train_dataset = MyDataset()

    # Instantiate my model
    model = MyModule()
    model.to(device)

    # Augmentation modules which have no parameters.
    augmentations = [
        Augmentation1(),
        Augmentation2(),
        Augmentation3(),
    ]
    augmentations = torch.nn.Sequential(*augmentations)
    augmentations.to(device)

    # And instantiate etc.
    optimizer = ...
    criterion = ...

    # I've checked the memory usage here, and it says 1140.xx MiB.

    # Wrap the model with DDP.
    model = DistributedDataParallel(
        module=model,
        device_ids=[torch.cuda.current_device()],
        output_device=torch.cuda.current_device(),
    )

    # I've checked the memory usage here again, and it says 2281.xx MiB.

    # Instantiate my Trainer class, whose abstract is below.
    trainer = Trainer(
        model=model,
        dataset=train_dataset,
        criterion=criterion,
        optimizer=optimizer,
        batch_size=batch_size,
        num_workers=args.num_workers,
        device=device,
        world_size=args.world_size,
    )

    for epoch in range(epoch):
        for metrics in trainer.train(epoch):  # train 1 epoch
            print(metrics)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--config_file", type=str)
    parser.add_argument("--epochs", type=int)
    parser.add_argument("--num_workers", type=int)
    parser.add_argument("--rank", type=int, default=-1)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_url", type=str)
    args = parser.parse_args()

    train(args)

Abstract of my Trainer class:

class Trainer:
    def __init__(self, [GIVEN ARGS]):
        self.model = model
        self.dataset = dataset
        self.augmentations = augmentations
        self.criterion = criterion
        self.num_workers = num_workers // world_size
        self.device = device
        self.world_size = world_size
        self.batch_size = batch_size // world_size

        self.sampler = DistributedSampler(
            dataset=dataset,
            shuffle=dataset.is_trainset(),
        )
        self.dataloader = DataLoader(
            dataset=self.dataset,
            batch_size=self.batch_size if dataset.is_trainset() else 1,
            num_workers=self.num_workers,
            pin_memory=True,
            drop_last=dataset.is_trainset() is False,
            sampler=self.sampler
        )

    def train(epoch):
        self.model.train()
        self.sampler.set_epoch(epoch)
        for x, y in self.dataloader:
            self.optimizer.zero_grad()
            x = x.to(self.device, non_blocking=True)
            y = y.to(self.device, non_blocking=True)
            x = self.augmentations(x)
            y_hat = self.model(x)
            cost = self.criterion(input=y, target=y_hat)
            cost.backward()
            self.optimizer.step()
            del x, y, y_hat
            yield cost.item()

run.py for running the processes. The script is quite identical to that of the referenced repository.

def main():
    args = sys.argv[1:]  # arguments for __main__.py
    gpus = torch.cuda.device_count()  # supposed to be 1 in my case.
    free_port = get_free_port()
    master_url = f'127.0.0.1:{free_port}'
    args += ["--world_size", str(gpus), "--master_url", f"127.0.0.1:{port}"]

    tasks = []
    for gpu in range(gpus):
        if gpu > 0:
            tasks.append(sp.Popen(["python3", "-m", "my_model"] + args + ["--rank", str(gpu)]))
            tasks[-1].rank = gpu

    while tasks:
        for task in tasks:
            try:
                exitcode = task.wait(0.1)
            except sp.TimeoutExpired:
                continue
            else:
                tasks.remove(task)
                if exitcode:
                    print(f"Task {task.rank} died with exit code "
                          f"{exitcode}", file=sys.stderr)
                    failed = True
            if failed:
                break
        if failed:
            for task in tasks:
                task.terminate()
            sys.exit(1)


if __name__ == "__main__":
    main()

Thanks again.
st175664
I've debugged my script line by line and found that the allocated memory gets doubled when torch.distributed.Reducer is instantiated in the constructor of DistributedDataParallel.
I think the reducer is a necessary component for DDP, because it sums up the results from all the devices. But I don't know how the reducer works, so I still can't understand why the memory gets doubled.
Is it expected behavior that the reducer takes as much additional memory as the local model does?
Does the reducer take the additional memory only on the rank 0 device? I mean, would the additional memory consumption not occur on rank 1 or rank 2? I can't check this because I have only one GPU.
st175665
helloybz: I’ve debugged my script line by line, and found that the allocated memory get doubled when torch.distributed.Reducer is instantiated in the constructor of DistributedDataParallel. That’s great debugging! I’ve checked the behavior with @mcarilli and he confirms that the Reducer will create gradient buckets for each parameter, so that the memory usage after wrapping the model into DDP will be 2 x model_parameter_size. Note that the parameter size of a model is often much smaller than the activation size so that this memory increase might or might not be significant.
st175666
DDP maintains one bucket buffer that is the same size as the model by default, so it is expected that the memory is double the model size. If you set gradient_as_bucket_view=True, the peak memory allocation will be reduced by around one copy of the model size.
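For reference, a hedged sketch of checking this (model is assumed to already live on the current GPU, as in the snippet above; the exact numbers will differ):

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

print(torch.cuda.memory_allocated() / 1024 / 1024, "MiB: parameters only")
ddp_model = DDP(model, device_ids=[torch.cuda.current_device()],
                gradient_as_bucket_view=True)
print(torch.cuda.memory_allocated() / 1024 / 1024, "MiB: parameters + gradient buckets (~2x)")
# The buckets are still allocated at construction time; the saving from
# gradient_as_bucket_view shows up during backward(), where the .grad tensors
# become views into the buckets instead of a third model-sized allocation.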
st175667
Hi, I'm new to DistributedDataParallel (DDP), and I have lots of problems with this module… I tried to run the code below (from Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.9.0+cu102 documentation):

import os
import tempfile
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim

from torch.nn.parallel import DistributedDataParallel as DDP


def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    # initialize the process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)


def cleanup():
    dist.destroy_process_group()


class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))


def demo_basic(rank, world_size):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)

    # create model and move it to GPU with id rank
    for x in range(1, 10000):
        model = ToyModel().to(rank)
        ddp_model = DDP(model, device_ids=[rank])

        loss_fn = nn.MSELoss()
        optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

        optimizer.zero_grad()
        outputs = ddp_model(torch.randn(20, 10))
        labels = torch.randn(20, 5).to(rank)
        loss_fn(outputs, labels).backward()
        optimizer.step()


def run_demo(demo_fn, world_size):
    mp.spawn(demo_fn, args=(world_size,), nprocs=world_size, join=True)


if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    if n_gpus < 4:
        print(f"Requires at least 4 GPUs to run, but got {n_gpus}.")
    else:
        run_demo(demo_basic, 4)

I run it with python3 new.py. However, when I monitor nvidia-smi, there are 4 processes on GPU 0. Why is there not one process for each GPU device?
Screenshot from 2021-09-06 17-13-06-1708×662 80.9 KB