st176068 | Yes! I met the same problem. It would have saved me a lot of time if I could have searched this post:). And thanks for your suggestion, I will look into loguru for an alternative solution. |
st176069 | @hkz I tested this locally and the issue does seem to be in 1.8.0, but it seems like it should be resolved in 1.9.0 (by [Resubmit] Fix for incorrect usage of logging in torch/distributed/distributed_c10d.py by szmigacz · Pull Request #52757 · pytorch/pytorch · GitHub 3). I see the double log in 1.8.0, but not in 1.9.0. Can you double check your PyTorch install using `python -c 'import torch; print(torch.__version__)'`? |
st176070 | Thanks for your kind reply! I created two conda environments, torch1.6 and torch1.9 respectively.
Running the working example, I can confirm the logging issue exists in torch1.9 while not in torch1.6.
[screenshots of the logging output in torch1.6 and torch1.9]
If I replace logging with loguru as suggested by ibro45, the weird behavior disappeared in torch1.9.
I wonder if we are using the same test case. Can you share your test files? |
st176071 | I was using a simpler test case, this is the script I was running:
import logging
import os
import sys
import torch.distributed as dist
import torch
import torch.nn.parallel
import torch.nn as nn
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
dist.init_process_group('nccl', rank=0, world_size=1)
print(logging.root.hasHandlers())
model = torch.nn.parallel.DistributedDataParallel(nn.Linear(10, 10).cuda(), device_ids=[0])
input = torch.rand(10, 10).cuda()
model(input)
class NoOp:
    def __getattr__(self, *args):
        def no_op(*args, **kwargs):
            """Accept every signature by doing non-operation."""
            pass
        return no_op

def get_logger(log_dir, log_name=None, resume="", is_rank0=True):
    """Get the program logger.

    Args:
        log_dir (str): The directory to save the log file.
        log_name (str, optional): The log filename. If None, it will use the main
            filename with ``.log`` extension. Default is None.
        resume (str): If False, open the log file in writing and reading mode.
            Else, open the log file in appending and reading mode; Default is "".
        is_rank0 (boolean): If True, create the normal logger; If False, create the null
            logger, which is useful in DDP training. Default is True.
    """
    if is_rank0:
        logger = logging.getLogger(__name__)
        logger.setLevel(level=logging.INFO)
        # StreamHandler
        stream_handler = logging.StreamHandler(sys.stdout)
        stream_handler.setLevel(level=logging.INFO)
        logger.addHandler(stream_handler)
        # FileHandler
        mode = "w+" if resume == "False" else "a+"
        if log_name is None:
            log_name = os.path.basename(sys.argv[0]).split(".")[0] + (".log")
        file_handler = logging.FileHandler(os.path.join(log_dir, log_name), mode=mode)
        file_handler.setLevel(level=logging.INFO)
        logger.addHandler(file_handler)
    else:
        logger = NoOp()
    return logger
logger = get_logger('/tmp/', 'foo')
logger.info('hello') |
st176072 | Thanks for sharing! I dove into my test code and found that calling .backward() in the training loop (with >=2 iterations) is the cause of this weird behavior. Based on your minimal test code, I wrote the following script:
import logging
import os
import sys
import torch.distributed as dist
import torch
import torch.nn.parallel
import torch.nn as nn
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
dist.init_process_group('nccl', rank=0, world_size=1)
print(logging.root.hasHandlers())
model = torch.nn.parallel.DistributedDataParallel(nn.Linear(10, 10).cuda(), device_ids=[0])
criterion = nn.CrossEntropyLoss().cuda()
class NoOp:
    def __getattr__(self, *args):
        def no_op(*args, **kwargs):
            """Accept every signature by doing non-operation."""
            pass
        return no_op

def get_logger(log_dir, log_name=None, resume="", is_rank0=True):
    """Get the program logger.

    Args:
        log_dir (str): The directory to save the log file.
        log_name (str, optional): The log filename. If None, it will use the main
            filename with ``.log`` extension. Default is None.
        resume (str): If False, open the log file in writing and reading mode.
            Else, open the log file in appending and reading mode; Default is "".
        is_rank0 (boolean): If True, create the normal logger; If False, create the null
            logger, which is useful in DDP training. Default is True.
    """
    if is_rank0:
        logger = logging.getLogger(__name__)
        logger.setLevel(level=logging.INFO)
        # StreamHandler
        stream_handler = logging.StreamHandler(sys.stdout)
        stream_handler.setLevel(level=logging.INFO)
        logger.addHandler(stream_handler)
        # FileHandler
        mode = "w+" if resume == "False" else "a+"
        if log_name is None:
            log_name = os.path.basename(sys.argv[0]).split(".")[0] + (".log")
        file_handler = logging.FileHandler(os.path.join(log_dir, log_name), mode=mode)
        file_handler.setLevel(level=logging.INFO)
        logger.addHandler(file_handler)
    else:
        logger = NoOp()
    return logger
logger = get_logger('/tmp/', 'foo')
logger.info('hello')
num_epochs = 2
backward = False
logger.info('num_epochs: {}, backward: {}'.format(num_epochs, backward))
for i in range(num_epochs):
    input = torch.rand(10, 10).cuda()
    target = torch.tensor([1]*10).cuda()
    output = model(input)
    loss = criterion(output, target)
    if backward:
        loss.backward()
    logger.info('training...')
pytorch version 1.9.0, num_epochs>=2, backward=True: [screenshot of console output]
pytorch version 1.9.0, num_epochs>=2, backward=False: [screenshot of console output]
pytorch version 1.9.0, num_epochs=1, backward=True: [screenshot of console output]
pytorch version 1.6.0, num_epochs>=2, backward=True: [screenshot of console output] |
st176073 | Hi,
I am exploring the use of DistributedDataParallel to train on two GPUs. In all the examples I have found, the DataLoader and Model are instantiated separately at each rank. Can I create the model and dataloader outside of the multiprocessing.spawn function and pass them as input arguments to multiprocessing.spawn? I mean something like this:
import torch.multiprocessing as mp
loader = DataLoader(dataset, batch_size=128, shuffle=True)
model = MyNet()
if __name__ == '__main__':
    mp.spawn(fit, args=(model, devices, loader), nprocs=len(devices), join=True)
In this case, will a new model and dataloader be created at each rank without shared memory?
I would like to use the same iterable for the dataloader at each rank so each GPU works on the same epoch. Is this possible?
I know that it would be better to use a DistributedSampler, but I am working with graphs and I cannot do so due to the irregular structure of my data. If you know of another similar option, it would be nice to hear. |
st176074 | I think that the model’s parameter tensors will have their data moved to shared memory as per Multiprocessing best practices — PyTorch 1.9.0 documentation 4, so you’d essentially be doing Hogwild training and this could cause issues with DistributedDataParallel as usually the model is instantiated individually on each rank. Is there a reason you can’t simply create the model within each subprocess?
Regarding the question about the data loader, I think if the dataloader is serializable this should work, but it would result in each worker training on the same data, and thus DDP would not really help as you’re using the same data and different model. Is it possible for you to do something like pass in a file handle/file name to each worker, and possibly an index/offset so you can shard your data across multiple workers? |
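For readers wanting a concrete version of the shard-by-offset suggestion above, here is a minimal sketch (the make_rank_loader helper is hypothetical, not from this thread); it assumes the process group is already initialized and simply gives each rank every world_size-th sample of the dataset:
import torch.distributed as dist
from torch.utils.data import DataLoader, Subset

def make_rank_loader(dataset, batch_size=128):
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    indices = list(range(rank, len(dataset), world_size))  # rank, rank+world_size, ...
    shard = Subset(dataset, indices)                        # this rank's portion of the data
    return DataLoader(shard, batch_size=batch_size, shuffle=True)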
st176075 | Hi,
There are 8 GPUs on my computer (1 computer).
I have run 1 process with 2 GPUs (index=0,1).
Could I run 1 more process on this computer with GPUs 3 and 4? |
st176076 | Yes, you could execute multiple scripts and might want to mask the GPUs via CUDA_VISIBLE_DEVICES to make sure the current script sees only the desired devices. |
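As a minimal sketch of the masking suggestion (device indices are illustrative), setting CUDA_VISIBLE_DEVICES before CUDA is initialized makes each job see only its own two GPUs, which then show up as cuda:0 and cuda:1 inside that process:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3,4"  # second job; the first job would use "0,1"
import torch
print(torch.cuda.device_count())  # expected to print 2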
st176077 | Using conda pytorch. Running my code with python -m torch.distributed.launch --use_env --nproc_per_node 2 on a single node with 2 GPUs.
In my code I took care of the logging so that it is only logged by the main process and it used to work for previous PyTorch versions. However, in PyTorch 1.8.0, logging is done with an additional, default-style, logger, both for the main process and the other(s).
This is how it looks in PyTorch 1.7.0:
[2021-03-08 21:03:51,597][midaGAN.utils.environment][INFO] - PyTorch version: 1.7.0
[2021-03-08 21:03:51,598][midaGAN.utils.environment][INFO] - CUDA 10.2 - cuDNN 7605
[2021-03-08 21:03:51,598][midaGAN.utils.environment][INFO] - Global rank: 0.
[2021-03-08 21:03:51,598][midaGAN.utils.environment][INFO] - Local rank: 0.
And this is for PyTorch 1.8.0. We can see the same messages of the main process being logged twice, once with the logger I defined and once with some default-styled logger that does not occur in 1.7.0.
That same default logger also logs the other (non-main) process.
INFO:midaGAN.utils.environment:PyTorch version: 1.8.0
[2021-03-08 21:05:36,843][midaGAN.utils.environment][INFO] - PyTorch version: 1.8.0
INFO:midaGAN.utils.environment:CUDA 10.2 - cuDNN 7605
[2021-03-08 21:05:36,843][midaGAN.utils.environment][INFO] - CUDA 10.2 - cuDNN 7605
INFO:midaGAN.utils.environment:Global rank: 0.
[2021-03-08 21:05:36,844][midaGAN.utils.environment][INFO] - Global rank: 0.
INFO:midaGAN.utils.environment:Local rank: 0.
[2021-03-08 21:05:36,844][midaGAN.utils.environment][INFO] - Local rank: 0.
.
.
.
# NOTE: the default logger is not enabled for other processes, but PyTorch is somehow logging it with a default-style logger
INFO:midaGAN.utils.environment:PyTorch version: 1.8.0
INFO:midaGAN.utils.environment:CUDA 10.2 - cuDNN 7605
INFO:midaGAN.utils.environment:Global rank: 1.
INFO:midaGAN.utils.environment:Local rank: 1.
Am I missing something? |
st176078 | Solved by ChaseMonsterAway in post #10 |
st176079 | Actually, in 1.7.0, after the first iter I see this message INFO:root:Reducer buckets have been rebuilt in this iteration. and then this weird logger starts logging like in 1.8.0. |
st176080 | I don’t use PT Lightning. I based my logger setup on https://github.com/directgroup/direct/blob/d1eb9ba8706b25a7aae19b03587cd79cd3675822/direct/environment.py#L78 3, which calls https://github.com/directgroup/direct/blob/d1eb9ba8706b25a7aae19b03587cd79cd3675822/direct/utils/logging.py#L10 4 to setup the logger. |
st176081 | Is it possible that torch distributed is defining a root logger itself?
The reason why I think that may be the case is because, for example, messages from my code coming through this unwanted logger look like this:
INFO:midaGAN.utils.environment:PyTorch version: 1.8.0
INFO:Validator:Validation started
while the messages from DDP look like this:
INFO:root:Reducer buckets have been rebuilt in this iteration.
INFO:root:Added key: store_based_barrier_key:2 to store for rank: 1
INFO:root:Added key: store_based_barrier_key:2 to store for rank: 0
Could it be something with [DDP Logging] Log comm. hook in ddp logging by rohan-varma · Pull Request #52966 · pytorch/pytorch · GitHub 9? |
st176082 | Also, I checked with one of the persons who developed DIRECT whether they have the same issue, and they do. They didn’t have it before either. |
st176083 | I met the same problem with distributed training when I updated torch to 1.8.0, and I found that the handlers of the RootLogger, which is the parent of the other loggers, are not empty. I checked the code in PyTorch and found it builds a logger with logging.getLogger(), which returns the RootLogger. Thus, when we call ‘logger.info’, it calls the handlers of our logger and then the handlers of the RootLogger.
I solved this problem by emptying the handlers of the RootLogger before building a new logger. |
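A minimal sketch of the fix described above (assuming the duplicate output really comes from handlers attached to the root logger):
import logging

root = logging.getLogger()          # getLogger() with no name returns the RootLogger
for handler in list(root.handlers):
    root.removeHandler(handler)     # empty the root handlers before configuring your own logger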
st176084 | Could you be more specific about how to
I solve this problem by empty the handlers in RootLogger before i build a new one. |
st176085 | Hi,
I’m trying to train two models A and B on 4 GPUs, each being trained on 2 GPUs (and thus DDP is needed). The two models are independent, but they need to exchange some information during training (not gradients), hence I would like to execute a single command torch.distributed.launch so that the workers can communicate to each other. What is the correct way to do it?
Thanks in advance! |
st176086 | Solved by pritamdamania87 in post #2 |
st176087 | You can launch 4 processes (1 per GPU) and initialize a process group of world_size 4. You can use this process group to exchange data between A and B.
Then for each model, create a new subprocess group using Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 11 on GPUs 0,1 and GPUs 2,3 respectively and pass that process group to DDP. |
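A minimal sketch of that setup (rank, model_a and model_b are placeholders, not from the thread):
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl", rank=rank, world_size=4)  # default group over all 4 GPUs, used to exchange data between A and B
group_a = dist.new_group(ranks=[0, 1])  # every rank must call new_group with the same arguments
group_b = dist.new_group(ranks=[2, 3])

if rank in (0, 1):
    ddp_model = DDP(model_a.to(rank), device_ids=[rank], process_group=group_a)
else:
    ddp_model = DDP(model_b.to(rank), device_ids=[rank], process_group=group_b)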
st176088 | The code can be launched on one node with multiple processes correctly. However, when I try to launch the same code on multiple nodes, it fails with the following error.
Traceback (most recent call last):
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py", line 1, in <module>
import run.train
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py", line 3, in <module>
from src.read import *
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/src/read.py", line 275, in <module>
dist.init_process_group(backend=DDP_backend)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 500, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 190, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: Permission denied
Traceback (most recent call last):
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/bin/python3', '-u', '/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/', '--local_rank=1']' returned non-zero exit status 1.
Here are my scripts:
python3 -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr=gpu1 --master_port=22 /share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/
python3 -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=1 --master_addr=gpu1 --master_port=22 /share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/
The “gpu1” is my hostname and I have tried to replace the hostname with IP.
Thanks in advance for any kind help. |
st176089 | I think the problem is that you are using port 22 for master_port. Port 22 is reserved for SSH and usually ports 0-1023 are system ports for which you need root access (probably that is why you see Permission Denied). I’d suggest using a port number > 1024 and ensure no other service is supposed to use that port number. |
st176090 | This can only work when I manually log in to every compute node involved and execute the directive on each compute node
python3 -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr=gpu1 --master_port=1027 /share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/ >out
However, it is very inconvenient to do this in a cluster-management system. Do you have any idea how to submit this with a general script in a cluster-management system? |
st176091 | What sort of cluster management system do you have? Such integrations usually rely on the kind of cluster management system you use. For example there is a PyTorch Kubernetes operator: GitHub - kubeflow/pytorch-operator: PyTorch on Kubernetes 7 |
st176092 | Hi ,
I coded a distributed data parallel program in PyTorch (Windows 10) and everything goes fine without error. When I run it on Linux, it gives me an error:
– Process 0 terminated with the following error:
Traceback (most recent call last):
File “/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py”, line 19, in _wrap
fn(i, *args)
File “/scratch/users/industry/ite/philipso/m/pixelsnail_25062021/main.py”, line 121, in train
rank=args[0] )
File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 455, in init_process_group
barrier()
File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 1960, in barrier
work = _default_pg.barrier()
RuntimeError: flock: Function not implemented
This is how I coded it. I used:
for machine_rank in range(world_size):
    mp.spawn(train, args=(world_size, backend, c_param), nprocs=world_size, join=True)
then followed by:
def train(*args):
    if (args[3].training_type==1):
        torch.distributed.init_process_group(backend=args[2], init_method=r"file:///"+args[3].classifier+".log", world_size=args[1], rank=args[0])
I suspect the problem is in join=True in mp.spawn
Any help will be much appreciated. |
st176093 | This might be a file system related issue (see Run code for training got some errors · Issue #20 · wenet-e2e/wenet · GitHub 4 for a similar issue). What kind of file system are you using for the file in init_method? |
st176094 | The System in Linux is CentOS 6
my laptop is Windows 10
Thank Bro for helping me |
st176095 | It could be possible that the shared file system doesn’t support the flock system call. If you are training on one node, can you use the local file system instead?
If you are training with multiple nodes, you can instead use TCP initialization described here: Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 1. |
st176096 | Also, looking at Mounting a Lustre File System on Client Nodes - Lustre Wiki 3, it seems like there are options to mount Lustre with support for flock, that might be another option. |
st176097 | Hi, thanks.
I have solved all problems except this “flock: function not implemented”. Interestingly, the shared file did get created; however, it did not manage to flock. The problem is with init_process_group. What can I replace this with? |
st176098 | As I mentioned above, you could use TCP initialization as follows:
import torch.distributed as dist
# Use address of one of the machines
dist.init_process_group(backend, init_method='tcp://10.1.1.20:23456',
                        rank=args.rank, world_size=4) |
st176099 | Hi there,
I’m trying to train my network wrapped with DistributedDataParallel on a single machine with 4 GPUs. It went smoothly until the 43rd epoch. The training process was interrupted by CUDA out of memory error on GPU 2.
Traceback (most recent call last):
File "train_ddp.py", line 247, in <module>
trainer.training(epoch)
File "train_ddp.py", line 171, in training
iter_loss.backward()
File "/scratch/workspace/zsding/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/scratch/workspace/zsding/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 752.00 MiB (GPU 2; 15.77 GiB total capacity; 10.24 GiB already allocated; 518.25 MiB free; 785.63 MiB cached)
Then I shrank the input size and resumed from my previous weights to try to debug the memory footprint. The chart below shows that there were three extra Python threads running and occupying 1080 MiB on GPU 2. And I find that they share the same PID as the threads on the other GPUs.
[screenshot: Capture.PNG, GPU memory usage]
And of course, each GPU has only one thread during the first training epoch. No GPU specific operation (like .to(2)) used in my train script, but I applied SyncBatchNorm on my model (can it be the reason?).
How can I figure out what are those three threads? Could you provide some solutions to solve this problem?
Thanks! |
st176100 | Hi, Is it possible for you to provide a snippet of your code/a way to reproduce the issue that you are seeing? Similar to DataParallel imbalanced memory usage 74, it could be the case that the outputs of your forward pass are being gathered onto a single GPU (GPU 2 in your case), causing it to OOM. |
st176101 | Hi, thanks for replying.
Here is the forward pass part. I don’t know if it is helpful.
def training(self, epoch):
    train_loss = 0.0
    self.model.train()
    tbar = tqdm(self.trainloader)
    for i, (image, target) in enumerate(tbar):
        image, target = image.to(self.device), target.to(self.device)
        self.scheduler(self.optimizer, i, epoch, self.best_pred)
        self.optimizer.zero_grad()
        outputs = self.model(image)
        # multi-scale training
        iter_loss = 0
        for logit in outputs:
            _, _, H, W = logit.shape
            labels_ = utils.resize_labels(target, size=(H, W))
            iter_loss += self.criterion(logit.cuda(), labels_.cuda())
        torch.cuda.empty_cache()
        iter_loss.backward()
        self.optimizer.step()
        train_loss += iter_loss.item()
        tbar.set_description('Train loss: %.3f' % (train_loss / (i + 1)))
And there is no gather operation explicitly used in my code. |
st176102 | Hi, thanks for your reply!
Here is the forward pass part in the script. I don’t know whether it is helpful.
def training(self, epoch):
    train_loss = 0.0
    self.model.train()
    tbar = tqdm(self.trainloader)
    for i, (image, target) in enumerate(tbar):
        image, target = image.to(self.device), target.to(self.device)
        self.scheduler(self.optimizer, i, epoch, self.best_pred)
        self.optimizer.zero_grad()
        outputs = self.model(image)
        # Multi-size training
        iter_loss = 0
        for logit in outputs:
            _, _, H, W = logit.shape
            labels_ = utils.resize_labels(target, size=(H, W))
            iter_loss += self.criterion(logit.cuda(), labels_.cuda())
        torch.cuda.empty_cache()
        iter_loss.backward()
        self.optimizer.step()
        train_loss += iter_loss.item()
        tbar.set_description('Train loss: %.3f' % (train_loss / (i + 1)))
And I’ve not applied gather function (like torch.nn.parallel.scatter_gather.gather) explicitly.
thanks! |
st176103 | Sorry for the late reply here. Just to confirm, are you spawning a single process per device (gpu)? |
st176104 | A bit late here, but I had the exact same issue and the problem was that I was loading a state_dict (saved from the device cuda:0) from four different GPUs, and the resulting effect was that all the GPUs were loading the state_dict in the device cuda:0.
I solved it by loading the state_dict with:
torch.load(<state dict file path>, map_location=current_device) |
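A minimal sketch of that per-rank loading (the checkpoint path, local_rank and model are placeholders):
import torch

device = torch.device(f"cuda:{local_rank}")                     # local_rank comes from the launcher
state_dict = torch.load("checkpoint.pth", map_location=device)  # remap cuda:0 tensors to this rank's device
model.load_state_dict(state_dict)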
st176105 | I am trying to train and validate a model using DistributedDataParallel. Everything is fine during training, but when the model starts validating, the code works for several iterations and then crashes due to errors with threads. I do validation only on rank=0. Do I need to put dist.barrier() somewhere? Or do I need to validate on all ranks? |
st176106 | When rank0 validates the model, what do other ranks do? Exit or proceed to the next training phase? And what error did you see?
I am assuming other ranks proceed to the next training phase and then timeout during DDP backward pass. If this is the case, yep, you can try use a dist.barrier() to sync, like:
for _ in range(...):
    train()
    if rank == 0:
        validate()
    dist.barrier()
If you still hit timeout at barrier, you can try setting the timeout arg in init_process_group to a larger value. |
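For reference, a minimal sketch of raising that timeout (the 60-minute value is just an example; for NCCL the timeout is only enforced if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set):
from datetime import timedelta
import torch.distributed as dist

dist.init_process_group("nccl", rank=rank, world_size=world_size,
                        timeout=timedelta(minutes=60))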
st176107 | It seems I solved the problem. I used DistributedSampler in the validation dataloader; after I changed the sampler to None, the code works. The other ranks do nothing. I don’t know how to share validation loss values across ranks, so I do validation only on rank=0. Is it good practice to do so? Also, I am using wandb to monitor training metrics, but after several epochs I got a Timeout error at the barrier. Is it correct to increase the timeout limit in this situation? |
st176108 | Take a look here 169 on how to share information between your ranks.
Since you’re using wandb, I’m assuming that you’re also only logging with rank 0 as well? I wouldn’t say it’s bad practice to share the validation loss with all your ranks if the other ranks aren’t doing anything with the information. It’s important when you’re doing early stopping or learning rate scheduling based off of the validation loss.
Personally, I had all my ranks participate in computing the validation loss. In this scenario, you wouldn’t have to deal with barriers and instead just share the tensors containing your losses. But I don’t think it’s necessarily wrong to increase the timeout limit. |
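A minimal sketch of sharing a per-rank validation loss with all_reduce (variable names are illustrative):
import torch
import torch.distributed as dist

val_loss = torch.tensor([local_val_loss], device=device)  # this rank's validation loss
dist.all_reduce(val_loss, op=dist.ReduceOp.SUM)           # every rank now holds the sum
val_loss /= dist.get_world_size()                         # average across ranks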
st176109 | Thank you, @ayalaa2 ! I understand that if I used all the ranks for validation it would be faster. I will try to share tensor values across ranks.
ayalaa2:
…assuming that you’re also only logging with rank 0 as well?
Yes, I am using wandb only with rank 0. |
st176110 | @ayalaa2 could you give some example code showing how to share data between ranks, please? |
st176111 | RocketFlash:
could you give some example code showing how to share data between ranks, please?
One option is to use the collective communication APIs, e.g., all_gather 25 (NCCL and Gloo) or gather 3 (Gloo). Something like:
# on all ranks
out_tensors = [torch.zeros(2, 2), torch.zeros(2, 2)]
inp_tensor = torch.ones(2, 2)
torch.distributed.all_gather(out_tensors, inp_tensor) |
st176112 | Thank you very much, @mrshenli!
Is it possible to store not only tensors, but also list of strings for example? |
st176113 | Is it possible to store not only tensors, but also list of strings for example?
Yep. But the feature is only available on master (will be released in v1.7). See the following code:
github.com
pytorch/pytorch/blob/cb26661fe4faf26386703180a9045e6ac6d157df/test/test_multiprocessing.py#L580-L600 1
def test_event_multiprocess(self):
    event = torch.cuda.Event(enable_timing=False, interprocess=True)
    self.assertTrue(event.query())
    ctx = mp.get_context('spawn')
    p2c = ctx.SimpleQueue()
    c2p = ctx.SimpleQueue()
    p = ctx.Process(
        target=TestMultiprocessing._test_event_multiprocess_child,
        args=(event, p2c, c2p))
    p.start()
    c2p.get()  # wait for until child process is ready
    torch.cuda._sleep(50000000)  # spin for about 50 ms
    event.record()
    p2c.put(0)  # notify child event is recorded
    self.assertFalse(event.query())
    c2p.get()  # wait for synchronization in child
    self.assertTrue(event.query())
    p.join()
If you need it now, you can copy the code change in distributed_c10d.py from this PR 2. It basically is a wrapper that converts strings to tensors. |
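In released versions (1.7+) the same functionality is exposed as all_gather_object; a minimal sketch, assuming the default process group is already initialized:
import torch.distributed as dist

strings = [None] * dist.get_world_size()
dist.all_gather_object(strings, f"hello from rank {dist.get_rank()}")
# every rank now holds the list of strings contributed by all ranks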
st176114 | Hi, I used code similar to this:
for i in range(self._epoch+1, self._niter+1):
    if self._train_sampler is not None:
        self._train_sampler.set_epoch(i)
    self._train()
    if self.rank==0:
        print('rank {} go to validation'.format(self.rank))
        self._validate()
    if self.distributed:
        print('rank {} go to barrier'.format(self.rank))
        dist.barrier()
        print('rank {} go out of barrier'.format(self.rank))
The output of the code is like:
rank 1 go to barrier
Training…
rank 0 go to validation
start to validate
evaluating…
rank 0 go to barrier
rank 0 go out of barrier
Then it just stopped and rank 1 won’t come out. There is no other error, it just freezes. Do you have any idea what happened? When I remove the validation, the code works. My test dataloader didn’t use the DistributedSampler. |
st176115 | Evaluating with DistributedDataParallel should be done with care otherwise the values could be inaccurate. DistributedSampler can pad some replicated data when the number of samples per process is not even. How DistributedSampler works is explained here 25.
This is because DDP checks synchronization at backprop and the number of minibatches should be the same for all processes. However, at evaluation time this is not necessary.
You can use a custom sampler like DistributedEvalSampler 79 to avoid data padding. Regarding the communication between the DDP processes, you can refer to this example 72. |
st176116 | @mrshenli The dist.barrier() doesn’t work while running validation only on rank 0, as mentioned in this thread 87. |
st176117 | Do you mean how you would go about performing a validation pass using all ranks?
Your validation loop will operate very similar to your training loop where each rank will operate on a subset of the validation dataset. The only difference is that you will want to keep track of the validation metrics you care about and then share the results using the methods in the link I gave.
Sometimes its quicker to just let the main rank perform validation depending on the specific task. |
st176118 | @ayalaa2 thanks for the prompt reply!
Does it mean I need a distributed sampler on the validation set as well?
So, if I want to measure accuracy on my validation, each rank should count the number of “correct” predictions and the number of samples it saw and the “reduce” it?
Would that look something like this:
def validate(val_loader, model):
    # one counter for "correct" the other for "total"
    counter = torch.zeros((2,), device=torch.device(f'cuda:{args.rank}'))
    model.eval()
    with torch.no_grad():
        for x, y in val_loader:  # assuming val_loader has a distributed sampler
            pred = model(x)
            num_correct = my_accuracy_function(pred, y)  # count number of correct predictions
            counter[0] += num_correct
            counter[1] += x.shape[0]  # total number of samples
    # done validating for this view
    torch.distributed.reduce(counter, 0)  # reduce all to rank 0 process
    if args.rank == 0:
        # only this rank reports accuracy
        print(f'total accuracy = {counter[0]}/{counter[1]} = {counter[0]/counter[1]}')
Does that make any sense? |
st176119 | Yes you would need a distributed sampler to make sure each rank gets a unique data split. The code you posted looks about right. |
st176120 | I am training a DDP model. However, the system always reboots.
I use a machine with two RTX3090, 10900k CPU – Ubuntu18, CUDA11.0, Nvidia455.38, Pytorch1.7.1, and Python3.8.
I also tested my code on another machine with two 2080 Ti, and it runs well. |
st176121 | Solved by kaka_zhao in post #4 |
st176122 | @kaka_zhao Regarding system reboots this could be a system issue rather than a PyTorch issue. Probably check system logs like /var/log/messages to see why the system rebooted. |
st176123 | I think this is a hardware issue. I use a 1200w PSU for two RTX3090.
After I limited the GPU power from 350w to 250w by nvidia-smi -i 0,1 -pl 250, the problem has gone away! |
st176124 | Limiting GPU power works for me.
A detailed instruction:
# enable persistence mode
sudo nvidia-smi -pm 1
# limit power from 350W to 250W
sudo nvidia-smi -i 0,1,...,3 -pl 250 |
st176125 | While using PyTorch version 1.9.0, I’m getting an error saying that my tensors are on two different devices. Also, the error trace leads me to the LayerNorm function which has been assigned to the variable h. But when I check -
print(h.is_cuda),
it returns true. Therefore, I’m confused regarding what is causing this error and how to solve it.
File "C:/Users/user/AppData/Roaming/JetBrains/PyCharmCE2020.2/scratches/abc.py", line 206, in forward
h = nn.LayerNorm(h.shape[1])(h)
File "C:\Users\user\anaconda3\envs\paper_2\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\user\anaconda3\envs\paper_2\lib\site-packages\torch\nn\modules\normalization.py", line 174, in forward
input, self.normalized_shape, self.weight, self.bias, self.eps)
File "C:\Users\user\anaconda3\envs\paper_2\lib\site-packages\torch\nn\functional.py", line 2346, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking arugment for argument weight in method wrapper_native_layer_norm)
Update #1:
After following the stack trace, I reached the forward function in normalization.py and checked the variables present over there -
def forward(self, input: Tensor) -> Tensor:
    print("Foo")
    print("Check if weight is CUDA", self.weight.is_cuda)
    print("Check if bias is CUDA", self.bias.is_cuda)
    print("Check if input is CUDA", input.is_cuda)
    #print("Check if normalized shape is CUDA", self.normalized_shape.is_cuda)
    return F.layer_norm(
        input, self.normalized_shape, self.weight, self.bias, self.eps)
Check if weight is CUDA False
Check if bias is CUDA False
Check if input is CUDA True
Therefore, it is the weight and the biases within the layernorm function that are causing this issue. A quick hack done by me to get the function running was as follows. However, I am not sure whether this technique is appropriate -
h = h.to(device='cpu')
h = nn.LayerNorm(h.shape[1])(h)
h = h.to(device='cuda')
Here is a minimally reproducible example -
import math, random
from sklearn.datasets import load_sample_images
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
import torch.nn.functional as F
###Obtaining a random image and preprocessing it!##
dataset = load_sample_images()
first_img_data = dataset.images[0]
first_img_data = first_img_data.reshape(-1, 427, 640)
first_img_data = first_img_data[1, :, :]
first_img_data = first_img_data[0:84, 0:84].reshape(-1, 84,84)
first_img_data = torch.tensor(first_img_data)
#################################################################################################################################
USE_CUDA = torch.cuda.is_available()
Variable = lambda *args, **kwargs: autograd.Variable(*args, **kwargs).cuda() if USE_CUDA else autograd.Variable(*args, **kwargs)
class Cnn(nn.Module):
    def __init__(self, input_shape):
        super(Cnn, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU()
        )
    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        # If you uncomment the line below, it'll throw an error!
        #x = nn.LayerNorm(x.shape[1])(x)
        return x
state = first_img_data
Shape = (1,84, 84)
current_model = Cnn(Shape)
current_model.to('cuda')
state = Variable(torch.FloatTensor(np.float32(state)).unsqueeze(0), volatile=True)
q_value = current_model.forward(state)
P.S There is a similar question over here(pytorch running: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu 8), but I couldn’t obtain an answer by following the steps given. |
st176126 | Solved by desert_ranger in post #6 |
st176127 | ptrblck:
Could you post an executable code snippet
I have extended the question. Please let me know if you still need an executable code snippet - |
st176128 | Yes, an executable code snippet would still be needed, as I cannot reproduce the issue via:
# CPU
input = torch.randn(20, 5, 10, 10)
m = nn.LayerNorm(input.size()[1:])
output = m(input)
# GPU
input = input.cuda()
m.cuda()
output = m(input) |
st176129 | Hello ptrblck,
Thank you for your interest and help. I have included a minimally reproducible example for your reference. Please let me know if you need any more information. |
st176130 | desert_ranger:
LayerNorm
I figured it out. The LayerNorm needs to be declared in the __init__ method, rather than the forward method. |
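A minimal sketch of that fix, assuming the Cnn example above with an 84x84 input: the LayerNorm is created once in __init__, so its weight and bias are registered as module parameters and move to the GPU together with the rest of the model.
import torch.nn as nn

class Cnn(nn.Module):
    def __init__(self, input_shape, flat_dim=3136):  # 64 * 7 * 7 for an 84x84 input
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU())
        self.norm = nn.LayerNorm(flat_dim)  # declared here, so .to('cuda') also moves its weight/bias

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        return self.norm(x)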
st176131 | Hi, I have a problem with using convert_sync_batchnorm. When I was trying to use DDP everything worked fine, but when I turn on the sync_bn mode, the training process starts and gets stuck right away…
Here’s some info
pytorch version 1.8.0
#How I run the script :
python -m torch.distributed.launch \
--nproc_per_node 4 \
--master_addr $master_addr \
--master_port $port train.py \
--train.py
--batch 256 --weights yolov5s.pt --device 0,1,2,3 \
--sync_bn
#How I init :
#note : opt.local_rank is -1 here
if opt.local_rank != -1:
    assert torch.cuda.device_count() > opt.local_rank
    torch.cuda.set_device(opt.local_rank)
    device = torch.device('cuda', opt.local_rank)
    dist.init_process_group(backend='nccl', init_method='env://')
I would like to know why, when I turn on sync batch normalization, the training process stops at the beginning of training… Thanks |
st176132 | I did not see anything wrong with your init_process_group, and I also do not think sync batch norm could cause an initialization failure in common cases.
Do you want to share a reproducible code snippet? |
st176133 | github.com/pytorch/pytorch: Does the NCCL operation use the default stream as other computations? (opened Jun 23, 2021 by lhb8125; labels: module: nccl, triaged)
## ❓ Questions and Help
### Does the overlap occur between communication and computation?
Let's take the NCCL backend as an example, if I launch a collective operation, and then another related computation:
```
dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group)
tensor = tensor * 2
```
Is a CUDA synchronization essential?
### Related documentation
> Synchronous operation - the default mode, when async_op is set to False. When the function returns, it is guaranteed that the collective operation is performed. In the case of CUDA operations, it is not guaranteed that the CUDA operation is completed, since CUDA operations are asynchronous. For CPU collectives, any further function calls utilizing the output of the collective call will behave as expected. For CUDA collectives, function calls utilizing the output on the same CUDA stream will behave as expected. Users must take care of synchronization under the scenario of running under different streams. For details on CUDA semantics such as stream synchronization, see CUDA Semantics. See the below script to see examples of differences in these semantics for CPU and CUDA operations.
Based on the above description, I guess synchronization is unnecessary.
However, my previous investigation shows that the NCCL operations are launched on a separate stream, "ncclStream":
[code](https://github.com/pytorch/pytorch/blob/ed1da5be210c31cc07b033ac0f19f3dd6366feac/torch/lib/c10d/ProcessGroupNCCL.cpp#L1073)
So if I don't specify any stream, will the communication and computation be launched on the same stream? |
st176134 | Responded in Does the NCCL operation use the default stream as other computations? · Issue #60511 · pytorch/pytorch · GitHub 9, please don’t duplicate questions across forums and github. |
st176135 | Is there any issue or performance difference between a single GPU with DDP and a single GPU in a single process? |
st176136 | @Kevin_Cha There shouldn’t be significant overhead, but what is the reason to use ddp with a single gpu? Usually it makes sense to use ddp when training with multiple GPUs. |
st176137 | I am trying to do distributed training with PyTorch and encountered a problem.
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
| distributed init (rank 2): env://
| distributed init (rank 1): env://
| distributed init (rank 3): env://
| distributed init (rank 0): env://
yq01-sys-hic-k8s-k40-0163:3412:3412 [0] NCCL INFO Bootstrap : Using [0]xgbe0:10.88.150.11<0>
yq01-sys-hic-k8s-k40-0163:3412:3412 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
yq01-sys-hic-k8s-k40-0163:3412:3412 [0] NCCL INFO NET/IB : No device found.
yq01-sys-hic-k8s-k40-0163:3412:3412 [0] NCCL INFO NET/Socket : Using [0]xgbe0:10.88.150.11<0>
yq01-sys-hic-k8s-k40-0163:3412:3412 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.2
yq01-sys-hic-k8s-k40-0163:3413:3413 [1] NCCL INFO Bootstrap : Using [0]xgbe0:10.88.150.11<0>
yq01-sys-hic-k8s-k40-0163:3413:3413 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
yq01-sys-hic-k8s-k40-0163:3413:3413 [1] NCCL INFO NET/IB : No device found.
yq01-sys-hic-k8s-k40-0163:3413:3413 [1] NCCL INFO NET/Socket : Using [0]xgbe0:10.88.150.11<0>
yq01-sys-hic-k8s-k40-0163:3413:3413 [1] NCCL INFO Using network Socket
yq01-sys-hic-k8s-k40-0163:3414:3414 [2] NCCL INFO Bootstrap : Using [0]xgbe0:10.88.150.11<0>
yq01-sys-hic-k8s-k40-0163:3414:3414 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
yq01-sys-hic-k8s-k40-0163:3414:3414 [2] NCCL INFO NET/IB : No device found.
yq01-sys-hic-k8s-k40-0163:3414:3414 [2] NCCL INFO NET/Socket : Using [0]xgbe0:10.88.150.11<0>
yq01-sys-hic-k8s-k40-0163:3414:3414 [2] NCCL INFO Using network Socket
yq01-sys-hic-k8s-k40-0163:3415:3415 [3] NCCL INFO Bootstrap : Using [0]xgbe0:10.88.150.11<0>
yq01-sys-hic-k8s-k40-0163:3415:3415 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
yq01-sys-hic-k8s-k40-0163:3415:3415 [3] NCCL INFO NET/IB : No device found.
yq01-sys-hic-k8s-k40-0163:3415:3415 [3] NCCL INFO NET/Socket : Using [0]xgbe0:10.88.150.11<0>
yq01-sys-hic-k8s-k40-0163:3415:3415 [3] NCCL INFO Using network Socket
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO Channel 00/02 : 0 1 2 3
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO Trees [0] -1/-1/-1->3->2|2->3->-1/-1/-1 [1] -1/-1/-1->3->2|2->3->-1/-1/-1
yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO Channel 01/02 : 0 1 2 3
yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO Setting affinity for GPU 0 to 3f
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO Setting affinity for GPU 3 to 0fc0
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO Trees [0] 2/-1/-1->1->0|0->1->2/-1/-1 [1] 2/-1/-1->1->0|0->1->2/-1/-1
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO Setting affinity for GPU 1 to 3f
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO Trees [0] 3/-1/-1->2->1|1->2->3/-1/-1 [1] 3/-1/-1->2->1|1->2->3/-1/-1
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO Setting affinity for GPU 2 to 0fc0
yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO Channel 00 : 0[3000] -> 1[4000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO Channel 00 : 1[4000] -> 2[83000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO Channel 00 : 3[84000] -> 0[3000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO Channel 00 : 2[83000] -> 3[84000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO Channel 00 : 3[84000] -> 2[83000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO Channel 00 : 1[4000] -> 0[3000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO Channel 01 : 0[3000] -> 1[4000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO Channel 01 : 3[84000] -> 0[3000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO Channel 00 : 2[83000] -> 1[4000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO Channel 01 : 1[4000] -> 2[83000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO Channel 01 : 2[83000] -> 3[84000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO Channel 01 : 3[84000] -> 2[83000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO Channel 01 : 1[4000] -> 0[3000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
yq01-sys-hic-k8s-k40-0163:3415:3441 [3] NCCL INFO comm 0x7f40b8000e00 rank 3 nranks 4 cudaDev 3 busId 84000 - Init COMPLETE
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO Channel 01 : 2[83000] -> 1[4000] via direct shared memory
yq01-sys-hic-k8s-k40-0163:3415:3415 [3] enqueue.cc:215 NCCL WARN Cuda failure 'invalid device function'
yq01-sys-hic-k8s-k40-0163:3415:3415 [3] NCCL INFO group.cc:282 -> 1
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
Traceback (most recent call last):
File "main.py", line 420, in <module>
main(args)yq01-sys-hic-k8s-k40-0163:3412:3438 [0] NCCL INFO comm 0x7f5388000e00 rank 0 nranks 4 cudaDev 0 busId 3000 - Init COMPLETE
File "main.py", line 172, in main
utils.init_distributed_mode(args)
File "/root/paddlejob/workspace/deit/utils.py", line 236, in init_distributed_mode
yq01-sys-hic-k8s-k40-0163:3412:3412 [0] NCCL INFO Launch mode Parallel
world_size=args.world_size, rank=args.rank)
File "/usr/local/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group
barrier()
File "/usr/local/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier
yq01-sys-hic-k8s-k40-0163:3412:3412 [0] enqueue.cc:215 NCCL WARN Cuda failure 'invalid device function'
yq01-sys-hic-k8s-k40-0163:3412:3412 [0] NCCL INFO group.cc:282 -> 1
work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:31, unhandled cuda error, NCCL version 2.7.8
Traceback (most recent call last):
File "main.py", line 420, in <module>
main(args)
File "main.py", line 172, in main
utils.init_distributed_mode(args)
File "/root/paddlejob/workspace/deit/utils.py", line 236, in init_distributed_mode
world_size=args.world_size, rank=args.rank)
File "/usr/local/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group
barrier()
File "/usr/local/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:31, unhandled cuda error, NCCL version 2.7.8
yq01-sys-hic-k8s-k40-0163:3413:3439 [1] NCCL INFO comm 0x7fd948000e00 rank 1 nranks 4 cudaDev 1 busId 4000 - Init COMPLETE
yq01-sys-hic-k8s-k40-0163:3413:3413 [1] enqueue.cc:215 NCCL WARN Cuda failure 'invalid device function'
yq01-sys-hic-k8s-k40-0163:3413:3413 [1] NCCL INFO group.cc:282 -> 1
Traceback (most recent call last):
File "main.py", line 420, in <module>
main(args)
File "main.py", line 172, in main
utils.init_distributed_mode(args)
File "/root/paddlejob/workspace/deit/utils.py", line 236, in init_distributed_mode
world_size=args.world_size, rank=args.rank)
File "/usr/local/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group
barrier()
File "/usr/local/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier
work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:31, unhandled cuda error, NCCL version 2.7.8
yq01-sys-hic-k8s-k40-0163:3414:3440 [2] NCCL INFO comm 0x7fe3b0000e00 rank 2 nranks 4 cudaDev 2 busId 83000 - Init COMPLETE
yq01-sys-hic-k8s-k40-0163:3414:3414 [2] enqueue.cc:215 NCCL WARN Cuda failure 'invalid device function'
yq01-sys-hic-k8s-k40-0163:3414:3414 [2] NCCL INFO group.cc:282 -> 1
Traceback (most recent call last):
File "main.py", line 420, in <module>
main(args)
File "main.py", line 172, in main
utils.init_distributed_mode(args)
File "/root/paddlejob/workspace/deit/utils.py", line 236, in init_distributed_mode
world_size=args.world_size, rank=args.rank)
File "/usr/local/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group
barrier()
File "/usr/local/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier
work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:31, unhandled cuda error, NCCL version 2.7.8
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/usr/local/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/_internal/cpython-3.7.0/bin/python3', '-u', 'main.py', '--model', 'deit_small_patch16_224', '--batch-size', '64', '--data-path', 'data/ILSVRC2012', '--output_dir', 'checkpoints']' returned non-zero exit status 1.
Full environment:
node: 1
gpus: 4
gpu: k40m
pytorch version: 1.7.1 |
st176138 | @itisianlee Could you share your complete training script that reproduces the problem? |
st176139 | Hi guys,
I don’t know if someone has asked this before, but I really want to make sure everything I did is correct. Say we have learning rate lr, epoch count e, and batch size b as the normal setting. Now we apply it to DDP with 2 GPUs:
1). If we want the effective batch size to stay unchanged compared with a single card, we don’t change the lr and set b = b/2.
2). If we want to double the effective batch size, given we have 2 GPUs, we don’t need to modify b, but we set lr = lr * 2 because of the larger batch size.
3). Should we modify the epoch number?
4). I’m doing semi-supervised learning, which includes two losses (i.e., a supervised loss and a semi loss). During training, there is a weight applied to the semi-supervised loss; we don’t need to change it, right?
Cheers, |
st176140 | Solved by Yanli_Zhao in post #5
batch size will impact accuracy, this is normal |
st176141 | 1) and 2) are correct.
If you have setting 1), I guess you do not need to modify the epoch number; if you have setting 2), I guess you may need to adjust the number a little bit as it trains faster? |
st176142 | That’s weird. When I double the batch size, the convergence speed of the algorithm goes down… I’m testing b=4 and b=8 (per GPU) on two cards. After the first epoch, the smaller batch size always achieves better accuracy. Is that normal? |
st176143 | Thanks so much for your help.
After the first epoch, the smaller batch size always achieves better accuracy.
Do you have any suggestions for this? |
st176144 | Hi, correct me if I’m wrong, but I found that Dropout behaves similarly (correlated) across different GPUs when using DDP. In other words, cells at the same position in the tensor on different GPUs all get dropped out (or not dropped out) together.
I believe this might make the training loss decrease more slowly than when using single-GPU training.
Is this a bug? And is there a quick fix to make dropout layers operate independently across GPUs?
Example code
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler
from torch.nn.parallel import DistributedDataParallel as DDP
import random
import torch.distributed as dist
import os
def set_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

class CustomDataset(Dataset):
    def __init__(self, size=10) -> None:
        super().__init__()
        self.size=10
    def __getitem__(self, i):
        return torch.Tensor([i]), torch.Tensor([1, i , 2])
    def __len__(self):
        return self.size

class CustomModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(1, 3)
        self.dropout = nn.Dropout(p=0.5)
    def forward(self, x):
        return self.dropout(self.linear1(x))

def main():
    local_process_index = int(os.environ.get("LOCAL_RANK", -1))
    dist.init_process_group(backend="nccl",
                            world_size=2,  # Use 2 GPUs
                            rank=local_process_index)
    set_seed(66)
    device = torch.device("cuda", local_process_index)
    dataset = CustomDataset(size=10)
    dataloader = DataLoader(dataset, batch_size=5, shuffle=False,
                            sampler=DistributedSampler(dataset))
    model = DDP(CustomModel().to(device),
                device_ids=[local_process_index],
                output_device=local_process_index)
    model.train()
    for input_, output_ in dataloader:
        input_, output_ = input_.to(device), output_.to(device)
        res = model(input_)
        print(f'{res} : {input_}')

if __name__ == '__main__':
    main()
-------
Result:
tensor([[ 0.0000, -0.5181, -0.2358],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -8.9689],
[ 0.2071, 0.0000, 0.0000],
[ 0.0000, -2.4261, 0.0000]], device='cuda:1',
grad_fn=<FusedDropoutBackward>)
tensor([[ 0.0000, -6.2421, -3.5107],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -2.4191],
[ 0.2832, 0.0000, 0.0000],
[ 0.0000, -10.0581, 0.0000]], device='cuda:0',
grad_fn=<FusedDropoutBackward>) |
st176145 | Solved by ptrblck in post #2 |
st176146 | I think this would be expected, since you are manually seeding the script, wouldn’t it? |
st176147 | I see, thanks. Then I guess seeding with the local process index, seed(local_process_index), can keep it reproducible while the dropout operates independently across processes. Would there be any hidden issue with this? |
st176148 | I don’t think there would be any hidden issue with it, as each process would then get its own seed.
Nevertheless, I would recommend checking the results using a small code snippet and making sure the results are expected. |
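A minimal sketch of per-rank seeding, building on the set_seed helper from the example above (the offset scheme is just one option):
import torch.distributed as dist

base_seed = 66
set_seed(base_seed + dist.get_rank())  # each rank gets its own seed, so dropout masks differ across GPUs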
st176149 | Is there any PyTorch convention about which random number generator is used?
E.g., do all PyTorch operations use the torch RNG by default, or are there operations that use Python’s built-in random RNG? And how can I find out which RNG is used?
Thanks |
st176150 | You can grep -r "import random" in the pytorch source to check the usage in the code base.
Currently a lot of tests are using the Python random package (which should be uninteresting for you), the worker_init_fn seeds the random package for each worker in the DataLoader, and the distributed/elastic/rendezvous methods seem to use it for a random delay.
Besides that the internal methods should use the PyTorch pseudorandom number generator. |
st176151 | I am currently developing a DRL framework that can run on a cluster with MPI. I am able to perform synchronous training using DDP over MPI. Now, I want to explore a different structure using a parameter server and MPI. I saw that RPC would be the right tool, but I cannot figure out how/if RPC can run with MPI.
I saw this example 2, but it only works when all ranks are running on the same node. Is there a way to accomplish this with pytorch alone or is an additional tool needed? |
st176152 | You do not have to run RPC over MPI; pytorch distributed provides gloo and nccl backends, and you can pass ‘gloo’ or ‘nccl’ to init_process_group().
For RPC, to get better performance, you can use TensorPipe as the backend option. |
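A minimal sketch of initializing RPC with the TensorPipe backend across nodes (the master address/port, rank and world_size are placeholders):
import os
import torch.distributed.rpc as rpc

os.environ["MASTER_ADDR"] = "10.0.0.1"  # hypothetical address of the node hosting rank 0
os.environ["MASTER_PORT"] = "29500"
rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size,
             backend=rpc.BackendType.TENSORPIPE)
# ... issue rpc.rpc_sync / rpc.rpc_async calls here ...
rpc.shutdown()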
st176153 | How can I check whether DDP is working properly or not?
I compared the speed in the environment below, and DDP was 2 times slower than DP.
Can such a case exist? Is it a code problem?
Also, is all_reduce mandatory?
Train dataset: 3,000 images
ResNet-101
Batch_size: 32
2 GPUs
workers: 8 |
st176154 | You can compare DDP training convergence, or evaluate its accuracy or loss.
In terms of performance, 2 times slower is a little surprising; it depends on your code and hardware. |
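One way to sanity-check the comparison is to time whole epochs with explicit CUDA synchronization; a generic sketch (model, loader, criterion, and optimizer are whatever your script already builds), keeping in mind that with DDP the DataLoader batch size is per process, while DP splits a single batch across GPUs:
import time
import torch
def time_epoch(model, loader, criterion, optimizer, device):
    model.train()
    torch.cuda.synchronize(device)
    start = time.perf_counter()
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    torch.cuda.synchronize(device)  # include queued GPU work in the measurement
    return time.perf_counter() - start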
st176155 | Hello,
I have a use case where I create one process per available GPU along with multiple (e.g. 15) processes that only run on the CPU. Here is a minimal working example that works in PyTorch 1.7.0 but fails in 1.9.0. However, if I use only 3 or fewer GPUs rather than 4, while keeping the number of CPU processes the same, it works on version 1.9.0 too.
Could you please guide me towards why this is happening and how it can be resolved?
Thanks!
Code:
import os
import time
import torch
torch.multiprocessing.set_sharing_strategy('file_system')
import torch.multiprocessing as mp
import torch.distributed.rpc as rpc
no_of_saver_processes = 15
world_size = torch.cuda.device_count()
def cpu_process_initialization(rank):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9867'
rpc.init_rpc(f"{rank}",
rank=rank,
world_size = world_size + no_of_saver_processes,
backend=rpc.BackendType.TENSORPIPE,
rpc_backend_options=rpc.TensorPipeRpcBackendOptions(rpc_timeout=0,
init_method='env://')
)
print(f"Started CPU process {rank}")
print(f"Process {rank}: avaialable device {torch.cuda.current_device()}")
# Do something rather than sleeping example disk or cpu bound operations
time.sleep(30)
rpc.shutdown()
return
def cuda_process_initialization(rank):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9867'
rpc.init_rpc(f"{rank}",
rank=rank,
world_size=world_size + no_of_saver_processes,
backend=rpc.BackendType.TENSORPIPE,
rpc_backend_options=rpc.TensorPipeRpcBackendOptions(#num_send_recv_threads=args.world_size*3,
rpc_timeout=0,
init_method='env://')
)
torch.cuda.set_device(rank)
os.environ["CUDA_VISIBLE_DEVICES"] = f"{rank}"
print(f"Started CUDA process on gpu {rank}")
# Do some cuda operations
print(f"Process {rank}: avaialable device {torch.cuda.current_device()}")
time.sleep(30)
rpc.shutdown()
return
if __name__ == "__main__":
mp.set_start_method('forkserver', force=True)
trainer_processes = mp.spawn(cuda_process_initialization,
nprocs=world_size,
join=False)
cpu_processes = []
for rank in range(world_size,world_size+no_of_saver_processes):
p = mp.Process(target=cpu_process_initialization,
args=(rank,))
p.start()
cpu_processes.append(p)
for p in cpu_processes: p.join()
trainer_processes.join()
Error:
terminate called after throwing an instance of 'std::runtime_error'
what(): In handleEventInFromLoop at tensorpipe/transport/shm/connection_impl.cc:235 "errCouldn't access ringbuffer of connection outbox: fstat: Bad file descriptor (this error originated at tensorpipe/common/shm_segment.cc:153)"
[W tensorpipe_agent.cpp:653] RPC agent for 4 encountered error when accepting incoming pipe: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:843] RPC agent for 4 encountered error when reading incoming request from 0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:653] RPC agent for 2 encountered error when accepting incoming pipe: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:843] RPC agent for 2 encountered error when reading incoming request from 0: async error on socket: Connection reset by peer (this error originated at tensorpipe/transport/shm/connection_impl.cc:187)
[W tensorpipe_agent.cpp:843] RPC agent for 1 encountered error when reading incoming request from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:843] RPC agent for 6 encountered error when reading incoming request from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:843] RPC agent for 8 encountered error when reading incoming request from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:843] RPC agent for 3 encountered error when reading incoming request from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:843] RPC agent for 7 encountered error when reading incoming request from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:843] RPC agent for 5 encountered error when reading incoming request from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:1049] RPC agent for 7 encountered error when reading incoming response from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:1049] RPC agent for 1 encountered error when reading incoming response from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
[W tensorpipe_agent.cpp:1049] RPC agent for 8 encountered error when reading incoming response from 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
Failed to respond to 'Shutdown Proceed' in time, got error eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
Failed to respond to 'Shutdown Proceed' in time, got error eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
Failed to respond to 'Shutdown Proceed' in time, got error eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
Process Process-9:
Traceback (most recent call last):
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 317, in _tensorpipe_init_backend_handler
agent.join()
RuntimeError: [/opt/conda/conda-bld/pytorch_1623448255797/work/third_party/gloo/gloo/transport/tcp/pair.cc:589] Read error [127.0.0.1]:39602: Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/net/home/store/home/user/The/Feature_Distribution/test_mp.py", line 14, in cpu_process_initialization
rpc.init_rpc(f"{rank}",
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/__init__.py", line 203, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/__init__.py", line 237, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 99, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 319, in _tensorpipe_init_backend_handler
api.shutdown()
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 79, in wrapper
return func(*args, **kwargs)
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 313, in shutdown
_get_current_rpc_agent().join(shutdown=True)
RuntimeError: [/opt/conda/conda-bld/pytorch_1623448255797/work/third_party/gloo/gloo/transport/tcp/pair.cc:589] Read error [127.0.0.1]:39602: Connection reset by peer
[W tensorpipe_agent.cpp:1025] RPC agent for 6 encountered error when sending outgoing request #1 to 0: eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
Failed to respond to 'Shutdown Proceed' in time, got error eof (this error originated at tensorpipe/transport/shm/connection_impl.cc:259)
Process Process-8:
Traceback (most recent call last):
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 317, in _tensorpipe_init_backend_handler
agent.join()
RuntimeError: [/opt/conda/conda-bld/pytorch_1623448255797/work/third_party/gloo/gloo/transport/tcp/pair.cc:589] Read error [127.0.0.1]:6868: Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/net/home/store/home/user/The/Feature_Distribution/test_mp.py", line 14, in cpu_process_initialization
rpc.init_rpc(f"{rank}",
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/__init__.py", line 203, in init_rpc
_init_rpc_backend(backend, store, name, rank, world_size, rpc_backend_options)
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/__init__.py", line 237, in _init_rpc_backend
rpc_agent = backend_registry.init_backend(
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 99, in init_backend
return backend.value.init_backend_handler(*args, **kwargs)
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/backend_registry.py", line 319, in _tensorpipe_init_backend_handler
api.shutdown()
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 79, in wrapper
return func(*args, **kwargs)
File "/home/user/anaconda3/envs/pytorch1.9/lib/python3.9/site-packages/torch/distributed/rpc/api.py", line 313, in shutdown
_get_current_rpc_agent().join(shutdown=True)
RuntimeError: [/opt/conda/conda-bld/pytorch_1623448255797/work/third_party/gloo/gloo/transport/tcp/pair.cc:589] Read error [127.0.0.1]:6868: Connection reset by peer |
st176156 | Solved by mrshenli in post #5
Looks like this is an issue with the SHM transport. I bumped up the UV transport priority to over-shadow SHM, and this problem disappeared.
Hey @akshay-raj-dhamija to unblock you, could you please try adding a _transports=["uv"] kill-switch to rpc_backend_options to disable SHM transport?
Below … |
st176157 | I confirm that I can reproduce the same behavior: 3 CUDA processes work but 4 do not, with 15 CPU processes.
Could you please guide me towards why this is happening and how it can be resolved?
The difference between 1.9 and 1.7 is that we introduced TensorPipe CUDA RPC in v1.9: Direct Device-to-Device Communication with TensorPipe CUDA RPC — PyTorch Tutorials 1.9.0+cu102 documentation 3
So the first thing I tried was setting os.environ["CUDA_VISIBLE_DEVICES"] = "" in cpu_process_initialization, but I hit the following error:
RuntimeError: In create at tensorpipe/common/cuda_lib.h:117 "lib.init(0)(100) CUDA_ERROR_NO_DEVICE (no CUDA-capable device is detected)"
@lcw any opinion on whether we should let TensorPipe or TensorPipe_Agent detect the CUDA device runtime availability? (Created an issue to track: init_rpc fails after setting CUDA_VISIBLE_DEVICES env var to "" · Issue #60578 · pytorch/pytorch · GitHub 2)
Then I played with the number of processes; it looks like it crashes as long as CUDA processes + CPU processes > 18 (i.e., 8 CUDA processes + 10 CPU processes also works). I don't recall there being any hard limit on the number of processes in RPC. @lcw does TensorPipe have restrictions on that?
Below is the repro I used locally.
import os
import time
import torch
#torch.multiprocessing.set_sharing_strategy('file_system')
import torch.multiprocessing as mp
import torch.distributed.rpc as rpc
no_of_saver_processes = 10
#world_size = torch.cuda.device_count()
world_size = 8
def cpu_process_initialization(rank):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9867'
os.environ["CUDA_VISIBLE_DEVICES"] = f"{rank % torch.cuda.device_count()}"
rpc.init_rpc(f"{rank}",
rank=rank,
world_size = world_size + no_of_saver_processes,
backend=rpc.BackendType.TENSORPIPE,
rpc_backend_options=rpc.TensorPipeRpcBackendOptions(rpc_timeout=6000,
init_method='env://')
)
print(f"Started CPU process {rank}")
print(f"Process {rank}: avaialable device {torch.cuda.current_device()}")
# Do something rather than sleeping example disk or cpu bound operations
time.sleep(30)
rpc.shutdown()
return
def cuda_process_initialization(rank):
#torch.cuda.set_device(rank)
os.environ["CUDA_VISIBLE_DEVICES"] = f"{rank}"
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9867'
rpc.init_rpc(f"{rank}",
rank=rank,
world_size=world_size + no_of_saver_processes,
backend=rpc.BackendType.TENSORPIPE,
rpc_backend_options=rpc.TensorPipeRpcBackendOptions(#num_send_recv_threads=args.world_size*3,
rpc_timeout=6000,
init_method='env://')
)
print(f"Started CUDA process on gpu {rank}")
# Do some cuda operations
print(f"Process {rank}: avaialable device {torch.cuda.current_device()}")
time.sleep(30)
rpc.shutdown()
return
if __name__ == "__main__":
mp.set_start_method('spawn', force=True)
trainer_processes = mp.spawn(cuda_process_initialization,
nprocs=world_size,
join=False)
cpu_processes = []
for rank in range(world_size,world_size+no_of_saver_processes):
p = mp.Process(target=cpu_process_initialization,
args=(rank,))
p.start()
cpu_processes.append(p)
for p in cpu_processes: p.join()
trainer_processes.join() |
st176158 | Looks like this is an issue with the SHM transport. I bumped up the UV transport priority to over-shadow SHM, and this problem disappeared.
github.com
pytorch/pytorch/blob/ad1041576aeb7cd9f8065acb26b596ade8e6ecaa/torch/csrc/distributed/rpc/tensorpipe_agent.h#L46 1
namespace distributed {
namespace rpc {
// These priorities instruct TensorPipe on which transport/channel to pick
// during handshake. Higher priorities will take precedence over lower ones.
// The transport with lowest priority will be the one used to bootstrap pipes.
constexpr int64_t kShmTransportPriority = 200;
constexpr int64_t kIbvTransportPriority = 100;
// The UV transport just uses TCP and should work everywhere, thus keep it last.
constexpr int64_t kUvTransportPriority = 0;
constexpr int64_t kCmaChannelPriority = 1200;
constexpr int64_t kMultiplexedUvChannelPriority = 1100;
// The basic channel reuses a transport as a channel, and is thus our fallback.
constexpr int64_t kBasicChannelPriority = 1000;
// CPU channel have higher priority than CUDA channels, since the latter might
// handle CPU-to-CPU transfers, but will always be less efficient than their
// CPU-only counterparts.
constexpr int64_t kCudaIpcChannelPriority = 300;
Hey @akshay-raj-dhamija to unblock you, could you please try adding a _transports=["uv"] kill-switch to rpc_backend_options to disable SHM transport?
Below is the code:
import os
import time
import torch
#torch.multiprocessing.set_sharing_strategy('file_system')
import torch.multiprocessing as mp
import torch.distributed.rpc as rpc
no_of_saver_processes = 15
#world_size = torch.cuda.device_count()
world_size = 4
def cpu_process_initialization(rank):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9867'
os.environ["CUDA_VISIBLE_DEVICES"] = f"{rank % torch.cuda.device_count()}"
rpc.init_rpc(f"{rank}",
rank=rank,
world_size = world_size + no_of_saver_processes,
backend=rpc.BackendType.TENSORPIPE,
rpc_backend_options=rpc.TensorPipeRpcBackendOptions(
rpc_timeout=6000,
init_method='env://',
_transports=["uv"],
)
)
print(f"Started CPU process {rank}")
print(f"Process {rank}: avaialable device {torch.cuda.current_device()}")
# Do something rather than sleeping example disk or cpu bound operations
time.sleep(30)
rpc.shutdown()
return
def cuda_process_initialization(rank):
#torch.cuda.set_device(rank)
os.environ["CUDA_VISIBLE_DEVICES"] = f"{rank}"
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9867'
rpc.init_rpc(f"{rank}",
rank=rank,
world_size=world_size + no_of_saver_processes,
backend=rpc.BackendType.TENSORPIPE,
rpc_backend_options=rpc.TensorPipeRpcBackendOptions(
#num_send_recv_threads=args.world_size*3,
rpc_timeout=6000,
init_method='env://',
_transports=["uv"],
)
)
print(f"Started CUDA process on gpu {rank}")
# Do some cuda operations
print(f"Process {rank}: avaialable device {torch.cuda.current_device()}")
time.sleep(30)
rpc.shutdown()
return
if __name__ == "__main__":
mp.set_start_method('spawn', force=True)
trainer_processes = mp.spawn(cuda_process_initialization,
nprocs=world_size,
join=False)
cpu_processes = []
for rank in range(world_size,world_size+no_of_saver_processes):
p = mp.Process(target=cpu_process_initialization,
args=(rank,))
p.start()
cpu_processes.append(p)
for p in cpu_processes: p.join()
trainer_processes.join() |
st176159 | Thanks @mrshenli, using _transports=["uv"] solved the problem.
Similar to setting os.environ["CUDA_VISIBLE_DEVICES"] = "", I tried setting the option devices=["cpu"], but as you noted it resulted in the following error:
ValueError: `set_devices` expect a list of CUDA devices, but got device type cpu. |
st176160 | You’re likely hitting the limit of open file descriptors in your process. You can find this limit through ulimit -n and you can check how many file descriptors are in use by your process through ls /proc/$PID/fd | wc -l or lsof -p $PID | wc -l (ideally this should be done “just before” that error happens). Could you provide us those values so we can check this is indeed the cause? |
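A small Python equivalent of those checks (Linux-only, since it reads /proc), run from inside the process in question:
import os
import resource
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"fd limit: soft={soft}, hard={hard}")            # the soft limit is what `ulimit -n` reports
print("fds in use:", len(os.listdir("/proc/self/fd")))  # like `ls /proc/$PID/fd | wc -l`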
st176161 | Thanks @lcw, you are right: it looks like v1.9 uses a lot more file descriptors (fds) than v1.7.
While v1.7 was close to 135 fds/process, it was close to 270 fds/process for version 1.9, with the zero rank process having 960 fds. These numbers are for 3 GPU and 5 CPU processes.
Yes, increasing the ulimit fixed the issue for v1.9, which points to another possible improvement:
in v1.7 we had a very clear error message for such a scenario, which looks like
OSError: [Errno 24] Too many open files: '/home/user/anaconda3/envs/pytorch1.7/lib/python3.8/multiprocessing/queues.py'
Probably such an error message would also be useful for v1.9 and future versions. |
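For reference, the soft limit can also be raised from inside the script before init_rpc, as long as it stays at or below the hard limit (raising the hard limit itself still requires ulimit / limits.conf changes):
import resource
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))  # bump the soft limit up to the hard limit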
st176162 | 960 file descriptors is still a couple of orders of magnitude lower than what I've typically seen: in my experience each process typically has room for up to 64k file descriptors by default. Are you running in some constrained environment where this limit is much lower? What's the output of ulimit -n that I asked for earlier?
Note that lack of file descriptors can appear at any time, during any operation, in many forms: it’s unrealistic to catch it consistently and provide a uniform error message. |
st176163 | It seems the default ulimit -n for both Ubuntu 18.04 and 20.04 server editions is 1024 unless explicitly changed; mine was the same.
I am also a bit unsure why the number of file descriptors needs to be so much larger in v1.9 compared to v1.7. I probably need to read a bit more about the changes, but any insights would be helpful. |
st176164 | TensorPipe isn’t very “frugal” when it comes to file descriptors (simply because we believe that the system limits are high enough (or can be raised) for this to not be a problem in the vast majority of cases). Hence each RPC “link” between two processes uses multiple file descriptors, out of simplicity, and to better leverage the kernel’s own capabilities for multiplexing and backpressure. With the advent of CUDA channels in TensorPipe, these channels also need some new file descriptors, hence the consumption increased. It is possible to optimize this usage, though I’m not sure when we’ll get to it.
If you have a limit of 1024 on your system then yes, 960 is indeed worryingly close to it. Though I assume you're able to increase that limit and thus unblock yourself? |
st176165 | BTW @mrshenli the fact that the file descriptor usage is much higher on node 0 is probably due to the fact that the _all_gather function basically constructs a tree with node 0 at the root, hence node 0 always opens N-1 links. I don’t think any action is needed now, but perhaps we should think if there are other approaches that don’t put all that pressure on a single node. |
st176166 | I’m running a piece of code (attached below) that is very similar to the one listed here: Writing Distributed Applications with PyTorch — PyTorch Tutorials 1.9.0+cu102 documentation - in particular, the section about sending a tensor that is incremented by 1. However, instead of having only 2 ranks, I am using 5, and instead of using send, I am broadcasting the incremented tensor from the source (rank 0) to the remaining ranks, but the script doesn’t terminate (note - I have also tried using send to all the other ranks and it hasn’t worked). I’m pretty sure the line that uses dist.broadcast is the source of the issue, as the script never gets past that line. I’m not sure if this is because of some fundamental misunderstanding (as I’m new to distributed training) so it would be appreciated if someone could shed some light onto this.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
def run(rank, size):
tensor = torch.zeros(1)
if rank == 0:
tensor += 1
# Send the tensor to process 1
dist.broadcast(tensor=tensor, src=0)
else:
# Receive tensor from process 0
print("receiving")
dist.recv(tensor=tensor, src=0)
print('Rank ', rank, ' has data ', tensor[0])
def init_process(rank, size, fn, backend='gloo'):
""" Initialize the distributed environment. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
if __name__ == "__main__":
size = 5
processes = []
mp.set_start_method("spawn")
for rank in range(size):
p = mp.Process(target=init_process, args=(rank, size, run))
p.start()
processes.append(p)
for p in processes:
p.join() |
st176167 | It seems, from the doc here: Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 1, that the Gloo backend on CUDA tensors doesn't support reduce, only all_reduce and broadcast.
But as I followed the tutorial here: Writing Distributed Applications with PyTorch — PyTorch Tutorials 1.9.0+cu102 documentation 1.
With a simple run function implemented bellow:
def run4(rank, size):
""" run4: CUDA reduction. """
n_gpus = torch.cuda.device_count()
t = torch.ones(1).cuda(rank % n_gpus)
for _ in range(1):
c = t.clone()
# dist.all_reduce(c, dist.ReduceOp.SUM)
dist.reduce(c, dst=0, op=dist.ReduceOp.SUM)
t.set_(c)
print('[{}] After reduction: rank {} has data {}, backend is {}'.format(os.getpid(), rank, t, dist.get_backend()))
And I tried with a world_size of 4 on a machine with only 2 GPU cards, here is the result:
[30425] After reduction: rank 2 has data tensor([2.], device='cuda:0'), backend is gloo
[30424] After reduction: rank 1 has data tensor([3.], device='cuda:1'), backend is gloo
[30426] After reduction: rank 3 has data tensor([1.], device='cuda:1'), backend is gloo
[30423] After reduction: rank 0 has data tensor([4.], device='cuda:0'), backend is gloo
Based on the results above, I assume reduce has worked on GPU because my tensors are placed on CUDA devices, am I right? However, I noticed that all the ranks have participated in the reduce algorithm, meaning that the tensor values on ranks 1, 2, and 3 have also changed during the reduction. It seems that the reduction algorithm is quite naive: with 4 processes, it would run 3 rounds.
1st round, add rank 3 to rank 2.
2nd round, add rank 2 to rank 1.
3rd round, add rank 1 to rank 0.
So when I print the end result, not only does rank 0 have the desired reduced sum, but ranks 1 ~ (world_size - 2) have also changed their values. Is this the expected result of reduction? I thought ranks 1 ~ (world_size - 2) wouldn't store the intermediate results. |