id
stringlengths 3
8
| text
stringlengths 1
115k
|
---|---|
st176968 | when i used dataparell ,i meet :\anaconda3\lib\site-packages\torch\cuda\nccl.py:16: UserWarning: PyTorch is not compiled with NCCL support warnings.warn(‘PyTorch is not compiled with NCCL support’)
But I used to use it normally ,when i update torch1.5-torch1.7,the question is coming. why?
my code is
if t.cuda.device_count() > 1:
model = nn.DataParallel(model)
if opt.use_gpu: model.cuda()
i meet the answer :Win10+PyTorch+DataParallel got warning:"PyTorch is not compiled with NCCL support" 67
i want to konw why torch 1.5.1 can be used dataparallel ,but 1.7.0 doesnt.
could someone answer this question for me?
I went back to 1.5.1 and couldn’t use it either
:\anaconda3\lib\site-packages\torch\cuda\nccl.py:24: UserWarning: PyTorch is not compiled with NCCL support
warnings.warn('PyTorch is not compiled with NCCL support')
0%| | 0/12097 [00:06<?, ?it/s]
Traceback (most recent call last):
File "main.py", line 188, in <module>
fire.Fire()
File "d:\anaconda3\lib\site-packages\fire\core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "d:\anaconda3\lib\site-packages\fire\core.py", line 468, in _Fire
target=component.__name__)
File "d:\anaconda3\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "main.py", line 103, in train
optimizer.step()
File "d:\anaconda3\lib\site-packages\torch\autograd\grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "d:\anaconda3\lib\site-packages\torch\optim\adam.py", line 107, in step
denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
RuntimeError: CUDA out of memory. Tried to allocate 2.18 GiB (GPU 0; 15.92 GiB total capacity; 13.74 GiB already allocated; 666.81 MiB free; 14.59 GiB reserved in total by PyTorch)
I would be very grateful
thank you
is anybody there , i need help。 |
st176969 | Solved by tianle-BigRice in post #3
the question be solved ,when i remove all pkgs.and then reinstall pkgs.the question be solved. |
st176970 | the question be solved ,when i remove all pkgs.and then reinstall pkgs.the question be solved. |
st176971 | Hey could you maybe elaborate on how you fixed it? I´m running into the same issue and am a total noob. |
st176972 | Hi,
I am trying to run the example code from the pytorch distributed tutorial (dist_tuto.html 8).
Here is my exact code:
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
def run(rank, size):
tensor = torch.zeros(1)
if rank == 0:
tensor += 1
dist.send(tensor=tensor, dst=1)
else:
dist.recv(tensor=tensor, src=0)
print("Rank ", rank, " has data ", tensor[0])
def init_process(rank, size, fn, backend="gloo"):
""" Initialize the distributed environment. """
os.environ["MASTER_ADDR"] = "127.0.0.0"
os.environ["MASTER_PORT"] = "29511"
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
if __name__ == "__main__":
size = 2
processes = []
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run))
p.start()
processes.append(p)
for p in processes:
p.join()
In terminal, I run it with python run.py. It runs successfully both on my server and on my friend’s Mac. But, it hangs there forever on my own Mac. I tried to restart my Mac, and reinstall python(v3.8.2)/torch(v1.5.0). No success.
If we interrupt it with “control-c”, the last line of trace is:
File "/usr/local/anaconda3/envs/p382/lib/python3.8/subprocess.py", line 1762, in _try_wait
(pid, sts) = os.waitpid(self.pid, wait_flags)
I suspect there is a deadlook. Any approach to avoid it? |
st176973 | Hey @Hao_Yuan
A few things could cause this issue.
127.0.0.0 is not a valid IP in some envs. Could you please check that. If it is indeed invalid, can you try 127.0.0.1 or localhost or other valid IP addresses?
Some other process is occupying that port. Can you try a different port number? |
st176974 | Hi @mrshenli.
Somehow, my Mac is better today. I got a warning and the results.
Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (operator() at ../torch/lib/c10d/ProcessGroupGloo.cpp:496)
Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (operator() at ../torch/lib/c10d/ProcessGroupGloo.cpp:496)
Rank 1 has data tensor(1.)
Rank 0 has data tensor(1.)
I think the issue is that it takes long time to “resolve hostname”. After export GLOO_SOCKET_IFNAME=en0. I got the result quickly. |
st176975 | Curious, how is the hostname configured in your env? Can you try:
getent hosts `hostname` |
st176976 | No getnet command in my mac. I googled a similar command.
$ dscacheutil -q host -a name 'hostname'
Not show any result. |
st176977 | I see. The hostname wasn’t configured, so that it was unable to resolve when GLOO_SOCKET_IFNAME wasn’t present. |
st176978 | @mrshenli @Hao_Yuan but how did you configure it? I also the same error and neither 127.0.0.1 nor localhost work for me… |
st176979 | @ptrblck sorry for calling you out like this…but I was wondering, is there an explanation of what the error:
[W ProcessGroupGloo.cpp:558] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())
means?
Just in case you need it here is a self contained code causing the error:
import os
from typing import Tuple, List
import torch
from torch import nn, optim, Tensor
from torch.distributed import rpc
import torch.distributed.autograd as dist_autograd
from torch.distributed.optim import DistributedOptimizer
from torch.multiprocessing import Pool, Process
import torch.multiprocessing as mp
world_size = 2 # three chunks, one for each process
num_epochs = 1 # this doesn't really matter, we only need to test if it can process a big batch and a small batch
iters = 5 # iters for 1 epoch
iters = 5
Din, Dout = 10, 5
class ToyModel(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(Din, Dout)
self.criterion = nn.MSELoss()
def forward(self, batch_x, batch_y):
y_ = self.lin(batch_x)
loss = self.criterion(batch_y, y_)
return loss
def get_ast_batch(batch_size: int) -> List[Tuple[Tensor]]:
"""
Returns a list of size batch_size with each individual example.
- 1 example in a batch is a task with K examples with dim=D.
Note:
num_proc = 3
batch_size = 8 # chunk_size = 2 rem 1 (have three chunks of size 2 and one of size 1) 8/3 = 2.666 = 2 rem 1
batch_size = 2 # chunk_size = 1 for each process since 2/3 <= 1
:return:
"""
data_x, data_y = torch.randn(Din), torch.randn(Dout)
batch = [(data_x, data_y) for _ in range(batch_size)]
return batch
def get_meta_batch(batch_size: int, k=15) -> Tuple[Tensor]:
"""
Returns Tuple(torch.Tensor([B, K, D])) where each element in the batch is a task.
- 1 example in a batch is a task with K examples with dim=D.
:return:
"""
data_x, data_y = torch.randn(batch_size, k, Din), torch.randn(batch_size, k, Dout)
batch = data_x, data_y
return batch
class Worker:
def __init__(self, args, kwargs):
self.args = args
self.kwargs = kwargs
self.id = rpc.get_worker_info().id
# self.env.seed(args.seed)
class Master:
def __init__(self, world_size):
self.world_size = world_size
self.master_rank = 0
self.num_workers = world_size - 1
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
rpc.init_rpc(f'worker{self.master_rank}', rank=self.master_rank, world_size=world_size)
self.model = ToyModel()
self.optimizer = optim.Adam(self.model.parameters(), lr=1e-2)
self.saved_losses = {}
# create rrefs (remote referneces) for calling rpc calls
self.workers = []
for worker_rank in range(1, world_size):
worker_info = rpc.get_worker_info(worker_name=f'worker{worker_rank}')
# Make a remote call to run func on worker to and return an RRef to the result value immediately.
# worker_rrf = rpc.remote(to=worker_info, func=Worker, args='args', kwargs='kwargs')
self.workers.append(worker_info)
#
self.saved_losses[worker_info.id] = []
def forward_parallel(self, batch):
"""
:param batch:
- List[Tensor([B, K, D])]
- List[Tuple[Tensor(D), Tensor(D)]]
"""
# batch_size = len(batch)
batch_size = batch.size(0) # num_tasks
Sx, Sy = batch
chunk_size = batch_size // self.num_workers # the number of examples to give each worker/proc
futures = []
if chunk_size <= 0:
# give each worker a data point and thats it
for t in range(batch_size):
worker_info = self.workers[t]
# makes non blocking rpc call on worker to and immediately returns a future object to wait the result
future = rpc.rpc_async(to=worker_info, func=self.model.forward, args=(Sx[t], Sy[t]))
futures.append(future)
else:
# each worker receives a chunk of size chunk_size
chunk_idx = 0
for worker_idx in range(self.num_workers):
chunk_x = Sx[chunk_idx:chunk_idx+chunk_size]
chunk_y = Sy[chunk_idx:chunk_idx+chunk_size]
chunk_idx += chunk_size
worker_info = self.workers[worker_idx]
future = rpc.rpc_async(to=worker_info, func=self.model.forward, args=(chunk_x, chunk_y))
futures.append(future)
loss = 0
for future in futures:
loss = future.wait()
loss += (1.0/batch_size)*loss
return loss
def finish(self):
def end_master():
rpc.shutdown()
master_proc = Process(target=end_master, args=())
master_proc.start()
def run_worker_process(rank, world_size):
print('-- Worker run -- ')
print(f'current process: {mp.current_process()}')
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
# other ranks are the workers (1 to world_size)
rpc.init_rpc(f'worker{rank}', rank=rank, world_size=world_size)
# block until all rpcs finish, and shutdown the RPC instance
print('running shutdown')
rpc.shutdown()
print(f'shutdown complete rank {rank}')
print(f'shutdown complete for process: {mp.current_process()}')
def master(world_size):
print('-- Master run -- ')
print(f'current process: {mp.current_process()}')
# -- init master process --
master = Master(world_size=world_size)
# -- do training loop --
optimizer = optim.Adam(params=master.model.parameters(), lr=0.1)
# optimizer = DistributedOptimizer(optim.Adam, )
for epoch in range(num_epochs):
for batch_idx in range(iters):
batch = get_ast_batch(batch_size=batch_idx)
with dist_autograd.context() as context_id:
loss = master.forward_parallel(batch)
dist_autograd.backward(context_id, loss)
optimizer.step()
optimizer.zero_grad()
# block until all rpcs finish, and shutdown the RPC instance, once master gets here everything is shut down
# rpc.shutdown()
master.finish()
if __name__ == '__main__':
print('starting __main__')
for rank in range(1, world_size):
worker_proc = Process(target=run_worker_process, args=(rank, world_size))
print(f'creating process object serially: pro_obj is = {worker_proc}')
# worker_proc.start()
master(world_size=2)
print('Done!\a\n')
Solved it by making sure each of my rpc initialized things have a different port number. I guess initializing an rpc vs a distributed group is different…?
related: python - How does one set the pytorch distributed hostname, port and GLOO_SOCKET_IFNAME so that DDP works? - Stack Overflow 13 |
st176980 | Hey @Brando_Miranda, you can configure which network interface to use by setting the GLOO_SOCKET_IFNAME env var. |
st176981 | sorry for calling you out like this…but I was wondering, is there an explanation of what the error:
It means Gloo tries to solve the returned value of hostname but failed, and it recommends you to set the GLOO_SOCKET_IFNAME env var to explicitly configure it.
Solved it by making sure each of my rpc initialized things have a different port number.
Hmm, you mean master and worker are using different port number? I would assume this won’t work as they use that master port and ip to conduct rendezvous and find each other. So if they set configured with different port number, they won’t be able to find each other.
I guess initializing an rpc vs a distributed group is different…?
RPC currently uses distributed group (ProcessGroup) internally, so the same initialization configuration should work for both. |
st176982 | Hi Thanks for your replies!
I keep getting the error:
[W ProcessGroupGloo.cpp:558] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())
I am unsure why.
My setting is this:
# set up the master's ip address so this child process can coordinate
# os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends
if torch.cuda.is_available():
backend = 'nccl'
# Initializes the default distributed process group, and this will also initialize the distributed package.
dist.init_process_group(backend, rank=rank, world_size=world_size)
does this look wrong to you? Is what you mainly suggest to change the port number? |
st176983 | The recommendation is to check what network interface your machine uses by running something like ifconfig. Then set the environment variable GLOO_SOCKET_IFNAME=eth0 (if eth0 is the right network interface for your host). |
st176984 | Brando_Miranda:
what does eth0 mean?
eth0 is just an example, but it is a very common name used for network interfaces. You need to lookup the right network interface name for your machine and replace it with eth0 in my example above. |
st176985 | I was running the imagenet using DDP.
At the DGX station 4*V100, they make many processes
But i running the same option and same program on RTX2080Ti*8 server. They make just only 9 Process
I running the same code and same program… If I think the same case on RTX2080TI*8 server, I expect DGX to also make 5 processes. But it was not.
Why this problem, I used the same PyTorch docker on both systems.
And If I want to drive the number of processes, how do I contorl it? |
st176986 | What are these processes assigned to?
Could you extend the drop down menu and compare both machines?
Also, how many CPUs does the RTX2080Ti workstation have? Note that your DGX station reports 40, which might also be the reason why it’s able to use more processes. |
st176987 | Now I guess, this problem is xorg or other process default program which was running at the same time. Thanks to helping me.
Can I ask another problem…?
image882×756 140 KB
Now I guess very long allreduce and Broadcast which caused by late Memcpy H2D. If you see the above figure GPU 1,3,4 call the H2D(=green bar) at the similar time, but GPU 2 called the H2D at late. These unsync behavior make inefficient at multi-GPU training.
So all GPU sync on GPU2, can I solve this problem…? For example, I allocated the memcpyH2D to new stream or any nice idea…?
Thanks. |
st176988 | Hi,
I am using DataParallel in a SGE enviroment with cudatoolkit 10.1 and NCCL version 2.7. I get the following error
/miniconda3/envs/cs21/lib/python3.8/site-packages/torch/nn/parallel/comm.py", line 56, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: NCCL Error 3: internal error
The NCCL debug info is as follows
node20:22347:22347 [0] NCCL INFO Bootstrap : Using [0]ib0:10.10.9.220<0>
node20:22347:22347 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
node20:22347:22347 [0] NCCL INFO NET/IB : Using [0]qib0:1/IB ; OOB ib0:10.10.9.220<0>
node20:22347:22347 [0] NCCL INFO Using network IB
NCCL version 2.7.8+cuda10.1
node20:22347:22436 [1] graph/xml.h:77 NCCL WARN Attribute class of node nic not found
node20:22347:22436 [1] NCCL INFO graph/topo.cc:312 -> 3
node20:22347:22436 [1] NCCL INFO graph/topo.cc:348 -> 3
node20:22347:22436 [1] NCCL INFO graph/topo.cc:395 -> 3
node20:22347:22436 [1] NCCL INFO graph/topo.cc:467 -> 3
node20:22347:22436 [1] NCCL INFO graph/topo.cc:570 -> 3
node20:22347:22435 [0] graph/xml.h:77 NCCL WARN Attribute class of node nic not found
node20:22347:22436 [1] NCCL INFO init.cc:581 -> 3
node20:22347:22435 [0] NCCL INFO graph/topo.cc:312 -> 3
node20:22347:22436 [1] NCCL INFO init.cc:840 -> 3
node20:22347:22435 [0] NCCL INFO graph/topo.cc:348 -> 3
node20:22347:22435 [0] NCCL INFO graph/topo.cc:395 -> 3
node20:22347:22435 [0] NCCL INFO graph/topo.cc:467 -> 3
node20:22347:22435 [0] NCCL INFO graph/topo.cc:570 -> 3
node20:22347:22435 [0] NCCL INFO init.cc:581 -> 3
node20:22347:22435 [0] NCCL INFO init.cc:840 -> 3
node20:22347:22436 [1] NCCL INFO group.cc:73 -> 3 [Async thread]
node20:22347:22435 [0] NCCL INFO group.cc:73 -> 3 [Async thread]
node20:22347:22347 [0] NCCL INFO init.cc:906 -> 3
Does anyone have any idea what this issue might be caused by?
Cheers in advance |
st176989 | Solved by William_Ravenscroft in post #3
To follow up, I think I actually had 2 issues firstly I had to set
export NCCL_SOCKET_IFNAME=<VALUE>
export NCCL_IB_DISABLE=1
Replacing with your relevant interface - use the ifconfig to find it. And I think my second issue was using a dataloader with multiple workers but I hadn’t allocated enou… |
st176990 | William_Ravenscroft:
RuntimeError: NCCL Error 3: internal error
NCCL error 3 seems to be either a bug in NCCL or some memory corruption: Types — NCCL 2.8.3 documentation 7. Maybe you can create an issue at GitHub - NVIDIA/nccl: Optimized primitives for collective multi-GPU communication 12 to see if the NCCL team has some guidelines on how to debug this. |
st176991 | To follow up, I think I actually had 2 issues firstly I had to set
export NCCL_SOCKET_IFNAME=<VALUE>
export NCCL_IB_DISABLE=1
Replacing with your relevant interface - use the ifconfig to find it. And I think my second issue was using a dataloader with multiple workers but I hadn’t allocated enough processes to the job in my job submission. |
st176992 | Minimal example code:
import os
import torch.distributed as dist
os.environ['MASTER_ADDR'] = '148.251.86.243' # My master server IP
print(f"[ {os.getpid()} ] Initializing process group")
dist.init_process_group(backend="nccl")
print(f"[ {os.getpid()} ] world_size = {dist.get_world_size()}, " + f"rank = {dist.get_rank()}, backend={dist.get_backend()}")
The master address is hardcoded to my server’s IP: os.environ['MASTER_ADDR'] = '148.251.86.243'
Launching with launch.py in master server hangs:
$ export NCCL_DEBUG=INFO ; export NCCL_DEBUG_SUBSYS=ALL ; python3 /home/dario/.local/lib/python3.6/site-packages/torch/distributed/launch.py --nnode=2 --node_rank=0 --nproc_per_node=1 msg3d/MSG3D/tuturial_DistributedDataParallel.py --local_world_size=1
[26236] Initializing process group with: {'MASTER_ADDR': '148.251.86.243', 'MASTER_PORT': '29500', 'RANK': '0', 'WORLD_SIZE': '2'}
And in second server it also hangs:
$ export NCCL_DEBUG=INFO ; export NCCL_DEBUG_SUBSYS=ALL ; python3 /home/dario/.local/lib/python3.6/site-packages/torch/distributed/launch.py --nnode=2 --node_rank=1 --nproc_per_node=1 msg3d/MSG3D/tuturial_DistributedDataParallel.py --local_world_size=1
[17186] Initializing process group with: {'MASTER_ADDR': '148.251.86.243', 'MASTER_PORT': '29500', 'RANK': '1', 'WORLD_SIZE': '2'}
No difference between dist.init_process_group(backend="nccl") and "gloo"
No difference when setting os.environ['NCCL_SOCKET_IFNAME'] = 'lo' 1
Setting export NCCL_DEBUG=INFO ; export NCCL_DEBUG_SUBSYS=ALL ; produces no new output
No difference between PyTorch 1.7.1 and 1.8.0-rc4
dist.init_process_group(backend=“nccl”, init_method=‘env://’) didn’t help
What else should I try?
Environment:
PyTorch version: 1.7.1 or 1.8.0.dev20210208+cu110
Is debug build: True
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.17.1
Python version: 3.6 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 11.2.142
GPU models and configuration: GPU 0: GeForce GTX 1080
Nvidia driver version: 460.32.03
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.18.4
[pip3] torch==1.7.1 or 1.8.0.dev20210208+cu110
[pip3] torchvision==0.8.1
[conda] Could not collect |
st176993 | Solved by WurmD in post #9
Ok, managed to open a port (29500) that allows init_process_group to run correctly.
In my firewall settings it looks like
Source IP | Destination IP | Source port | Destination port | Protocol | TCP flags | Action
| | | 29500 | tcp | … |
st176994 | It looks like you’re using two processes but setting WORLD_SIZE to 1, I think setting WORLD_SIZE to 2 should fix this issue. |
st176995 | Good spot!
But now it hangs on both =/, editted above
$ python3 /home/dario/.local/lib/python3.6/site-packages/torch/distributed/launch.py --nnode=2 --node_rank=0 --nproc_per_node=1 msg3d/MSG3D/tuturial_DistributedDataParallel.py --local_world_size=1
[23616] Initializing process group with: {'MASTER_ADDR': '148.251.86.243', 'MASTER_PORT': '29500', 'RANK': '0', 'WORLD_SIZE': '2'} |
st176996 | Continuing to try debug this
Setting export NCCL_DEBUG=INFO ; export NCCL_DEBUG_SUBSYS=ALL ; produces no new output
Editted in above |
st176997 | Next attempt:
torch-1.8.0.dev20210208%2Bcu110-cp36-cp36m-linux_x86_64.whl
With
backend="nccl"
NCCL_DEBUG: INFO
NCCL_DEBUG_SUBSYS: ALL
Absolutely no change, dist.init_process_group(backend="nccl") hangs in both machines and no extra output is given |
st176998 | Deleted down to a minimal example:
import os
import torch.distributed as dist
os.environ['MASTER_ADDR'] = '148.251.86.243'
print(f"[ {os.getpid()} ] Initializing process group")
dist.init_process_group(backend="nccl")
print(f"[ {os.getpid()} ] world_size = {dist.get_world_size()}, " + f"rank = {dist.get_rank()}, backend={dist.get_backend()}")
editted above |
st176999 | Created issue init_process_group with launch.py --nnode=2 hangs always in all machines · Issue #52848 · pytorch/pytorch · GitHub 17 |
st177000 | Ok, managed to open a port (29500) that allows init_process_group to run correctly.
In my firewall settings it looks like
Source IP | Destination IP | Source port | Destination port | Protocol | TCP flags | Action
| | | 29500 | tcp | | accept
After master process is running (that starts listening on 29500), then doing telnet 148.251.86.243 29500 says
Trying 148.251.86.243...
Connected to gpu14.
Escape character is '^]'.
confirming the port is correctly open.
And running the minimal example above on the second server makes init_process_group return correctly and print my second print above
[ 8172 ] world_size = 2, rank = 0, backend=nccl |
st177001 | Hello,
I am using Pytorch (Pytorch Lightning Framework) to train the Text to Text Transformer model (google/mt5-base at main 4).
I trained them on 1, 4, 5, 8 gpu environment using DDP.
However, all of 8gpu and 5gpu training attempts, are stuck and failed at a specific point in a specific epoch (54).
This is the last log before stuck, as it seems, its end of an epoch, so I assume that training is stuck due to data loading for next epoch in 8gpu or 5gpu environment.
This issue also occurred regardless of num_worker in DataLoader or different batch_size (32, 16)
Epoch 54: 100%|█████████▉| 2921/2931 [43:38<00:08, 1.12it/s, loss=.., v_num=0]
Epoch 54: 100%|█████████▉| 2925/2931 [43:41<00:05, 1.12it/s, loss=.., v_num=0]
Validating: 99%|█████████▉| 280/282 [02:32<00:01, 1.59it/s]e[A
Epoch 54: 100%|██████████| 2931/2931 [44:01<00:00, 1.11it/s, loss=.., v_num=0]
Any comment or suggestion would be appreciated.
Thank you. |
st177002 | Do you have even number of batches across all GPUs? If not, the training could get stuck since DDP performs a collective sync across all GPUs in the backward pass. If you have uneven batches, you can try using the join 20 API to get around this. |
st177003 | @pritamdamania87,
First of all, thank you for your reply.
If I understand your comment correctly, It seems I both had training with even and uneven batch size across all GPUs.
The log shows 2931 which is the uneven number of batches assigned for a single GPU. however, in another training session, a even number of batches (like 1466) have also experienced the same issue.
Could you suggest how to correctly measure the number of batches per GPU? in case I was wrong in measure?
I will check to join API try to get around this issue of course.
Thank you.! |
st177004 | Could you suggest how to correctly measure the number of batches per GPU? in case I was wrong in measure?
You would probably have to add some logging code to your application which records the number of batches on each rank as part of the training loop. |
st177005 | @pritamdamania87
Thank you for your comment
I am now still in debug, and experienced the same error on DataParallel setting (without hang; just training loop stops at the end of 54 epoch)
Is this good evidence for the issue? |
st177006 | This issue is caused by my code mistake.
I accidentally give a small (54) to max epoch value, after adjust the max epoch value, works fine on DP or CPU environment.
DDP also hangs after the max epoch which seems is interpretable behavior, but not normal behavior. (currently I don’t know why.) |
st177007 | I try to get a free port in DDP initialization of PyTorch. However, my code get stuck. The following snippet could repeat my description:
def get_open_port():
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(('', 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return s.getsockname()[1]
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
port = get_open_port()
os.environ['MASTER_PORT'] = str(port) # '12345'
# Initialize the process group.
dist.init_process_group('NCCL', rank=rank, world_size=world_size)
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 5)
def forward(self, x):
print(f'x device={x.device}')
return self.net1(x)
def demo_basic(rank, world_size):
setup(rank, world_size)
logger = logging.getLogger('train')
logger.setLevel(logging.DEBUG)
logger.info(f'Running DPP on rank={rank}.')
# Create model and move it to GPU.
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) # optimizer takes DDP model.
optimizer.zero_grad()
inputs = torch.randn(20, 10) # .to(rank)
print(f'inputs device={inputs.device}')
outputs = ddp_model(inputs)
print(f'output device={outputs.device}')
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_func, world_size):
mp.spawn(
demo_func,
args=(world_size,),
nprocs=world_size,
join=True
)
run_demo(demo_basic, 4)
The function get_open_port is supposed to free the port after invocation. My questions are: 1. How does it happen? 2. How to fix it? |
st177008 | Solved by pritamdamania87 in post #2
The problem here is that each of the processes end up with a different port due to get_open_port. As a result, the processes can’t rendezvous properly. The MASTER_PORT env variable needs to be the same on all processes and you probably need to choose a fixed port for this. The other option is to fin… |
st177009 | The problem here is that each of the processes end up with a different port due to get_open_port. As a result, the processes can’t rendezvous properly. The MASTER_PORT env variable needs to be the same on all processes and you probably need to choose a fixed port for this. The other option is to find a free port on the master, then communicate this port to all processes using some out of band mechanism to ensure all processes use the same port.
For example, if you’re running all processes on a single host (which is what your example code does). You can call get_open_port in the run_demo function and pass the free port to all processes as an argument to the demo_func method. |
st177010 | Is is supported to re-initialize the pytorch distributed RPC framework after it has been shut down previously, from the same process?
e.g.
rpc.init_rpc(...)
rpc.shutdown()
rpc.init_rpc(...) |
st177011 | it should work with FileStore, there is still some issue for using TCPStore if you call init_rpc() twice |
st177012 | Hi, @mrshenli. I have read your post model parallel tutorial 1. Here is my problem, the code is avaiable here: cyclegan. The model is too large to fit in a single gpu, and the network uses multi loss, so I cannot easily split the model into 2 or more gpus which may make errors such as ‘Runtime Error: expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0’. Could you have a look at my codes and give some advice? Thanks.
My codes are as follows, the key codes have be given comments above and the command is ‘CUDA_VISIBLE_DEVICES=0,1 python cyclegan.py’:
import argparse
import os
import numpy as np
import math
import itertools
import datetime
import time
import torchvision.transforms as transforms
from torchvision.utils import save_image, make_grid
from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable
from models import *
from datasets import *
from utils import *
import torch.nn as nn
import torch.nn.functional as F
import torch
parser = argparse.ArgumentParser()
parser.add_argument("--epoch", type=int, default=0, help="epoch to start training from")
parser.add_argument("--n_epochs", type=int, default=200, help="number of epochs of training")
parser.add_argument("--dataset_name", type=str, default="monet2photo", help="name of the dataset")
parser.add_argument("--batch_size", type=int, default=1, help="size of the batches")
parser.add_argument("--lr", type=float, default=0.0002, help="adam: learning rate")
parser.add_argument("--b1", type=float, default=0.5, help="adam: decay of first order momentum of gradient")
parser.add_argument("--b2", type=float, default=0.999, help="adam: decay of first order momentum of gradient")
parser.add_argument("--decay_epoch", type=int, default=100, help="epoch from which to start lr decay")
parser.add_argument("--n_cpu", type=int, default=8, help="number of cpu threads to use during batch generation")
parser.add_argument("--img_height", type=int, default=256, help="size of image height")
parser.add_argument("--img_width", type=int, default=256, help="size of image width")
parser.add_argument("--channels", type=int, default=3, help="number of image channels")
parser.add_argument("--sample_interval", type=int, default=100, help="interval between saving generator outputs")
parser.add_argument("--checkpoint_interval", type=int, default=-1, help="interval between saving model checkpoints")
parser.add_argument("--n_residual_blocks", type=int, default=9, help="number of residual blocks in generator")
parser.add_argument("--lambda_cyc", type=float, default=10.0, help="cycle loss weight")
parser.add_argument("--lambda_id", type=float, default=5.0, help="identity loss weight")
opt = parser.parse_args()
print(opt)
# Create sample and checkpoint directories
os.makedirs("images/%s" % opt.dataset_name, exist_ok=True)
os.makedirs("saved_models/%s" % opt.dataset_name, exist_ok=True)
# Losses
criterion_GAN = torch.nn.MSELoss()
criterion_cycle = torch.nn.L1Loss()
criterion_identity = torch.nn.L1Loss()
cuda = torch.cuda.is_available()
input_shape = (opt.channels, opt.img_height, opt.img_width)
# Initialize generator and discriminator
G_AB = GeneratorResNet(input_shape, opt.n_residual_blocks)
G_BA = GeneratorResNet(input_shape, opt.n_residual_blocks)
D_A = Discriminator(input_shape)
D_B = Discriminator(input_shape)
if cuda:
#G_AB = G_AB.cuda()
#G_BA = G_BA.cuda()
#D_A = D_A.cuda()
#D_B = D_B.cuda()
# here is my codes
G_AB = G_AB.cuda("cuda:0")
G_BA = G_BA.cuda("cuda:0")
D_A = D_A.cuda("cuda:1")
D_B = D_B.cuda("cuda:1")
criterion_GAN.cuda()
criterion_cycle.cuda()
criterion_identity.cuda()
if opt.epoch != 0:
# Load pretrained models
G_AB.load_state_dict(torch.load("saved_models/%s/G_AB_%d.pth" % (opt.dataset_name, opt.epoch)))
G_BA.load_state_dict(torch.load("saved_models/%s/G_BA_%d.pth" % (opt.dataset_name, opt.epoch)))
D_A.load_state_dict(torch.load("saved_models/%s/D_A_%d.pth" % (opt.dataset_name, opt.epoch)))
D_B.load_state_dict(torch.load("saved_models/%s/D_B_%d.pth" % (opt.dataset_name, opt.epoch)))
else:
# Initialize weights
G_AB.apply(weights_init_normal)
G_BA.apply(weights_init_normal)
D_A.apply(weights_init_normal)
D_B.apply(weights_init_normal)
# Optimizers
optimizer_G = torch.optim.Adam(
itertools.chain(G_AB.parameters(), G_BA.parameters()), lr=opt.lr, betas=(opt.b1, opt.b2)
)
optimizer_D_A = torch.optim.Adam(D_A.parameters(), lr=opt.lr, betas=(opt.b1, opt.b2))
optimizer_D_B = torch.optim.Adam(D_B.parameters(), lr=opt.lr, betas=(opt.b1, opt.b2))
# Learning rate update schedulers
lr_scheduler_G = torch.optim.lr_scheduler.LambdaLR(
optimizer_G, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step
)
lr_scheduler_D_A = torch.optim.lr_scheduler.LambdaLR(
optimizer_D_A, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step
)
lr_scheduler_D_B = torch.optim.lr_scheduler.LambdaLR(
optimizer_D_B, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step
)
Tensor = torch.cuda.FloatTensor if cuda else torch.Tensor
# Buffers of previously generated samples
fake_A_buffer = ReplayBuffer()
fake_B_buffer = ReplayBuffer()
# Image transformations
transforms_ = [
transforms.Resize(int(opt.img_height * 1.12), Image.BICUBIC),
transforms.RandomCrop((opt.img_height, opt.img_width)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]
# Training data loader
dataloader = DataLoader(
ImageDataset("../../data/%s" % opt.dataset_name, transforms_=transforms_, unaligned=True),
batch_size=opt.batch_size,
shuffle=True,
num_workers=opt.n_cpu,
)
# Test data loader
val_dataloader = DataLoader(
ImageDataset("../../data/%s" % opt.dataset_name, transforms_=transforms_, unaligned=True, mode="test"),
batch_size=5,
shuffle=True,
num_workers=1,
)
def sample_images(batches_done):
"""Saves a generated sample from the test set"""
imgs = next(iter(val_dataloader))
G_AB.eval()
G_BA.eval()
real_A = Variable(imgs["A"].type(Tensor))
fake_B = G_AB(real_A)
real_B = Variable(imgs["B"].type(Tensor))
fake_A = G_BA(real_B)
# Arange images along x-axis
real_A = make_grid(real_A, nrow=5, normalize=True)
real_B = make_grid(real_B, nrow=5, normalize=True)
fake_A = make_grid(fake_A, nrow=5, normalize=True)
fake_B = make_grid(fake_B, nrow=5, normalize=True)
# Arange images along y-axis
image_grid = torch.cat((real_A, fake_B, real_B, fake_A), 1)
save_image(image_grid, "images/%s/%s.png" % (opt.dataset_name, batches_done), normalize=False)
# ----------
# Training
# ----------
prev_time = time.time()
for epoch in range(opt.epoch, opt.n_epochs):
for i, batch in enumerate(dataloader):
# Set model input
# here is my codes
real_A = Variable(batch["A"].type(Tensor)).to("cuda:0")
real_B = Variable(batch["B"].type(Tensor)).to("cuda:0")
# Adversarial ground truths
valid = Variable(Tensor(np.ones((real_A.size(0), *D_A.output_shape))), requires_grad=False).to("cuda:0")
fake = Variable(Tensor(np.zeros((real_A.size(0), *D_A.output_shape))), requires_grad=False).to("cuda:0")
# ------------------
# Train Generators
# ------------------
G_AB.train()
G_BA.train()
optimizer_G.zero_grad()
# Identity loss
loss_id_A = criterion_identity(G_BA(real_A), real_A)
loss_id_B = criterion_identity(G_AB(real_B), real_B)
loss_identity = (loss_id_A + loss_id_B) / 2
# GAN loss
fake_B = G_AB(real_A)
loss_GAN_AB = criterion_GAN(D_B(fake_B.to("cuda:1")), valid.to("cuda:1"))
fake_A = G_BA(real_B)
loss_GAN_BA = criterion_GAN(D_A(fake_A.to("cuda:1")), valid.to("cuda:1"))
loss_GAN = (loss_GAN_AB + loss_GAN_BA) / 2
# Cycle loss
recov_A = G_BA(fake_B)
loss_cycle_A = criterion_cycle(recov_A, real_A)
recov_B = G_AB(fake_A)
loss_cycle_B = criterion_cycle(recov_B, real_B)
loss_cycle = (loss_cycle_A + loss_cycle_B) / 2
# Total loss
# error occurs here for loss_GAN is on device 1, and it seems that 'loss_GAN.to("cuda:0")' does not work!
# Runtime Error: expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0
loss_G = loss_GAN + opt.lambda_cyc * loss_cycle + opt.lambda_id * loss_identity
loss_G.backward()
optimizer_G.step()
# -----------------------
# Train Discriminator A
# -----------------------
optimizer_D_A.zero_grad()
# Real loss
loss_real = criterion_GAN(D_A(real_A), valid)
# Fake loss (on batch of previously generated samples)
fake_A_ = fake_A_buffer.push_and_pop(fake_A)
loss_fake = criterion_GAN(D_A(fake_A_.detach()), fake)
# Total loss
loss_D_A = (loss_real + loss_fake) / 2
loss_D_A.backward()
optimizer_D_A.step()
# -----------------------
# Train Discriminator B
# -----------------------
optimizer_D_B.zero_grad()
# Real loss
loss_real = criterion_GAN(D_B(real_B), valid)
# Fake loss (on batch of previously generated samples)
fake_B_ = fake_B_buffer.push_and_pop(fake_B)
loss_fake = criterion_GAN(D_B(fake_B_.detach()), fake)
# Total loss
loss_D_B = (loss_real + loss_fake) / 2
loss_D_B.backward()
optimizer_D_B.step()
loss_D = (loss_D_A + loss_D_B) / 2
# --------------
# Log Progress
# --------------
# Determine approximate time left
batches_done = epoch * len(dataloader) + i
batches_left = opt.n_epochs * len(dataloader) - batches_done
time_left = datetime.timedelta(seconds=batches_left * (time.time() - prev_time))
prev_time = time.time()
# Print log
sys.stdout.write(
"\r[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f, adv: %f, cycle: %f, identity: %f] ETA: %s"
% (
epoch,
opt.n_epochs,
i,
len(dataloader),
loss_D.item(),
loss_G.item(),
loss_GAN.item(),
loss_cycle.item(),
loss_identity.item(),
time_left,
)
)
# If at sample interval save image
if batches_done % opt.sample_interval == 0:
sample_images(batches_done)
# Update learning rates
lr_scheduler_G.step()
lr_scheduler_D_A.step()
lr_scheduler_D_B.step() |
st177013 | Hey @shaoming20798, are G_AB and G_BA the two large models you referred to? Will it work if you put your G_AB and G_BA models on two different GPUs, move the computed loss into the same GPU, then compute loss_G on that GPU and run backward from there?
BTW, for distributed training discussions, please consider adding a “distributed” tag. People working on distributed training will actively checking that category. |
st177014 | I am trying to train MaskRCNN (class from Torchvision) on 3 machines (master, worker-1 and worker-2) (also multiple GPUs per machine) using distributed RPC & with model parallelism. So far I was able to divide the whole architecture onto 2 machines (manually). So far I have used below resources extensively to do this,
Finetuning MaskRCNN in general: TorchVision Object Detection Finetuning Tutorial — PyTorch Tutorials 1.7.1 documentation 2
Distributed RPC: Distributed Pipeline Parallelism Using RPC — PyTorch Tutorials 1.7.1 documentation 1
Torchvision github
Now only thing left is to write the training loop, where I am stuck. I have observed the training loop used in this 2. And I need to convert this code to distributed training loop. How should I edit https://github.com/pytorch/vision/tree/master/references/detection/engine.py (Which contains training loop for training mask-rcnn as linked previously) to achieve this? Also training loop for distributed rpc is also given in this example 1 (As mentioned above) for classification task. How to combine these 2 to train mask rcnn in distributed way (with model parallelism)?
EDIT :
Question-2:
Second shard of the model which resides on the second machine, has 2 modules. Each of them is allocated 1 GPU on that machine (meaning module 1 on GPU-1, module 2 on GPU-2). And I have other 2 GPUs on the same machine, which are ideal. Dividing these 2 modules on 4 GPUs equally mean overriding their forward methods, which are very complicated (Talking about RPN and RoI Heads module). Can I do something like this, put module-1 on GPU-1 & GPU-2, module-2 on GPU-3 & GPU-4, now when input batch comes split that equally into 2 parts, process them like this, part1 → module-1 (GPU:1) → module-2 (GPU:3) & part2 → module-1 (GPU:2) → module-2 (GPU:4) (of course, these 2 workflows will run concurrently, at the end their losses will be averaged and avg. loss will be returned to master node). I have already referred to following articles,
https://pytorch.org/tutorials/advanced/rpc_ddp_tutorial.html
https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html
Is it possible to perform above explained case in PyTorch? |
st177015 | To be clear, since you already know how to split the model into two model shards, is your question how to convert the following lines in https://github.com/pytorch/vision/blob/master/references/detection/engine.py:
optimizer.zero_grad()
losses.backward()
optimizer.step()
into the code in RPC Framework Tutorial?
with dist_autograd.context() as context_id:
outputs = model(inputs)
dist_autograd.backward(context_id, [loss_fn(outputs, labels)])
opt.step(context_id)
cc: @mrshenli |
st177016 | I know about the conversion you mentioned in your answer. But what should be done to these lines in engine.py. Should I keep it as it is?
# reduce losses over all GPUs for logging purposes
loss_dict_reduced = utils.reduce_dict(loss_dict)
losses_reduced = sum(loss for loss in loss_dict_reduced.values())
I have added another question please take look at that. Thanks for answering my questions. |
st177017 | I know about the conversion you mentioned in your answer. But what should be done to these lines in engine.py. Should I keep it as it is?
Hey @matrix, reduce_dict internally uses collective communication. If you need that, you will need to setup the process group with init_process_group properly in the way that only the processes that hold loss_dict form a gang.
However, since you already use RPC, do you still need reduce_dict? Will it work if you send the loss to the same process, reduce those loss their locally (without using collective communication), and then launch distributed backward from there? |
st177018 | I’ve been using DDP for all my distributed training and now would like to use tensorboard for my visualization/logging. The only solution I can think of is to use “gather” in the rank 0 process each time I want to log an item to the board, since each process/GPU only has a subset of the data and statistics. Apparently this is somewhat cumbersome and I’m not sure if this hurts distributed efficiency. I wonder:
if doing so indeed hurts distributed efficiency
if there are recommended practices when it comes to use tensorboard in the DDP setting? If so, what are they?
Thanks! |
st177019 | @CDhere 1. all_gather involves communication sync among all ranks, so it has overhead. Depending on your application latency, you can measure the ratio btw the tensor board all gather overhead and total application latency. Based on the measured overhead, you can decide how frequently you want to dump data to tensor board. 2. as far as I know Tensorboard does not natively all gather info from multiple ranks. All gather by application should be a good way for now. |
st177020 | Hi, I am using distributed data parallel with nccl as backend for the following workload.
There are 2 nodes, node 0 will send tensors to node 1.
The send / recv process will run 100 times in a for loop.
The problem is node 0 will finish send 100 times, but node 1 will get stuck around 40 - 50.
Here is the code:
def main():
args = parser.parse_args()
main_worker(args.local_rank, args)
def main_worker(rank, args):
dist.init_process_group(backend='nccl')
data_processing_group = dist.new_group([0])
training_group = dist.new_group([1])
torch.cuda.set_device(rank)
if rank == 0:
func_0(rank, args)
else:
func_1(rank, args)
def func_0(rank, args):
for tile_id in range(100):
print('GPU: 0 started sending tile %d to GPU: 1.'%(tile_id))
tile = torch.zeros([1, 3, 3072, 3072], dtype=torch.float, device=rank)
tile.fill_(tile_id)
dist.send(tensor=tile, dst=1, tag=tile_id)
def func_1(rank, args):
for tile_id in range(100):
print('GPU: 1 started receiving tile %d from GPU: 0.'%(tile_id))
tile = torch.zeros([1, 3, 3072, 3072], dtype=torch.float, device=torch.device('cuda', rank))
dist.recv(tensor=tile, src=0, tag=tile_id)
print(torch.mean(tile))
if __name__ == '__main__':
main()
Here is the running command:
python -m torch.distributed.launch --nproc_per_node=1 --nnode=2 --node_rank=0 --master_addr="127.0.0.1" --master_port=29500 test.py --local_rank 0
python -m torch.distributed.launch --nproc_per_node=1 --nnode=2 --node_rank=1 --master_addr="127.0.0.1" --master_port=29500 test.py --local_rank 1
Here are the results:
node 0:
GPU: 0 started sending tile 90 to GPU: 1.
GPU: 0 started sending tile 91 to GPU: 1.
GPU: 0 started sending tile 92 to GPU: 1.
GPU: 0 started sending tile 93 to GPU: 1.
GPU: 0 started sending tile 94 to GPU: 1.
GPU: 0 started sending tile 95 to GPU: 1.
GPU: 0 started sending tile 96 to GPU: 1.
GPU: 0 started sending tile 97 to GPU: 1.
GPU: 0 started sending tile 98 to GPU: 1.
GPU: 0 started sending tile 99 to GPU: 1.
algorithms7@oyysuoctr1613739854048-vvjzp:~/hms2-pytorch-ddp/hms2_pseudo$
node 1:
GPU: 1 started receiving tile 39 from GPU: 0.
tensor(39., device='cuda:1')
GPU: 1 started receiving tile 40 from GPU: 0.
tensor(40., device='cuda:1')
GPU: 1 started receiving tile 41 from GPU: 0.
tensor(41., device='cuda:1')
GPU: 1 started receiving tile 42 from GPU: 0.
tensor(42., device='cuda:1')
GPU: 1 started receiving tile 43 from GPU: 0.
tensor(43., device='cuda:1')
GPU: 1 started receiving tile 44 from GPU: 0.
tensor(44., device='cuda:1')
GPU: 1 started receiving tile 45 from GPU: 0.
tensor(45., device='cuda:1')
GPU: 1 started receiving tile 46 from GPU: 0.
Node 0 fills the tile (tensor) with the same value of its tile_id before sending, and node 1 will check the values of the received tiles. From the results, I can conclude that all the tiles received indeed have the correct value.
I also changed using isend / irecv and called req.wait(), but the same problem exists as well.
I am also confused of some parts of send / recv, from PyTorch distributed docs 2, it says nccl doesn’t support send / recv, but there is actual data transmission indeed. So what is the underlying protocol of send / recv?
My nccl version is 2.8.02 and PyTorch version is 1.8.0, thank you so much! |
st177021 | Solved by Yanli_Zhao in post #3
@richguybobby NCCL supports point to point send/recv since 2.7.3 version, we added send and recv API on top of it in PT 1.8.0. The NCCL operations should be async, to synchronize the stream, do you want to try to call cudaDeviceSynchronize() to synchronize all streams with the host device, or call … |
st177022 | One current solution is to add:
dist.barrier()
at the end of for loop, but send / recv should perform blocking itself IMO. |
st177023 | @richguybobby NCCL supports point to point send/recv since 2.7.3 version, we added send and recv API on top of it in PT 1.8.0. The NCCL operations should be async, to synchronize the stream, do you want to try to call cudaDeviceSynchronize() to synchronize all streams with the host device, or call cudaEventSynchronize to synchronize one event with host device. Meanwhile, we will update the PyTorch Distributed docs. Thanks |
st177024 | Hi,
I had a few models training with distributed pytorch (DistributedSampler + DistributedDataParallel + multiprocessing). While the models were training for a few days, I changed a part of the data transformation code, where I renamed a file and changed all the necessary imports.
After I changed this part of the code, the models that were training all suddenly crashed when initializing the next epoch. They all crashed with error messages along the lines of “No module named __”.
What’s weird is that this module is only loaded when initializing each process, and the training loop is confined within each spawned process. Thus, I’m not sure why changing the name of this module caused my code to crash mid-training. Is this a common issue in multiprocessing? Am I misunderstanding something here?
PS. In case it helps to know which module it was…
The module I changed was a file named transformations.transforms. I changed it to transformations.single_transforms since it seemed to be interfering with torchvision.transforms. As usual, loading transformations only occurs once in the code just before loading the dataset.
Also, it wasn’t like the training crashed as soon as I made this change - it crashed after finishing 1 epoch of training which is also weird…
Thanks in advance! |
st177025 | Do you have some code that we can examine? I put together a very simple program with multiprocessing and tried changing one of the module names during execution, but there was no crash. Are you potentially loading the module within the Dataset or Dataloader classes? Given that the program code is compiled into Python bytecode, and only then interpreted, I’m not sure why changing the Python code mid-execution will affect the program since it’s just the bytcode being interpreted. In fact, even deleting the bytecode during execution shouldn’t make a difference since the program has been loaded into memory. Any thoughts on what could cause this behavior @mrshenli?
As an aside, I would recommend checkpointing (and potentially torchelastic) so you don’t lose training progress for long-running jobs. |
st177026 | So my entire code base is actually quite large, but here are some necessary details.
As I said, the module import error occurred after changing transformations.transforms to transformations.single_transforms. This module is only directly imported in transformations.__init__ where my transform_factory code resides.
The multiprocessing code is structured as follows. As usual, I have some relevant setup in the main function, where I spawn the main_process function.
In the main_process function, the code is structured as:
Init process group
Obtain transformations (through transform_factory - probably the module in question)
Create Datasets, Distributed samplers, and Dataloaders
Create distributed model, loss fns, optimizers, etc.
Initialize the trainer class
Run training
The last step - run training - is basically just a nested for loop where I run one round of training followed by one round of evaluation. Simplified example:
def run(self):
self.model.train()
for epoch in range(0, self.num_epochs):
if self.train_sampler is not None:
self.train_sampler.set_epoch(epoch)
for phase in ['train', 'val']:
if phase == 'train':
self.model.train()
train_results = self.train_one_epoch(epoch)
print(train_results)
else:
self.model.eval()
val_results = self.validate(epoch)
print(val_results)
In no part of the training do I reference or try to import from transformations.single_transforms. And as you said, even if I did, it shouldn’t matter because of the python Bytecode.
Some other details that may help…
I’ve been using PyTorch for around 3 years now, mostly using DataParallel, and I’ve never encountered this issue before. I only switched to Distributed training a few weeks ago, and it’s my first time seeing this problem.
The training doesn’t bug out as soon as the change is made. In fact, it will finish its current phase of training (until the dataloader is finished iterating), then die during the transition from train --> evaluation or vice versa.
Finally, thanks for the suggestion. I do in fact checkpoint every epoch, so I can resume training. I just wanted to get to the bottom of this because I just can’t understand why the code would crash mid-training. I talked to one of my colleagues about this, and he said that he’s experienced something similar in distributed training. He also noted that changing the model structure will also cause the code to crash, but I haven’t checked this for myself. |
st177027 | Well I use DDP to speed up my training, so I’m not sure if reducing the num_workers to 0 - and thus, slowing down the code - would be a solution |
st177028 | Hi!
Does multiprocessing works in CPU?
I need to predict 200 images (200x200) with CPU, but when I use multiprocessing to create some process the execution time don’t change (or the execution time increase).
Thanks! |
st177029 | Solved by tom in post #2
PyTorch uses multiple threads in the default configuration, if you want PyTorch to only use one thread per process, you would want to disable that:
https://pytorch.org/docs/stable/torch.html#parallelism |
st177030 | PyTorch uses multiple threads in the default configuration, if you want PyTorch to only use one thread per process, you would want to disable that:
https://pytorch.org/docs/stable/torch.html#parallelism 8 |
st177031 | Hello,
I am relatively new to PyTorch Distributed Parallel and I have access to GPU nodes with Infiniband so I think I can use the NCCL Backend. I am using Slurm scripts to submit my jobs on these resources. The following is an example of a SLURM script that I am using to submit a job. NOTE HERE that I am using OpenMPI to launch multiple instances of my docker container on the different nodes in the job. The docker container that I am using for this job is linked here.
github.com
LordVoldemort28/docker-deep-machine-learning/blob/master/Dockerfile 3
FROM unlhcc/cuda-ubuntu:9.2
MAINTAINER Rahul Prajapati <[email protected]>
#github https://github.com/LordVoldemort28/docker-deep-machine-learning
#Credit - waleedka/modern-deep-learning
#https://hub.docker.com/r/waleedka/modern-deep-learning/
# Supress warnings about missing front-end. As recommended at:
# http://stackoverflow.com/questions/22466255/is-it-possibe-to-answer-dialog-questions-when-installing-under-docker
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
apt-utils \
git \
curl \
vim \
unzip \
wget \
build-essential cmake \
This file has been truncated. show original
SLURM FILE
#!/bin/sh
#SBATCH --ntasks-per-node=2
#SBATCH --time=168:00:00
#SBATCH --partition=gpu
#SBATCH --mem=80gb
#SBATCH --nodes=2
#SBATCH --gres=gpu:2
#SBATCH --constraint=gpu_32gb
#SBATCH --job-name=binary_classification
#SBATCH --output=binary_classification.out
pwd; hostname; date
env | grep SLURM | sort
ulimit -s unlimited
ulimit -c unlimited
export PYTHONPATH=$WORK/tf-gpu-pkgs
module purge
module load singularity compiler/gcc/4.8 openmpi
module list
mpirun singularity exec $WORK/pyopencv.sif python3 -u $@ --multiprocessing_distributed --dist_backend='nccl' --rank=0 --use_adam=1 --benchmarks=1 --benchmark_arch='vgg19' --batch_size=128 --test=1 --transfer=0 --dataset='binary_dataset'
cgget -r memory.max_usage_in_bytes /slurm/uid_${UID}/job_${SLURM_JOBID}/
mem_report
When I run the job, I get the following error and I am not sure what is exactly causing this issue. My implementation is similar to the ImageNet example on PyTorch distributed. I have not been able to make this work for weeks now and would really appreciate any help on this since I don’t have much experience with distributed systems.
ERROR RECEIVED
Traceback (most recent call last):
File "distributed_main.py", line 391, in <module>
main()
File "distributed_main.py", line 138, in main
args=(ngpus_per_node, args))
File "/usr/local/lib/python3.5/dist-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/usr/local/lib/python3.5/dist-packages/torch/multiprocessing/spawn.py", line 107, in join
(error_index, name)
Exception: process 0 terminated with signal SIGKILL
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[53625,1],0]
Exit code: 1
Thank you,
Ayush |
st177032 | Have you tried other backend types (Gloo, MPI), do they fail with the same error?
How do you initialize process group and construct DistributedDataParallel?
For debugging, we would first try a minimum DDP example like this one 85 and make sure it works correctly with the given environment. And then switch to more complex models. |
st177033 | trying mpi is probably a really bad idea see my error:
$ python playground/multiprocessing_playground/ddp_basic_example.py
starting __main__
running main()
current process: <_MainProcess name='MainProcess' parent=None started>
pid: 4060
world_size=1
/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py:404: UserWarning: For MPI backend, world_size (1) and rank (0) are ignored since they are assigned by the MPI runtime.
warnings.warn(
Traceback (most recent call last):
File "playground/multiprocessing_playground/ddp_basic_example.py", line 153, in <module>
main()
File "playground/multiprocessing_playground/ddp_basic_example.py", line 148, in main
mp.spawn(run_parallel_training_loop, args=(world_size,), nprocs=world_size)
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/miranda9/ML4Coq/playground/multiprocessing_playground/ddp_basic_example.py", line 107, in run_parallel_training_loop
setup_process(rank, world_size)
File "/home/miranda9/ML4Coq/playground/multiprocessing_playground/ddp_basic_example.py", line 58, in setup_process
dist.init_process_group(backend, rank=rank, world_size=world_size)
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 409, in init_process_group
_default_pg = _new_process_group_helper(
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 482, in _new_process_group_helper
raise RuntimeError(
RuntimeError: Distributed package doesn't have MPI built in. MPI is only included if you build PyTorch from source on a host that has MPI installed.
I also tried both gloo and nccl, each with a different error, e.g.:
$ python playground/multiprocessing_playground/ddp_basic_example.py
starting __main__
running main()
current process: <_MainProcess name='MainProcess' parent=None started>
pid: 4175
world_size=1
Start running DDP with model parallel example on rank: 0.
current process: <SpawnProcess name='SpawnProcess-1' parent=4175 started>
pid: 4198
End running DDP with model parallel example on rank: 0.
End current process: <SpawnProcess name='SpawnProcess-1' parent=4175 started>
End pid: 4198
*** Error in `/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python': free(): invalid size: 0x000055b4ffbb8800 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x81299)[0x2ac3ca566299]
/usr/local/cuda/lib64/libcublasLt.so.11(free_gemm_select+0x4d)[0x2ac3ee03de7d]
/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/lib/../../../../libcublas.so.11(cublasDestroy_v2+0x165)[0x2ac3e7929af5]
/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so(+0xc13a3d)[0x2ac3ff678a3d]
/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so(+0xc13b41)[0x2ac3ff678b41]
/lib64/libc.so.6(+0x39ce9)[0x2ac3ca51ece9]
/lib64/libc.so.6(+0x39d37)[0x2ac3ca51ed37]
/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python(+0x25fe29)[0x55b4ab5b4e29]
/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python(+0x25fe5d)[0x55b4ab5b4e5d]
/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python(+0x25feb4)[0x55b4ab5b4eb4]
/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python(PyRun_SimpleStringFlags+0x66)[0x55b4ab5b7d66]
/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python(Py_RunMain+0x165)[0x55b4ab5b7ed5]
/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python(Py_BytesMain+0x39)[0x55b4ab5b82d9]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x2ac3ca507555]
/home/miranda9/miniconda3/envs/automl-meta-learning/bin/python(+0x203493)[0x55b4ab558493]
======= Memory map: ========
200000000-200200000 ---p 00000000 00:00 0
200200000-200400000 rw-s 00000000 00:05 2897 /dev/nvidiactl
200400000-202400000 rw-s 00000000 00:05 2897 /dev/nvidiactl
202400000-205400000 rw-s 00000000 00:05 2897 /dev/nvidiactl
...
0000 r--p 00534000 00:31 9522865797 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/libmkl_rt.so
2ac46c6e0000-2ac46c6e2000 rw-p 0053a000 00:31 9522865797 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/libmkl_rt.so
2ac46c6e2000-2ac46c6f6000 rw-p 00000000 00:00 0
2ac46c6f6000-2ac46c6fc000 r--p 00000000 00:31 17557268491 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/mkl/_py_mkl_service.cpython-38-x86_64-linux-gnu.so
2ac46c6fc000-2ac46c712000 r-xp 00006000 00:31 17557268491 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/mkl/_py_mkl_service.cpython-38-x86_64-linux-gnu.so
2ac46c712000-2ac46c715000 r--p 0001c000 00:31 17557268491 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/mkl/_py_mkl_service.cpython-38-x86_64-linux-gnu.so
2ac46c715000-2ac46c716000 ---p 0001f000 00:31 17557268491 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/mkl/_py_mkl_service.cpython-38-x86_64-linux-gnu.so
2ac46c716000-2ac46c717000 r--p 0001f000 00:31 17557268491 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/mkl/_py_mkl_service.cpython-38-x86_64-linux-gnu.so
2ac46c717000-2ac46c71a000 rw-p 00020000 00:31 17557268491 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/mkl/_py_mkl_service.cpython-38-x86_64-linux-gnu.so
2ac46c71a000-2ac46c71b000 rw-p 00000000 00:00 0
2ac46c71b000-2ac46c745000 r--p 00000000 00:31 25797916 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so
2ac46c745000-2ac46c9b7000 r-xp 0002a000 00:31 25797916 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so
2ac46c9b7000-2ac46ca3f000 r--p 0029c000 00:31 25797916 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so
2ac46ca3f000-2ac46ca42000 r--p 00323000 00:31 25797916 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so
2ac46ca42000-2ac46ca5e000 rw-p 00326000 00:31 25797916 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so
2ac46ca5e000-2ac46cabf000 rw-p 00000000 00:00 0
2ac46cabf000-2ac46cac4000 r--p 00000000 00:31 30757369024 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/lib-dynload/_datetime.cpython-38-x86_64-linux-gnu.so
...
0000 00:05 2897 /dev/nvidiactl
2ac46cc64000-2ac46cc65000 rw-s 00000000 00:05 2897 /dev/nvidiactl
2ac46cc65000-2ac46cc66000 rw-s 00000000 00:05 2897 /dev/nvidiactl
2ac46cc6f000-2ac46cc78000 r--p 00000000 00:31 25797914 /home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/numpy/core/_multiarray_tests.cpython-38-x86_64-linux-gnu.so
2ac46cc78000-2ac46cc8b000 r-xp 00009000 00:31 25797914 /home/miranda9/miniconda3/envs/automl-meta-learnin
Traceback (most recent call last):
File "playground/multiprocessing_playground/ddp_basic_example.py", line 153, in <module>
main()
File "playground/multiprocessing_playground/ddp_basic_example.py", line 148, in main
mp.spawn(run_parallel_training_loop, args=(world_size,), nprocs=world_size)
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/home/miranda9/miniconda3/envs/automl-meta-learning/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 105, in join
raise Exception(
Exception: process 0 terminated with signal SIGABRT |
st177034 | This is my minimal self-contained example that runs out of the box:
"""
Based on: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
Correctness of code: https://stackoverflow.com/questions/66226135/how-to-parallelize-a-training-loop-ever-samples-of-a-batch-when-cpu-is-only-avai
Note: as opposed to the multiprocessing (torch.multiprocessing) package, processes can use
different communication backends and are not restricted to being executed on the same machine.
"""
import time
from typing import Tuple
import torch
from torch import nn, optim
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
import os
num_epochs = 5
batch_size = 8
Din, Dout = 10, 5
data_x = torch.randn(batch_size, Din)
data_y = torch.randn(batch_size, Dout)
data = [(i*data_x, i*data_y) for i in range(num_epochs)]
class PerDeviceModel(nn.Module):
"""
Toy example for a model ran in parallel but not distributed accross gpus
(only processes with their own gpu or hardware)
"""
def __init__(self):
super().__init__()
self.net1 = nn.Linear(Din, Din)
self.relu = nn.ReLU()
self.net2 = nn.Linear(Din, Dout)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
def setup_process(rank, world_size, backend='nccl'):
"""
Initialize the distributed environment (for each process).
gloo: is a collective communications library (https://github.com/facebookincubator/gloo). My understanding is that
it's a library/API for process to communicate/coordinate with each other/master. It's a backend library.
"""
# set up the master's ip address so this child process can coordinate
# os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends
# if torch.cuda.is_available():
# backend = 'nccl'
# Initializes the default distributed process group, and this will also initialize the distributed package.
dist.init_process_group(backend, rank=rank, world_size=world_size)
def cleanup():
""" Destroy a given process group, and deinitialize the distributed package """
dist.destroy_process_group()
def get_batch(batch: Tuple[torch.Tensor, torch.Tensor], rank):
x, y = batch
if torch.cuda.is_available():
x, y = x.to(rank), y.to(rank)
else:
x, y = x.share_memory_(), y.share_memory_()
return x, y
def get_ddp_model(model: nn.Module, rank):
"""
Moves the underlying storage to shared memory.
This is a no-op if the underlying storage is already in shared memory
and for CUDA tensors. Tensors in shared memory cannot be resized.
:return:
TODO: does this have to be done outside or inside the process? my guess is that it doesn't matter because
1) if its on gpu once it's on the right proc it moves it to cpu with id rank via mdl.to(rank)
2) if it's on cpu then mdl.share_memory() or data.share_memory() is a no op if it's already in shared memory o.w.
"""
# if gpu avail do the standard of creating a model and moving the model to the GPU with id rank
if torch.cuda.is_available():
# create model and move it to GPU with id rank
model = model.to(rank)
ddp_model = DDP(model, device_ids=[rank])
else:
# if we want multiple cpu just make sure the model is shared properly accross the cpus with shared_memory()
# note that op is a no op if it's already in shared_memory
model = model.share_memory()
ddp_model = DDP(model) # I think removing the devices ids should be fine...?
return ddp_model
# return OneDeviceModel().to(rank) if torch.cuda.is_available() else OneDeviceModel().share_memory()
def run_parallel_training_loop(rank, world_size):
"""
Distributed function to be implemented later.
This is the function that is actually ran in each distributed process.
Note: as DDP broadcasts model states from rank 0 process to all other processes in the DDP constructor,
you don’t need to worry about different DDP processes start from different model parameter initial values.
"""
setup_process(rank, world_size)
print()
print(f"Start running DDP with model parallel example on rank: {rank}.")
print(f'current process: {mp.current_process()}')
print(f'pid: {os.getpid()}')
# get ddp model
model = PerDeviceModel()
ddp_model = get_ddp_model(model, rank)
# do training
for batch_idx, batch in enumerate(data):
x, y = get_batch(batch, rank)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(x)
# Gradient synchronization communications take place during the backward pass and overlap with the backward computation.
loss_fn(outputs, y).backward() # When the backward() returns, param.grad already contains the synchronized gradient tensor.
optimizer.step() # TODO how does the optimizer know to do the gradient step only once?
print()
print(f"End running DDP with model parallel example on rank: {rank}.")
print(f'End current process: {mp.current_process()}')
print(f'End pid: {os.getpid()}')
# Destroy a given process group, and deinitialize the distributed package
cleanup()
def main():
print()
print('running main()')
print(f'current process: {mp.current_process()}')
print(f'pid: {os.getpid()}')
# args
if torch.cuda.is_available():
world_size = torch.cuda.device_count()
else:
world_size = mp.cpu_count()
world_size = 1
print(f'world_size={world_size}')
mp.spawn(run_parallel_training_loop, args=(world_size,), nprocs=world_size)
if __name__ == "__main__":
print('starting __main__')
start = time.time()
main()
print(f'execution length = {time.time() - start}')
print('Done!\a\n') |
st177035 | It looks like your DDP model is trained as expected but crashes when it exits. Do you want to pass join=True to mp.spawn()?
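For reference, a minimal way to make that explicit in the example above (note that join=True is already the default for torch.multiprocessing.spawn, so this mainly makes the intent explicit: the parent blocks until all spawned processes finish):
import torch.multiprocessing as mp
mp.spawn(run_parallel_training_loop, args=(world_size,), nprocs=world_size, join=True) |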
st177036 | Hi,
I am trying to leverage parallelism with distributed training, but my process seems to be hanging or running into a 'deadlock' sort of issue.
So I ran the code snippet below to test it, and it is hanging again. Can anyone please help with this?
import os
import torch
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9994'
os.environ['RANK'] = "0"
os.environ['WORLD_SIZE'] = "1"
torch.distributed.init_process_group(backend='nccl') ##hanging here
I’m running my code in a kubeflow server’s jupyter notebook and here are the versions that I’m using -
PyTorch version: 1.7.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.6 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB
Nvidia driver version: 450.51.06
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] kubeflow-pytorchjob==0.1.3
[pip3] numpy==1.18.5
[pip3] torch==1.7.1
[pip3] torchvision==0.8.2
[conda] Could not collect
Please suggest how to proceed further…
Thanks |
st177037 | Solved by stas in post #7
Good observation, @ptrblck! TF most likely uses a system-wide CUDA runtime - at least it did last time I checked.
USE_TF=0 should prevent TF from loading in the current incarnation of transformers, it’s very possible that this was at least a partial culprit.
The issue got fully resolved using NCCL… |
st177038 | I changed torch from 1.7.1 to 1.7.1+cu101 as shown below -
Collecting environment information...
PyTorch version: 1.7.1+cu101
Is debug build: False
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.6 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB
Nvidia driver version: 450.51.06
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] kubeflow-pytorchjob==0.1.3
[pip3] numpy==1.18.5
[pip3] torch==1.7.1+cu101
[pip3] torchaudio==0.7.2
[pip3] torchvision==0.8.2+cu101
[conda] Could not collect
After this, torch.distributed.init_process_group(backend='nccl') worked, but it hung again with the script. Below is the complete log -
2021-02-18 19:00:28.946359: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Some weights of the model checkpoint at /home/jovyan/models/roberta-large/ were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at /home/jovyan/models/roberta-large/ and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
loaded df
Encoding done
fastai-c2-0:13993:13993 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
fastai-c2-0:13993:13993 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eth0
fastai-c2-0:13993:13993 [0] NCCL INFO Bootstrap : Using [0]eth0:10.244.2.134<0>
fastai-c2-0:13993:13993 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
fastai-c2-0:13993:13993 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
fastai-c2-0:13993:13993 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
fastai-c2-0:13993:13993 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eth0
fastai-c2-0:13993:13993 [0] NCCL INFO NET/Socket : Using [0]eth0:10.244.2.134<0>
fastai-c2-0:13993:13993 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.1
Note: at the time we created this VM, we named it fastai-c2-0, but fastai has nothing to do with this issue as I'm not using it at all.
I'm using this command to launch it from the notebook -
!python -m torch.distributed.launch --nproc_per_node=1 ./Deepspeed.py --output_dir ./out_dir/results --overwrite_output_dir --do_train \
--do_eval --per_device_train_batch_size 10 --per_device_eval_batch_size 10 --learning_rate 3e-5 --weight_decay 0.01 \
--num_train_epochs 1 --load_best_model_at_end
Here is the simple script that I’m using -
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast, Trainer, TrainingArguments, HfArgumentParser
import pandas as pd
import numpy as np
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
os.environ['NCCL_DEBUG']='INFO'
os.environ['NCCL_DEBUG_SUBSYS']='ALL'
os.environ['NCCL_IB_DISABLE']='1'
os.environ['NCCL_SOCKET_IFNAME']='eth0'
tok = RobertaTokenizerFast.from_pretrained('/home/jovyan/models/roberta-large/')
model = RobertaForSequenceClassification.from_pretrained('/home/jovyan/models/roberta-large/', num_labels=2)
df_full = pd.read_csv('IMDB_Dataset.csv')
print("loaded df")
df_full = df_full.sample(frac=1).reset_index(drop=True)
df_req = df_full.head(1000)
df_train = df_req.head(800)
df_eval = df_req.tail(200)
train_text, train_labels_raw, val_text, val_labels_raw = df_train.review.values.tolist(), df_train.sentiment.values.tolist(), df_eval.review.values.tolist(), df_eval.sentiment.values.tolist(),
train_encodings = tok(train_text, padding=True, truncation=True, max_length=512)
val_encodings = tok(val_text, padding=True, truncation=True, max_length=512)
train_labels = [1 if i=='positive' else 0 for i in train_labels_raw]
val_labels = [1 if i=='positive' else 0 for i in val_labels_raw]
class IMDbDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
print("Encoding done")
parser = HfArgumentParser(TrainingArguments)
train_args = parser.parse_args_into_dataclasses()
print('parser and args created')
trainer = Trainer(
model=model,
args=train_args[0],
train_dataset=train_dataset,
eval_dataset=val_dataset
)
if train_args[0].do_train:
print('------------TRAINING-------------')
trainer.train()
if train_args[0].do_eval:
print('------------EVALUATING-------------')
trainer.evaluate()
Please suggest how to proceed further… |
st177039 | I also tried the nightly build (1.9.0.dev20210218+cu101), but it hung again after processing 2 more lines -
2021-02-18 19:28:13.170701: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Some weights of the model checkpoint at /home/jovyan/models/roberta-large/ were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at /home/jovyan/models/roberta-large/ and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
loaded df
Encoding done
parser and args created
------------TRAINING-------------
fastai-c2-0:14431:14431 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
fastai-c2-0:14431:14431 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eth0
fastai-c2-0:14431:14431 [0] NCCL INFO Bootstrap : Using [0]eth0:10.244.2.134<0>
fastai-c2-0:14431:14431 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
fastai-c2-0:14431:14431 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
fastai-c2-0:14431:14431 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
fastai-c2-0:14431:14431 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eth0
fastai-c2-0:14431:14431 [0] NCCL INFO NET/Socket : Using [0]eth0:10.244.2.134<0>
fastai-c2-0:14431:14431 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.1
I used the same script for both.
Am I making any mistake? Please let me know… |
st177040 | Could you try adding the following code to your script?
torch.distributed.init_process_group(backend='YOUR BACKEND', init_method='env://')
torch.cuda.set_device(args.local_rank)  # before your code runs
Also run python -m torch.distributed.launch --help to understand more about how to use torch.distributed.launch.
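Put together, a minimal sketch of this pattern (assuming the script is started with python -m torch.distributed.launch, which sets MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE in the environment and passes --local_rank to each process) could look like this:
import argparse
import torch
import torch.distributed as dist
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)  # filled in by torch.distributed.launch
args, _ = parser.parse_known_args()
torch.cuda.set_device(args.local_rank)  # pin this process to its GPU before creating the process group
dist.init_process_group(backend='nccl', init_method='env://')  # reads the env vars set by the launcher
print(f'initialized rank {dist.get_rank()} of {dist.get_world_size()}') |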
st177041 | @Yanli_Zhao , I suggested to @Saichandra_Pandraju to run this simple test:
echo 'import os, torch; print(os.environ["LOCAL_RANK"]); torch.distributed.init_process_group("nccl")' > test.py
python -m torch.distributed.launch --nproc_per_node=1 test.py
and it hangs in his kubeflow environment, whereas it should work just fine.
This is the simplest reproducible script I can think of that doesn’t involve any other parts.
For context, I have been trying to help @Saichandra_Pandraju in this issue: Model Parallelism for Bert Models · Issue #10151 · huggingface/transformers · GitHub, but no matter what I suggested, init just hangs there.
DeepSpeed on a single GPU with ZeRO-Offload still needs a distributed env due to its implementation, so that's why we are trying to figure out this problem. And I don't think gloo, which does work for @Saichandra_Pandraju, will work with DeepSpeed, as it lacks features that nccl has.
But reading his last follow-up, once he matched the CUDA version of PyTorch with the system-wide one, the basic launcher now works. It is odd that he needed to match the two, as PyTorch's distributed shouldn't be impacted by the system-wide CUDA install.
But then it gets stuck at a later stage; I will investigate the rest now. |
st177042 | stas:
But reading his last follow-up, once he matched the CUDA version of PyTorch with the system-wide one, the basic launcher now works. It is odd that he needed to match the two, as PyTorch's distributed shouldn't be impacted by the system-wide CUDA install.
If you’ve installed the PyTorch binaries, the shipped CUDA runtime should be used by NCCL.
However, based on the output it seems that TF is again loaded (which created other issues in the past, but we never got an update from TF):
2021-02-18 19:28:13.170701: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Do you know if TF is using the locally installed CUDA runtime? If so, I would assume that you need to install PyTorch with the matching CUDA runtime if TF also imports the system-wide installation.
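As a quick sanity check (a minimal sketch, not something from the original thread), you can print the CUDA, NCCL, and cuDNN versions the installed PyTorch binary was built with and compare them against the system-wide toolkit:
import torch
print(torch.__version__)               # e.g. 1.7.1+cu101
print(torch.version.cuda)              # CUDA runtime shipped with the PyTorch binary
print(torch.cuda.nccl.version())       # bundled NCCL version
print(torch.backends.cudnn.version())  # bundled cuDNN version |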
st177043 | Good observation, @ptrblck! TF most likely uses a system-wide CUDA runtime - at least it did last time I checked.
USE_TF=0 should prevent TF from loading in the current incarnation of transformers; it's very possible that this was at least a partial culprit.
The issue got fully resolved using NCCL_SOCKET_IFNAME=lo, as reported here, based on this thread.
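For anyone hitting the same hang, a minimal sketch of applying both workarounds is to set the environment variables at the very top of the script, before transformers is imported and before the process group is created (assuming the script is still launched via torch.distributed.launch, so the rendezvous variables are already set):
import os
os.environ['USE_TF'] = '0'                # keep transformers from importing TensorFlow
os.environ['NCCL_SOCKET_IFNAME'] = 'lo'   # force NCCL onto the loopback interface
import torch
import torch.distributed as dist
dist.init_process_group(backend='nccl', init_method='env://') |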
st177044 | Hi there,
So I was playing around with this tutorial and took the code from here, and got it working fine locally (single machine, without GCP).
So my idea was to actually have rank=0 running in the VM on GCP and have rank=1 run on my laptop. This means that both of these workers are on completely different networks.
I am unable to get this setup working; it seems to hang when I run the worker with rank=1.
This is my setup and the changes I made in as much detail as possible.
For the tutorial code, the only change I made was changing port 29500 to port 5000.
The /etc/hosts file got a new entry on my local machine (laptop). Specifically the IP of the NIC, for me, it was wlp2s0 and let’s assume the address was 11.22.33.44. So the /etc/hosts file would have the new entry 11.22.33.44 mycomputer.
The /etc/hosts file got a new entry in my VM on GCP, but this is set by default. The NIC for the VM seems to be ens4? And let’s assume the IP address is 44.33.22.11 and let’s also assume the IP address of the VM is 33.33.33.33. So the new entry of the /etc/hosts file would be 44.33.22.11 gcp-vm
I also made sure the ports of the VM are open and listening: I updated the firewall settings, and to verify this I simply created a Flask server and queried the IP; in my case it would be http://33.33.33.33:5000.
I'm not surprised it's hanging; I think the way I'm running the parameter server isn't correct, but I'm unsure. What are the correct changes I should make to get this working properly? |
st177045 | Do you want to have a way to let the two machines ping each other and make sure the network connection between them is working? |
st177046 | Hi there @Yanli_Zhao! Thanks for the reply. Yes, so basically it seems the two machines are not communicating with each other. I want to figure out how to get these two machines (one running on GCP and the other being my laptop, in a different network) to communicate and train the MNIST model using the RPC framework. Or is this something that PyTorch RPC is currently unable to do in this manner? |
st177047 | Did that answer your question @Yanli_Zhao? Or did I miss something? I can give any extra information that is needed. |
st177048 | I think the laptop and VM should physically be in the same network; otherwise the two nodes will not complete the handshake and get connected via the RPC framework.
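Before digging further into RPC itself, it can also help to confirm that the worker can even open a TCP connection to the master's rendezvous port across the two networks. A minimal sketch of such a check (the address and port are the placeholder values from the question above, not real ones):
import socket
master_addr, master_port = '33.33.33.33', 5000  # placeholder values from the post above
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect((master_addr, master_port))
    print('TCP connection to the master succeeded')
except OSError as e:
    print(f'Cannot reach {master_addr}:{master_port}: {e}')
finally:
    s.close() |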
st177049 | Hi,
I used the example from the following link: "💥 Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups | by Thomas Wolf | HuggingFace | Medium".
I would like to split the test dataset across multiple GPUs since it is very large. However, the problem is that I cannot gather all the results from the GPUs after finishing the test on each GPU.
Is there any code that I can use?
Thanks |
st177050 | Solved by farhadzadeh in post #6
Thanks for responding. However, I found out that all_reduce is for reduce ops such as MAX, SUM, and so on.
The one that I was looking for is torch.distributed.all_gather(tensor_list, tensor) |
st177051 | Hi @farhadzadeh,
Could you give me a code snippet of what you are doing at the moment?
That way it will be easier to help |
st177052 | The evaluation, which runs on each GPU:
.....
with torch.no_grad():
val_outputs=[]
val_labels=[]
for batch_idx, (images_id, images, _, _, labels) in tqdm(enumerate(self.val_dataloader_2), desc="validation"):
labels = labels.to(self.device)
images = images.to(self.device)
# #step=0
output_0 = self.model_image2text(images, mode='sample')
##### required for the next step
output_from_image2text = self.image2text_tokenizer.decode_batch(output_0.cpu().numpy())
input_encodings = self.tokenizer.batch_encode_plus(output_from_image2text, return_tensors="pt",
pad_to_max_length=True,
max_length=self.args.max_seq_length,
truncation=True)
input_ids = input_encodings['input_ids']
attention_mask = input_encodings['attention_mask']
outputs = self.model(input_ids.to(self.device), attention_mask.to(self.device), mode='sample')
for output in outputs:
val_outputs.append(output)
for label in labels:
val_labels.append(label)
self.val_outputs.append(torch.stack(val_outputs).to(self.device))
self.val_labels.append(torch.stack(val_labels).to(self.device))
After that I would like to gather all val_outputs:
.....
torch.distributed.barrier()
if torch.distributed.get_rank() in [-1,0]:
print(f"all: {len(self.val_outputs)}")
torch.distributed.all_reduce_multigpu(self.val_outputs)
print(f"all: {len(self.val_outputs)}")
torch.distributed.all_reduce_multigpu(self.val_labels) |
st177053 | Yanli_Zhao:
all_reduce
Thanks for responding. However, I found out that all_reduce is for reduce ops such as MAX, SUM, and so on.
The one that I was looking for is torch.distributed.all_gather(tensor_list, tensor)
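For completeness, a minimal sketch of gathering the per-rank validation outputs with all_gather (val_outputs and device stand in for the variables in the evaluation snippet above; note that every rank must call all_gather and contribute a tensor of the same shape):
import torch
import torch.distributed as dist
local_outputs = torch.stack(val_outputs).to(device)  # stacked per-rank predictions
gathered = [torch.zeros_like(local_outputs) for _ in range(dist.get_world_size())]
dist.all_gather(gathered, local_outputs)   # every rank receives all chunks
all_outputs = torch.cat(gathered, dim=0)   # predictions for the full validation set
if dist.get_rank() == 0:
    print(all_outputs.shape) |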
st177054 | Hi All,
I am using torch.nn.DataParallel for the model in a multi-GPU setup (4 GPUs) as defined below, and I find that there is some mismatch between the input and weight. I am using the following Module for a decoder-style network.
class DepthDecoder(nn.Module):
def __init__(self, num_ch_enc, scales=range(4), num_output_channels=1, max_depth=10, use_skips=True, use_bn=False):
super(DepthDecoder, self).__init__()
self.num_output_channels = num_output_channels
self.max_depth = max_depth
self.use_skips = use_skips
self.upsample_mode = 'nearest'
self.scales = scales
self.num_ch_enc = num_ch_enc
self.num_ch_dec = np.array([16, 32, 64, 128, 256])
# decoder
self.convs = OrderedDict()
for i in range(4, -1, -1):
# upconv_0
num_ch_in = self.num_ch_enc[-1] if i == 4 else self.num_ch_dec[i + 1]
num_ch_out = self.num_ch_dec[i]
if use_bn:
self.convs[("upconv", i, 0)] = ConvBlock_bn(num_ch_in, num_ch_out)
else:
self.convs[("upconv", i, 0)] = ConvBlock(num_ch_in, num_ch_out)
# upconv_1
num_ch_in = self.num_ch_dec[i]
if self.use_skips and i > 0:
num_ch_in += self.num_ch_enc[i - 1]
num_ch_out = self.num_ch_dec[i]
if use_bn:
self.convs[("upconv", i, 1)] = ConvBlock_bn(num_ch_in, num_ch_out)
else:
self.convs[("upconv", i, 1)] = ConvBlock(num_ch_in, num_ch_out)
for s in self.scales:
# self.convs[("dispconv", s)] = Conv3x3(self.num_ch_dec[s], self.num_output_channels)
self.convs[("depthconv", s)] = Conv3x3(self.num_ch_dec[s], self.num_output_channels)
self.decoder = nn.ModuleList(list(self.convs.values()))
self.sigmoid = nn.Sigmoid()
def forward(self, input_features):
self.outputs = {}
# decoder
x = input_features[-1]
for i in range(4, -1, -1):
x = self.convs[("upconv", i, 0)](x)
x = [upsample(x)]
if self.use_skips and i > 0:
x += [input_features[i - 1]]
x = torch.cat(x, 1)
x = self.convs[("upconv", i, 1)](x)
if i in self.scales:
# self.outputs[("disp", i)] = self.sigmoid(self.convs[("dispconv", i)](x))
self.outputs[("depth", i)] = self.sigmoid(self.convs[("depthconv", i)](x)) * self.max_depth
return self.outputs
ConvBlock and Conv3x3 are just simple convolutional classes performing 2D convolutions.
class ConvBlock(nn.Module):
"""Layer to perform a convolution followed by ELU
"""
def __init__(self, in_channels, out_channels):
super(ConvBlock, self).__init__()
self.conv = Conv3x3(in_channels, out_channels)
self.nonlin = nn.ELU(inplace=True)
def forward(self, x):
out = self.conv(x)
out = self.nonlin(out)
return out
I have seen similar posts with the same error, and what I gather is that it has something to do with redefining modules in the __init__ function and with putting as much functionality as possible in the forward function. But I was not quite able to pinpoint what exactly was causing this issue in my case.
Any suggestion would be really helpful! |
st177055 | Solved by ptrblck in post #4
Thanks for the update.
The issue is raised because self.convs is using an OrderedDict instead of an nn.ModuleDict, which will not properly register these modules.
Change it to the latter and adapt the indexing, as nn.ModuleDict expects strings (e.g. to "upconv{}{}".format(i, 0)).
Also, note tha… |
st177056 | Could you post an executable code snippet, as the current code doesn't define num_ch_enc (shape and values), the upsample usage, or the input shape? |
st177057 | Hi @ptrblck, Please find my response inline:
define the num_ch_enc (shape and values)
-------> Here, num_ch_enc defines the number of channels; it is a <class 'numpy.ndarray'> containing the values [64 64 128 256 512] with shape (5,)
Input shape to forward function
-------> The input to the forward function is a list of length 5, where each element is a tensor of size:
torch.Size([8, 64, 240, 320])
torch.Size([8, 64, 120, 160])
torch.Size([8, 128, 60, 80])
torch.Size([8, 256, 30, 40])
torch.Size([8, 512, 15, 20])
Where 8 is the batch size (which rightly reduces to 2 when I use multiple GPUs (4) with DataParallel).
Definition of Upsample and Conv3x3
def upsample(x):
"""Upsample input tensor by a factor of 2
"""
return F.interpolate(x, scale_factor=2, mode="nearest")
class Conv3x3(nn.Module):
"""Layer to pad and convolve input
"""
def __init__(self, in_channels, out_channels, use_refl=True):
super(Conv3x3, self).__init__()
if use_refl:
self.pad = nn.ReflectionPad2d(1)
else:
self.pad = nn.ZeroPad2d(1)
self.conv = nn.Conv2d(int(in_channels), int(out_channels), 3)
def forward(self, x):
out = self.pad(x)
out = self.conv(out)
return out
The DataParallel part goes without any hassle for the encoder, but it fails and gives this error when it goes through the DepthDecoder network. (The output of the encoder goes to the DepthDecoder class specified before.) |
st177058 | Thanks for the update.
The issue is raised because self.convs is using an OrderedDict instead of an nn.ModuleDict, which will not properly register these modules.
Change it to the latter and adapt the indexing, as nn.ModuleDict expects strings (e.g. to "upconv{}{}".format(i, 0)).
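To illustrate why the OrderedDict version breaks (a minimal, self-contained sketch, not the full decoder): modules stored in a plain OrderedDict attribute are never registered, so model.cuda() and nn.DataParallel never see their parameters, which is exactly what produces the input/weight mismatch.
from collections import OrderedDict
from torch import nn
class Bad(nn.Module):
    def __init__(self):
        super().__init__()
        self.convs = OrderedDict()
        self.convs["conv_0"] = nn.Conv2d(3, 8, 3)   # not registered as a submodule
class Good(nn.Module):
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleDict()
        self.convs["conv_0"] = nn.Conv2d(3, 8, 3)   # properly registered
print(len(list(Bad().parameters())))   # 0 -> .cuda()/DataParallel miss these weights
print(len(list(Good().parameters())))  # 2 (weight + bias)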
Also, note that nn.DataParallel will split each element in your list in dim0, so that each GPU will receive a list of 5 tensors in the shape:
torch.Size([1, 64, 240, 320])
torch.Size([1, 64, 120, 160])
torch.Size([1, 128, 60, 80])
torch.Size([1, 256, 30, 40])
torch.Size([1, 512, 15, 20])
using 8 GPUs. |
st177059 | I have a script that trains two models independently, on the same dataset and using the same initialization.
The pseudocode reads something like:
initialize warm-up model
train and store warm-up model
load warm-up model and train using 1st strategy
load warm-up model and train using 2nd strategy
compare performances
Is there a way to assign the training runs to different GPUs within the script, so that steps 3 and 4 happen in parallel without calling a different .py file for each step? |
st177060 | Hi,
I am not aware of any way to do this in PyTorch. However, it seems like your use case is very easily handled by writing a bash script. You can use the same .py script with different arguments and call it in a bash script like this:
python train.py --warmup --model_out "weights.ckpt" # warmup
python train.py --strategy_1 --model_in "weights.ckpt" --model_out "strategy_1.ckpt"
python train.py --strategy_2 --model_in "weights.ckpt" --model_out "strategy_2.ckpt"
python train.py --compare --models_to_compare "strategy_1.ckpt" "strategy_2.ckpt"
Then you just run the above script with bash run.sh.
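If you would rather keep everything inside a single Python script, the same idea can be sketched with torch.multiprocessing by pinning each strategy to its own GPU. This is only a rough sketch: train_with_strategy is a hypothetical wrapper around your steps 3 and 4, not an existing function.
import torch.multiprocessing as mp
def train_with_strategy(strategy: str, device: str):
    # load the stored warm-up checkpoint, move the model to `device`,
    # and run the training loop for the given strategy here
    ...
if __name__ == "__main__":
    procs = []
    for strategy, device in [("strategy_1", "cuda:0"), ("strategy_2", "cuda:1")]:
        p = mp.Process(target=train_with_strategy, args=(strategy, device))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()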
Hope this helps! |
st177061 | That seems to be the only way to do it, but the problem is that I want to call the script many times with different hyperparameter values, so it gets way too complicated…
Thanks! |
st177062 | Hi Everyone,
I am using 4 GPUs for training a model that was earlier trained on a single GPU, to leverage data parallelism and speed up training. I have set the batch size to 8 and was expecting that, while training on 4 GPUs, the data would be evenly distributed among the 4 GPUs with an individual batch size of 2. But I find that all the inputs are always placed on GPU 0.
Code for Data Parallelism
print ("GPU Count ", torch.cuda.device_count())
models["encoder"] = ResnetEncoder(
num_layers=options.resnet_num_layers, pretrained=True)
models["depth"] = DepthDecoder(
models["encoder"].num_ch_enc, scales=range(args.numScales), use_bn=args.use_bn)
if torch.cuda.device_count() > 1:
models["encoder"] = torch.nn.DataParallel(models["encoder"])
models["decoder"] = torch.nn.DataParallel(models["depth"])
models["encoder"].cuda()
models["depth"].cuda()
Data Loading Phase
for epoch in range(options.numEpochs):
data_iterator = tqdm(dataloader)
optimizer.zero_grad()
for sampleIndex, inputs in enumerate(data_iterator):
for k in inputs:
print (inputs[k].size())
# predict depth maps, now only apply on the first frame
outputs = {}
for seq_index in range(options.numFrames):
# images are supposed to be normalized to [0, 1] in dataloader
inputs['image', seq_index] = inputs['image', seq_index].cuda()
inputs['extrinsic', seq_index].cuda()
print (inputs['image', seq_index].device)
The inputs is a dictionary which has tensors of the following sizes:
torch.Size([8, 3, 480, 640])
torch.Size([8, 3, 480, 640])
torch.Size([8, 480, 640])
torch.Size([8, 4, 4])
torch.Size([8, 3, 480, 640])
torch.Size([8, 3, 480, 640])
torch.Size([8, 480, 640])
torch.Size([8, 4, 4])
torch.Size([8, 3, 480, 640])
torch.Size([8, 3, 480, 640])
torch.Size([8, 480, 640])
torch.Size([8, 4, 4])
torch.Size([8, 6])
The following print statement, print(inputs['image', seq_index].device), always prints cuda:0. I was expecting it to print 1, 2, or 3 as well, but it looks like the data is not getting split between all the GPUs. Am I missing something here? |
st177063 | Solved by ptrblck in post #6
I don’t quite understand the “each element of the list (which is a tensor) has to shared across multiple gpu(in Dataparallel case)”.
nn.DataParallel will split the input tensor in dim0 and will send each chunk to a GPU. The elements won’t be duplicated, which implies the “sharing”.
Since the list … |
st177064 | The data will be split in the forward method of the model (actually before calling it), so you might want to check the shape and device of the data inside the model not in the DataLoader loop. |
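A minimal, self-contained sketch of that check (not part of the original training code) is to wrap a tiny module in nn.DataParallel and print from inside its forward:
import torch
from torch import nn
class Probe(nn.Module):
    def forward(self, x):
        print(x.shape, x.device)  # each replica prints its own chunk
        return x
model = nn.DataParallel(Probe()).cuda()
out = model(torch.randn(8, 3, 480, 640))  # with 4 GPUs: four prints of torch.Size([2, 3, 480, 640]) on cuda:0..cuda:3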
st177065 | @ptrblck Thank you! That works absolutely as expected now . I had an added query to my previous question. In my network model as printout out in the below code snippet:
for seq_index in range(options.numFrames):
# images are supposed to be normalized to [0, 1] in dataloader
inputs['image', seq_index] = inputs['image', seq_index].cuda()
inputs['extrinsic', seq_index].cuda()
if seq_index == 0:
# use color augmented image for training (not to use for loss computing)
features = models["encoder"](inputs['image_aug', seq_index].cuda())
print ("Type of feature ", len(features))
multi_scale_depth_pred = models["depth"](features)
The line features = models["encoder"](inputs['image_aug', seq_index].cuda()) returns a list of size 5, where each element of the list is itself a 4-dimensional tensor (batch size x channels x H x W). This features list is then fed to another model, models["depth"]. At this point I had two questions:
Since I again wish to split these features between my multiple GPUs, do I need to perform any specific operation on features, given that features.cuda() is an invalid command (since features is a list)?
The second part is: the features which I am receiving from models["encoder"] would come back from multiple GPUs, so how do I make sure the features coming from a given GPU go to the same GPU in the step multi_scale_depth_pred = models["depth"](features)?
Thank you,
Nitin |
st177066 | I’m not sure, if nn.DataParallel works with lists as inputs and I would recommend to check it with a very simple model.
nn.DataParallel will split the input batch in dim0 and send each chunk to the corresponding device. The result will be gathered on the default device again. Passing this tensor into the next nn.DataParallel module will perform the same splitting. The general workflow is explained here in more detail. If you need more control over how the data is split, etc., you could try a manual approach by cloning some logic from the nn.DataParallel implementation.
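A tiny self-contained sketch of the recommended test, checking how nn.DataParallel scatters a list-of-tensors input (this is an illustration with made-up shapes, not the original model):
import torch
from torch import nn
class ListProbe(nn.Module):
    def forward(self, feats):
        print(type(feats), [f.shape for f in feats], feats[0].device)
        return feats[0]
model = nn.DataParallel(ListProbe()).cuda()
feats = [torch.randn(8, c, 16, 16) for c in (64, 128, 256)]
model(feats)  # shows whether each replica receives the list with every tensor chunked along dim0 |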
st177067 | Thanks @ptrblck for the prompt reply. I actually printed the size of the features in the model["decoder"] forward function but I found that batch_size was not reduced (implying the data not being distributed among multiple GPUs. To be specific features is a list of size 5,and each of its element is a 4 dimensional tensor(Batch Size x Channel x h x w). Point I am getting confused is, each element of the list (which is a tensor) has to shared across multiple gpu(in Dataparallel case), Which
I am not sure , how shall I go about this, since features.cuda() is an obvious invalid statement ( AttributeError: 'list' object has no attribute 'cuda') |