id | text
---|---|
st175968 | Hi, I’m trying to train in a particular way and I need a sufficiently big batch for it.
Indeed, I need to compute a mean vector that converges weakly in O(1/sqrt(n)) (cf. CLT).
I have a powerful machine to use: 8 RTX A6000 GPUs.
When I run my model with a batch of size n, it puts an n/8 sub-batch on each GPU and then computes 8 different vectors. I’d like to compute only one vector so as to exploit the weak 1/sqrt(n) convergence I rely on. Is it possible to use a trick to do that? |
st175969 | @DidierDeschamps thanks for posting, sorry I don’t quite get your problem, is this a pytorch distributed framework related issue? or an algorithm question? |
st175970 | @wanchaol PyTorch automatically splits my batch across the 8 GPUs and computes 8 losses with 8 backpropagations. Is there a way to prevent this and compute only 1 loss with 1 backpropagation?
E.g. if my batch is n=80, PyTorch will put an n/8=10 sub-batch on each GPU and compute 8 losses; unfortunately, a loss computed on 10 data points is less consistent than a loss computed on 80 data points, which is why I’d like to avoid this. |
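(A hedged sketch of the kind of trick being asked about: all-reduce the per-GPU statistic so that every rank sees the full-batch mean. With equal sub-batch sizes, the average of the 8 per-GPU means equals the mean over all n samples. The tensor names below are placeholders, and this only covers the forward statistic; backpropagating through the communication would need an autograd-aware collective on top of this.)
import torch
import torch.distributed as dist

with torch.no_grad():
    # mean over this rank's n/8 sub-batch
    local_mean = outputs.mean(dim=0)
    # sum the per-rank means and divide by the number of ranks;
    # with equal sub-batch sizes this equals the full-batch mean
    dist.all_reduce(local_mean, op=dist.ReduceOp.SUM)
    local_mean /= dist.get_world_size()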
st175971 | Hello,
I understand that with PyTorch DDP, each process loads its own instance of the data from disk. However, my dataset is very large (a very large parquet file that is loaded into a dataframe), and with limited RAM capacity I can’t have each process load it into memory. Is there a shared-memory implementation, so that one process loads the data into RAM and then every other process uses the same data loaded by the first process?
I also thought of splitting but I can’t split (.iloc) data until after all the data is loaded. |
st175972 | Does your use case need the entire dataset to be available in RAM to start off training? For DDP and general PyTorch training, you are usually doing batched gradient descent and only need one batch of data in memory at a time, so the memory requirement on each worker should be fairly small when using PyTorch’s DataLoader. Could you try streaming the parquet file and using PyTorch’s DataLoader to load the data? |
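(A minimal sketch of what streaming the parquet file could look like, assuming pyarrow is installed; the file path, the column names "features"/"label", and the read batch size are placeholders.)
import pyarrow.parquet as pq
import torch
from torch.utils.data import DataLoader, IterableDataset

class ParquetStreamDataset(IterableDataset):
    # Reads the parquet file in record batches instead of materializing
    # the whole dataframe in RAM.
    def __init__(self, path, read_batch_size=1024):
        self.path = path
        self.read_batch_size = read_batch_size

    def __iter__(self):
        pf = pq.ParquetFile(self.path)
        for batch in pf.iter_batches(batch_size=self.read_batch_size):
            df = batch.to_pandas()
            for features, label in zip(df["features"], df["label"]):
                yield torch.as_tensor(features), torch.as_tensor(label)

# Note: with num_workers > 0 (or with DDP) each worker/rank would stream the
# whole file unless it skips to its own shard, e.g. using get_worker_info().
loader = DataLoader(ParquetStreamDataset("data.parquet"), batch_size=32)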
st175973 | Yes, the data needs to be in RAM. That’s what makes my application complicated.
How do I stream a parquet file? |
st175974 | Hi, it is strange that after upgrading torch from 1.4 to 1.9, DDP training hangs at dist.barrier() rather than being killed when an error happens.
Below is a sample of the code:
model_prepare()
dist.barrier()
train_epoch()
dist.barrier()
validate()
An OOM error occurs during training. However, the DDP process hangs as shown below rather than just stopping and being killed:
RuntimeError: CUDA out of memory. Tried to allocate 330.00 MiB (GPU 0; 10.92 GiB total capacity; 8.75 GiB already allocated; 146.38 MiB free; 9.01 GiB reserved in total by PyTorch)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 11607) of binary: /xxx/miniconda3/envs/torch190cu111/bin/python3
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 1/1 attempts left; will restart worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=1
master_addr=127.0.0.1
master_port=29500
group_rank=0 group_world_size=1 local_ranks=[0, 1]
role_ranks=[0, 1] global_ranks=[0, 1] role_world_sizes=[2, 2]
global_world_sizes=[2, 2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_dl0c_xte/none_rwikf9e7/attempt_1/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_dl0c_xte/none_rwikf9e7/attempt_1/1/error.json
[2021-07-21 17:36:32] INFO (torch.distributed.distributed_c10d/MainThread) Added key: store_based_barrier_key:1 to store for rank: 1
[2021-07-21 17:36:32] INFO (torch.distributed.distributed_c10d/MainThread) Added key: store_based_barrier_key:1 to store for rank: 0
[2021-07-21 17:36:42] INFO (torch.distributed.distributed_c10d/MainThread) Waiting in store based barrier to initialize
process group for rank: 1, key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
[2021-07-21 17:36:42] INFO (torch.distributed.distributed_c10d/MainThread) Waiting in store based barrier to initialize
process group for rank: 0, key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
[2021-07-21 17:36:52] INFO (torch.distributed.distributed_c10d/MainThread) Waiting in store based barrier to initialize
process group for rank: 1, key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
[2021-07-21 17:36:52] INFO (torch.distributed.distributed_c10d/MainThread) Waiting in store based barrier to initialize
process group for rank: 0, key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
[2021-07-21 17:37:02] INFO (torch.distributed.distributed_c10d/MainThread) Waiting in store based barrier to initialize
process group for rank: 1, key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
[2021-07-21 17:37:02] INFO (torch.distributed.distributed_c10d/MainThread) Waiting in store based barrier to initialize
process group for rank: 0, key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
[2021-07-21 17:37:12] INFO (torch.distributed.distributed_c10d/MainThread) Waiting in store based barrier to initialize
process group for rank: 1, key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
[2021-07-21 17:37:12] INFO (torch.distributed.distributed_c10d/MainThread) Waiting in store based barrier to initialize
process group for rank: 0, key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
...
I start DDP by running bash command:
CUDA_VISIBLE_DEVICES="0,1" python3 -m torch.distributed.launch --nproc_per_node 2 train.py <args>
How can I deal with this problem? I want the training process to just be killed after hitting an error like OOM, rather than hanging forever. |
st175975 | Thanks for posting @sunshichen. Could it be the case that only some processes get OOM while the others do not, and those simply hang because they are waiting at dist.barrier()? Or are you observing that all processes get OOM and all hang?
It would also be good if you could provide a small self-contained script to repro the issue so that we can help you debug it. |
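(A hedged sketch of two mitigations, consistent with the snippet above: a shorter collective timeout so a stuck barrier fails fast, and an explicit teardown on error so surviving ranks don’t wait forever. model_prepare/train_epoch/validate are the functions from the original snippet; note that, as far as I recall, the NCCL backend only honors the timeout when NCCL_ASYNC_ERROR_HANDLING=1 or NCCL_BLOCKING_WAIT=1 is set in the environment.)
import datetime
import torch.distributed as dist

# fail after 5 minutes instead of the default 30-minute barrier timeout
dist.init_process_group("nccl", timeout=datetime.timedelta(minutes=5))

try:
    model_prepare()
    dist.barrier()
    train_epoch()
    dist.barrier()
    validate()
except RuntimeError:
    # tear down the process group and re-raise so the elastic agent stops
    # the whole worker group instead of leaving other ranks at the barrier
    dist.destroy_process_group()
    raise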
st175976 | I am trying to train a video classification model. I wrote a custom video dataset which essentially reads pre-extracted video frames from SSD. I want to train on a cluster of GPU machines with 4 GPUs per node.
While training on 1 machine with 4 GPUs, I have the following observations under two settings:
Case 1. DistributedDataParallel: with 4 threads for the machine (1 thread per GPU), the data loading time for the first batch of every epoch is very high (~110 seconds)
Case 2. DataParallel: with 4 threads for the machine, the data loading time (for the first batch of every epoch) is significantly lower than Case 1 (~1.5 seconds)
I still want to use DistributedDataParallel as I want to train on multiple machines. But the extra 110 seconds every epoch is too much. How should I improve the distributed setting?
Logs for reference.
Dataparallel 4 threads
Epoch: [0][0/7508] Time 13.270 (13.270) Data 1.521 (1.521) Loss 6.2721 (6.2721) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000)
Epoch: [0][10/7508] Time 0.265 (1.459) Data 0.000 (0.138) Loss 17.9221 (17.1892) Acc@1 0.000 (0.284) Acc@5 0.000 (2.273)
Epoch: [0][20/7508] Time 0.265 (0.890) Data 0.000 (0.077) Loss 20.7100 (14.7189) Acc@1 0.000 (0.149) Acc@5 0.000 (1.786)
DistributedDataparallel 4 threads 1 thread each gpu
Epoch: [0][0/7508] Time 117.339 (117.339) Data 114.749 (114.749) Loss 6.3962 (6.3962) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000)
Epoch: [0][0/7508] Time 117.070 (117.070) Data 110.291 (110.291) Loss 6.3759 (6.3759) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000)
Epoch: [0][0/7508] Time 117.479 (117.479) Data 114.120 (114.120) Loss 6.3918 (6.3918) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000)
Epoch: [0][0/7508] Time 116.495 (116.495) Data 112.885 (112.885) Loss 6.0654 (6.0654) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000)
Epoch: [0][10/7508] Time 0.248 (10.814) Data 0.000 (10.262) Loss 13.6280 (14.8321) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000)
Epoch: [0][10/7508] Time 0.248 (10.870) Data 0.000 (10.030) Loss 12.6716 (16.3162) Acc@1 12.500 (1.136) Acc@5 12.500 (2.273)
Epoch: [0][10/7508] Time 0.252 (10.904) Data 0.000 (10.375) Loss 6.9328 (14.4093) Acc@1 0.000 (1.136) Acc@5 25.000 (3.409)
Epoch: [0][10/7508] Time 0.251 (10.891) Data 0.000 (10.432) Loss 12.2168 (13.2482) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000)
Epoch: [0][20/7508] Time 0.252 (5.813) Data 0.000 (5.260) Loss 6.3584 (13.0522) Acc@1 0.000 (0.595) Acc@5 0.000 (1.190)
Epoch: [0][20/7508] Time 0.254 (5.831) Data 0.000 (5.440) Loss 7.1645 (12.1273) Acc@1 0.000 (0.595) Acc@5 0.000 (1.786)
Epoch: [0][20/7508] Time 0.250 (5.825) Data 0.000 (5.470) Loss 6.9019 (12.8164) Acc@1 0.000 (0.595) Acc@5 0.000 (0.595)
Epoch: [0][20/7508] Time 0.252 (5.784) Data 0.000 (5.381) Loss 6.9181 (11.9140) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000)
For the training script I am using a modified version of https://github.com/pytorch/examples/blob/master/imagenet/main.py |
st175977 | @C_Ashraf Is it really the data loading that is slow in your case, or the time to actually execute DistributedDataParallel? If you have a small self-contained example demonstrating the problem, it would be easier to narrow down the issue. |
st175978 | My workflow is kind of complex and I do not have a self-contained example, but I will try to explain it in as much detail as possible. I have a very large dataset that I cannot load into memory, so I wrote a custom dataset class:
class BigDataset(torch.utils.data.Dataset):
#def __init__(self, data_paths, target_paths):
def __init__(self, data_paths):
self.data_memmaps = [np.load(path, mmap_mode='r') for path in data_paths]
#self.target_memmaps = [np.load(path, mmap_mode='r') for path in target_paths]
self.start_indices = [0] * len(data_paths)
self.data_count = 0
for index, memmap in enumerate(self.data_memmaps):
self.start_indices[index] = self.data_count
self.data_count += memmap.shape[0]
def __len__(self):
return self.data_count
def __getitem__(self, index):
memmap_index = bisect(self.start_indices, index) - 1
index_in_memmap = index - self.start_indices[memmap_index]
data = self.data_memmaps[memmap_index][index_in_memmap]
return index, torch.from_numpy(data)
Next, I read the locations of all the files (my data is separated over multiple files)
data_paths = [os.path.join(file_path, f'data/feature{index}.npy')
for index in range(2)]
dataset = BigDataset(data_paths)
Since this dataset has both the train and validation data, I need to split it. Thus, I generate train and val indices and use the following code for train and val dataloader
if args.distributed:
#train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
train_sampler = torch.utils.data.distributed.DistributedSampler(torch.utils.data.Subset(dataset, train_indices))
val_sampler = torch.utils.data.distributed.DistributedSampler(torch.utils.data.Subset(dataset, val_indices))
else:
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size,
num_workers=args.workers, sampler=train_sampler,
pin_memory=True)
val_loader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size,
num_workers=args.worker, sampler=val_sampler,
pin_memory=True)
I am pretty sure that the dataloader is causing the issue
Epoch: [0][ 0/51] Time 220.202 (220.202) Data 205.658 (205.658) Loss 39.61639 (39.61639) Accuracy 0.00 ( 0.00)
Epoch: [0][ 0/51] Time 220.181 (220.181) Data 205.639 (205.639) Loss 43.61139 (43.61139) Accuracy 0.00 ( 0.00)
Epoch: [0][ 0/51] Time 220.229 (220.229) Data 205.687 (205.687) Loss 35.34707 (35.34707) Accuracy 0.00 ( 0.00)
Epoch: [0][ 0/51] Time 220.228 (220.228) Data 205.683 (205.683) Loss 56.56057 (56.56057) Accuracy 0.00 ( 0.00)
Epoch: [0][ 1/51] Time 0.917 (110.549) Data 0.000 (102.820) Loss 20.94585 (32.27862) Accuracy 0.00 ( 0.00)
Epoch: [0][ 1/51] Time 0.917 (110.560) Data 0.000 (102.829) Loss 63.88563 (51.75101) Accuracy 0.00 ( 0.00)
Epoch: [0][ 1/51] Time 0.917 (110.573) Data 0.000 (102.844) Loss 23.30010 (29.32359) Accuracy 0.00 ( 0.00)
Epoch: [0][ 1/51] Time 0.917 (110.572) Data 0.000 (102.842) Loss 33.03528 (44.79793) Accuracy 0.00 ( 0.00)
I followed the same procedure described here: https://github.com/pytorch/examples/blob/master/imagenet/main.py
Is the way I am loading my data (not directly into memory) causing this issue? |
st175979 | I am able to reproduce the same behavior using the ImageNet example with tiny_image_dataset. Using a batch size of 256 on two GPUs I get:
[screenshot of timing logs omitted]
While using a batch size of 512 on two GPUs, it gives:
[screenshot of timing logs omitted]
Also, I assume this could be due to a dataloader memory leak. If I use my entire dataset (120 GB), I see an out-of-memory (OOM) kill before any batch is trained. I looked at the PyTorch discussion forum and it looks like this is a known open issue. Any help solving this issue will be appreciated. Thanks. |
st175980 | @VitalyFedyunin I was wondering if you could help out here since this seems like a dataloader issue? |
st175981 | Hi! There is no (known) leak; it is more a problem of misunderstanding how memory works in the Python + forking world. We are aware of this issue and planning to fix it this year (or sooner).
Some workarounds are discussed here: https://github.com/pytorch/pytorch/issues/13246#issuecomment-612396143 |
st175982 | I don’t know if you solved this problem, but I have noticed it happening to me as well. I could actually tell something was wrong just by comparing the times between datasets of various lengths, reducing the size of the same dataset, and reducing/increasing the number of workers.
Basically, using the COCO dataset wrapped into a torch Dataset, I noticed that changing the total number of images did not reduce the time; changing the batch size did not affect the initial batch loading time either; and increasing the number of workers definitely raised this loading time dramatically.
Finally, I understood that the cause was the way the processes were spawned at the beginning, which I had replicated from the PyTorch ImageNet sample code. There, torch.multiprocessing.spawn is used, but it should be avoided. The best practice, and possibly the only one to date, is to rely on the official launch module, which uses the subprocess.Popen function. I do not know the precise difference and this should be investigated further.
Hope this helps someone else save some time either in training or in looking for a solution. |
st175983 | Data loading depends highly on parameters like num_workers, pin_memory and the batch_size that fits into your GPU. Try changing num_workers in factors of 2 and observe the speed of iterations/epochs. Using the maximum possible num_workers can also cause large overheads and slow down your data loading. |
st175984 | For a similar issue, using args.workers=8 worked for me. My data load times went down from 3 seconds to ~0 seconds. I’m using a modified version of the pytorch imagenet example, with 3 GPUs on the same server. The num_workers passed to the DataLoader is computed as int((args.workers + gpus_per_node - 1)/gpus_per_node), which is int((8 + 3 - 1)/3), or 3 DataLoader workers per GPU. |
st175985 | Hi, when I use torch.distributed.rpc to implement pipeline parallelism for transformer-based inference, the memory consumption increases with each forward pass. The code for 2 nodes is like this,
First, I define two classes for transformer shard
import os
import sys
import threading
import time
import torch
import torch.nn as nn
import torch.distributed.rpc as rpc
from torch.distributed.rpc import RRef
from transformers import ViTFeatureExtractor, ViTForImageClassification
#####################################
# Define Transformer Shard #
####################################
class TransformerShard1(nn.Module):
def __init__(self, device, config,num_layers):
super().__init__()
# self.config = config
self.model = ViTForImageClassification.from_pretrained(config)
self.model.vit.encoder.layer = nn.Sequential(*[self.model.vit.encoder.layer[i] for i in range(num_layers)])
self.model.vit= nn.Sequential(*list(self.model.vit.children())[:-1])
self.model = nn.Sequential(*list(self.model.children())[:-1])
self._lock = threading.Lock()
self.device = device
def forward_kernel(self, x):
x = self.model(x).to_tuple()[0]
end = time.time()
return x
@torch.no_grad()
def forward(self, pixel_values=None,
head_mask=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None):
x = pixel_values.to_here().to(self.device)
with self._lock:
x = self.forward_kernel(x)
return x.cpu()
class TransformerShard2(nn.Module):
def __init__(self,device, config, num_layers):
super().__init__()
self.model = ViTForImageClassification.from_pretrained(config)
self.model.vit.encoder.layer = nn.Sequential(*[self.model.vit.encoder.layer[i] for i in range(num_layers, 2*num_layers)])
self.model.vit= nn.Sequential(*list(self.model.vit.children())[1:-1])
self.model = nn.Sequential(*list(self.model.children())[:-1])
self._lock = threading.Lock()
self.device = device
def forward_kernel(self, x):
x = self.model(x)[0]
return x
@torch.no_grad()
def forward(self, x_rref=None,
head_mask=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None):
x = x_rref.to_here().to(self.device)
with self._lock:
x = self.forward_kernel(x)
return x.cpu()
Then I stitch them into one class for forwarding:
class DistViT(nn.Module):
def __init__(
self,
split_size,
workers,
config,
num_layers,
*args, **kwargs
):
super().__init__()
self.split_size = split_size # for microbatch
self.num_layers = num_layers
self.p1_rref = rpc.remote(
workers[0],
TransformerShard1,
args = ("cpu", config, int(num_layers/2)) + args,
kwargs = kwargs
)
self.p2_rref = rpc.remote(
workers[1],
TransformerShard2,
args = ("cpu", config, int(num_layers/2)) + args,
kwargs = kwargs
)
def forward(self, pixel_values=None,
head_mask=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None):
out_futures = []
start = time.time()
id = 0
for x in iter(pixel_values.split(self.split_size, dim=0)):
x_rref = RRef(x)
y_rref = self.p1_rref.remote().forward(x_rref)
z_fut = self.p2_rref.rpc_async().forward(y_rref)
out_futures.append(z_fut)
torch.futures.wait_all(out_futures)
return out_futures
def parameter_rrefs(self):
remote_params = []
remote_params.extend(self.p1_rref.remote().parameter_rrefs().to_here())
remote_params.extend(self.p2_rref.remote().parameter_rrefs().to_here())
return remote_params
Then run RPC process like this:
######################################
# Run RPC Processes #
######################################
config = 'google/vit-base-patch16-224'
num_layers = 12
num_batches = 1
batch_size = 256
img = torch.randn(3, 384, 384)
imgs = [img for i in range(batch_size)]
feature_extractor = ViTFeatureExtractor.from_pretrained(config)
def run_master(split_size):
# put the two model parts on worker1 and worker2 respectively
print("Run mastering \n")
for si in range(len(split_size)):
model = DistViT(split_size[si], ["worker0", "worker1"], config, num_layers)
inputs = feature_extractor(images=imgs, return_tensors="pt")
for i in range(num_batches):
# generate random inputs and labels
outputs = model(**inputs)
def run_worker(rank, world_size, num_split):
# run on local host
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29501'
# Higher timeout is added to accommodate for kernel compilation time in case of ROCm.
options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=256,rpc_timeout=3000)
if rank == 0:
rpc.init_rpc(
"worker0",
rank=rank,
world_size=world_size,
rpc_backend_options=options
)
run_master(num_split)
else:
rpc.init_rpc(
"worker1",
rank=rank,
world_size=world_size,
rpc_backend_options=options
)
pass
rpc.shutdown()
The main function:
if __name__=="__main__":
world_size = 2
rank=int(sys.argv[1])
num_split=[8]
print(f"{config}, {num_layers}, {num_split}")
tik = time.time()
run_worker(rank, world_size, num_split)
tok = time.time()
print(f"Total program execution time = {tok - tik}")
It needs the transformers package, installed with the command:
pip install transformers
The whole script is uploaded to Ubuntu Pastebin.
I run it on my macOS with PyTorch 1.8.1 CPU-only, the running command is like this:
on the first terminal:
python pipeline_parallelism.py 0
on the second terminal:
python pipeline_parallelism.py 1
I use the top command to check the memory usage, and after each forward pass the memory increases by about 3 MB, until OOM or the RPC shutdown. Could anyone help me fix or find the problem? Thank you very much! |
st175986 | This looks interesting!
I think first you need to limit the debugging scope. For example, if you change the microbatch size to 1, will it still cause a memory leak? If so, then the issue is at least not the microbatching in pipeline parallelism.
On the other hand, have you tried PyTorch’s native pipeline parallelism API? |
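(For reference, a minimal sketch of that native API, torch.distributed.pipeline.sync.Pipe, which is single-node only as discussed below; the layer sizes and device placement here are arbitrary, and Pipe needs the RPC framework initialized even on a single node.)
import os
import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe

os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"
rpc.init_rpc("worker", rank=0, world_size=1)

# two stages on two devices; chunks = number of micro-batches per input batch
stage1 = nn.Linear(1024, 1024).to("cuda:0")
stage2 = nn.Linear(1024, 10).to("cuda:1")
model = Pipe(nn.Sequential(stage1, stage2), chunks=8)

out_rref = model(torch.randn(64, 1024, device="cuda:0"))  # forward returns an RRef
output = out_rref.local_value()
rpc.shutdown()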
st175987 | Hi Yi Wang,
Thanks for your reply! When I limit the microbatch size to 1 and change num_batches to 10, the memory leak problem still exists. I will continue to debug this. Any suggestions?
Thank you for recommending this API. But the PyTorch native pipeline parallelism API does not seem to work across multiple distributed machines, right? Currently we want to build pipeline parallelism for multiple nodes.
Best,
Yang |
st175988 | I will continue to debug this, any suggestions?
Then the bug is unlikely to be related to the pipelining logic. One thing I would check is all the RRef objects. Have you used any of the RRef objects or their fields temporarily? If so, you can try deleting them after use. For example, you can probably try del pixel_values after the line x = pixel_values.to_here().to(self.device). |
st175989 | Hi Yi Wang,
Thanks for your reply. I edited the code according to your suggestions like this (del pixel_values, del x_rref):
class TransformerShard1(nn.Module):
def __init__(self, device, config,num_layers):
super().__init__()
# self.config = config
self.model = ViTForImageClassification.from_pretrained(config)
self.model.vit.encoder.layer = nn.Sequential(*[self.model.vit.encoder.layer[i] for i in range(num_layers)])
self.model.vit= nn.Sequential(*list(self.model.vit.children())[:-1])
self.model = nn.Sequential(*list(self.model.children())[:-1])
self._lock = threading.Lock()
self.device = device
def forward_kernel(self, x):
x = self.model(x).to_tuple()[0]
end = time.time()
return x
@torch.no_grad()
def forward(self, pixel_values=None,
head_mask=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None):
x = pixel_values.to_here().to(self.device)
del pixel_values
gc.collect()
with self._lock:
x = self.forward_kernel(x)
print("Shard1 finish its microbatch ")
return x.cpu()
class TransformerShard2(nn.Module):
def __init__(self,device, config, num_layers):
super().__init__()
self.model = ViTForImageClassification.from_pretrained(config)
self.model.vit.encoder.layer = nn.Sequential(*[self.model.vit.encoder.layer[i] for i in range(num_layers, 2*num_layers)])
self.model.vit= nn.Sequential(*list(self.model.vit.children())[1:-1])
self.model = nn.Sequential(*list(self.model.children())[:-1])
self._lock = threading.Lock()
self.device = device
def forward_kernel(self, x):
x = self.model(x)[0]
return x
@torch.no_grad()
def forward(self, x_rref=None,
head_mask=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None):
x = x_rref.to_here().to(self.device)
del x_rref
gc.collect()
with self._lock:
x = self.forward_kernel(x)
print("Shard1 finish its microbatch ")
return x.cpu()
The overall memory usage is reduced (for a microbatch size of 256, the peak memory usage drops from ~800 MB to ~500 MB). However, after each forward pass the memory still keeps growing. It is not even reduced after finishing one batch, which is very strange. |
st175990 | I only provided one example, and naturally my suggestions are not exhaustive.
Memory leaks are really tricky issues. You can try reverting some code and checking step by step when the memory leak appears.
On the other hand, check out this question: How to debug causes of GPU memory leaks? - #3 by smth |
st175991 | Hi Yi Wang,
Thanks for your reply and this useful link! I found that this memory leak problem is related to num_worker_threads=256 in options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=256). Following your suggestion and the link, I first added:
for obj in gc.get_objects():
try:
if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
print(f"After forward: {type(obj)}, {obj.size()}")
except: pass
to trace the tensors, and added del ... to make sure there is the same number of tensors between different forward passes. But the memory still keeps growing.
However, when I reduce num_worker_threads=256 in options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=256) to num_worker_threads=2, the memory stops growing. Further, the memory shows significant growth only when I set num_worker_threads > 6.
I could find only a little information about this argument in the PyTorch documentation:
num_worker_threads (int, optional) – The number of threads in the thread-pool used by TensorPipeAgent to execute requests (default: 16).
And the tutorial for pipeline parallelism sets it to 128, which I thought was related to the communication/request speed. I am not familiar with it, and wonder:
Why does larger num_worker_threads cause the memory leak problem?
When do we need a larger num_worker_threads?
Do you have any related experience or suggestions? I need your help! Thank you very much!
Best,
Yang |
st175992 | Why does larger num_worker_threads cause the memory leak problem?
When do we need a larger num_worker_threads?
Good question! cc: @lcw (TensorPipe expert).
On the other hand, can you try ProcessGroupRpcBackendOptions to see if there is a memory leak? This backend will be deprecated soon, as it is much slower on GPUs, but you should still be able to use it for now.
Example code:
rpc.init_rpc(
"worker1",
rank=0,
world_size=2,
backend=rpc.BackendType.PROCESS_GROUP,
rpc_backend_options=rpc.ProcessGroupRpcBackendOptions(
num_send_recv_threads=16,
rpc_timeout=20 # 20 second timeout
)
) |
st175993 | Hi Yi Wang,
When I change to ProcessGroupRpcBackendOptions and set num_send_recv_threads=16, the memory also keeps growing with each forward pass. I observe it like this: the memory is first reduced by ~1 MB after one forward pass and then increased by >3 MB after several passes, which is similar to the TensorPipe backend.
In addition, the tutorial with mp.spawn(run_worker, args=(world_size, num_split), nprocs=world_size, join=True) and options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=256, rpc_timeout=300) does not cause a memory leak problem on a single machine. However, if I change it for 2 nodes as follows, it shows the memory leak problem (the modified lines are marked with the comment # original:):
import os
import sys
import threading
import time
from functools import wraps
import torch
import torch.nn as nn
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
import torch.optim as optim
from torch.distributed.optim import DistributedOptimizer
from torch.distributed.rpc import RRef
from torchvision.models.resnet import Bottleneck
#########################################################
# Define Model Parallel ResNet50 #
#########################################################
# In order to split the ResNet50 and place it on two different workers, we
# implement it in two model shards. The ResNetBase class defines common
# attributes and methods shared by two shards. ResNetShard1 and ResNetShard2
# contain two partitions of the model layers respectively.
num_classes = 1000
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
class ResNetBase(nn.Module):
def __init__(self, block, inplanes, num_classes=1000,
groups=1, width_per_group=64, norm_layer=None):
super(ResNetBase, self).__init__()
self._lock = threading.Lock()
self._block = block
self._norm_layer = nn.BatchNorm2d
self.inplanes = inplanes
self.dilation = 1
self.groups = groups
self.base_width = width_per_group
def _make_layer(self, planes, blocks, stride=1):
norm_layer = self._norm_layer
downsample = None
previous_dilation = self.dilation
if stride != 1 or self.inplanes != planes * self._block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * self._block.expansion, stride),
norm_layer(planes * self._block.expansion),
)
layers = []
layers.append(self._block(self.inplanes, planes, stride, downsample, self.groups,
self.base_width, previous_dilation, norm_layer))
self.inplanes = planes * self._block.expansion
for _ in range(1, blocks):
layers.append(self._block(self.inplanes, planes, groups=self.groups,
base_width=self.base_width, dilation=self.dilation,
norm_layer=norm_layer))
return nn.Sequential(*layers)
def parameter_rrefs(self):
r"""
Create one RRef for each parameter in the given local module, and return a
list of RRefs.
"""
return [RRef(p) for p in self.parameters()]
class ResNetShard1(ResNetBase):
"""
The first part of ResNet.
"""
def __init__(self, device, *args, **kwargs):
super(ResNetShard1, self).__init__(
Bottleneck, 64, num_classes=num_classes, *args, **kwargs)
self.device = device
self.seq = nn.Sequential(
nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False),
self._norm_layer(self.inplanes),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
self._make_layer(64, 3),
self._make_layer(128, 4, stride=2)
).to(self.device)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
nn.init.ones_(m.weight)
nn.init.zeros_(m.bias)
def forward(self, x_rref):
x = x_rref.to_here().to(self.device)
with self._lock:
out = self.seq(x)
print("Shard1 finish 1 microbatch")
return out.cpu()
class ResNetShard2(ResNetBase):
"""
The second part of ResNet.
"""
def __init__(self, device, *args, **kwargs):
super(ResNetShard2, self).__init__(
Bottleneck, 512, num_classes=num_classes, *args, **kwargs)
self.device = device
self.seq = nn.Sequential(
self._make_layer(256, 6, stride=2),
self._make_layer(512, 3, stride=2),
nn.AdaptiveAvgPool2d((1, 1)),
).to(self.device)
self.fc = nn.Linear(512 * self._block.expansion, num_classes).to(self.device)
def forward(self, x_rref):
x = x_rref.to_here().to(self.device)
with self._lock:
out = self.fc(torch.flatten(self.seq(x), 1))
print("Shard2 finish 1 microbatch")
return out.cpu()
class DistResNet50(nn.Module):
"""
Assemble two parts as an nn.Module and define pipelining logic
"""
def __init__(self, split_size, workers, *args, **kwargs):
super(DistResNet50, self).__init__()
self.split_size = split_size
# Put the first part of the ResNet50 on workers[0]
self.p1_rref = rpc.remote(
workers[0],
ResNetShard1,
# original: args = ("cuda:0",) + args,
args = ("cpu",) + args,
kwargs = kwargs
)
# Put the second part of the ResNet50 on workers[1]
self.p2_rref = rpc.remote(
workers[1],
ResNetShard2,
# original: args = ("cuda:1",) + args,
args = ("cpu",) + args,
kwargs = kwargs
)
def forward(self, xs):
# Split the input batch xs into micro-batches, and collect async RPC
# futures into a list
out_futures = []
for x in iter(xs.split(self.split_size, dim=0)):
x_rref = RRef(x)
y_rref = self.p1_rref.remote().forward(x_rref)
z_fut = self.p2_rref.rpc_async().forward(y_rref)
out_futures.append(z_fut)
# collect and cat all output tensors into one tensor.
return torch.cat(torch.futures.wait_all(out_futures))
def parameter_rrefs(self):
remote_params = []
remote_params.extend(self.p1_rref.remote().parameter_rrefs().to_here())
remote_params.extend(self.p2_rref.remote().parameter_rrefs().to_here())
return remote_params
#########################################################
# Run RPC Processes #
#########################################################
#original: num_batches = 3
num_batches = 10
batch_size = 120
image_w = 128
image_h = 128
def run_master(split_size):
# put the two model parts on worker1 and worker2 respectively
model = DistResNet50(split_size, ["worker0", "worker1"])
loss_fn = nn.MSELoss()
opt = DistributedOptimizer(
optim.SGD,
model.parameter_rrefs(),
lr=0.05,
)
one_hot_indices = torch.LongTensor(batch_size) \
.random_(0, num_classes) \
.view(batch_size, 1)
for i in range(num_batches):
print(f"Processing batch {i}")
# generate random inputs and labels
inputs = torch.randn(batch_size, 3, image_w, image_h)
labels = torch.zeros(batch_size, num_classes) \
.scatter_(1, one_hot_indices, 1)
# The distributed autograd context is the dedicated scope for the
# distributed backward pass to store gradients, which can later be
# retrieved using the context_id by the distributed optimizer.
with dist_autograd.context() as context_id:
outputs = model(inputs)
dist_autograd.backward(context_id, [loss_fn(outputs, labels)])
opt.step(context_id)
def run_worker(rank, world_size, num_split):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
# Higher timeout is added to accommodate for kernel compilation time in case of ROCm.
options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=128, rpc_timeout=300)
if rank == 0:
rpc.init_rpc(
# original "master"
"worker0",
rank=rank,
world_size=world_size,
rpc_backend_options=options
)
run_master(num_split)
else:
rpc.init_rpc(
f"worker{rank}",
rank=rank,
world_size=world_size,
rpc_backend_options=options
)
pass
# block until all rpcs finish
rpc.shutdown()
if __name__=="__main__":
# original: world_size = 3
world_size = 2
rank=int(sys.argv[1])
# original: for num_split in [1, 2, 4, 8]:
for num_split in [8]:
# original: mp.spawn(run_worker, args=(world_size, num_split), nprocs=world_size, join=True)
run_worker(rank, world_size, num_split)
Am I using RPC incorrectly for multiple nodes? Or do you know any references/examples for multiple nodes? Thanks for your time and your suggestions!
Best,
Yang |
st175994 | As I understand it, the example originally uses 3 processes: 1 master, plus worker1 and worker2, where the two workers are assigned to two CUDA devices. Now you only use 2 processes: worker0 (also acting as master) and worker1, both assigned to CPU, so basically worker0 runs RPC on itself.
Here are my suggestions:
I don’t know whether a process running RPC on itself will lead to a memory leak or not. Ideally it should behave like doing the work locally, but I am not sure such an unusual corner case is supported well. You could update the code to train ResNetShard1 locally.
I think you should still use mp.spawn in the main function. I haven’t tried multi-node training by myself. Does mp.spawn fail on multiple nodes?
Out of curiosity, since I don’t have experience running the computation on CPU cores via RPC, I wonder if you can run the same code on 2 CPU cores of the same machine by changing “cuda:0” and “cuda:1” to “cpu:0” and “cpu:1”, respectively. We can check whether this also causes a memory leak. If so, the problem could be TensorPipe + CPU. |
st175995 | Hi Yi Wang,
Thanks for your good suggestions. First, I need to correct a mistake in my observation:
Using mp.spawn(run_worker, args=(world_size, num_split), nprocs=world_size, join=True) also shows the memory leak problem locally, the same as using run_worker(rank, world_size, num_split). I had only been observing the master process, which does not show growth. But the worker part shows growth, which is the same result as using run_worker(rank, world_size, num_split) and can also be observed in the tutorial for the ResNet.
In my script for transformer models, both the mp.spawn() method and run_worker() give similar results:
Set the num_worker_threads=2, memory does not increase
Set the num_worker_threads=16, memory first increases to a certain level and stops growing
Set the num_worker_threads=128, memory keeps growing after every forward pass
For the ResNet tutorial, it seems ResNet has more activations to transmit:
Set the num_worker_threads=2, it needs to run for longer than the set timeout (60000 ms)
Set the num_worker_threads=16, it needs to run for longer than the set timeout (60000 ms)
Set the num_worker_threads=64, memory keeps growing after every forward pass within one batch, is reduced after finishing the batch, and the next batch reaches a larger value. E.g., it increases from ~230 MB to a peak of ~2090 MB in the first batch; then it is reduced to ~235 MB and peaks at ~2250 MB in the second batch. But the growth trend is slowing down.
Set the num_worker_threads=128, similar to 64, but the peak memory is larger.
And for your suggestions:
wayi:
I don’t know whether a process running RPC on itself will lead to a memory leak or not. Ideally it should behave like doing the work locally, but I am not sure such an unusual corner case is supported well. You could update the code to train ResNetShard1 locally.
I think it is not related to RPC on the process itself, since only the master process’s memory does not keep growing when using mp.spawn(). But I am not sure whether this is a normal phenomenon, since the memory growth seems to slow down after multiple batches in the original tutorial. I will increase the batch size for my script.
wayi:
I think you should still use mp.spawn in the main function. I haven’t tried multi-node training by myself. Does mp.spawn fail on multiple nodes?
I do not use mp.spawn() because I cannot give a specific rank to mp.spawn() for multiple nodes. Do you have any suggestions or a suitable usage of this API?
wayi:
Out of curiosity, since I don’t have experience running the computation on CPU cores via RPC, I wonder if you can run the same code on 2 CPU cores of the same machine by changing “cuda:0” and “cuda:1” to “cpu:0” and “cpu:1”, respectively. We can check whether this also causes a memory leak. If so, the problem could be TensorPipe + CPU.
Yes, I could change “cuda:0” and “cuda:1” to “cpu:0” and “cpu:1”, and the code runs successfully. But it also shows a memory leak problem.
Thanks for your reply and suggestions! Hope to hear more of your thoughts
Best,
YANG |
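(A minimal sketch of one common way to use mp.spawn across multiple nodes, addressing the ranking question above: pass a per-node rank on the command line and derive the global rank inside the spawned function. The argument names and world-size arithmetic are assumptions, not part of the original scripts, and MASTER_ADDR/MASTER_PORT are assumed to be set in the environment.)
import sys
import torch.multiprocessing as mp
import torch.distributed.rpc as rpc

def run_worker(local_rank, node_rank, procs_per_node, world_size):
    rank = node_rank * procs_per_node + local_rank   # globally unique rank
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    # rank 0 would run the master logic here; other ranks just wait
    rpc.shutdown()

if __name__ == "__main__":
    node_rank = int(sys.argv[1])        # e.g. 0 on the first machine, 1 on the second
    procs_per_node, num_nodes = 2, 2
    mp.spawn(run_worker,
             args=(node_rank, procs_per_node, procs_per_node * num_nodes),
             nprocs=procs_per_node, join=True)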
st175996 | HuYang719:
For the tutorial for the ResNet, seems ResNet has more activations for transmission:
Thanks for all the detailed comments! Just want to confirm my understanding:
Are you saying that, even in the ResNet tutorial rather than your own code, you can still observe the memory leak on devices cuda:0 and cuda:1, just not on the master process?
So far, such a memory leak is independent of 1) the RPC backend (TensorPipe or ProcessGroup), 2) the worker’s device (GPU or CPU), and 3) how we launch distributed training (mp.spawn or just run_worker, even within a single host).
The only factor that is relevant to the memory leak is num_worker_threads:
Set the num_worker_threads=2, memory does not increase
Set the num_worker_threads=16, memory first increases to a certain level and stops growing
Set the num_worker_threads=128, memory keeps growing after every forward pass
@mrshenli |
st175997 | Hi Yi Wang,
Yes, I agree with everything you said, except that I have not tested it on GPU (cuda:0, cuda:1) since I do not have enough GPU devices. |
st175998 | Yes, I agree with all of you said except I do not test it on GPU (cuda:0, cuda:1) since without enough GPU devices.
So basically you mean the memory leak is reproducible as long as “cuda” is replaced by “cpu” in the tutorial for the ResNet.
If so, can you file a bug here. I will let RPC developers aware of this bug. Thanks! |
st175999 | Hi Yi Wang,
I opened an issue 3 days ago but currently there is no response. It would be very helpful if you could let them know about this! Thank you very much!
Best,
Yang |
st176000 | I updated that bug thread with a summary of the debugging efforts we have so far. If no one answers that bug thread, I will bring it up in a meeting next week. Thanks! |
st176001 | Thanks for your time and effort! I will test it one more time on GPU tomorrow in case of any other problems before I close this thread. If no more other problems, I will close it tomorrow. Thanks! |
st176002 | Hello, I’m trying to set up distributed model training. If I understand correctly, the DistributedDataParallel documentation says that torch.nn.parallel.DistributedDataParallel performs the allreduce operation by itself. Is it possible to disable this functionality so I can call allreduce manually? Or in this case must I use something instead of DistributedDataParallel? |
st176003 | Solved by wayi in post #2
Is it possible to disable this functionality so I can call all reduce manually?
Do you need to implement any customized logic in allreduce? If so, I will recommend DDP comm hooks, which provides an interface to implement customized allreduce.
Another option is no_sync context manager, which will… |
st176004 | Is it possible to disable this functionality so I can call all reduce manually?
Do you need to implement any customized logic in allreduce? If so, I would recommend DDP comm hooks, which provide an interface to implement customized allreduce.
Another option is the no_sync context manager, which will disable allreduce; then it is your responsibility to run allreduce. |
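(A minimal sketch of the second option, assuming ddp_model, optimizer, loss_fn, inputs and targets already exist and the process group is initialized. DDP’s built-in gradient averaging is skipped inside no_sync, so the allreduce below reproduces it manually.)
import torch.distributed as dist

optimizer.zero_grad()
with ddp_model.no_sync():
    loss = loss_fn(ddp_model(inputs), targets)
    loss.backward()            # gradients stay local, no communication here

# manual allreduce: average the local gradients across ranks
world_size = dist.get_world_size()
for param in ddp_model.parameters():
    if param.grad is not None:
        dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
        param.grad /= world_size
optimizer.step()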
st176005 | Hi
I have a federated learning scenario in which I want to send my cloud model’s parameters to different clients. I tried different ways. I did it with
model_dict[name_of_models[i]].conv1.weight.data = main_model.conv1.weight.detach().clone()
and it was working, but as I saw here, it is better not to use .data in my code.
So I changed it to
model_dict[name_of_models[i]].conv1.weight = nn.Parameter(model.conv1.weight.detach().clone())
but when I do this my client models stop updating. I think that’s because I changed the parameter objects that their optimizers reference.
Now I’m doing it with
with torch.no_grad():
for i in range(number_of_clients):
state_dict = model_dict[name_of_models[i]].state_dict()
state_dict['conv1.weight'] = main_model.conv1.weight.detach().clone()
state_dict['conv2.weight'] = main_model.conv2.weight.detach().clone()
state_dict['conv1.bias'] = main_model.conv1.bias.detach().clone()
state_dict['conv2.bias'] = main_model.conv2.bias.detach().clone()
state_dict['fc1.weight'] = main_model.fc1.weight.detach().clone()
state_dict['fc2.weight'] = main_model.fc2.weight.detach().clone()
state_dict['fc3.weight'] = main_model.fc3.weight.detach().clone()
state_dict['fc1.bias'] = main_model.fc1.bias.detach().clone()
state_dict['fc2.bias'] = main_model.fc2.bias.detach().clone()
state_dict['fc3.bias'] = main_model.fc3.bias.detach().clone()
model_dict[name_of_models[i]] = model_dict[name_of_models[i]].load_state_dict(state_dict)
but when I do this I get the error “‘_IncompatibleKeys’ object has no attribute ‘train’” while training my model.
I would appreciate it if anyone could give me advice on how to do this properly.
Thanks |
st176006 | Since you mentioned federated learning, shouldn’t the data transfer be in a distributed environment? How about the following code snippet?
# Flattening all the parameters of the cloud model into a contiguous buffer to prepare for data transfer.
flat_params = torch.cat([p.data.view(-1) for p in model.parameters()])
# broadcast the tensors or call process group send/recv?
...
# Copy the parameters to the client model layer by layer.
offset = 0
for p in module.parameters():
p.data = flat_params[offset : offset + p.numel()].view_as(p)
offset += p.numel() |
st176007 | Thanks for replying.
First of all, I thought it is not recommended to use .data in our code.
Second, sorry, I didn’t understand what that offset is supposed to do. Could you explain more?
In my scenario I have 10 clients or nodes and each one of them has its own model with the same architecture. My code works when I update the client models layer by layer with the code below:
model_dict[name_of_models[i]].conv1.weight.data = main_model.conv1.weight.detach().clone()
As I heard that the .data attribute can cause silent errors in my code, I’m looking for an alternative way to do it. |
st176008 | MRRP:
Second, sorry, I didn’t understand what that offset is supposed to do. Could you explain more?
It’s used for unpacking a flattened tensor into a set of tensors layer by layer.
As I heard that the .data attribute can cause silent errors in my code, I’m looking for an alternative way to do it.
Hmm, not sure why it would cause silent errors in a distributed environment. |
st176009 | To set Tensor storage, the recommended way is to use the Tensor.set_() function: torch.Tensor.set_ — PyTorch 1.9.0 documentation
Learned from @albanD |
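(For illustration, a hedged sketch of a related in-place update that keeps the optimizer’s references to the client parameters valid; it uses copy_ rather than set_, as a closely related alternative, and reuses the model_dict/main_model names from the question.)
import torch

with torch.no_grad():
    for p_client, p_main in zip(model_dict[name_of_models[i]].parameters(),
                                main_model.parameters()):
        p_client.copy_(p_main)   # overwrite values in place; same Parameter objects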
st176010 | You can refer to the following example.
You can flatten the model weights into a tensor with the parameters_to_vector function, communicate between ranks, and convert it back to weights with the vector_to_parameters function.
I’m using a collective communication function to synchronize weight parameters on all GPUs, but you can change that to point-to-point communication functions such as torch.distributed.send and torch.distributed.recv.
docs: torch.distributed
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel
from torch.nn.utils import parameters_to_vector, vector_to_parameters
model = ...
# synchronize model parameters across nodes
vector = parameters_to_vector(model.parameters())
dist.broadcast(vector, 0) # broadcast parameters to other processes
if dist.get_rank() != 0:
vector_to_parameters(vector, model.parameters()) |
st176011 | Hello,
I have 4 GPUs available to me, and I’m trying to run inference utilizing all of them. I’m confused by the many multiprocessing methods out there (e.g. multiprocessing.Pool, torch.multiprocessing, multiprocessing.spawn, the launch utility).
I have a model that I trained. However, I have several hundred thousand crops I need to run through the model, so it is only practical if I run processes on each GPU simultaneously. I would like to assign one model to each GPU and run 1/4 of the data on each. How can I do this?
Thank you in advance. |
st176012 | Since parallel inference does not need any communication among different processes, I think you can use any utility you mentioned to launch the multiprocessing. We can decompose your problem into two subproblems: 1) launching multiple processes to utilize all 4 GPUs; 2) partitioning the input data using a DataLoader.
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
def run_inference(rank, world_size):
# create default process group
dist.init_process_group("gloo", rank=rank, world_size=world_size)
# load a model
model = YourModel()
model.load_state_dict(torch.load(PATH, map_location=f"cuda:{rank}"))  # PATH is the checkpoint file
model.eval()
model.to(rank)
# create a dataloader
dataset = ...
loader = torch.utils.data.DataLoader(dataset=dataset,
batch_size=batch_size,
shuffle=True,
num_workers=4)
# iterate over the loaded partition and run the model
for idx, data in enumerate(loader):
...
def main():
world_size = 4
mp.spawn(run_inference,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__=="__main__":
main() |
st176013 | Thank you. I will try this out now. I’m assuming that “example” in mp.spawn is the run_inference function?
Also, is it possible to make each GPU run multiple processes or no? |
st176014 | I’m assuming that “example” in mp.spawn is the run_inference function?
Yes, that’s a typo. Fixed now.
Also, is it possible to make each GPU run multiple processes or no?
Running multiple processes on each GPU will be slower, so not recommended IMO. |
st176015 | I recommend using a custom sampler.
Related thread: DistributedSampler
By default, DistributedSampler divides the dataset by the number of processes (equivalent to #GPUs).
In the above thread, I provided an example modification on the sampler to avoid duplication of data. |
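(A minimal sketch of plugging DistributedSampler into the inference loader from the earlier snippet; dataset, rank, world_size and batch_size are assumed to be defined as above. Note that the stock sampler pads the last ranks with repeated samples so that every rank gets the same number of items, which is what the modification in the linked thread avoids.)
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

sampler = DistributedSampler(dataset,
                             num_replicas=world_size,
                             rank=rank,
                             shuffle=False)   # deterministic order for inference
loader = DataLoader(dataset,
                    batch_size=batch_size,
                    sampler=sampler,
                    num_workers=4,
                    pin_memory=True)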
st176016 | Would the above code stay the same, and I would add the DistributedSampler to verify that each process is getting an equal split of different data? |
st176017 | DistributedSampler with the modification will give you almost equal-sized splits.
I don’t know how you defined your model, but you should also use DDP to maximally parallelize the model across multiple GPUs and use DistributedSampler with multiple processes.
Make sure to customize the sampler so that there is no overlap between the different ranks (processes).
You should communicate between the different processes to collect loss or accuracy metrics.
You may want to take a look at my GitHub repository for an example. |
st176018 | I don’t know how you defined your model, but you should also use DDP to maximally parallelize the model across multiple GPUs and use DistributedSampler with multiple processes.
Do you mean using DDP for inference in this case? |
st176019 | @wayi
Fix: multiprocessing without DDP can also work if limited to inference only.
It is my preference to use DDP at inference too, because I don’t want to change my model object, which is DDP at training time. |
st176020 | There’s no communication between processes during inference, so I don’t think you need gloo here. You can just run n processes with different CUDA_VISIBLE_DEVICES. |
st176021 | Hi,
I run distributed training on the computer with 8 GPUs.
I first run the command:
CUDA_VISIBLE_DEVICES=6,7 MASTER_ADDR=localhost MASTER_PORT=47144 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 example_top_api.py
I then run the command:
CUDA_VISIBLE_DEVICES=4,5 MASTER_ADDR=localhost MASTER_PORT=47149 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 example_top_api.py
However, I encountered the following issue. What should I do to run 2 jobs on the same computer?
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 173, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 169, in main
run(args)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 621, in run
elastic_launch(
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 238, in launch_agent
result = agent.run()
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
result = f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 700, in run
result = self._invoke_run(role)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 822, in _invoke_run
self._initialize_workers(self._worker_group)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
result = f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 670, in _initialize_workers
self._rendezvous(worker_group)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
result = f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 530, in _rendezvous
store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/rendezvous/static_tcp_rendezvous.py", line 55, in next_rendezvous
self._store = TCPStore(
RuntimeError: Address already in use |
st176022 | Hi, on a single node you only need to use torch.distributed.launch once to launch all processes on the node. Re-launching can run into connectivity issues as a TCPStore is already spawned on the host. |
st176023 | @rvarm1 ,
Thank you!
I would like to run 2 or more independent tasks on one computer.
All those commands use different ports and different GPUs.
Is there a way to do that? |
st176024 | @rvarm1 ,
I first tried the following 2 commands to start 2 tasks with 2 sub-processes each, but I encountered the Address already in use issue.
CUDA_VISIBLE_DEVICES=1,3 WORLD_SIZE=2 MASTER_PORT=44144 python -m torch.distributed.launch --nproc_per_node=2 train.py
CUDA_VISIBLE_DEVICES=4,5 WORLD_SIZE=2 MASTER_PORT=44145 python -m torch.distributed.launch --nproc_per_node=2 train.py
I then used the following 2 commands to start 2 tasks, and both tasks, with 2 sub-processes each, started successfully.
CUDA_VISIBLE_DEVICES=1,3 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 47769 train.py
CUDA_VISIBLE_DEVICES=4,5 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 47770 train.py |
st176025 | Hi there!
I’m using PyTorch for gradient-based inverse problem solving. My current architecture uses a custom Trainer class which takes a model, dataset, optimizer, regularization function, etc., and handles all the training/optimization in its methods. It currently wraps the model into a DataParallel object to utilize multiple available GPUs.
Now I want to switch to DistributedDataParallel to further speed up the process and maybe scale to multiple nodes. However, I’ve figured out that I can’t do it in the class method, since DistributedDataParallel requires process group creation, which can be done only in __main__, so I basically need to write a separate training script to use DistributedDataParallel?
Have I understood this correctly? (I am not super experienced with torch.multiprocessing.)
Are there maybe some ways to use a pre-created process group and/or DistributedDataParallel objects so as to unify the training process for DistributedDataParallel and DataParallel?
Thanks for your answer. |
st176026 | In general we recommend using DistributedDataParallel over DataParallel. Yes, the setup for DistributedDataParallel is indeed different from DataParallel since it requires setting up process group and also multiple processes. My recommendation would be to just have one version of your training script and only use DistributedDataParallel. |
st176027 | Hi @Konsthr,
There is no restriction on where DDP is instantiated. Whether you use DDP inside a class or outside is not a problem.
You can refer to several examples, including mine.
The following functions are related.
option.py - setup
model/__init__.py - parallelize
p.s. My implementation allows both DP and DDP. |
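(A minimal sketch of setting up DDP inside a Trainer-style class, assuming one process per GPU is launched externally, e.g. via torch.distributed.launch or mp.spawn; the class and argument names here are illustrative, not taken from the repository above.)
import os
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

class Trainer:
    def __init__(self, model, rank, world_size):
        # process-group creation can live inside a method; it just has to
        # run once per process before the model is wrapped
        os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
        os.environ.setdefault("MASTER_PORT", "29500")
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        self.rank = rank
        self.model = DDP(model.to(rank), device_ids=[rank])

    def close(self):
        dist.destroy_process_group()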
st176028 | I am trying to train a neural network with DistributedDataParallel. Each frame (not image) from the dataset has a variable size and consists of several patches. Since the optimization step should be done per frame, gradient accumulation needs to be incorporated. In my DDP setting, batch_size=1, which means one frame per GPU, and the Adam optimizer with learning rate 1e-3 is used. After 15-20 epochs, the training/validation loss starts to degrade significantly and then fails to converge.
From an additional test, the one-GPU case works well for the same script: the training loss decreases monotonically. My basic guess right now is that the problem is connected with an unavoidable sync between GPUs during the gradient accumulation stage.
I’ve tested with different versions of PyTorch (old 1.4.0 and newer 1.8.0).
Could you help me please? Thanks!
Here is my code:
(train.py)
with ddp_net.no_sync():
    for patch_ind in range(patch_size-1):
        # extract patch
        patch_node_features = batch_node_features[patch_ind].cuda(non_blocking=True)
        patch_gt = batch_gt[patch_ind].cuda(non_blocking=True)
        patch_output = ddp_net(patch_node_features)
        # loss calculation
        train_loss = training_criterion(patch_output, patch_gt)
        train_loss_mean += train_loss.detach().cpu().item()
        train_loss = train_loss / train_node_num
        train_loss.backward()          # gradients accumulate locally, no all-reduce

# plus last patch (gradient sync happens on this backward)
patch_node_features = batch_node_features[patch_size-1].cuda(non_blocking=True)
patch_gt = batch_gt[patch_size-1].cuda(non_blocking=True)
patch_output = ddp_net(patch_node_features)
train_loss = training_criterion(patch_output, patch_gt)
train_loss_mean += train_loss.detach().cpu().item()
train_loss = train_loss / train_node_num
train_loss.backward()                 # with sync
# optimizer step
iteration_count += 1
optimizer.step()
ddp_net.zero_grad() |
st176029 | kmityagin:
My basic guess right now is that the problem is connected with unavoidable sync between GPUs during gradient accumulation stage.
There shouldn’t be any sync between the GPUs going on if you’ve disabled gradient synchronization with no_sync context manager. You can verify this by broadcasting gradients across all ranks and observing that they are different.
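For instance, a minimal sketch of such a check (assuming the DDP model is named ddp_net, as in your snippet, and that this runs right after the backward() calls taken under no_sync()):
import torch
import torch.distributed as dist

param = next(ddp_net.parameters())
grad = param.grad.detach().clone()
gathered = [torch.zeros_like(grad) for _ in range(dist.get_world_size())]
dist.all_gather(gathered, grad)
if dist.get_rank() == 0:
    for r, g in enumerate(gathered):
        # with no_sync() in effect, these norms should generally differ per rank
        print(f"rank {r}: grad norm = {g.norm().item():.6f}")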
When switching to distributed training there may be a need to tune certain parameters to improve the accuracy. Have you tried tuning gradient sync interval, batch size, learning rates, etc? |
st176030 | @rvarm1, Thanks for you response!
In my case, batch_size=1, which corresponds to one frame per GPU, and the gradient accumulation interval includes all patches (one frame is a set of multiple patches). Basically, multi-GPU training speeds things up, and training time is the main bottleneck for my task.
The training script is configured to support multi-GPU and single-GPU runs with torch.multiprocessing.spawn. When I train with 1 GPU, training works well. But with multiple GPUs it fails, and the train/val loss starts to increase after a few epochs. The only difference between the successful and failed cases is the sync.
I also tried different learning rates; there were no changes. |
st176031 | What would be the best data-parallel solution for keeping model performance the same as (or better than) training on one GPU?
nn.DataParallel() vs. DistributedDataParallel vs. PyTorch Lightning vs. Horovod vs. any other available methods |
st176032 | We recommend to use DistributedDataParallel over nn.DataParallel as the latter relies on python threading, which is slow due to the GIL.
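For completeness, the mechanical difference looks roughly like this (MyModel is a placeholder; the DDP variant assumes the script is launched with torchrun / torch.distributed.run, which sets LOCAL_RANK and the env:// rendezvous variables):
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# DataParallel: single process, single line
# model = torch.nn.DataParallel(MyModel()).cuda()

# DistributedDataParallel: one process per GPU
dist.init_process_group(backend="nccl")            # reads the env:// settings from the launcher
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
model = DDP(MyModel().cuda(local_rank), device_ids=[local_rank])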
Regarding comparisons to PyTorch Lightning: Lightning offers DDP as a plugin and calls into DDP under the hood, so the performance should be comparable. I’m not aware of any performance comparisons between DDP and Horovod, unfortunately. |
st176033 | A couple of papers actually have comparisons between PT, Horovod, and other frameworks: AWS Herring paper 8 and a Ray blog post 8 which does a similar comparison. |
st176034 | Hi all,
I’m confused about how to set the seed in torch DDP. I know that the models must be initialized to the same parameter values across processes, so the seed must be the same.
Looking at the code from the DeiT repo 2 I see the line seed = args.seed + utils.get_rank().
Doesn’t this mean we have a different seed for each process? So wouldn’t the models in each process be initialized differently? |
st176035 | Solved by rvarm1 in post #2
Yes, in that case models on each rank would be initialized with different values.
However at startup time DDP broadcasts model parameters from rank 0 to ensure all ranks start training with the same model params, so setting a seed is not needed, unless you want determinism across different training… |
st176036 | Yes, in that case models on each rank would be initialized with different values.
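A quick way to see what happens in practice is a sketch like the following (assuming one process per GPU on a single node, with the process group already initialized, so the global rank doubles as the device index):
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

rank = dist.get_rank()
torch.manual_seed(1234 + rank)                     # per-rank seed, as in the DeiT snippet
model = torch.nn.Linear(8, 8).cuda(rank)           # initialized differently on every rank
ddp_model = DDP(model, device_ids=[rank])

checksum = sum(p.sum() for p in ddp_model.parameters()).detach()
gathered = [torch.zeros_like(checksum) for _ in range(dist.get_world_size())]
dist.all_gather(gathered, checksum)
if rank == 0:
    print([t.item() for t in gathered])            # prints identical values
Despite the different seeds, the printed checksums match on every rank, for the reason below.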
However at startup time DDP broadcasts model parameters from rank 0 to ensure all ranks start training with the same model params, so setting a seed is not needed, unless you want determinism across different training runs with regard to the model params. |
st176037 | Hi everyone,
I implemented an architecture that handles multiple inputs, each being processed by its own encoder. In order to speed things up, I want to train my model on multiple GPUs. This is my code:
def forward(self, x):
    ''' x: list of input tensors '''
    h = list()
    for i, x_i in enumerate(x):
        h_i = self.encoders[i](x_i)
        h.append(h_i)
    z = torch.cat(h, dim=0)
    y_pred = self.classifier(z)
    return y_pred
If I simply used the data_parallel class here, each encoder would get copied to each GPU; however, I think it would be faster if each encoder were trained on its own GPU. Is there any way to achieve this?
Thank you! |
st176038 | Solved by ptrblck in post #2
Yes, you can push each model to a specific GPU via self.encoders[i].to('cuda:{}'.format(i)) (where i would be the GPU id) as well as the input, transfer the outputs back, and concatenate the final output tensor. |
st176039 | Yes, you can push each encoder to a specific GPU via self.encoders[i].to('cuda:{}'.format(i)) (where i would be the GPU id), push the corresponding input to the same device, transfer the outputs back, and concatenate the final output tensor.
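To make that concrete, a rough sketch of the idea (the encoder and classifier modules are assumed to be defined elsewhere; outputs are brought back to cuda:0 before concatenation, with cat along dim=0 as in the question):
import torch

class MultiEncoderNet(torch.nn.Module):
    def __init__(self, encoders, classifier):
        super().__init__()
        # one GPU per encoder
        self.encoders = torch.nn.ModuleList(
            [enc.to(f"cuda:{i}") for i, enc in enumerate(encoders)])
        self.classifier = classifier.to("cuda:0")

    def forward(self, x):  # x: list of input tensors
        h = [enc(x_i.to(f"cuda:{i}")).to("cuda:0")
             for i, (enc, x_i) in enumerate(zip(self.encoders, x))]
        z = torch.cat(h, dim=0)
        return self.classifier(z) |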
st176040 | As documented in DataParallel (torch/nn/parallel/data_parallel.py), during the backwards pass, gradients from each replica are summed into the original module. So AFAIK, this is a reduce op, right?
Now I'm trying to debug the C++ source code for this loss.backward(), but I can't find the part related to this "reduce" operation. Could you please tell me where this part is done? |
st176041 | Solved by rvarm1 in post #2
DataParallel works via scatter/gather, so the gathering of scattered gradients is implemented in the backwards function of these operators here: pytorch/_functions.py at master · pytorch/pytorch · GitHub |
st176042 | DataParallel works via scatter/gather, so the gathering of scattered gradients is implemented in the backwards function of these operators here: pytorch/_functions.py at master · pytorch/pytorch · GitHub |
st176043 | I use torch.multiprocessing to launch distributed training, but some batches may raise a CUDA out-of-memory exception. I just want to skip these batches. I can successfully skip them when using only one GPU for training by using try and except.
But it doesn't work in the distributed training case; the training process just gets stuck. I guess it may be caused by the communication between the different processes.
I'd appreciate it if someone could help me. |
st176044 | Yes, doing something like that will indeed result in hangs, because some ranks will likely have kicked off communication while other ranks could be skipping the batch, so there will be an inconsistency in the number of collective calls launched.
The tricky thing here is that you need to know a-priori whether a batch will be skipped or not, so that it can be communicated consistently across all distributed processes.
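For example, a hedged sketch of that idea (batch, model, criterion, optimizer, and MAX_TARGETS are placeholders): decide whether to skip before running the model, then agree across ranks with an all_reduce so every process takes the same branch.
import torch
import torch.distributed as dist

skip = torch.tensor([1 if len(batch["targets"]) > MAX_TARGETS else 0], device="cuda")
dist.all_reduce(skip, op=dist.ReduceOp.MAX)   # if any rank wants to skip, all ranks skip
if skip.item() == 0:
    loss = criterion(model(batch["inputs"].cuda()), batch["targets"].cuda())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()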
Do you know why some batches result in OOM? Are the sizes somehow different across batches, and can you use the size to estimate whether you’ll need to skip it? |
st176045 | Thanks! It is an object detection task; the memory fluctuates because the number of targets varies. I have set a maximum target number now, so I can successfully train the model. |
st176046 | Hi,
I tried 2 methods to start distributed training:
method 1:
call the torch.multiprocessing.spawn function to start n processes on 1 computer with multiple GPUs
method 2:
call torch.distributed.launch to start n processes on 1 computer with multiple GPUs
If I use method 1 and press Ctrl+C to stop the code, the sub-processes will not stop.
If I use method 2 and press Ctrl+C to stop the code, the sub-processes will stop.
My questions are:
for method 1, how can I stop the sub-processes in Python code?
for method 1, could the launch script below be run from Python code?
#!/bin/bash
NUM_PROC=$1
shift
python3 -m torch.distributed.launch --master_port=44145 --nproc_per_node=$NUM_PROC train.py "$@" |
st176047 | Hi everyone, I am trying to train using DistributedDataParallel. Thanks to the great work of the team at PyTorch, a very high efficiency has been achieved. Everything is fine when a model is trained on a single node. However, when I try to use multiple nodes in one job script, all the processes will be on the host node and the slave node will not have any processes running on it. Here is my script for the PBS workload manager:
#!/bin/sh
#PBS -V
#PBS -q gpu
#PBS -N test_1e4_T=1
#PBS -l nodes=2:ppn=2
source /share/home/bjiangch/group-zyl/.bash_profile
conda activate Pytorch-181
cd $PBS_O_WORKDIR
path="/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/"
#Number of processes per node to launch
NPROC_PER_NODE=2
#Number of process in all modes
WORLD_SIZE=`expr $PBS_NUM_NODES \* $NPROC_PER_NODE`
MASTER=`/bin/hostname -s`
cat $PBS_NODEFILE>nodelist
#Make sure this node (MASTER) comes first
SLAVES=`cat nodelist | grep -v $MASTER | uniq`
#We want names of master and slave nodes
HOSTLIST="$MASTER $SLAVES"
#The path you place your code
#This command to run your pytorch script
#You will want to replace this
COMMAND="$path --world_size=$WORLD_SIZE"
#Get a random unused port on this host(MASTER)
#First line gets list of unused ports
#3rd line gets single random port from the list
MPORT=`ss -tan | awk '{print $5}' | cut -d':' -f2 | \
grep "[2-9][0-9]\{3,3\}" | sort | uniq | shuf -n 1`
#Launch the pytorch processes, first on master (first in $HOSTLIST) then on the slaves
RANK=0
for node in $HOSTLIST; do
ssh -q $node
python3 -m torch.distributed.launch \
--nproc_per_node=$NPROC_PER_NODE \
--nnodes=$PBS_NUM_NODES \
--node_rank=$RANK \
--master_addr="$MASTER" --master_port="$MPORT" \
$COMMAND &
RANK=$((RANK+1))
done
wait
It is modified from the example here 7.
I want to submit a 4-process job (2 nodes and 2 processes per node).
For validation, I manually ssh to each node from the login node and execute the
ssh gpu1
python3 -m torch.distributed.launch --nnodes=2 --node_rank=0
ssh gpu2
python3 -m torch.distributed.launch --nnodes=2 --node_rank=1
This works and has pretty good parallel efficiency. The same problem occurs on another cluster with a Slurm workload manager. I don't see any difference between the two approaches, yet they lead to totally different results. Any suggestions are welcome.
And the final error is
Traceback (most recent call last):
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 194, in _run_module_as_main
Traceback (most recent call last):
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 194, in _run_module_as_main
Traceback (most recent call last):
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 194, in _run_module_as_main
Traceback (most recent call last):
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,return _run_code(code, main_globals, None,
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 87, in _run_code
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 87, in _run_code
return _run_code(code, main_globals, None,
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 87, in _run_code
return _run_code(code, main_globals, None,
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py", line 1, in <module>
exec(code, run_globals)
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py", line 1, in <module>
exec(code, run_globals)
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py", line 1, in <module>
exec(code, run_globals)
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/__main__.py", line 1, in <module>
import run.train
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py", line 70, in <module>
import run.train
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py", line 70, in <module>
import run.train
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py", line 70, in <module>
import run.train
File "/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/run/train.py", line 70, in <module>
Prop_class = DDP(Prop_class, device_ids=[local_rank], output_device=local_rank)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 446, in __init__
Prop_class = DDP(Prop_class, device_ids=[local_rank], output_device=local_rank)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 446, in __init__
Prop_class = DDP(Prop_class, device_ids=[local_rank], output_device=local_rank)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 446, in __init__
Prop_class = DDP(Prop_class, device_ids=[local_rank], output_device=local_rank)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 446, in __init__
self._sync_params_and_buffers(authoritative_rank=0)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
self._sync_params_and_buffers(authoritative_rank=0)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
self._sync_params_and_buffers(authoritative_rank=0)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
self._sync_params_and_buffers(authoritative_rank=0)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 457, in _sync_params_and_buffers
self._distributed_broadcast_coalesced(
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1155, in _distributed_broadcast_coalesced
self._distributed_broadcast_coalesced(
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1155, in _distributed_broadcast_coalesced
self._distributed_broadcast_coalesced(
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1155, in _distributed_broadcast_coalesced
self._distributed_broadcast_coalesced(
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1155, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1616554793803/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, invalid usage, NCCL version 2.7.8
ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc).
dist._broadcast_coalesced(
dist._broadcast_coalesced(
RuntimeErrorRuntimeError: : NCCL error in: /opt/conda/conda-bld/pytorch_1616554793803/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, invalid usage, NCCL version 2.7.8
ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc).NCCL error in: /opt/conda/conda-bld/pytorch_1616554793803/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, invalid usage, NCCL version 2.7.8
ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc).
dist._broadcast_coalesced(
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1616554793803/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, invalid usage, NCCL version 2.7.8
ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc).
Traceback (most recent call last):
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/bin/python3', '-u', '/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/', '--local_rank=1', '--world_size=4']' returned non-zero exit status 1.
Traceback (most recent call last):
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/share/home/bjiangch/group-zyl/.conda/envs/Pytorch-181/bin/python3', '-u', '/share/home/bjiangch/group-zyl/zyl/pytorch/multi-GPU/program/eann/', '--local_rank=1', '--world_size=4']' returned non-zero exit status 1. |
st176048 | My understanding is that when you run the ssh commands manually they work, but the script that does essentially the same seems to be failing?
If so, could you print out the ssh commands from the script and run those manually to check if that works? Could you also share the ssh commands printed out by the script? It could possibly be a bug in the script. |
st176049 | Thank you for your response. Actually, I have manually launched this directive "python3 -m torch.distributed …". It works on the PBS management system; however, it still fails on the Slurm system. I have tried a different version of PyTorch (1.9.0), and the error suggests using torch.distributed.run instead of torch.distributed.launch. The launch script also seems simpler than before. It may not need to ssh to each node (I cannot determine this). Still, it works on PBS and fails on Slurm with the following error:
1 [INFO] 2021-07-10 16:51:24,635 run: Running torch.distributed.run with args: ['/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/pytho n3.9/site-packages/torch/distributed/run.py', '--nproc_per_node=2', '--nnodes=1', '--rdzv_id=1050201', '--rdzv_backend=c10d', '--rd zv_endpoint=gnode09:6818', '/home/chp/bjiangch/zyl/2021_0705/program/eann/']
2 [INFO] 2021-07-10 16:51:24,641 run: Using nproc_per_node=2.
3 [INFO] 2021-07-10 16:51:24,641 api: Starting elastic_operator with launch configs:
4 entrypoint : /home/chp/bjiangch/zyl/2021_0705/program/eann/
5 min_nodes : 1
6 max_nodes : 1
7 nproc_per_node : 2
8 run_id : 1050201
9 rdzv_backend : c10d
10 rdzv_endpoint : gnode09:6818
11 rdzv_configs : {'timeout': 900}
12 max_restarts : 3
13 monitor_interval : 5
14 log_dir : None
15 metrics_cfg : {}
16
17 terminate called after throwing an instance of 'std::system_error'
18 what(): Connection reset by peer
19 Fatal Python error: Aborted
20
21 Thread 0x00002b644a6b7a00 (most recent call first):
22 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous _backend.py", line 103 in _call_store
23 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous _backend.py", line 54 in __init__
24 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous _backend.py", line 206 in create_backend
25 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/registry.py", l ine 35 in _create_c10d_handler
26 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/api.py", line 2 53 in create_handler
27 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/registry.py", l ine 64 in get_rendezvous_handler
28 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 214 in laun ch_agent
29 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__i nit__.py", line 348 in wrapper
30 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 116 in __ca ll__
31 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/run.py", line 621 in run
32 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/run.py", line 629 in main
33 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/site-packages/torch/distributed/run.py", line 637 in <module>
34 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/runpy.py", line 87 in _run_code
35 File "/home/chp/bjiangch/.conda/envs/PyTorch-190/lib/python3.9/runpy.py", line 197 in _run_module_as_main
36 /var/spool/slurmd/job1050201/slurm_script: line 46: 17544 Aborted python -m torch.distributed.run --nproc_per_node= $NPROC_PER_NODE --nnodes=$SLURM_JOB_NUM_NODES --rdzv_id=$SLURM_JOB_ID --rdzv_backend=c10d --rdzv_endpoint=$MASTER:$MPORT $COMMAND > out
Next is my script
1 #!/bin/sh
2 #SBATCH -J 1e5-N-T=1
3 #SBATCH -p GPU-V100
4 #SBATCH --qos=gpujoblimit
5 ##SBATCH --qos=qos_a100_gpu
6 #SBATCH --gres=gpu:2
7 #SBATCH --nodes=1
8 #SBATCH --ntasks-per-node=2 --cpus-per-task=20
9 #SBATCH --gres-flags=enforce-binding
10 #SBATCH -o %x.o%j
11 #SBATCH -e %x.e%j
12 echo Running on hosts
13 echo Time is `date`
14 echo Directory is $PWD
15 echo This job runs on the following nodes:
16 echo $SLURM_JOB_NODELIST
17 # Your conda environment
18 conda_env=PyTorch-190
19
20 #ATTENTION! HERE MUSTT BE ONE LINE,OR ERROR!
21 source ~/.bashrc
22
23 module add cuda/11.1
24 module add /opt/Modules/python/anaconda3
25 #module add cudnn/7.6.5.32_cuda10.2
26 conda activate $conda_env
27 cd $PWD
28
29
30 #Number of processes per node to launch (20 for CPU, 2 for GPU)
31 NPROC_PER_NODE=2
32
33 #The path you place your code
34 path="/home/chp/bjiangch/zyl/2021_0705/program/eann/"
35 #This command to run your pytorch script
36 #You will want to replace this
37 COMMAND="$path"
38
39 #We want names of master and slave nodes
40 MASTER=`/bin/hostname -s`
41
42 MPORT=`ss -tan | awk '{print $4}' | cut -d':' -f2 | \
43 grep "[2-9][0-9]\{3,3\}" | grep -v "[0-9]\{5,5\}" | \
44 sort | uniq | shuf`
45
46 python -m torch.distributed.run --nproc_per_node=$NPROC_PER_NODE --nnodes=$SLURM_JOB_NUM_NODES --rdzv_id=$SLURM_JOB_ID --rdzv_backe nd=c10d --rdzv_endpoint=$MASTER:$MPORT $COMMAND >out
Many thanks for your kind help! |
st176050 | I want to implement a model which has a very large Linear layer, because the number of features is very big, say 2^32 features; the actual input will be a sparse tensor.
Typically, if the number of features is not big, I can create an ordinary Linear layer, for example
self.lin = Linear(in_features, out_features)
But since in_features=2^32, the above Linear layer won’t work properly.
So I’m thinking about ideas like,
Split the huge Linear into multiple small ones, e.g. each with 2^20 features. I looked at torch.distributed.rpc, but it doesn't seem to be able to do this.
Or use parameter server, but no idea how to turn the Linear layer into a parameter server.
I didn’t find how to do the above 2 ideas, please give me some advice.
Thanks |
st176051 | Regarding the parameter server idea, I guess parameter server and linear are quite different concepts as a PS is a training paradigm while a linear layer is a component of a model.
It looks like you have a use case to shard a linear layer across multiple processes/potentially multiple nodes. We are planning to build out sharding primitives within PyTorch, please see this RFC: [RFC] Model Sharding for distributed training · Issue #55207 · pytorch/pytorch · GitHub 2 which would suit this use case once it is built out.
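Until then, a hand-rolled sketch of the idea is possible: split the weight along the input-feature dimension across devices and sum the partial outputs. This is not the RFC API, just an illustration; it assumes in_features divides evenly across devices and a dense input for simplicity.
import torch

class ShardedLinear(torch.nn.Module):
    def __init__(self, in_features, out_features, devices):
        super().__init__()
        self.devices = devices
        self.shard = in_features // len(devices)
        self.parts = torch.nn.ModuleList(
            [torch.nn.Linear(self.shard, out_features, bias=False).to(d) for d in devices])

    def forward(self, x):  # x: (batch, in_features)
        outs = [lin(x[:, i * self.shard:(i + 1) * self.shard].to(d))
                for i, (lin, d) in enumerate(zip(self.parts, self.devices))]
        # y = sum_i x_i @ W_i^T, accumulated on the first device
        return sum(o.to(self.devices[0]) for o in outs)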
Curious, if the input is a sparse feature, would an nn.EmbeddingBag work better than a linear layer? |
st176052 | Thanks for your answer Rohan!
Yes, surely EmbeddingBag also looks great, but meanwhile I want to see how to shard a Linear layer across multiple processes / machines.
Will definitely check out the RFC. |
st176053 | BTW, even if it's nn.EmbeddingBag (actually I don't need the aggregation operation, so nn.Embedding would be better, I think), I think we still have to think about sharding.
Because the EmbeddingBag size would still be huge (i.e. 2^32) and might not fit into a single machine. |
st176054 | In DistributedDataParallel, when the local batch size is 64 (i.e. torch.utils.data.DataLoader(batch_size=64) and torch.utils.data.distributed.DistributedSampler() are used), assume there are N processes in total in DDP (the N processes are distributed over one node or more than one node). Is the forward-backward process in DDP similar to the forward-backward process on a single GPU using 64×N batch-size inputs? |
st176055 | Solved by Yanli_Zhao in post #2
yes, distributed training using DDP is mathematically equivalent to local training |
st176056 | yes, distributed training using DDP is mathematically equivalent to local training |
st176057 | Can you clarify this? The OP is asking if batch_size of 64 per DDP process in a world size of N is the same as a single gpu with a total batch size of 64*N. There is a note in the DDP docs which state:
“When a model is trained on M nodes with batch=N , the gradient will be M times smaller when compared to the same model trained on a single node with batch=M*N if the loss is summed (NOT averaged as usual) across instances in a batch (because the gradients between different nodes are averaged). You should take this into consideration when you want to obtain a mathematically equivalent training process compared to the local training counterpart. But in most cases, you can just treat a DistributedDataParallel wrapped model, a DataParallel wrapped model and an ordinary model on a single GPU as the same (E.g. using the same learning rate for equivalent batch size).”
It looks to me that they are saying it should be the same as the single-GPU case if the batch size is 64, and not 64*N (as the OP asked). Can you clarify?
Thanks for any help! |
st176058 | Do DataParallel and DistributedDataParallel affect the batch size and GPU memory consumption?
(I use NCCL backend).
If I set batch_size=4 and train with nn.DataParallel or nn.DistributedDataParallel on 8 GPUs, then what will be the batch-size and mini_batch_size: 4, 8, or 32?
Can I use batch_size lower than number of GPUs, batch_size=4 for 8xGPUs (will it lead to error, or will be used only 4 GPUs or will be batch_size increased to 8 or 32)?
I tried to train EfficientNet-L2 by using each of nn.DataParallel and nn.DistributedDataParallel, but with nn.DataParallel I can use batch_size 2x higher than with nn.DistributedDataParallel without CUDA Out of memory. Does nn.DistributedDataParallel spend 2x time more GPU memory than nn.DataParallel? |
st176059 | Hey @mdc
If I set batch_size=4 and train with nn.DataParallel or nn.DistributedDataParallel on 8 GPUs, then what will be the batch-size and mini_batch_size: 4, 8, or 32?
The batch_size var is usually a per-process concept. As DataParallel is single-process multi-thread, setting batch_size=4 will make 4 the real batch size. The per-thread batch size will be 4/num_of_devices. However, as these threads accumulate grads into the same param.grad field, the per-thread batch size shouldn't make any difference.
For DistributedDataParallel (DDP), as it is multi-process training, if you set batch_size=4 for each process, the real batch_size will be 4 * world_size. One caveat is that, DDP uses AllReduce to calculate the average (instead of sum) gradients across processes.
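In code, the per-process batch size typically looks like this (train_dataset is a placeholder); with 8 processes, each iteration sweeps a global batch of 8 * 4 samples:
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

sampler = DistributedSampler(train_dataset)            # each rank sees its own shard of the data
loader = DataLoader(train_dataset, batch_size=4, sampler=sampler)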
Can I use batch_size lower than number of GPUs, batch_size=4 for 8xGPUs (will it lead to error, or will be used only 4 GPUs or will be batch_size increased to 8 or 32)?
It should work, but will not fully utilize all devices. If batch_size=4, IIUC, it can at most use 4 GPUs.
I tried to train EfficientNet-L2 by using each of nn.DataParallel and nn.DistributedDataParallel, but with nn.DataParallel I can use batch_size 2x higher than with nn.DistributedDataParallel without CUDA Out of memory. Does nn.DistributedDataParallel spend 2x time more GPU memory than nn.DataParallel?
DDP allocates dedicated CUDA buffer as communication buckets, so it will use more CUDA memory than DP. But it is not 2X compared to DP. The total comm bucket size is the same as the model size. |
st176060 | mrshenli:
Can I use batch_size lower than number of GPUs, batch_size=4 for 8xGPUs (will it lead to error, or will be used only 4 GPUs or will be batch_size increased to 8 or 32)?
It should work, but will not fully utilize all devices. If batch_size=4, IIUC, it can at most use 4 GPUs.
Greetings! I'd like some clarification on this. Is this response referring to DP or DDP? If DDP, then isn't batch_size per process? Meaning, if one sets batch_size=4 in the DataLoader, isn't that 4 samples per process/GPU? How does this turn into "it can at most use 4 GPUs"?
I guess I have always been confused by the DDP statement "The batch size should be larger than the number of GPUs used locally," because we are setting the batch_size per process/GPU, not the batch_size for the entire set of GPUs in aggregate. Or does "batch size" have two different meanings?
Thanks for any help! |
st176061 | Hi,
I would like to start 2 processes on my computer with 2 GPUs. spawn function is used to start 2 processes.
Question 1:
how to specify rank number for each process when I use spawn function to start main_worker?
Question 2:
how to specify/check local_rank of each process in main_worker?
Question 3:
world_size means total number of GPUs used in the processes.
rank is the index of each processes/process number.
local_rank means the GPU index used in the process(rank).
is my understanding correct?
My test code is pasted as follows.
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.optim as optim

def main_worker(proc, nprocs, args):
    # proc is the rank that mp.spawn passes in as the first argument
    dist.init_process_group(backend='nccl', init_method='tcp://127.0.0.1:23456', world_size=2, rank=proc)
    torch.cuda.set_device(args.local_rank)
    train_dataset = ...
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=..., sampler=train_sampler)
    model = ...
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])
    optimizer = optim.SGD(model.parameters())
    for epoch in range(100):
        for batch_idx, (images, target) in enumerate(train_loader):
            images = images.cuda(non_blocking=True)
            target = target.cuda(non_blocking=True)
            ...
            output = model(images)
            loss = criterion(output, target)
            ...
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

mp.spawn(main_worker, nprocs=2, args=(2, myargs)) |
st176062 | Ardeal:
how to specify rank number for each process when I use spawn function to start main_worker?
The method you start with mp.spawn must take in as its first argument a rank parameter (proc) in your example, which will be the rank of the process. Ranks are assigned in order of the processes starting in each worker.
Ardeal:
how to specify/check local_rank of each process in main_worker?
After initializing the process group (as you have with init_process_group), you can use the dist.get_rank() API to get the global rank of the process in the world. To get the local rank, assuming a homogenous set up, mod the result of dist.get_rank() by the number of GPUs on the machine.
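Put together, a minimal sketch (assuming a homogeneous setup where every node has the same number of GPUs):
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl", init_method="env://")
world_size = dist.get_world_size()                     # total processes across all nodes
rank = dist.get_rank()                                 # global index of this process
local_rank = rank % torch.cuda.device_count()          # index within this node
torch.cuda.set_device(local_rank)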
Ardeal:
Question 3:
world_size means total number of GPUs used in the processes.
rank is the index of each processes/process number.
local_rank means the GPU index used in the process(rank).
is my understanding correct?
Your understanding of rank and local_rank is correct, though note that local_rank is not necessarily the GPU index. You can assign GPUs to ranks non-sequentially using torch.cuda.set_device() API, for example rank 0 could operate on GPU 1 and rank 1 could operate on GPU 0. Regarding world-size, world_size = total no. of active processes across all nodes. Usually this will be equal to the no. of GPUs available |
st176063 | @rvarm1 ,
Many thanks for your reply!
one more question:
what is global_rank? and how to get/set global_rank in code? |
st176064 | @rvarm1 ,
I am trying to run torch code on vscode on Ubuntu. The code is running on Ubuntu.
I am experiencing the following issue:
Exception has occurred: ProcessRaisedException
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/smb/code_python/pytorch-image-models/train.py", line 523, in main
torch.distributed.init_process_group(backend='nccl', init_method='env://')
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 520, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/rendezvous.py", line 199, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
RuntimeError: Address already in use
File "/home/smb/code_python/pytorch-image-models/train.py", line 1089, in <module>
mp.spawn(main, nprocs=2, args=(2, [333,444,555]))
I set the env like this:
"env": {
"CUDA_VISIBLE_DEVICES": "6,7",
"WORLD_SIZE": "2",
"RANK": "0",
"MASTER_ADDR": "127.0.0.1",
"MASTER_PORT": "44147",
}
mp.spawn(main, nprocs=2, args=(2, [333,444,555]))
args = parser.parse_args()
def main(process_index, process_cnt:int, args_spawn:list=[]):
args.device = 'cuda:%d' % args.local_rank
torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(backend='nccl', init_method='env://')
args.world_size = torch.distributed.get_world_size()
args.rank = torch.distributed.get_rank() |
st176065 | Hi there.
I am playing with ImageNet training in Pytorch following official examples 6. To log things in DDP training, I write a function get_logger:
import logging
import os
import sys
class NoOp:
def __getattr__(self, *args):
def no_op(*args, **kwargs):
"""Accept every signature by doing non-operation."""
pass
return no_op
def get_logger(log_dir, log_name=None, resume="", is_rank0=True):
"""Get the program logger.
Args:
log_dir (str): The directory to save the log file.
log_name (str, optional): The log filename. If None, it will use the main
filename with ``.log`` extension. Default is None.
resume (str): If False, open the log file in writing and reading mode.
Else, open the log file in appending and reading mode; Default is "".
is_rank0 (boolean): If True, create the normal logger; If False, create the null
logger, which is useful in DDP training. Default is True.
"""
if is_rank0:
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.INFO)
# StreamHandler
stream_handler = logging.StreamHandler(sys.stdout)
stream_handler.setLevel(level=logging.INFO)
logger.addHandler(stream_handler)
# FileHandler
mode = "w+" if resume == "False" else "a+"
if log_name is None:
log_name = os.path.basename(sys.argv[0]).split(".")[0] + (".log")
file_handler = logging.FileHandler(os.path.join(log_dir, log_name), mode=mode)
file_handler.setLevel(level=logging.INFO)
logger.addHandler(file_handler)
else:
logger = NoOp()
return logger
The logger should log to a file and print to stdout only in rank0. This works with torch.__version__<=1.6.0:
Use GPU: 0 for training
=> creating model 'resnet50'
Epoch: [0][ 0/153] Time 3.652 ( 3.652) Data 1.754 ( 1.754) Loss 7.1175e+00 (7.1175e+00) Acc@1 0.00 ( 0.00) Acc@5 0.00 ( 0.00)
Epoch: [0][ 10/153] Time 0.339 ( 0.632) Data 0.000 ( 0.160) Loss 8.2664e+00 (1.3576e+01) Acc@1 2.35 ( 2.67) Acc@5 11.76 ( 15.29)
Epoch: [0][ 20/153] Time 0.340 ( 0.493) Data 0.000 ( 0.089) Loss 5.9911e+00 (1.0709e+01) Acc@1 10.59 ( 3.75) Acc@5 21.18 ( 17.03)
Epoch: [0][ 30/153] Time 0.343 ( 0.444) Data 0.000 ( 0.064) Loss 4.9582e+00 (9.2672e+00) Acc@1 0.00 ( 3.49) Acc@5 16.47 ( 16.96)
Epoch: [0][ 40/153] Time 0.340 ( 0.419) Data 0.000 ( 0.051) Loss 4.5358e+00 (8.3598e+00) Acc@1 5.88 ( 3.64) Acc@5 15.29 ( 16.96)
Epoch: [0][ 50/153] Time 0.342 ( 0.404) Data 0.000 ( 0.043) Loss 3.4166e+00 (7.5119e+00) Acc@1 7.06 ( 3.62) Acc@5 22.35 ( 17.02)
However, when torch.__version__>1.6.0, a repeated INFO line appears in stdout but does not appear in the log file:
Use GPU: 0 for training
=> creating model 'resnet50'
Epoch: [0][ 0/153] Time 3.884 ( 3.884) Data 1.946 ( 1.946) Loss 7.0728e+00 (7.0728e+00) Acc@1 0.00 ( 0.00) Acc@5 0.00 ( 0.00)
Epoch: [0][ 10/153] Time 0.338 ( 0.655) Data 0.000 ( 0.177) Loss 1.1332e+01 (1.1690e+01) Acc@1 3.53 ( 4.06) Acc@5 12.94 ( 15.08)
INFO:log:Epoch: [0][ 10/153] Time 0.338 ( 0.655) Data 0.000 ( 0.177) Loss 1.1332e+01 (1.1690e+01) Acc@1 3.53 ( 4.06) Acc@5 12.94 ( 15.08)
Epoch: [0][ 20/153] Time 0.340 ( 0.505) Data 0.000 ( 0.098) Loss 7.3043e+00 (9.7744e+00) Acc@1 10.59 ( 3.98) Acc@5 25.88 ( 16.75)
INFO:log:Epoch: [0][ 20/153] Time 0.340 ( 0.505) Data 0.000 ( 0.098) Loss 7.3043e+00 (9.7744e+00) Acc@1 10.59 ( 3.98) Acc@5 25.88 ( 16.75)
Epoch: [0][ 30/153] Time 0.341 ( 0.452) Data 0.000 ( 0.069) Loss 5.5403e+00 (8.6561e+00) Acc@1 4.71 ( 3.72) Acc@5 15.29 ( 16.81)
INFO:log:Epoch: [0][ 30/153] Time 0.341 ( 0.452) Data 0.000 ( 0.069) Loss 5.5403e+00 (8.6561e+00) Acc@1 4.71 ( 3.72) Acc@5 15.29 ( 16.81)
Epoch: [0][ 40/153] Time 0.341 ( 0.424) Data 0.000 ( 0.055) Loss 4.7685e+00 (7.6899e+00) Acc@1 4.71 ( 3.85) Acc@5 22.35 ( 17.27)
INFO:log:Epoch: [0][ 40/153] Time 0.341 ( 0.424) Data 0.000 ( 0.055) Loss 4.7685e+00 (7.6899e+00) Acc@1 4.71 ( 3.85) Acc@5 22.35 ( 17.27)
Epoch: [0][ 50/153] Time 0.340 ( 0.408) Data 0.000 ( 0.046) Loss 3.9618e+00 (6.9825e+00) Acc@1 0.00 ( 3.46) Acc@5 18.82 ( 17.07)
INFO:log:Epoch: [0][ 50/153] Time 0.340 ( 0.408) Data 0.000 ( 0.046) Loss 3.9618e+00 (6.9825e+00) Acc@1 0.00 ( 3.46) Acc@5 18.82 ( 17.07)
The log file:
[screenshot of the log file]
I provided a working example in my gist 2. |
st176066 | Solved by ibro45 in post #4
similar to this? Distributed 1.8.0 logging twice in a single process, same code works properly in 1.7.0 - #7 by ibro45
PyTorch sets up the loggers somewhere; rebuilding the log handlers as mentioned solves the problem. Personally, I went for loguru as it's even easier to do that with it. |
st176067 | similar to this? Distributed 1.8.0 logging twice in a single process, same code works properly in 1.7.0 - #7 by ibro45 2
PyTorch sets up the loggers somewhere; rebuilding the log handlers as mentioned solves the problem. Personally, I went for loguru 5 as it's even easier to do that with it.
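For anyone hitting the same thing, a hedged sketch of that workaround inside get_logger(): either stop records from propagating to the root logger (which newer torch versions appear to configure), or rebuild the root logger's handlers before attaching your own.
import logging

logger = logging.getLogger(__name__)
logger.propagate = False                  # keep records away from the root logger's handlers
# ...or, more aggressively, drop whatever handlers were already installed on the root logger:
for h in logging.root.handlers[:]:
    logging.root.removeHandler(h) |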