st175268 | Solved by lupus83 in post #10
Yeah, that’s it! Thanks a lot for checking it out.
I experimented with different settings of cudnn and found that calling
torch.backends.cudnn.deterministic = True
was sufficient to solve the issue.
Some additional info with respect to runtime per batch for future readers (ii and iii solve the i… |
st175269 | I consider them far apart (weights differ up to the second decimal after a couple of iterations). |
st175270 | You don’t have dropout in your models, right?
This could also be due to numerical precision and nondeterminism; it's hard to tell with the information at hand. One indication of this would be if you cannot pinpoint where they differ. Otherwise you could compare after the first iteration and find which forward activations or gradients differ. |
st175271 | Thanks for your reply!!
There’s no dropout in my models. However, I’ve re-run the code with a model consisting of a single linear layer. Surprisingly, it works.
My model consists of conv, batch/instance norm, ReLU, AdaptiveAveragePooling, MaxPooling and linear layers, including skip connections. It’s essentially a ResNet.
Again, I really appreciate your feedback!
Edit:
Just noticed that the gradients of the input layer are different right from the first iteration. The maximum difference between the gradients of that layer is 8.5e-5. |
st175272 | That likely is numerical precision. You could try to use double (experimentally) and see if the difference gets smaller. |
st175273 | Surprisingly, using double precision for models and inputs results in a bigger maximum difference in gradients: 1.3e-4. |
st175274 | So I came up with this minimal example, consisting of just a Conv Layer and a Linear layer. The code exits after the first iteration.
import torch
import torchvision
import torch.nn as nn
import copy

train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST("./", train=True, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=4096, shuffle=True)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(1, 8, 3, stride=1, padding=1)
        self.head = nn.Linear(6272, 10)

    def forward(self, image):
        out = self.encoder(image)
        out = out.flatten(start_dim=1)
        print(out.shape)
        return self.head(out)

torch.manual_seed(42)
model1 = Net()
torch.manual_seed(42)
model2 = Net()
assert all([torch.equal(x[1], y[1]) for x, y in zip(model1.state_dict().items(), model2.state_dict().items())])

optim1 = torch.optim.AdamW(model1.parameters())
optim2 = torch.optim.AdamW(model2.parameters())
loss1 = nn.MSELoss()
loss2 = nn.MSELoss()

dev1 = torch.device("cuda:0")
dev2 = torch.device("cuda:1")
cpu = torch.device("cpu")
model1 = model1.to(dev1)
model2 = model2.to(dev2)
model1.train()
model2.train()

for i, (images, targets) in enumerate(train_loader):
    batch1 = copy.deepcopy(images).to(dev1)
    batch2 = copy.deepcopy(images).to(dev2)
    t1 = copy.deepcopy(targets).to(dev1)
    t2 = copy.deepcopy(targets).to(dev2)
    t1 = nn.functional.one_hot(t1, num_classes=10).float()
    t2 = nn.functional.one_hot(t2, num_classes=10).float()

    optim1.zero_grad()
    result1 = model1.forward(batch1)
    l1 = loss1(result1, t1)
    l1.backward()
    optim1.step()

    optim2.zero_grad()
    result2 = model2.forward(batch2)
    l2 = loss2(result2, t2)
    l2.backward()
    optim2.step()

    if not (model1.to(cpu).encoder.weight.grad == model2.to(cpu).encoder.weight.grad).all():
        print(f"Nope nope nope - Chuck Testa!\n @Iteration {i}")
        break
    else:
        model1 = model1.to(dev1)
        model2 = model2.to(dev2)
        model1.train()
        model2.train() |
st175275 | Yeah, but I get (running both nets on the same device) an error that is 1e-8ish, which seems to be within numerical precision.
When disabling cudnn, the error goes to 0, but I don't know exactly what it is. |
st175276 | Yeah, that’s it! Thanks a lot for checking it out.
I experimented with different settings of cudnn and found that calling
torch.backends.cudnn.deterministic = True
was sufficient to solve the issue.
Some additional info with respect to runtime per batch for future readers (ii and iii solve the issue):
i: default settings (i.e. non-deterministic)
------> 0.51s
ii: torch.backends.cudnn.enabled = False
------> 0.14s
iii: torch.backends.cudnn.deterministic=True
------> 0.002s
Note the speed-up for this model. Reasons for the speed-up of deterministic algorithms were discussed here 2. |
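For future readers, the usual reproducibility switches tend to go together; a minimal sketch (only cudnn.deterministic was needed in this thread, the rest is optional and may cost performance or raise errors for ops without deterministic kernels):

import torch

torch.manual_seed(42)                       # seed the CPU and CUDA RNGs
torch.backends.cudnn.deterministic = True   # the flag that fixed this thread
torch.backends.cudnn.benchmark = False      # disable cudnn autotuning
# torch.use_deterministic_algorithms(True)  # strictest option, PyTorch >= 1.8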
st175277 | Good day to all of you. I am pretty new to parallel training and wish to train my model on distributed TPUs. (I am not sure if this is the right place to ask, so please redirect me if I am wrong.)
My code is basically from a standard tutorial with slight changes to use a custom dataset. The code works well on a single GPU, say in Colab. However, when using TPUs it is able to go through the first step in the training loop but will deadlock when getting outputs from the model in the second step.
At first I thought it would be the data sampler part, since my dataset is imbalanced and I have been using DistributedSamplerWrapper from Catalyst. However, switching to PyTorch's DistributedSampler does not yield any difference.
I also thought maybe the batch size is too large, so I tried different settings from 64 down to, say, 8, but it is not working…
Data Loader part
## Dataloader ##
class TweetsData(Dataset):
    def __init__(self, dataframe, tokenizer, max_len):
        self.tokenizer = tokenizer
        self.data = dataframe
        self.sentence = dataframe.sentence
        self.targets = self.data.label
        self.max_len = max_len

    def __len__(self):
        return len(self.sentence)

    def __getitem__(self, index):
        sentence = str(self.sentence[index])
        sentence = " ".join(sentence.split())
        inputs = self.tokenizer.encode_plus(
            sentence,
            # Pad to max_length such that tensor can stack in each batches
            padding="max_length",
            truncation=True,
            max_length=self.max_len,
            pad_to_max_length=True
            #return_token_type_ids=True
        )
        ids = inputs['input_ids']
        mask = inputs['attention_mask']
        token_type_ids = inputs["token_type_ids"]
        return {
            'ids': torch.tensor(ids, dtype=torch.long),
            'mask': torch.tensor(mask, dtype=torch.long),
            'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),
            'targets': torch.tensor(self.targets[index], dtype=torch.float)
        }
Average results using a function:
## Define how loss is averaged out of the 8 TPUs
def reduce_fn(vals):
    # take average
    return sum(vals) / len(vals)
The training loop (I printed in every step to see which part is stuck):
# Define training loop function
def train_loop_fn(data_loader, model, optimizer, device, scheduler=None):
    tracker = xm.RateTracker()
    model.train()  # Put model to training mode
    for bi, data in enumerate(data_loader):
        print("Start")
        start_time = time.time()
        print("Extract data")
        # Extract data
        ids = data['ids'].to(device, dtype=torch.long)
        mask = data['mask'].to(device, dtype=torch.long)
        token_type_ids = data['token_type_ids'].to(device, dtype=torch.long)
        targets = data['targets'].to(device, dtype=torch.long)
        # Reset the gradient
        print("Zero Grad")
        optimizer.zero_grad()
        # Pass ids, mask, token_type_ids to model
        print("Model")
        outputs = model(ids, mask, token_type_ids)
        # Create loss function (Cross Entropy loss for multi-label classification) and optimizer (using Adam optimizer)
        print("Loss")
        loss_fn = torch.nn.CrossEntropyLoss()
        loss = loss_fn(outputs, targets)
        # Backprop
        print("Backward")
        loss.backward()
        # Use PyTorch XLA optimizer stepping
        print("Step Optimizer")
        xm.optimizer_step(optimizer)
        # Print every 20 steps
        # if bi % 20 == 0:
        # since the loss is on all 8 cores, reduce the loss values and print the average (as defined in reduce_fn)
        print('[xla:{}]({}) Loss={:.5f} Rate={:.2f} GlobalRate={:.2f} Time={}'.format(
            xm.get_ordinal(), bi, loss.item(), tracker.rate(),
            tracker.global_rate(), time.asctime()), flush=True)
        if scheduler is not None:
            scheduler.step()
        end_time = time.time()
        print(f"Time for steps {bi}: {end_time - start_time}")
    # Set model to evaluation mode
    model.eval()
The map_fn function (I only post the class that is related to the problem):
## https://www.kaggle.com/tanlikesmath/the-ultimate-pytorch-tpu-tutorial-jigsaw-xlm-r
def map_fn(index, flags):
    torch.set_default_tensor_type('torch.FloatTensor')
    # Sets a common random seed - both for initialization and ensuring graph is the same
    torch.manual_seed(TORCH_SEED)
    # Acquires the (unique) Cloud TPU core corresponding to this process's index
    device = xm.xla_device()

    # Use one instance to download datasets
    if not xm.is_master_ordinal():
        xm.rendezvous('download_only_once')
    train_dataset = pd.read_csv(root_path + "train_set.csv")
    val_dataset = pd.read_csv(root_path + "dev_set.csv")
    if not xm.is_master_ordinal():
        xm.rendezvous('download_only_once')

    tokenizer = tfm.AutoTokenizer.from_pretrained(root_path + "BERTweet_uncased", use_fast=False, return_tensors='pt')

    # Custom dataloader __init__, __len__, __getitem__ #
    train_set = TweetsData(train_dataset, tokenizer, MAX_LEN)
    val_set = TweetsData(val_dataset, tokenizer, MAX_LEN)

    # Training dataset loader #
    # Wrap our class-imbalance sampler with DistributedSamplerWrapper
    # train_sampler = DistributedSamplerWrapper(
    #     sampler=BalanceClassSampler(labels=train_dataset.label.values, mode='upsampling'),
    #     num_replicas=xm.xrt_world_size(),
    #     rank=xm.get_ordinal(),
    #     shuffle=True
    # )
    train_sampler = DistributedSampler(
        dataset=train_dataset,
        num_replicas=xm.xrt_world_size(),
        rank=xm.get_ordinal(),
        shuffle=True
    )
    train_loader = DataLoader(train_set,
                              batch_size=TRAIN_BATCH_SIZE,
                              sampler=train_sampler,
                              num_workers=NUM_WORKERS_DATA,
                              drop_last=True)

    # Validation dataset loader #
    # val_sampler = DistributedSamplerWrapper(
    #     sampler=BalanceClassSampler(labels=val_dataset.label.values, mode='upsampling'),
    #     num_replicas=xm.xrt_world_size(),
    #     rank=xm.get_ordinal(),
    #     shuffle=True
    # )
    val_sampler = DistributedSampler(
        dataset=val_dataset,
        num_replicas=xm.xrt_world_size(),
        rank=xm.get_ordinal(),
        shuffle=True
    )
    val_loader = DataLoader(val_set,
                            batch_size=VALID_BATCH_SIZE,
                            sampler=val_sampler,
                            num_workers=NUM_WORKERS_DATA,
                            drop_last=True)

    # Push our neural network to TPU
    model = bertweetClass()
    model.to(device)

    # Don't decay normalization layers
    param_optimizer = list(model.named_parameters())  # model parameters to optimize
    no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
    # apply weight decay
    optimizer_grouped_parameters = [
        {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.001},
        {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}]

    # Create loss function (Cross Entropy loss for multi-label classification) and optimizer (using Adam optimizer)
    optimizer = AdamW(params=optimizer_grouped_parameters, lr=LEARNING_RATE * xm.xrt_world_size())
    # Create number of training steps
    num_train_steps = int(len(train_dataset) / TRAIN_BATCH_SIZE / xm.xrt_world_size() * EPOCHS)
    # Scheduler for optimizer for learning rate decay
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=0,
        num_training_steps=num_train_steps
    )

    xm.master_print(f"Train for {len(train_dataset)} steps per epoch")
    xm.master_print(f'num_training_steps = {num_train_steps}, world_size={xm.xrt_world_size()}')

    for epoch in range(EPOCHS):
        gc.collect()
        xm.master_print(f"Starting training in epoch: {epoch}")

        ## Training Part ##
        xm.master_print("Entering training loop")
        para_train_loader = pl.ParallelLoader(train_loader, [device]).per_device_loader(device)
        gc.collect()
        # Call Training Loop
        train_loop_fn(para_train_loader, model, optimizer, device, scheduler=scheduler)
        del para_train_loader
        gc.collect()

        ## Evaluation Part ##
        para_eval_loader = pl.ParallelLoader(val_loader, [device]).per_device_loader(device)
        xm.master_print("Entering validation loop")
        # Call Evaluation Loop
        model_label, target_label = eval_loop_fn(para_eval_loader, model, device)
        del para_eval_loader
        gc.collect()

        ## Evaluation metrics ##
        ## Reporting Matthews correlation coefficient ##
        epoch_mcc = matthews_corrcoef(target_label, model_label, sample_weight=None)
        epoch_mcc = xm.mesh_reduce("mcc", epoch_mcc, reduce_fn)
        xm.master_print(f"Matthews coefficient at epoch {epoch}: {epoch_mcc}")
        epoch_f1 = f1_score(target_label, model_label, sample_weight=None)
        epoch_f1 = xm.mesh_reduce("f1", epoch_f1, reduce_fn)
        xm.master_print(f"F1 score at epoch {epoch}: {epoch_f1}")
Lastly, spawn the instances with these parameters:
## Define key variables to be used in training
NUM_LABELS = 3
MAX_LEN = 128
TRAIN_BATCH_SIZE = 32
VALID_BATCH_SIZE = 32
EPOCHS = 1
LEARNING_RATE = 3e-05
NUM_WORKERS_DATA = 2
TORCH_SEED = 1234
flags = {}
xmp.spawn(map_fn, args=(flags,), nprocs=8, start_method='fork')
Here is the interpreter result from running xmp.spawn:
Train for 12638343 steps per epoch
num_training_steps = 789896, world_size=8
Starting training in epoch: 0
Entering training loop
Start
Extract data
Zero Grad
Model
Loss
Backward
Step Optimizer
xla:0 Loss=1.03125 Rate=0.00 GlobalRate=0.00 Time=Fri May 7 12:56:08 2021
Time for steps 0: 8.53129506111145
Start
Extract data
Zero Grad
Model
It will get stuck at getting output from the model in the second step, seemingly forever…
Could it be that I am not using the nightly release of the PyTorch XLA package?
I encounter the same problem as in this thread:
https://stackoverflow.com/questions/67257008/oserror-libmkl-intel-lp64-so-1-cannot-open-shared-object-file-no-such-file-or
and a bug report here:
https://github.com/pytorch/xla/issues/2933
Currently I am using:
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8.1-cp37-cp37m-linux_x86_64.whl
Sorry for the long code, but help is much appreciated!
Thanks all |
st175278 | Hi, are you initializing process groups (using init_process_group) and using DDP somewhere in your code? DistributedSampler is not intended for use outside of a distributed/DDP setting. Does the training loop run without hanging when the distributed components are removed?
cc @ailzhang for XLA/TPU question |
st175279 | Hi Rohan, many thanks for your reply! I did not call init_process_group; apparently this is for parallelization on CPU/GPU only? In fact, I have not seen it in any XLA tutorial, like:
https://www.kaggle.com/tanlikesmath/the-ultimate-pytorch-tpu-tutorial-jigsaw-xlm-r
Regarding the sampler: in a non-distributed setting I do not use DDP but a normal sampler like WeightedRandomSampler, and it works flawlessly.
Thanks! |
st175280 | I had exactly the same problem. The training loop just gets stuck after the first step, not loading new data. I could see that memory kept increasing until it leaked. I am using pytorch-xla 1.9.
@gabrielwong1991 Have you resolved this problem? |
st175281 | Hi bryan, I can’t exactly remember what I did but you might want to check out how you define your dataset class.
For me, the task was training a Hugging Face BERT model, and instead of defining a dataset class I just used their datasets library. |
st175282 | Thank you for your reply. I just found my problem. It was about the loss backward part rather than dataset loading. It may be because of my model itself. I am still investigating it and have just opened a thread 12. |
st175283 | I am using distributed data parallel to train the model on multiple GPUs. I have one problem: I have used register_buffer to define a parameter. In addition, I need to manually update it. How could I achieve this? I tried to do the same update as when the model is trained on one GPU, but the results are not correct. It seems that the value of this parameter is not synchronized across GPUs. Thanks a lot |
st175284 | If this is a model parameter, any reason for using register_buffer instead of register_parameter?
In addition, I need to manually update it. How could I achieve this?
If it is a parameter (not a buffer) and if you don't expect the autograd engine to compute gradients for you, you can set its .requires_grad field to False before passing the model to the DDP ctor. Then, DDP won't sync its grads and the optimizer won't update the parameter value.
I tried to do the same update as the model is trained on one gpu, but the results are not correct. It seems that the value of this parameter is not synchronized over gpus.
I might miss something. Looks like you want to manually update a parameter, but still want DDP to help synchronize the parameter across the GPUs/processes? I don’t fully understand the use case, could you please elaborate this? If this parameter is manually updated, would it be possible to let all processes to set it to the same value? |
st175285 | "I might miss something. Looks like you want to manually update a parameter, but still want DDP to help synchronize the parameter across the GPUs/processes"’
Yes. This is what I would like to achieve.
For example
class model(nn.Module):
    def __init__(self):
        super().__init__()
        a = torch.zeros((3, 1))
        self.register_buffer("a", a)

    def update_a(self, b):
        self.a.add_(b)
b is a vector that is dynamic with respect to the input data. So I could not manually set it to the same value.
mrshenli:
If this parameter is manually updated, would it be possible to let all processes to set it to the same value?
Let me know whether I describe the problem clearly? Thanks a lot for your help. |
st175286 | Hi @jetcai1900, have you solved the problem? I met the same problem here; I hope you can give me some advice. Thanks |
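A minimal sketch of one way to keep such a manually updated buffer consistent across DDP ranks: all-reduce the per-rank update before applying it, so every process applies the same value (this is a sketch, assuming the buffer should accumulate the contributions of all ranks and that b lives on the same device as the buffer). Also note that DDP with the default broadcast_buffers=True re-broadcasts buffers from rank 0 at every forward pass, which silently overwrites local changes on the other ranks.

import torch.distributed as dist

def update_a(ddp_model, b):
    b = b.detach().clone()
    # sum the per-rank contributions so that all ranks apply the same update
    dist.all_reduce(b, op=dist.ReduceOp.SUM)
    ddp_model.module.a.add_(b)  # .module because the model is wrapped in DDP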
st175287 | Hi everyone! I am trying to write a custom handle reader in PyTorch based on this topic.
I followed the topic using torch.multiprocessing.reductions.rebuild_cuda_tensor and it works OK, but if I stop reading this handle, GPU memory is leaked.
Can someone explain to me how this works and how to overwrite the handle to solve the memory leak?
And second: can I read rebuild_cuda_tensor from multiple processes?
PS. I use independent processes and cannot pass a Queue here. |
st175288 | Hey! I am trying to implement DDP + MDP and followed this: Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.10.0+cu102 documentation 2
I'm having issues with the demo code itself, so any help is welcome! I'm trying to run it on a p2.8xlarge GPU instance on AWS with PyTorch 1.9.0.
Here’s the error that I’m getting:
ip-172-31-136-108:121:121 [0] NCCL INFO Bootstrap : Using [0]ecs-eth0:169.254.172.12<0> [1]eth0:172.31.136.108<0>
ip-172-31-136-108:121:121 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ip-172-31-136-108:121:121 [0] NCCL INFO NET/IB : No device found.
ip-172-31-136-108:121:121 [0] NCCL INFO NET/Socket : Using [0]ecs-eth0:169.254.172.12<0> [1]eth0:172.31.136.108<0>
ip-172-31-136-108:121:121 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.2
net2
ip-172-31-136-108:122:122 [0] NCCL INFO Bootstrap : Using [0]ecs-eth0:169.254.172.12<0> [1]eth0:172.31.136.108<0>
ip-172-31-136-108:122:122 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ip-172-31-136-108:122:122 [0] NCCL INFO NET/IB : No device found.
ip-172-31-136-108:122:122 [0] NCCL INFO NET/Socket : Using [0]ecs-eth0:169.254.172.12<0> [1]eth0:172.31.136.108<0>
ip-172-31-136-108:122:122 [0] NCCL INFO Using network Socket
ip-172-31-136-108:122:202 [0] init.cc:573 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 170
ip-172-31-136-108:121:201 [0] init.cc:573 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 170
ip-172-31-136-108:122:202 [0] NCCL INFO init.cc:840 -> 5
ip-172-31-136-108:121:201 [0] NCCL INFO init.cc:840 -> 5
ip-172-31-136-108:121:201 [0] NCCL INFO group.cc:73 -> 5 [Async thread]
ip-172-31-136-108:122:202 [0] NCCL INFO group.cc:73 -> 5 [Async thread]
Traceback (most recent call last):
File "relnet/tools/test_mdp.py", line 78, in <module>
run_demo(demo_model_parallel, world_size)
File "relnet/tools/test_mdp.py", line 72, in run_demo
mp.spawn(demo_fn, args=(world_size,), nprocs=world_size, join=True)
File "/opt/pensa-recognition/pensa-cognition/relnet/.venv/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/opt/pensa-recognition/pensa-cognition/relnet/.venv/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/opt/pensa-recognition/pensa-cognition/relnet/.venv/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/opt/pensa-recognition/pensa-cognition/relnet/.venv/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/opt/pensa-recognition/pensa-cognition/relnet/relnet/tools/test_mdp.py", line 51, in demo_model_parallel
ddp_mp_model = DDP(mp_model)
File "/opt/pensa-recognition/pensa-cognition/relnet/.venv/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, invalid usage, NCCL version 2.7.8
I'm only making GPUs 0 and 1 visible and running the demo code as is. What am I doing wrong?
Thanks in advance! |
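The NCCL warning above says that both ranks ended up on the same physical GPU. In the linked tutorial's model-parallel demo each rank is given its own pair of devices; a rough sketch of that pattern (the names setup and ToyMpModel follow the tutorial, and it assumes 2 GPUs per rank, so with only 2 visible GPUs this demo should be run with a world size of 1):

def demo_model_parallel(rank, world_size):
    setup(rank, world_size)            # init_process_group, as in the tutorial
    dev0 = rank * 2                    # each rank must own a distinct pair of GPUs
    dev1 = rank * 2 + 1
    mp_model = ToyMpModel(dev0, dev1)  # model split across dev0/dev1
    ddp_mp_model = DDP(mp_model)       # no device_ids for a multi-device module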
st175289 | I noticed that a post-processing step in my model takes more than twice the time of the actual NN part. This post-processing involves sorting the outputs in each sample (sample by sample in a batch of 32+).
The sorting package is provided as “torchsort” and is already optimized as a C++ torchscript.
I thus tried to multiprocess each of the sample sortings in a batch using torch.multiprocessing, but of course received the warning that [...] autograd does not support crossing process boundaries.
So I wonder if I can “fake” distributed training locally somehow with rpc?
Would that be a good choice to solve my speed problems?
Could it be done easily? |
st175290 | Do I understand correctly that DP will not average gradients of “replica” batches, while DDP will?
Is it always the case that gradient averaging is correct (assuming the loss averages over the “replica” batchsize)? If yes, is it the case because of the product operation in the chain rule?
Thanks! |
st175291 | Right, DDP averages gradients by default, as it assumes the batch size is the same for each rank. If the batch size is not the same for each rank, or the loss function does not expect averaged gradients, you can call register_comm_hook() on the DDP model and define your own communication strategy (sum, or other ops). |
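A sketch of such a hook that sums gradients instead of averaging them (assumes a PyTorch version where GradBucket.buffer() and Work.get_future() are available, roughly 1.9+):

import torch.distributed as dist

def allreduce_sum_hook(process_group, bucket):
    group = process_group if process_group is not None else dist.group.WORLD
    tensor = bucket.buffer()  # flattened gradients of this bucket
    fut = dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=group, async_op=True).get_future()
    # hand the summed (not averaged) gradients back to DDP
    return fut.then(lambda f: f.value()[0])

# usage: ddp_model.register_comm_hook(state=None, hook=allreduce_sum_hook)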
st175292 | I found an imbalance in GPU usage in my implementation.
def train(device, args):
    torch.distributed.init_process_group(backend='nccl', rank=device, world_size=torch.cuda.device_count())
    model_A = A()
    model_B = B()

    # Not gonna update A
    model_A.to(device)
    ckpt = torch.load(path)
    model_A.load_state_dict(ckpt['model_state_dict '])
    model_A = torch.nn.parallel.DistributedDataParallel(model_A, device_ids=[device])
    model_A.eval()

    # Gonna update B
    model_B.to(device)
    model_B = torch.nn.parallel.DistributedDataParallel(model_B, device_ids=[device])
    model_B.train()

if __name__ == '__main__':
    args = argparse()
    torch.multiprocessing.spawn(train, nprocs=torch.cuda.device_count(), args=(args, ))
I intend to load both models A and B on all GPUs; however,
GPU usage tells me that model_A is only allocated on GPU:0 and not on the other GPUs,
like below,
GPU 0: 7000MiB / 11019MiB (model A, B)
GPU 1: 4000MiB / 11019MiB (model B)
GPU 2: 4000MiB / 11019MiB (model B)
GPU 3: 4000MiB / 11019MiB (model B)
Please feel free to ask if anything is unclear. |
st175293 | Solved by thecho7 in post #3
I solved this issue by map_location='cpu'
Loading pretrained model
ckpt = torch.load(path)
automatically allocates the parameters to GPU:0.
I cannot understand why until now, but it is solved by using map_location while I load the model.
ckpt = torch.load(path, map_location='cpu')
model_A.load_… |
st175294 | It could be that all processes unintentionally created a CUDA context on the default GPU (cuda:0). To avoid this situation, can you try setting the CUDA_VISIBLE_DEVICES env var to a different device for each process, so that each process would only see one GPU? |
st175295 | I solved this issue with map_location='cpu'.
Loading pretrained model
ckpt = torch.load(path)
automatically allocates the parameters to GPU:0.
I still do not understand why, but it is solved by using map_location when I load the model.
ckpt = torch.load(path, map_location='cpu')
model_A.load_state_dict(ckpt['model_state_dict '])
model_A.to(device) |
st175296 | I still do not understand why, but it is solved by using map_location when I load the model.
I think this is because all processes are trying to load the model to cuda:0 by default if you don't set map_location or CUDA_VISIBLE_DEVICES. BTW, does directly setting map_location to device work for you? |
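For reference, the two options discussed here, assuming path and device as in the thread above. Without map_location, torch.load restores CUDA tensors onto the device they were saved from, which is why everything piled up on cuda:0.

ckpt = torch.load(path, map_location="cpu")             # load into host memory first
# ... or map the checkpoint straight onto this process's own GPU:
ckpt = torch.load(path, map_location=f"cuda:{device}")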
st175297 | What is the key difference between torch.dist.distributedparallel and horovod?
If my understanding is correct, torch.dist.distributedparallel works on a single node with one or more GPUs (it does not distribute workloads across GPUs on more than one node), whereas horovod can work with multi-node multi-GPU setups.
If my understanding is not correct, kindly explain when to use horovod and when to use torch.dist.distributedparallel?
Kindly share your thoughts? Thank you very much in advance!! |
st175298 | Solved by ptrblck in post #2
As given in the DDP docs, DistributedDataParallel is able to use multiple machines:
DistributedDataParallel (DDP) implements data parallelism at the module level which can run across multiple machines. Applications using DDP should spawn multiple processes and create a single DDP instance per proc… |
st175299 | As given in the DDP docs 7, DistributedDataParallel is able to use multiple machines:
DistributedDataParallel 1 (DDP) implements data parallelism at the module level which can run across multiple machines. Applications using DDP should spawn multiple processes and create a single DDP instance per process.
I’m not familiar with horovod and don’t know what the advantages might be.
PS: please don’t tag specific users, as it might discourage others to post better answers |
st175300 | One difference between PyTorch DDP and Horovod+PyTorch is that DDP overlaps backward computation with communication. In contrast, according to the following example, Horovod synchronizes models in the optimizer step(), which won't be able to overlap with backward computations. So, in theory, DDP should be faster.
https://horovod.readthedocs.io/en/stable/pytorch.html 45 |
st175301 | I don't think so. Horovod is able to create async communication functions for each parameter.grad's hook to synchronize gradients. That gives handles to the async functions; in optimizer.step(), they are synchronized, so the communication does overlap with the backward pass. |
st175302 | Hi,
I'm using torch.nn.DataParallel to do single-node data parallelism, and I'm wondering the following: how should the DataLoader batch be scaled?
I'm asking since I have code running fine with batch 16 on a T4 GPU, but getting CUDA OOM with batch 4*16 = 64 (and even with 48!) with torch.nn.DataParallel over 4x T4 GPUs. Is torch.nn.DataParallel doing anything weird with the memory, so that it has less memory available than N*1 GPU memory? Or is torch.nn.DataParallel already applying a scaling rule so that the dataloader batch is the per-GPU batch and not the SGD-level batch? (I don't think that's the case, as the doc says it "splits the input across the specified devices by chunking in the batch dimension".)
note: I know PyTorch recommends DDP even for single-node data parallelism, but honestly I'm not smart enough to figure out how to use all those torchrun/torch.distributed/launch.py tools, MPI, local_rank things and couldn't make DDP work after a week and 7 issues opened |
st175303 | nn.DataParallel will transfer the data to GPU0 first and scatter it later, so it easily causes OOM problems. You can refer to pytorch ddp 3 to understand how to leverage DDP if you can read Chinese. |
st175304 | ok so if I understand correctly, in nn.DataParallel, all the cluster data must be able to fit in one GPU? That does not make any sense right? Because if all the data fits in one GPU, people would not use data parallelism in the first place |
st175305 | The data would still be split in its batch dimension, and the forward activations, which are usually much larger than the input data, would thus be spread across all devices.
However, @techkang is right: the scatter/gather ops from the default device create a memory imbalance, which is why we recommend the usage of DDP (besides DDP also being faster). |
st175306 | Hi!
I have recently been using torch elastic with c10d and min_nodes=1. I have succeeded in joining the existing training from other nodes dynamically. The training process blocks for rendezvous and restarts from the latest checkpoint with a new remaining iteration number (because of the updated world size), as expected.
However, when I try to kill the process on the other node, the c10d node also fails and the training is terminated. The error log with NCCL info is attached as follows:
ip-10-0-0-204:31012:31048 [0] include/socket.h:416 NCCL WARN Net : Connection closed by remote peer
ip-10-0-0-204:31012:31048 [0] NCCL INFO transport/net_socket.cc:405 -> 2
ip-10-0-0-204:31012:31048 [0] NCCL INFO include/net.h:28 -> 2
ip-10-0-0-204:31012:31048 [0] NCCL INFO transport/net.cc:357 -> 2
ip-10-0-0-204:31012:31048 [0] NCCL INFO proxy.cc:198 -> 2 [Proxy Thread]
Traceback (most recent call last):
File "./main.py", line 603, in <module>
main()
File "./main.py", line 188, in main
train(train_loader, model, criterion, optimizer, epoch, device_id, print_freq)
File "./main.py", line 471, in train
loss.backward()
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/autograd/__init__.py", line 149, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: NCCL communicator was aborted on rank 1.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 31012) of binary: /home/ubuntu/anaconda3/envs/pytorch_1.9_p37/bin/python
ERROR:torch.distributed.elastic.agent.server.api:Error waiting on exit barrier. Elapsed: 4.040053606033325 seconds
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/agent/server/api.py", line 889, in _exit_barrier
barrier_timeout=self._exit_barrier_timeout,
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/utils/store.py", line 67, in barrier
synchronize(store, data, rank, world_size, key_prefix, barrier_timeout)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/utils/store.py", line 53, in synchronize
agent_data = get_all(store, key_prefix, world_size)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/utils/store.py", line 31, in get_all
data = store.get(f"{prefix}{idx}")
RuntimeError: Stop_waiting response is expected
Exception in thread RendezvousKeepAliveTimer_0:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/utils.py", line 255, in _run
ctx.function(*ctx.args, **ctx.kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1002, in _keep_alive_weak
self._keep_alive()
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 1012, in _keep_alive
self._op_executor.run(op, deadline)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 546, in run
has_set = self._state_holder.sync()
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/dynamic_rendezvous.py", line 376, in sync
get_response = self._backend.get_state()
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 63, in get_state
base64_state: bytes = self._call_store("get", self._key)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/rendezvous/c10d_rendezvous_backend.py", line 103, in _call_store
return getattr(self._store, store_op)(*args, **kwargs)
MemoryError: std::bad_alloc
WARNING:torch.distributed.elastic.rendezvous.dynamic_rendezvous:The node 'ip-10-0-0-204.us-west-2.compute.internal_30838_0' has failed to shutdown the rendezvous 'yzs123' due to an error of type RendezvousConnectionError.
/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py:367: UserWarning:
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
Child process 31012 (local_rank 1) FAILED (exitcode 1)
Error msg: Process failed with exitcode 1
Without writing an error file to <N/A>.
While this DOES NOT affect the correctness of your application,
no trace information about the error will be available for inspection.
Consider decorating your top level entrypoint function with
torch.distributed.elastic.multiprocessing.errors.record. Example:
from torch.distributed.elastic.multiprocessing.errors import record
@record
def trainer_main(args):
# do train
**********************************************************************
warnings.warn(_no_error_file_warning_msg(rank, failure))
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/run.py", line 702, in <module>
main()
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 361, in wrapper
return f(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/run.py", line 698, in main
run(args)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/run.py", line 692, in run
)(*cmd_args)
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/ubuntu/anaconda3/envs/pytorch_1.9_p37/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
***************************************
./main.py FAILED
=======================================
Root Cause:
[0]:
time: 2021-10-31_06:11:31
rank: 1 (local_rank: 0)
exitcode: 1 (pid: 31012)
error_file: <N/A>
msg: "Process failed with exitcode 1"
=======================================
Other Failures:
<NO_OTHER_FAILURES>
***************************************
I suppose that it is not the expected behavior. Any help based on this information? I am using pytorch 1.9.1 with python 3.7, installed from conda.
Training script in Ubuntu Pastebin, which comes from the docker image torchelastic/example:0.2.0 with minor modification.
Launch script: NCCL_DEBUG=INFO python -m torch.distributed.run --nnodes=1:4 --nproc_per_node=1 --rdzv_id=xxxx --rdzv_backend=c10d --rdzv_endpoint=10.0.0.204:29400 ./main.py --arch resnet18 --epochs 20 --batch-size 32 --dist-backend nccl …/…/data/tiny-imagenet-200
(the tiny imagenet dataset also holds a copy in the image) |
st175307 | Solved by Kiuk_Chung in post #8
I can confirm this is indeed a bug. Please track the progress of the fix: [torch/elastic] Scale down does not work correctly when agent is killed with SIGINT, SIGTERM · Issue #67742 · pytorch/pytorch · GitHub. The fix itself is quite simple. |
st175308 | Hmm, how do you kill the agent? Using Ctrl+C? If so, can you try to kill it by sending a SIGTERM instead? Might be related to [distributed elastic] How to tolerate agent failures with etcd rendezvous backend? · Issue #67616 · pytorch/pytorch · GitHub 3
Looks like in Python a SIGINT (sent when you press Ctrl+C in the terminal) produces a KeyboardInterrupt error, which results in the shutdown() method of rendezvous being called in the finally block. shutdown() should only get called on an orderly shutdown, not a "failure": it closes the rendezvous permanently, failing all other nodes with a RendezvousClosedException.
Elastic was built to handle real-life faults, which would send a SIGTERM or SIGKILL or in the worst case cause the node itself to crash and disappear, none of which produces a SIGINT.
st175309 | Hi, Kiuk!
Yes, I used Ctrl+C to kill the process, and the killed process receives the KeyboardInterrupt exception.
I have just tried sending SIGTERM to the main python process. However, the training process failed regardless of whether I used gloo/nccl as the dist backend or c10d/etcd as the rdzv backend.
The related log, consistent with the former one except for the kill signal, is attached: Ubuntu Pastebin
Thanks for your time and looking forward to further investigation and your demo video in the github issue! |
st175310 | Ref [distributed elastic] rendezvous brain split with etcd backend · Issue #67616 · pytorch/pytorch · GitHub
@Kiuk_Chung Seems that SIGTERM does not work either; we need to use SIGKILL. Can we remove the code pytorch/api.py at cd51d2a3ecc8ac579bee910f6bafe41a4c41ca80 · pytorch/pytorch · GitHub 2 to avoid shutting down when the agent receives SIGTERM? |
st175311 | I see, thanks for the investigation. That code was added in: [torchelastic] Improve process termination logic (#61602) · pytorch/pytorch@0c55f1b · GitHub 1
This only affects worker (not agent) exception handling, so I still don't understand why you suspect this is the reason scale-down is not working when a SIGTERM is sent to the agent |
st175312 | Oh, I misread the code. The signal handler is registered in the agent process. I have to talk to @aivanou, who added this piece of logic. I believe the termination handler was added to make sure there are no orphaned trainers when the agent gets signaled. Instead of removing the termination handler, we probably need to catch the SignalException in the main loop and avoid the finally block that shuts down rdzv |
st175313 | I can confirm this is indeed a bug. Please track the progress of the fix: [torch/elastic] Scale down does not work correctly when agent is killed with SIGINT, SIGTERM · Issue #67742 · pytorch/pytorch · GitHub 6. The fix itself is quite simple. |
st175314 | Hi, I’m using this doc to launch a DDP script examples/README.md at master · pytorch/examples · GitHub 2
my launch code is
python /home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/distributed/launch.py \
--nnodes 1 \
--node_rank=0 \
--nproc_per_node 4 \
train.py \
--gpu-count 4 \
--dataset . \
--cache tmp \
--height 604 \
--width 960 \
--checkpoint-dir . \
--batch 10 \
--workers 24 \
--log-freq 20 \
--prefetch 2 \
--bucket $bucket \
--eval-size 10 \
--iterations 20 \
--class-list a2d2_images/camera_lidar_semantic/class_list.json
However, when I print from each process, I see that local_rank is set to -1 in every process.
How do I get different, unique values in the local_rank argument? I thought launch.py was handling that? |
st175315 | Hi, I wasn’t able to repro your issue with torch-1.10.
Here’s the test script I tested with
# save this as test.py
import argparse
import os
import sys

from torch.distributed.elastic.multiprocessing.errors import record

def parse_args(argv):
    parser = argparse.ArgumentParser(description="test script")
    parser.add_argument("--local_rank", type=int)
    return parser.parse_args(argv)

@record
def main():
    args = parse_args(sys.argv[1:])
    print(f"local_rank={args.local_rank}")

if __name__ == "__main__":
    main()
Then run
$ python -m torch.distributed.launch --nnode=1 --node_rank=0 --nproc_per_node=4 test.py
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
local_rank=3
local_rank=2
local_rank=0
local_rank=1
The example seems out of date. Please follow the instructions here: torchrun (Elastic Launch) — PyTorch 1.10.0 documentation 5 |
st175316 | Thanks a lot, I'll try that!
So you're using python -m torch.distributed.launch?
Isn't the fresh guidance to use torchrun? |
st175317 | Kiuk_Chung:
from torch.distributed.elastic.multiprocessing.errors import record
What is that line doing? |
st175318 | That line is there to make sure the error trace info on the trainer (a different process from the agent process) can be propagated to the agent for error summary and reporting purposes. See: Error Propagation — PyTorch 1.10.0 documentation 6
If the @record is not there then no trace information will be logged in the error summary table. So you’ll have to dig through the logs for the exception stack trace. |
st175319 | Hi.
I’m trying to use DDP on two nodes, but the DDP creation hangs forever. The code is like this:
import torch
import torch.nn as nn
import torch.distributed as dist
import os
from torch.nn.parallel import DistributedDataParallel as DDP
import datetime
os.environ['MASTER_ADDR']='$myip'
os.environ['MASTER_PORT']='7777'
# os.environ['NCCL_BLOCKING_WAIT']='1'
os.environ['NCCL_ASYNC_ERROR_HANDLING']='1'
class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))
The following lines are different for each node:
dist.init_process_group(backend='nccl', timeout=datetime.timedelta(0, 10), world_size=2, rank=0) # rank=0 for $myip node, rank=1 for the other node
model = ToyModel().to(0)
ddp_model = DDP(model, device_ids=[0], output_device=0) # This is where it hangs.
One of the nodes would show this:
In [4]: model = ToyModel().to(0)
...: ddp_model = DDP(model, device_ids=[0], output_device=0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-4-7fbd4245ff44> in <module>
1 model = ToyModel().to(0)
----> 2 ddp_model = DDP(model, device_ids=[0], output_device=0)
~/bin/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/parallel/distributed.py in __init__(self, module, device_ids, output_device, dim, broadcast_buffers, process_group, bucket_cap_mb, find_unused_parameters, check_reduction, gradient_as_bucket_view)
576 parameters, expect_sparse_gradient = self._build_params_for_reducer()
577 # Verify model equivalence.
--> 578 dist._verify_model_across_ranks(self.process_group, parameters)
579 # Sync params and buffers. Ensures all DDP models start off at the same value.
580 self._sync_params_and_buffers(authoritative_rank=0)
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1634272172048/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:957, unhandled system error, NCCL version 21.0.3
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
Any advices? Thanks~ |
st175320 | Hey @Musoy_King, looks like NCCL broadcast crashed. Can you try if directly calling dist.broadcast 1 would fail too?
github.com
pytorch/pytorch/blob/c65f332da47eb9bc76aefc50122cae2630fff2cc/torch/nn/parallel/distributed.py#L622 2
module_states = []
for name, param in self.module.named_parameters():
    if name not in self.parameters_to_ignore:
        module_states.append(param.detach())

for name, buffer in self.module.named_buffers():
    if name not in self.parameters_to_ignore:
        module_states.append(buffer.detach())

if len(module_states) > 0:
    self._distributed_broadcast_coalesced(
        module_states, self.broadcast_bucket_size, authoritative_rank
    )

def _log_and_throw(self, err_type, err_msg):
    if self.logger is not None:
        self.logger.set_error_and_log(f"{str(err_type)}: {err_msg}")
    raise err_type(err_msg)

def _ddp_init_helper(
    self, parameters, expect_sparse_gradient, param_to_name_mapping
Also, looks like you are using ipython or notebook. Can you try to directly use python to run the script on the two nodes? |
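A minimal sketch of that check (not part of the original script): run it with plain python on both nodes, with MASTER_ADDR/MASTER_PORT exported as before and RANK set to 0 on the master node and 1 on the other one.

import os
import torch
import torch.distributed as dist

rank = int(os.environ["RANK"])
dist.init_process_group(backend="nccl", world_size=2, rank=rank)
t = torch.ones(1, device=0) * (rank + 1)
dist.broadcast(t, src=0)    # the same collective DDP uses to sync parameters
torch.cuda.synchronize(0)
print(f"rank {rank}: {t}")  # both ranks should end up with tensor([1.])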
st175321 | This doc 1 encourages using torchrun, but doesn't tell how to install it.
How to install and get started with torchrun? |
st175322 | Solved by cbalioglu in post #2
torchrun is part of PyTorch v1.10. If you are running an older version, python -m torch.distributed.run command serves the same purpose. |
st175323 | torchrun is part of PyTorch v1.10. If you are running an older version, python -m torch.distributed.run command serves the same purpose. |
st175324 | @cbalioglu shall I run python -m torch.distributed.run once for the whole cluster? or once per node? or once per GPU? also asked here How to map processes to GPU in DDP and how to launch the DDP cluster? 1 |
st175325 | Hi!
So I have a text file bigger than my RAM, and I would like to create a dataset in PyTorch that reads it line by line, so I don't have to load it all at once in memory. I found PyTorch's IterableDataset as a potential solution for my problem. It only works as expected when using 1 worker; if using more than one worker it will create duplicate records.
Having a testfile.txt containing:
0 - Dummy line
1 - Dummy line
2 - Dummy line
3 - Dummy line
4 - Dummy line
5 - Dummy line
6 - Dummy line
7 - Dummy line
8 - Dummy line
9 - Dummy line
Defining an IterableDataset:
from torch.utils.data import IterableDataset, DataLoader

class CustomIterableDatasetv1(IterableDataset):
    def __init__(self, filename):
        # Store the filename in object's memory
        self.filename = filename

    def preprocess(self, text):
        ### Do something with text here
        text_pp = text.lower().strip()
        ###
        return text_pp

    def line_mapper(self, line):
        # Splits the line into text and label and applies preprocessing to the text
        text, label = line.split('-')
        text = self.preprocess(text)
        return text, label

    def __iter__(self):
        # Create an iterator
        file_itr = open(self.filename)
        # Map each element using the line_mapper
        mapped_itr = map(self.line_mapper, file_itr)
        return mapped_itr
We can now test it:
base_dataset = CustomIterableDatasetv1("testfile.txt")
# Wrap it around a dataloader
dataloader = DataLoader(base_dataset, batch_size=1, num_workers=1)
for X, y in dataloader:
    print(X, y)
It outputs:
('0',) (' Dummy line\n',)
('1',) (' Dummy line\n',)
('2',) (' Dummy line\n',)
('3',) (' Dummy line\n',)
('4',) (' Dummy line\n',)
('5',) (' Dummy line\n',)
('6',) (' Dummy line\n',)
('7',) (' Dummy line\n',)
('8',) (' Dummy line\n',)
('9',) (' Dummy line',)
That is correct. But if I change the number of workers to 2, the output becomes
('0',) (' Dummy line\n',)
('0',) (' Dummy line\n',)
('1',) (' Dummy line\n',)
('1',) (' Dummy line\n',)
('2',) (' Dummy line\n',)
('2',) (' Dummy line\n',)
('3',) (' Dummy line\n',)
('3',) (' Dummy line\n',)
('4',) (' Dummy line\n',)
('4',) (' Dummy line\n',)
('5',) (' Dummy line\n',)
('5',) (' Dummy line\n',)
('6',) (' Dummy line\n',)
('6',) (' Dummy line\n',)
('7',) (' Dummy line\n',)
('7',) (' Dummy line\n',)
('8',) (' Dummy line\n',)
('8',) (' Dummy line\n',)
('9',) (' Dummy line',)
('9',) (' Dummy line',)
Which is incorrect, as it creates duplicates of each sample per worker in the data loader.
Is there a way to solve this issue with PyTorch, so that a dataloader can be created that does not load the whole file in memory and supports multiple workers? |
st175326 | Solved by jiwidi in post #3
Thanks for the reply!
Really good material you linked to, I think I have solved it. Can you double-check my logic? I tested it and works good so far.
I replace the dataset with this new definition:
class CustomIterableDatasetv1(IterableDataset):
def __init__(self, filename):
#Store … |
st175327 | The docs 7 explain this behavior and suggest to use the worker information:
When a subclass is used with DataLoader 3, each item in the dataset will be yielded from the DataLoader 3 iterator. When num_workers > 0, each worker process will have a different copy of the dataset object, so it is often desired to configure each copy independently to avoid having duplicate data returned from the workers. get_worker_info() 1, when called in a worker process, returns information about the worker. It can be used in either the dataset’s __iter__() method or the DataLoader 3 ‘s worker_init_fn option to modify each copy’s behavior. |
st175328 | Thanks for the reply!
Really good material you linked to; I think I have solved it. Can you double-check my logic? I tested it and it works well so far.
I replaced the dataset with this new definition:
import itertools
import torch
from torch.utils.data import IterableDataset

class CustomIterableDatasetv1(IterableDataset):
    def __init__(self, filename):
        # Store the filename in object's memory
        self.filename = filename

    def preprocess(self, text):
        ### Do something with text here
        text_pp = text.lower().strip()
        ###
        return text_pp

    def line_mapper(self, line):
        # Splits the line into text and label and applies preprocessing to the text
        text, label = line.split('-')
        text = self.preprocess(text)
        return text, label

    def __iter__(self):
        worker_total_num = torch.utils.data.get_worker_info().num_workers
        worker_id = torch.utils.data.get_worker_info().id
        # Create an iterator
        file_itr = open(self.filename)
        # Map each element using the line_mapper
        mapped_itr = map(self.line_mapper, file_itr)
        # Add multiworker functionality: each worker keeps every
        # worker_total_num-th sample, starting at its own worker_id
        mapped_itr = itertools.islice(mapped_itr, worker_id, None, worker_total_num)
        return mapped_itr
I make use of your suggestion and access get_worker_info() to know the total number of workers and the current worker. I return a sliced version of the iterator where each worker will only return the samples that correspond to it. Each worker will still iterate over the full dataset; it just won't return samples other workers are returning. |
st175329 | import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
def run(rank, size):
tensor1 = torch.ones(1, device=2*rank)
tensor2 = torch.ones(1, device=2*rank+1)
dist.all_reduce(tensor1, op=dist.ReduceOp.SUM)
dist.all_reduce(tensor2, op=dist.ReduceOp.SUM)
print('Rank ', rank, ' has data ', tensor1[0], tensor2[0])
def init_process(rank, size, fn, backend='nccl'):
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29501'
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
if __name__ == "__main__":
size = 2
processes = []
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run))
p.start()
processes.append(p)
for p in processes:
p.join()
I have a computer with four GPUs,
GPU0: tensor1, GPU1: tensor2
GPU2: tensor1, GPU3: tensor2
I expect to reduce tensor1 in GPU0 and GPU2, reduce tensor2 in GPU1 and GPU3. But when I execute the above code, the program keeps blocking.
I don’t know why. Can someone help me, thanks! |
st175330 | Unfortunately I couldn’t reproduce your problem. This is the output of your script when I run it on a 4 GPU machine:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1025 17:28:00.405782 39617 ProcessGroupNCCL.cpp:520] [Rank 0] ProcessGroupNCCL initialized with following options:
NCCL_ASYNC_ERROR_HANDLING: 0
NCCL_BLOCKING_WAIT: 0
TIMEOUT(ms): 1800000
USE_HIGH_PRIORITY_STREAM: 0
NCCL_DEBUG: UNSET
I1025 17:28:00.405783 39637 ProcessGroupNCCL.cpp:621] [Rank 0] NCCL watchdog thread started!
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1025 17:28:00.422144 39618 ProcessGroupNCCL.cpp:520] [Rank 1] ProcessGroupNCCL initialized with following options:
NCCL_ASYNC_ERROR_HANDLING: 0
NCCL_BLOCKING_WAIT: 0
TIMEOUT(ms): 1800000
USE_HIGH_PRIORITY_STREAM: 0
NCCL_DEBUG: UNSET
I1025 17:28:00.422158 39638 ProcessGroupNCCL.cpp:621] [Rank 1] NCCL watchdog thread started!
Rank 1 has data tensor(2., device='cuda:2') tensor(2., device='cuda:3')
Rank 0 has data tensor(2., device='cuda:0') tensor(2., device='cuda:1')
Are you sure that your script is able to see all your CUDA devices (e.g. CUDA_VISIBLE_DEVICES=0,1,2,3)? |
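A quick way to answer that visibility question from inside the environment that launches the script (a sketch that only prints what each process can see):

import os
import torch

print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("device_count =", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))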
st175331 | But my results are as follows:
(screenshot omitted)
My result just keeps blocking.
GPU Usage Details:
(screenshot omitted)
Feels like a weird GPU usage
My configuration:
pytorch1.7.1+cu101
cuda10.0.130
GPU:GeForce RTX 2080Ti |
st175332 | I updated the below script to reflect the interaction between different models.
Hi,
I am implementing a model with multiple types of forward functions.
An example should look like:
import torch.nn as nn
from torch.nn.parallel import DataParallel, DistributedDataParallel
class Model1(nn.Module):
    def __init__(self):
        ...

    def forward(self, x):
        ...

    def forward2(self, x, y):
        ...

    def forward3(self, z, w):
        ...

# model1 = DP(Model1(), ...)
model1 = DDP(Model1(), ...)
model2 = DDP(Model2(), ...)
...
out1 = model1(x)  # parallelized
out2 = model1.module.forward2(x, y)  # not parallelized in DP and
z, w = model2(y)  # model2 is also used somewhere else
out3 = model1.module.forward3(z, w)  # no communication in DDP
These forward functions are there for different purposes and are all necessary for training and inference.
However, DP or DDP-wrapped models do not directly parallelize those functions other than the default forward that could be called without explicitly naming it.
How can I parallelize other forward2 and forward3?
Here are several candidates that I could think of for now:
1. Define a big forward function with an option flag so that it could call the other functions in it. I think this would work but would cause a lot of if-else statements in the functions. Also, the input argument parsing part would be cluttered.
2. Register a function to DDP such that it would recognize other functions. I looked into some hooks but didn't find a way for my case.
3. Create a custom myDDP class that inherits DDP and implements other functions similar to forward. This might be possible but I would need to update myDDP every time I define new functions.
Are there any suggestions?
p.s. I checked this thread 1 but it does not apply to me.
My actual code is more complex and a more fundamental solution is necessary. |
st175333 | How do you train your model locally? Do you call your forward functions in a specific order? Do they train different parts of your model? |
st175334 | Hi @cbalioglu
Thank you for looking into this issue:)
training
I am training an invertible network with forward/inverse path and optionally, jointly with other models.
The invertible network has several other functions, too.
function call order
I (usually) call the functions in order but not always:
model1.func1
model1.func2
model2.func1 # sometimes omitted for different purposes
model1.func3 # also called outside the training loop
model1.func4
I am using the intermediate outputs to compute loss functions.
I sometimes call func3 independently from the above flow.
training parts per function
The training weights are partially shared across different functions while some don’t require weights. |
st175335 | Hey @seungjun, yep, I confirm that your 1st option should work as DDP only calls the default forward() function.
github.com
pytorch/pytorch/blob/a33d3d84dfcaef93c17d78cf7983d62ee01d28e3/torch/nn/parallel/distributed.py#L915-L919 1
if self.device_ids:
inputs, kwargs = self.to_kwargs(inputs, kwargs, self.device_ids[0])
output = self.module(*inputs[0], **kwargs[0])
else:
output = self.module(*inputs, **kwargs)
However, there is a caveat. DDP has internal state that requires alternating forward and backward. So if you call things like forward, forward, backward, DDP is likely to hang or crash.
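As a concrete illustration of that 1st option, a minimal sketch of the dispatch-flag pattern could look like the following (the layer and mode names are hypothetical, not taken from the original code):

import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class Model1(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 8)  # placeholder layer

    def forward(self, *args, mode="default"):
        if mode == "default":
            (x,) = args
            return self.net(x)
        elif mode == "forward2":
            x, y = args
            return self.net(x) + y
        elif mode == "forward3":
            z, w = args
            return self.net(z) * w
        raise ValueError(f"unknown mode {mode}")

# ddp_model = DDP(Model1().cuda(), device_ids=[local_rank])
# out2 = ddp_model(x, y, mode="forward2")  # still goes through DDP's forward, so gradients are synced

Since every call goes through the wrapped forward(), DDP's gradient hooks keep working, at the cost of the if/else dispatch mentioned above.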
st175336 | Hi @mrshenli,
Thank you for confirming the 1st option and pointing to the related part of the DDP source code.
I checked the DDP implementation and it seems that option 1 is the only possible way for now.
forward is the only function for which DDP provides safe parallelization, so going for option 3 would be an adventure.
By the way, I’m not sure if I could avoid the function call patterns like forward, forward, backward you mentioned.
Thank you very much and I will post here when I come up with a nice solution.
Best,
Seungjun |
st175337 | How to convert a single-GPU PyTorch script to a multi-GPU multi-node PyTorch script with DDP?
I read this 10 times already but honestly it's not really helpful; what I need is a place that lists the modifications needed to convert single-GPU code to multi-node, multi-GPU code.
Is there a place in the doc that explains how to distribute a PyTorch training script over multiple machines? |
st175338 | Check out these resources. They helped me understand how to do it. I agree that the docs as of now are not to the point.
Distributed data parallel training in Pytorch 3
GitHub - GoldenRaven/Pytorch_DistributedParallel_GPU_test: Pytorch distributed data parallel test of GPU on MNIST 1
https://towardsdatascience.com/how-to-convert-a-pytorch-dataparallel-project-to-use-distributeddataparallel-b84632eed0f6 2
Distributed Training in PyTorch (Distributed Data Parallel) | by Praneet Bomma | Analytics Vidhya | Medium 1 |
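To summarize what those resources walk through, the core single-GPU-to-DDP changes usually boil down to something like the sketch below (a minimal, self-contained example with a placeholder dataset and model; it assumes the script is launched with torchrun or torch.distributed.launch so that RANK, WORLD_SIZE and LOCAL_RANK are set):

import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # 1. initialize the process group (the launcher sets RANK, WORLD_SIZE, LOCAL_RANK)
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # 2. shard the data with DistributedSampler instead of shuffle=True
    dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))  # placeholder data
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    # 3. wrap the model in DDP
    model = DDP(nn.Linear(10, 2).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(x.cuda(local_rank)), y.cuda(local_rank))
            loss.backward()  # gradients are all-reduced across ranks here
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()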
st175339 | Hey @Olivier-CR, do you mind opening an issue in the PyTorch repository for this? There is definitely room for improvement in our documentation and this would help us to prioritize it in the near future. Thanks! |
st175340 | Hi,
I have a well-working single-GPU script that I believe I correctly adapted to use DDP.
I tried to use it in a single-node, 4-GPU EC2 with 2 different techniques, both hang forever (1min+) with CPU and GPU idle. What is wrong? How to use DDP?
python -m torch.distributed.launch --use_env train.py \
--gpu-count 4 \
--dataset . \
--cache tmp \
--height 604 \
--width 960 \
--checkpoint-dir . \
--batch 10 \
--workers 24 \
--log-freq 20 \
--prefetch 2 \
--bucket $bucket \
--eval-size 10 \
--iterations 20 \
--class-list a2d2_images/camera_lidar_semantic/class_list.json
hangs
python /home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/distributed/launch.py \
train.py \
--gpu-count 4 \
--dataset . \
--cache tmp \
--height 604 \
--width 960 \
--checkpoint-dir . \
--batch 10 \
--workers 24 \
--log-freq 20 \
--prefetch 2 \
--bucket $bucket \
--eval-size 10 \
--iterations 20 \
--class-list a2d2_images/camera_lidar_semantic/class_list.json
hangs too.
I strongly suggest the PyTorch team work on improving the distributed experience. As models and datasets scale, this is a feature people will use more and more.
st175341 | It is very hard to root cause your problem by just looking at the commands you have run. I suggest checking out our debugging tools described here 2. They might give you more context about the problem.
Also make sure that you read our documentation on torchrun 1 which is the officially recommended way to start distributed jobs. |
st175342 | DDP default timeout is 30min. Adjusting it through init_process_group would not resolve the problem but would allow to fail faster |
st175343 | Q1: Equivalence
From my understanding, DDP spawns multiple unique processes for updating gradients. Each process in DDP gathers the gradients from all other processes and averages them via an all-reduce operation.
For example, if a researcher has published a paper stating to train with SGD, LR=0.1, batch=64 for 100 epochs, would it be equivalent for me to run DDP with SGD, LR=0.1, batch=16 for 100 epochs distributed over 4 GPUs? I want to know whether it is equivalent so that I can scale experiments done on a single GPU to a distributed setting for faster training. It seems to me that it is equivalent, unless I have misunderstood that all gradients from every process are averaged by every individual process in parallel.
I see an exact example here: examples/main.py at master · pytorch/examples · GitHub 1
Q2:
If I am using a workstation with more than 1 GPU, is there any particular situation where I should opt not to use DDP and just use 1 GPU for training?
st175344 | Solved by cbalioglu in post #2
Regarding your first question. That is correct. DDP should give you the same result as if your training was run on a single process with a single GPU. As you described, DDP by default averages your gradients across all nodes, meaning except any deviations due to floating point arithmetic, mathematic… |
st175345 | Regarding your first question. That is correct. DDP should give you the same result as if your training was run on a single process with a single GPU. As you described, DDP by default averages your gradients across all nodes, meaning except any deviations due to floating point arithmetic, mathematically the outputs should be identical (assuming you scale your hyperparameters -i.e. batch size- accordingly).
There is no definitive answer to your second question. Overall, the best approach would be to simply measure the speed and the convergence rate of your training for a few epochs. One particular case where it might not be worth using more than one GPU is if your batch size is fairly small and already fits into a single GPU; in that case a multi-GPU setting makes little sense.
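To make the hyperparameter scaling concrete: for the SGD, LR=0.1, batch=64 recipe on 4 GPUs, each rank would typically load a per-process batch of 16 via a DistributedSampler (a minimal sketch with a placeholder dataset; it assumes the usual environment variables are set by torchrun):

import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group("nccl")
dataset = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))  # placeholder data
sampler = DistributedSampler(dataset)                         # shards the dataset across the 4 ranks
loader = DataLoader(dataset, batch_size=16, sampler=sampler)  # 16 per rank * 4 ranks = global batch of 64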
st175346 | Hi everyone,
I have created a distributed model using 2 machines, rank = 0 and rank = 1 respectively. The model appears to be trained because it converges to the expected result. I save the model with torch.save(model.state_dict(), 'RPC1.pth').
The problem is when I want to load the model with model.load_state_dict(torch.load('RPC1.pth')) to make a prediction: the loaded model does not appear to contain the trained weights, and the prediction is wrong.
How can I save the trained model using torch.distributed RPC and RRef?
The python file is:
import numpy as np
import torch
import torch.utils.data as data
import torch.nn.functional as F
from torch.autograd import Variable
import torch.nn as nn
import matplotlib.pyplot as plt
import os
import torch.distributed.rpc as rpc
from torch.distributed.rpc import RRef
import torch.distributed.autograd as dist_autograd
from torch.distributed.optim import DistributedOptimizer
def _call_method(method, rref, *args, **kwargs):
r"""
a helper function to call a method on the given RRef
    """
return method(rref.local_value(), *args, **kwargs)
def _remote_method(method, rref, *args, **kwargs):
r"""
a helper function to run method on the owner of rref and fetch back the
result using RPC
    """
return rpc.rpc_sync(
rref.owner(),
_call_method,
args=[method, rref] + list(args),
kwargs=kwargs
)
def _parameter_rrefs(module):
r"""
Create one RRef for each parameter in the given local module, and return a
list of RRefs.
    """
param_rrefs = []
for param in module.parameters():
param_rrefs.append(RRef(param))
return param_rrefs
# We create the dataset and an iterable.
class my_points(data.Dataset):
    def __init__(self, n_samples):
self.n_tuples = int(n_samples/4)
self.n_samples = self.n_tuples * 4
pd_data = np.tile(np.array([[0.,0.,0.],[1.,1.,0.],[1.,0.,1.],[0.,1.,1.]]), (self.n_tuples, 1) ) # data
self.data = pd_data[:, 0:2] # 1st and 2nd columns → x,y
self.target = pd_data[:, 2:] # 3nd column → label
def __len__(self): # Length of the dataset.
return self.n_samples
def __getitem__(self, index): # Function that returns one point and one label.
return torch.Tensor(self.data[index]), torch.Tensor(self.target[index])
class rref_in(nn.Module):
    def __init__(self, n_in=2, n_hidden=4, n_out=2):
        super(rref_in, self).__init__()
self.n_in = n_in
self.n_out = n_out
self.n_hidden = n_hidden
self.h = nn.Linear(self.n_in, self.n_hidden, bias=True)
self.fc1 = nn.Linear(self.n_hidden, self.n_out, bias=True)
def forward(self, x):
x = F.sigmoid(self.h(x))
x = F.sigmoid(self.fc1(x))
return x
# We build a model with two inputs and one output.
class my_model(nn.Module):
    def __init__(self, ps, n_in=2, n_hidden=4, n_out=2, dim=1):
        super(my_model, self).__init__()
self.n_in = n_in
self.n_out = n_out
self.n_hidden = n_hidden
self.rref_in = rpc.remote(ps, rref_in, args=(n_in, n_hidden, n_out)) # setup remotely
self.out = nn.Softmax(dim=dim)
def forward(self, x):
x = _remote_method(rref_in.forward, self.rref_in, x)
x=self.out(x)
return x
def parameter_rrefs(self):
remote_params = []
# create RRefs for local parameters
remote_params.extend(_remote_method(_parameter_rrefs, self.rref_in))
remote_params.extend(_parameter_rrefs(self.out))
return remote_params
def _run_trainer(data):
n_classes = 2
# We create the dataloader.
    # 100 iterations → n_points = 2000 / batch_size = 20
my_data = my_points(2000)
batch_size = 20
my_loader = data.DataLoader(my_data,batch_size=batch_size,num_workers=1)
# Model.
# Now, we create the model, the loss function or criterium and the optimizer
model = my_model(ps= 'ps',n_in=2, n_hidden=2, n_out=2, dim=1)
# print(model)
criterium = nn.CrossEntropyLoss()
# setup distributed optimizer
optimizer = DistributedOptimizer( torch.optim.SGD, model.parameter_rrefs(), lr=0.06, momentum=0.9)
# Supervised Taining.
epochs=10
max_iter = my_loader.__len__()*epochs
cost = np.zeros((max_iter, 1))
ucost = np.zeros((max_iter, 1))
i, c_ant, beta = 0, 0, 0.99
for ep in range(epochs):
for k, (data, target) in enumerate(my_loader):
with dist_autograd.context() as context_id:
# Definition of inputs as variables for the net.
# requires_grad is set False because we do not need to compute the
# derivative of the inputs.
data = Variable(data, requires_grad=False)
target = Variable(target.long(), requires_grad=False)
# Feed forward.
pred = model(data)
# Loss calculation.
loss = criterium(pred, target.view(-1))
# run distributed backward pass
dist_autograd.backward(context_id, [loss])
# run distributed optimizer
optimizer.step(context_id)
cost[i] = loss.item()
c_act = (1 - beta) * cost[i] + beta * c_ant
ucost[i] = c_act / (1 - beta ** (i + 1))
c_ant = c_act
i += 1
print('Loss {:.4f} at epoch {:d}'.format(loss.item(), ep + 1))
# Now, we plot the results.
# Plot the loss C.
plt.plot(range(max_iter), cost, color='steelblue', marker='o')
plt.plot(range(max_iter), ucost,'c-', linewidth=3)
plt.xlabel("Iterations")
plt.ylabel("Cost (loss)")
plt.show(block=True)
colors = ['r','b','g','y']
points = data.numpy()
# Ground truth last batch.
target = target.numpy()
for k in range(n_classes):
select = target[:,0]==k
p = points[select,:]
plt.scatter(p[:,0],p[:,1],facecolors=colors[k])
# Predictions last batch.
pred = pred.exp().detach() # exp of the log prob = probability.
_, index = torch.max(pred,1) # index of the class with maximum probability.
pred = pred.numpy()
index = index.numpy()
for k in range(n_classes):
select = index==k
p = points[select,:]
plt.scatter(p[:,0],p[:,1],s=60,marker='s',edgecolors=colors[k],facecolors='none')
plt.show()
torch.save(model.state_dict(), 'RPC1.pth')
if __name__ == '__main__':
    rank = 0  # rank = 0 (train), rank = 1 (server)
    world_size = 2
    os.environ["MASTER_ADDR"] = '192.168.0.14'  # ip of the machine with rank 0
    os.environ["MASTER_PORT"] = str(26500)
    os.environ["WORLD_SIZE"] = str(world_size)
    os.environ["RANK"] = str(rank)
    os.environ['TP_SOCKET_IFNAME'] = 'wlp2s0'   # interface name for 192.168.0.14 (check ifconfig on Ubuntu, ipconfig on Windows)
    os.environ["GLOO_SOCKET_IFNAME"] = "wlp2s0" # interface name for 192.168.0.14 (check ifconfig on Ubuntu, ipconfig on Windows)
if rank == 0:
print('rank: ', rank, world_size)
rpc.init_rpc("trainer", rank=rank, world_size=world_size)
_run_trainer(data)
else:
print('rank: ', rank, world_size)
rpc.init_rpc("ps", rank=rank, world_size=world_size)
# parameter server does nothing
pass
rpc.shutdown() |
st175347 | Have you verified that RPC1 has all parameters before saving it to a file? I had trouble following your code due to formatting issues, but the most straightforward explanation is that you are not copying the parameters residing on RPC2 to RPC1 before saving the model. |
st175348 | Hi,
I’m trying to launch a train.py DDP script to run over a 4-GPU machine.
I'm using the launch.py tool described here (this experience is quite ugly btw, I wish there were a clean PyTorch class to do that!), which is supposed to set local_rank properly in each process: "--local_rank: This is passed in via launch.py", as the documentation says.
python /home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/distributed/launch.py \
--nnode=1 \
--node_rank=0 \
--nproc_per_node=4 \
train.py \
--gpu-count 4 \
--dataset . \
--cache tmp \
--height 604 \
--width 960 \
--checkpoint-dir . \
--batch 10 \
--workers 24 \
--log-freq 20 \
--prefetch 2 \
--bucket $bucket \
--eval-size 10 \
--iterations 20 \
--class-list a2d2_images/camera_lidar_semantic/class_list.json
However, in each of my processes local_rank = -1 (default value). What is wrong? how to get local_ranks each distinct? |
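For reference, when launch.py is used without --use_env it appends --local_rank=<n> to each process's argument list, so the training script has to declare that argument; with --use_env (or torchrun) the value comes from the LOCAL_RANK environment variable instead. A minimal sketch that handles both cases:

import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)  # filled in by launch.py (without --use_env)
args, _ = parser.parse_known_args()

# fall back to the LOCAL_RANK env variable when --use_env / torchrun is used
local_rank = args.local_rank if args.local_rank != -1 else int(os.environ.get("LOCAL_RANK", 0))
print(f"local_rank = {local_rank}")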
st175349 | I followed https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py 5 to create my own ImageFolder (I called it ImageFolderSuperpixel, folder_sp.py). It works FINE in a single GPU but it meets bugs in a single node, multiple GPUs. Anyone can tell me what is going on here?
Traceback (most recent call last):
File “”, line 1, in
Traceback (most recent call last):
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 105, in spawn_main
exitcode = _main(fd)
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 115, in _main
self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
File “”, line 1, in
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 105, in spawn_main
exitcode = _main(fd)
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 115, in _main
self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
Traceback (most recent call last):
File “”, line 1, in
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 105, in spawn_main
exitcode = _main(fd)
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 115, in _main
self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
Traceback (most recent call last):
File “main.py”, line 102, in
main()
File “main.py”, line 45, in main
classification.start(dataset_path, checkpoints_path, args, **CONFIG[args.dataset])
File “/nfs/hpc/share/coe_hanweiku/xxxnet-pytorch/utils/classification.py”, line 304, in start
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
File “/nfs/hpc/share/coe_hanweiku/xxxnet-pytorch/venv2/lib64/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method=‘spawn’)
File “/nfs/hpc/share/coe_hanweiku/xxxnet-pytorch/venv2/lib64/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 158, in start_processes
while not context.join():
File “/nfs/hpc/share/coe_hanweiku/xxxnet-pytorch/venv2/lib64/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 108, in join
(error_index, name)
Exception: process 1 terminated with signal SIGKILL
"""Custom Image Datasets API
Image datasets API have two input image directories, which could provide the
interface for superpixel research
Author: Weikun Han <[email protected]>
Reference:
- https://github.com/pytorch/vision/blob/master/torchvision/datasets/vision.py
- https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py
"""
import os
import random
import torch
import torch.utils.data as data
from PIL import Image
class VisionDataset(data.Dataset):
_repr_indent = 4
def __init__(self, root, root_sp, transforms=None, transform=None, target_transform=None):
if isinstance(root, torch._six.string_classes):
root = os.path.expanduser(root)
if isinstance(root_sp, torch._six.string_classes):
root_sp= os.path.expanduser(root_sp)
self.root = root
self.root_sp = root_sp
has_transforms = transforms is not None
has_separate_transform = transform is not None or target_transform is not None
if has_transforms and has_separate_transform:
raise ValueError("Only transforms or transform/target_transform can "
"be passed as argument")
# for backwards-compatibility
self.transform = transform
self.target_transform = target_transform
if has_separate_transform:
transforms = StandardTransform(transform, target_transform)
self.transforms = transforms
def __getitem__(self, index):
raise NotImplementedError
def __len__(self):
raise NotImplementedError
def __repr__(self):
head = "Dataset " + self.__class__.__name__
body = ["Number of datapoints: {}".format(self.__len__())]
if self.root is not None:
body.append("Root location: {}".format(self.root))
if self.root_sp is not None:
body.append("Root superpixel location: {}".format(self.root_sp))
body += self.extra_repr().splitlines()
if hasattr(self, "transforms") and self.transforms is not None:
body += [repr(self.transforms)]
lines = [head] + [" " * self._repr_indent + line for line in body]
return '\n'.join(lines)
def _format_transform_repr(self, transform, head):
lines = transform.__repr__().splitlines()
return (["{}{}".format(head, lines[0])] +
["{}{}".format(" " * len(head), line) for line in lines[1:]])
def extra_repr(self):
return ""
class StandardTransform(object):
def __init__(self, transform=None, target_transform=None):
self.transform = transform
self.target_transform = target_transform
def __call__(self, input, target):
if self.transform is not None:
input = self.transform(input)
if self.target_transform is not None:
target = self.target_transform(target)
return input, target
def _format_transform_repr(self, transform, head):
lines = transform.__repr__().splitlines()
return (["{}{}".format(head, lines[0])] +
["{}{}".format(" " * len(head), line) for line in lines[1:]])
def __repr__(self):
body = [self.__class__.__name__]
if self.transform is not None:
body += self._format_transform_repr(self.transform,
"Transform: ")
if self.target_transform is not None:
body += self._format_transform_repr(self.target_transform,
"Target transform: ")
return '\n'.join(body)
def has_file_allowed_extension(filename, extensions):
"""Checks if a file is an allowed extension.
Args:
filename (string): path to a file
extensions (tuple of strings): extensions to consider (lowercase)
Returns:
bool: True if the filename ends with one of given extensions
"""
return filename.lower().endswith(extensions)
def is_image_file(filename):
"""Checks if a file is an allowed image extension.
Args:
filename (string): path to a file
Returns:
bool: True if the filename ends with a known image extension
"""
return has_file_allowed_extension(filename, IMG_EXTENSIONS)
def make_dataset(directory, class_to_idx, extensions=None, is_valid_file=None):
instances = []
directory = os.path.expanduser(directory)
both_none = extensions is None and is_valid_file is None
both_something = extensions is not None and is_valid_file is not None
if both_none or both_something:
raise ValueError("Both extensions and is_valid_file cannot be None or not None at the same time")
if extensions is not None:
def is_valid_file(x):
return has_file_allowed_extension(x, extensions)
for target_class in sorted(class_to_idx.keys()):
class_index = class_to_idx[target_class]
target_dir = os.path.join(directory, target_class)
if not os.path.isdir(target_dir):
continue
for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
for fname in sorted(fnames):
path = os.path.join(root, fname)
if is_valid_file(path):
item = path, class_index
instances.append(item)
return instances
class DatasetFolder(VisionDataset):
"""A generic data loader where the samples are arranged in this way: ::
root/class_x/xxx.ext
root/class_x/xxy.ext
root/class_x/xxz.ext
root/class_y/123.ext
root/class_y/nsdf3.ext
root/class_y/asd932_.ext
Args:
root (string): Root directory path.
root_sp (string): Root directory path for superpixel.
loader (callable): A function to load a sample given its path.
extensions (tuple[string]): A list of allowed extensions.
both extensions and is_valid_file should not be passed.
transform (callable, optional): A function/transform that takes in
a sample and returns a transformed version.
E.g, ``transforms.RandomCrop`` for images.
target_transform (callable, optional): A function/transform that takes
in the target and transforms it.
is_valid_file (callable, optional): A function that takes path of a file
and check if the file is a valid file (used to check of corrupt files)
both extensions and is_valid_file should not be passed.
Attributes:
classes (list): List of the class names sorted alphabetically.
class_to_idx (dict): Dict with items (class_name, class_index).
samples (list): List of (sample path, class_index) tuples
targets (list): The class_index value for each image in the dataset
"""
def __init__(self, root, root_sp, loader, extensions=None, transform=None,
target_transform=None, is_valid_file=None):
super(DatasetFolder, self).__init__(root, root_sp, transform=transform,
target_transform=target_transform)
classes, class_to_idx = self._find_classes(self.root)
classes_sp, class_to_idx_sp = self._find_classes(self.root_sp)
samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file)
samples_sp = make_dataset(self.root_sp, class_to_idx_sp, extensions, is_valid_file)
if len(samples) == 0:
msg = "Found 0 files in subfolders of: {}\n".format(self.root)
if extensions is not None:
msg += "Supported extensions are: {}".format(",".join(extensions))
raise RuntimeError(msg)
if len(samples_sp) == 0:
msg = "Found 0 files in subfolders of: {}\n".format(self.root_sp)
if extensions is not None:
msg += "Supported extensions are: {}".format(",".join(extensions))
raise RuntimeError(msg)
if len(samples) != len(samples_sp):
msg = "Image files is not equal to superpixel files.\n"
if extensions is not None:
msg += "Supported extensions are: {}".format(",".join(extensions))
raise RuntimeError(msg)
self.loader = loader
self.extensions = extensions
self.classes = classes
self.classes_sp = classes_sp
self.class_to_idx = class_to_idx
self.class_to_idx_sp = class_to_idx_sp
self.samples = samples
self.samples_sp = samples_sp
self.targets = [s[1] for s in samples]
self.targets_sp = [s[1] for s in samples_sp]
def _find_classes(self, dir):
"""
Finds the class folders in a dataset.
Args:
dir (string): Root directory path.
Returns:
tuple: (classes, class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.
Ensures:
No class is a subdirectory of another.
"""
classes = [d.name for d in os.scandir(dir) if d.is_dir()]
classes.sort()
class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
return classes, class_to_idx
def __getitem__(self, index):
"""
Args:
index (int): Index
Returns:
tuple: (sample, target) where target is class_index of the target class.
"""
path, target = self.samples[index]
path_sp, target_sp = self.samples_sp[index]
sample = self.loader(path).convert('RGB')
sample_sp = self.loader(path_sp)
if self.transform is not None:
torch.manual_seed(1234)
random.seed(1234)
sample = self.transform(sample)
torch.manual_seed(1234)
random.seed(1234)
sample_sp = self.transform(sample_sp)
if self.target_transform is not None:
torch.manual_seed(4321)
random.seed(4321)
target = self.target_transform(target)
torch.manual_seed(4321)
random.seed(4321)
target_sp = self.target_transform(target_sp)
return sample, target, sample_sp, target_sp
def __len__(self):
return len(self.samples)
IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif', '.tiff', '.webp')
def pil_loader(path):
img = Image.open(path)
return img
def accimage_loader(path):
import accimage
try:
return accimage.Image(path)
except IOError:
# Potentially a decoding problem, fall back to PIL.Image
return pil_loader(path)
def default_loader(path):
from torchvision import get_image_backend
if get_image_backend() == 'accimage':
return accimage_loader(path)
else:
return pil_loader(path)
class ImageFolderSuperpixel(DatasetFolder):
"""A generic data loader where the images are arranged in this way: ::
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png
root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
Args:
root (string): Root directory path.
root_sp (string): Root directory path for superpixel.
transform (callable, optional): A function/transform that takes in an PIL image
and returns a transformed version. E.g, ``transforms.RandomCrop``
target_transform (callable, optional): A function/transform that takes in the
target and transforms it.
loader (callable, optional): A function to load an image given its path.
is_valid_file (callable, optional): A function that takes path of an Image file
and check if the file is a valid file (used to check of corrupt files)
Attributes:
classes (list): List of the class names sorted alphabetically.
class_to_idx (dict): Dict with items (class_name, class_index).
imgs (list): List of (image path, class_index) tuples
"""
def __init__(self, root, root_sp, transform=None, target_transform=None,
loader=default_loader, is_valid_file=None):
super(ImageFolderSuperpixel, self).__init__(root, root_sp, loader, IMG_EXTENSIONS if is_valid_file is None else None,
transform=transform,
target_transform=target_transform,
is_valid_file=is_valid_file)
self.imgs = self.samples
self.imgs_sp = self.samples_sp |
st175350 | Solved by weikunhan in post #4
Thanks for helping! I try to use one node with 2 GPUs, there is no problem. Next, I try to use one node with 3 or 4 GPUs but not work, I found ONLY 2 GPUs is working…
[image]
Therefore, I try to understand why I can only use one node with 2 GPUs. The answer is node memory. I use default setting wh… |
st175351 | @ptrblck I changed to the ImageFolder class and there is no problem! Therefore, I am sure that my ImageFolderSuperpixel class have some problems that I cannot find it.
Here is an example of how to use this API; its main purpose is to load two image folders at the same time (ImageFolder only supports loading one image dir):
dataset_path = './data/imagenet'
traindir = os.path.join(dataset_path, 'train')
traindir_sp = os.path.join(dataset_path, 'train')
train_dataset = ImageFolderSuperpixel(
traindir,
traindir_sp,
transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
]))
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=1, shuffle=False,
num_workers=1, pin_memory=True, sampler=None)
for i, (images, target, images_sp, _) in enumerate(train_loader): |
st175352 | I can’t see any obvious errors in your code.
Could you use num_workers=0 and rerun the code?
This should give you a better error message in case a worker is failing to load the data. |
st175353 | Thanks for helping! I try to use one node with 2 GPUs, there is no problem. Next, I try to use one node with 3 or 4 GPUs but not work, I found ONLY 2 GPUs is working…
Therefore, I try to understand why I can only use one node with 2 GPUs. The answer is node memory. I use default setting which only provide 10GB in each node. Compare with official Pytorch datasets.ImageFolder(), 10GB memory is OK for one node with 4 or 6 GPUs. However, since I create ImageFolderSuperpixel which contains too much information, it need more memory than official Pytorch datasets.ImageFolder(). After I apply for 40GB in each node. I can run with each node with 4GPUs.
Thanks
Traceback (most recent call last):
File “”, line 1, in
Traceback (most recent call last):
File “”, line 1, in
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 105, in spawn_main
exitcode = _main(fd)
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 115, in _main
self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 105, in spawn_main
exitcode = _main(fd)
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 115, in _main
self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
/usr/lib64/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown
len(cache))
/usr/lib64/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown
len(cache))
Traceback (most recent call last):
File “”, line 1, in
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 105, in spawn_main
exitcode = _main(fd)
File “/usr/lib64/python3.6/multiprocessing/spawn.py”, line 115, in _main
self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
Traceback (most recent call last):
File “main2.py”, line 102, in
main()
File “main2.py”, line 45, in main
classification.start(dataset_path, checkpoints_path, args, **CONFIG[args.dataset])
File “/nfs/hpc/share/coe_hanweiku/xxxnet-pytorch/utils/classificationtest.py”, line 304, in start
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
File “/nfs/hpc/share/coe_hanweiku/xxxnet-pytorch/venv2/lib64/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method=‘spawn’)
File “/nfs/hpc/share/coe_hanweiku/xxxnet-pytorch/venv2/lib64/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 158, in start_processes
while not context.join():
File “/nfs/hpc/share/coe_hanweiku/xxxnet-pytorch/venv2/lib64/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 108, in join
(error_index, name)
Exception: process 1 terminated with signal SIGKILL
/usr/lib64/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 11 leaked semaphores to clean up at shutdown
len(cache)) |
st175354 | I notice that when using zero stage-3 in deepspeed to train model without any recomputing (checkpointing) methods, the parameters of the model cannot correctly released. I have already release param.data with my ZeRO, which set param.data to torch.Tensor([1]). However, the memory consumption doesn’t decrease, which means param.data still remains in the memory.
I think the problem occurs in autograd module, which may possibly create a weak ref to “param.data” when there are some corresponding intermediate results ( a.k.a. intermediate activations) in GPU memory.
Could anyone tell us how I can remove this param ref to help reduce memory? |
st175355 | Hi,
In general, you should never use .data
Could you give more details on what you’re trying to do here? Because it is expected that the autograd saves (a lot) of things in the graph. |
st175356 | I am trying to save memory as much as possible.
The implementation actually comes from DeepSpeed. this line set torch.data to a torch.ones(1) tensor. DeepSpeed/partition_parameters.py at master · microsoft/DeepSpeed · GitHub 1
However, the parameters memory didn’t release correctly. And I found out that if a parameter Tensor has its corresponding intermediate results, the parameters memory won’t be released.
The parameter data will only be gathered when they are needed. Like gather in pre_sub_module_backward_function here:DeepSpeed/stage3.py at master · microsoft/DeepSpeed · GitHub
But If the parameters memory not release correctly, here gathering won’t consume any other memory. In other word, I reset the param.data to parameter Tensor, but the memory_allocated has no change. |
st175357 | ConnollyLeon:
However, the parameter memory is not released correctly.
I wouldn't say that. You do remove the parameter successfully. The problem is that other parts of the code are also using this tensor, so it cannot be freed. This is actually expected.
If you want to play tricks with what the autograd saves for backward, you have hooks for that: Autograd mechanics — PyTorch 1.10.0 documentation 1. There is also a tutorial for that. |
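For completeness, a minimal sketch of the saved-tensor hooks mentioned above (assuming PyTorch >= 1.10 and a CUDA device; the pack/unpack bodies are just one example, offloading saved activations to CPU):

import torch

def pack(saved):
    # called when autograd saves a tensor for backward; offload it to CPU
    return saved.cpu()

def unpack(packed):
    # called during backward when the saved tensor is needed again
    return packed.cuda()

x = torch.randn(4, 4, device="cuda", requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    y = (x ** 2).sum()   # the activation saved for backward goes through pack()
y.backward()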
st175358 | My build:
Asrock z390 extreme4
Intel 8700k
2x 2080 Ti
Cooler Master v1200 Platinum
Ubuntu 18.04
Cuda 10.0
nccl 2.4.0-2
pytorch was installed according to guide on pytorch.org
So I've got something interesting: the PC crashes right after I try running the multi-GPU ImageNet script from the official PyTorch examples repository. It doesn't crash the PC if I start training with Apex mixed precision. Training on a single 2080 also didn't cause a reboot.
What didn’t work:
1. decreasing batch size
2. limiting power consumption of the GPUs via nvidia-smi
3. changing motherboard, CPU, power supply
4. changing 2080 Ti vendor
For some reason everything worked after I switched both 2080 ti’s with 1080 ti’s. So it seems pytorch (or some nvidia software) isn’t fully compatible with multiple 2080 ti’s? Has anyone encountered this? |
st175359 | Two 2080TIs should work, so I think it might be a hardware issue.
However, it seems you’ve already changed a lot of parts of your system.
Regarding point 3 and 4, it seems you completely rebuilt your system.
Does your code only crash using the ImageNet example or also a very small model, e.g. a single linear layer and DataParallel? |
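Something as small as this would already be a useful data point (a minimal sketch):

import torch
import torch.nn as nn

# a single Linear layer replicated over both GPUs via DataParallel
model = nn.DataParallel(nn.Linear(16, 4)).cuda()
x = torch.randn(64, 16).cuda()
out = model(x)
out.sum().backward()
print(out.shape)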
st175360 | I can corroborate. It happens with me too. torch distributed data parallel fails on 2080Ti with >1 gpu, however works well with titan-x, titan-xp or 1080Ti. |
st175361 | Same here, fails on two 2080Ti, works with two 1080Ti. In the script, I just load a resnet50 from torchvision.models and do inference on it. It does not crash however, if I run the same script twice at the same time, each using one GPU. So it does not seem to a power supply issue |
st175362 | Okay, so now I have done some runs with the following code:
import os
from tqdm import tqdm
from torchvision.models import resnet50, resnet18
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import torch
import torch.nn as nn
n = 1000000
class RandomDs(Dataset):
def __init__(self, ):
pass
def __len__(self):
return n
def __getitem__(self, index):
return torch.rand(3, 512, 256)
if __name__ == '__main__':
dataset = RandomDs()
data_loader = DataLoader(dataset, batch_size=128, shuffle=False, num_workers=4, pin_memory=False)
model = resnet50()
# os.environ["CUDA_VISIBLE_DEVICES"] = "0"
device = torch.device('cuda:0')
model = nn.DataParallel(model)
model.to(device)
with torch.no_grad():
for batch in tqdm(data_loader):
batch = batch.to(device)
model(batch)
I changed the input shape to the network, batch size, number of workers, model type, torch version, and whether it runs with DataParallel or as two scripts (so two scripts, running at the same time, so that both GPUs are in use). These are the results:
| input shape | batch_size | num_workers | model | parallel | torch version | crash | memory (GB each) |
|---|---|---|---|---|---|---|---|
| 3, 256, 256 | 128 | 4 | resnet50 | yes | 1.2.0 | No | ~2.3 |
| 3, 512, 256 | 128 | 4 | resnet50 | yes | 1.2.0 | Yes | ~3.3 |
| 3, 256, 256 | 256 | 4 | resnet50 | yes | 1.2.0 | Yes | ~3.3 |
| 3, 256, 256 | 256 | 4 | resnet50 | two scripts | 1.2.0 | No | ~5.3 (single) |
| 3, 256, 256 | 256 | 16 | resnet50 | yes | 1.2.0 | Yes | ~3.3 |
| 3, 256, 256 | 256 | 4 | resnet18 | yes | 1.2.0 | Yes | ~2.2 |
| 3, 512, 256 | 128 | 4 | resnet50 | yes | 1.4.0 | Yes | ~3.3 |
It seems to be memory related, but given that I am using 2x 2080 Ti and I have 64 GB of RAM, it's not an OOM. Any ideas what else I can test?
I am using CUDA 10.2 and tried PyTorch versions 1.2 and 1.4.
st175363 | Does your machine restart or how does the crash look for you?
If that’s the case, was PyTorch (or other libs) working before or is it a new setup?
Did you run some stress tests on the devices? |
st175364 | The machine shuts down immediately and then restarts. It is more or less a new system: It has been used for other tasks before, but not for ML. I’ve run GPU-burn 4 for an hour with Double precision and tensor cores enabled and it seemed to work fine.
The only times when it doesn’t crash is when I am using a small input (shape/bs) or when I’m not using DataParallel. Could this be related to the communication between the GPUs? |
st175365 | Seems fine:
[screenshot: Screenshot from 2020-04-21 13-45-54, 957×651]
I’m really out of ideas on what to test. The only way I can replicate the crashes or find any abnormalities is by using PyTorch DataParallel. GPU burn runs fine, nccl-test runs fine, utilizing both GPUs at the same time individually is fine. What else could I do in order to identify the cause of the problem? |
st175366 | Can you check system logs to see if there is something there mentioning why the machine restarted? Usually on Linux there might be some information in /var/log/messages.
Another option might be to post on Nvidia forums to check if there might be some system issue. |
st175367 | That’s another good idea.
@AljoSt try to grep for Xid and post the result here, please.