st176468 | I have a weird and persistent problem: everything works fine when training with one GPU, but when I move to multi-GPU training, some individual pixels in my input images become NaNs, and this of course crashes the training. It happens with random images, and there is nothing wrong with the images themselves, as I check for NaNs in my Dataset.__call__ function. Then, during training_step, the NaNs magically appear, with random inputs at random pixels. So there may be a problem inside the collate_fn?
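One way I could narrow this down is to wrap the collate function and assert right after batching, and then again right after the batch lands on the GPU (a rough sketch; the assumption that the images are the first element of the collated batch is mine):

import torch
from torch.utils.data.dataloader import default_collate

def collate_with_nan_check(batch):
    out = default_collate(batch)
    images = out[0]  # assumes images come first in each sample
    assert not torch.isnan(images).any(), "NaN introduced during collation"
    return out

# pass it to the loader: DataLoader(dataset, ..., collate_fn=collate_with_nan_check)
# and inside training_step, right after moving the batch to the GPU:
# assert not torch.isnan(images).any(), "NaN appeared after the transfer to the GPU"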
Has anyone encountered anything similar? |
st176469 | The code below works in a terminal but not in a Jupyter Notebook:
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ["CUDA_VISIBLE_DEVICES"] = "6,7"

def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    # initialize the process group
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.net1 = nn.Linear(10, 10)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5)

    def forward(self, x):
        return self.net2(self.relu(self.net1(x)))

def demo_basic(rank, world_size):
    print(f"Running basic DDP example on rank {rank}.")
    setup(rank, world_size)
    # create model and move it to GPU with id rank
    model = ToyModel().to(rank)
    ddp_model = DDP(model, device_ids=[rank])
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
    optimizer.zero_grad()
    outputs = ddp_model(torch.ones(200, 10))
    labels = torch.randn(200, 5).to(rank)
    loss = loss_fn(outputs, labels)
    print("Loss is ", loss.item())
    loss.backward()
    optimizer.step()
    cleanup()

if __name__ == '__main__':
    world_size = 2
    print("We have available ", torch.cuda.device_count(), "GPUs! but using ", world_size, " GPUs")
    #########################################################
    mp.spawn(demo_basic, args=(world_size), nprocs=world_size, join=True)
    #########################################################
Terminal output:
We have available 2 GPUs! but using 2 GPUs
Running basic DDP example on rank 1.
Running basic DDP example on rank 0.
Loss is 1.0888941287994385
Loss is 1.0354920625686646
Jupyter Notebook Output:
We have available 2 GPUs! but using 2 GPUs
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-2-52a9f6d32955> in <module>
68
69 #########################################################
---> 70 mp.spawn(demo_basic, args=(world_size), nprocs=world_size, join=True)
71 #########################################################
~/.conda/envs/praveen_tf/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in spawn(fn, args, nprocs, join, daemon, start_method)
198 ' torch.multiprocessing.start_process(...)' % start_method)
199 warnings.warn(msg)
--> 200 return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
~/.conda/envs/praveen_tf/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
156
157 # Loop on join until it returns True or raises an exception.
--> 158 while not context.join():
159 pass
160
~/.conda/envs/praveen_tf/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
111 raise Exception(
112 "process %d terminated with exit code %d" %
--> 113 (error_index, exitcode)
114 )
115
Exception: process 1 terminated with exit code 1
Can anyone explain why this is? |
st176470 | @praveen_91 Getting multiprocessing code to work with Jupyter Notebooks is tricky. You can probably try out solutions such as https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac to see if it works for you.
There is an issue open for this on ipython as well: https://github.com/ipython/ipython/issues/12396. |
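One common workaround (a sketch, not tested in your exact setup) is to move the spawned function into a regular .py module and import it from the notebook, since the 'spawn' start method has to pickle the target function by reference, and functions defined inside a notebook cell live in __main__, which the child processes cannot re-import:

# ddp_worker.py  (hypothetical module name, saved next to the notebook)
def demo_basic(rank, world_size):
    ...  # same body as in the script above

# notebook cell:
import torch.multiprocessing as mp
from ddp_worker import demo_basic

world_size = 2
mp.spawn(demo_basic, args=(world_size,), nprocs=world_size, join=True)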
st176471 | pritamdamania87:
Getting multiprocessing code to work with Jupyter Notebooks is tricky.
I am not on a Windows machine; it is a Linux server, and I have a 1-hop connection to the Jupyter Notebook. |
st176472 | Hello,
I want to modify this code examples/main.py at master · pytorch/examples · GitHub, which has 1 agent and N observers that interact with an environment at the same time through torch.distributed.rpc.
My goal is to have N agents and 1 simulator, where the simulator asks the agents to sample actions and to update when required.
For example, to select actions:
def select_actions_all_agents(self, state):
    self.current_actions = np.zeros(self.current_actions.shape, dtype=np.int32) * -1000
    futs = []
    start_time = time.time()
    for ag_rreff in self.ag_rrefs:
        # make async RPC to kick off an episode on all observers
        futs.append(
            rpc_async(
                ag_rreff.owner(),
                _call_method,
                args=(Agent.select_action, ag_rreff, self.sim_rref, state)
            )
        )
    # wait until all agents have finished selecting action
    for fut in futs:
        fut.wait()
    self.time_select_action += (time.time() - start_time)
    self.num_time_select_action += 1
However, it does not seem to reduce the inference time.
When instantiating each agent, I place it on the same GPU.
class Agent:
    def __init__(self):
        self.id = rpc.get_worker_info().id
        self.device = ("cuda" if torch.cuda.is_available() else "cpu")
        torch.manual_seed(args.seed + self.id)
        self.policy = Policy()
        self.policy.to(self.device)
I would expect each agent to operate in parallel on the GPU and the inference time to be greatly reduced.
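(One thing I am considering, sketched below and not tested, is spreading the agents over the available GPUs instead of placing them all on the same device; the modulo placement is just an illustration.)

class Agent:
    def __init__(self):
        self.id = rpc.get_worker_info().id
        # sketch: place each agent on a different GPU instead of all on the same one
        n_gpus = torch.cuda.device_count()
        self.device = torch.device("cuda", self.id % n_gpus) if n_gpus > 0 else torch.device("cpu")
        torch.manual_seed(args.seed + self.id)
        self.policy = Policy()
        self.policy.to(self.device)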
Any ideas? |
st176473 | Is this the same question as this post?
RPC does not seem to help in forward time distributed-rpc
Using RPC only decreases “backward time”.
Hello I am using distributed.rpc changing the example in examples/main.py at 01539f9eada34aef67ae7b3d674f30634c35f468 · pytorch/examples · GitHub
I have N agents and 1 simulation. I would like to select actions in parallel, if possible in different GPU’s.
However, when I execute my code the only thing that seems to reduce is the update part. Below there is the code for selecting the actions which does not seem to be reduced at all, any ideas?
from to… |
st176474 | Hello, I am trying to make my workflow run on multiple GPUs. Since torch.nn.DataParallel did not work out for me (see this discussion), I am now trying to go with torch.nn.parallel.DistributedDataParallel (DDP). However I am not sure how to use the tensorboard logger when doing distributed training. Previous questions about this topic remain unanswered (here or here).
I have set up a typical training workflow that runs fine without DDP (use_distributed_training=False) but fails when using it with the error: TypeError: cannot pickle '_io.BufferedWriter' object.
Is there any way to make this code run, using both tensorboard and DDP?
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from tensorboardX import SummaryWriter
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler
from torch.distributions import Laplace
class ToyNet(nn.Module):
def __init__(self):
super().__init__()
self.dens1 = nn.Linear(in_features=16, out_features=3)
def forward(self, x):
x = self.dens1(x)
x = Laplace(x, torch.tensor([1.0]))
return x
class RandomDataset(Dataset):
def __init__(self):
pass
def __getitem__(self, index):
sample = {'mod1': torch.rand(1, 16).float(),
'mod2': torch.rand(1, 16).float(),
'mod3': torch.rand(1, 16).float()}
label = torch.randint(0, 1, (3,)).float()
return sample, label
def __len__(self):
return 20
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("gloo", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class Experiment:
def __init__(self, distributed, dir_logs):
# initialize summary writer
if not os.path.exists(dir_logs):
os.makedirs(dir_logs)
self.logger = SummaryWriter(dir_logs)
self.model = ToyNet()
self.rank = None
self.distributed = distributed
if distributed:
self.world_size = torch.cuda.device_count()
assert self.world_size > 1, 'More than 1 GPU need to be accessible to use distributed training'
else:
self.world_size = 1
def train(exp: Experiment):
rank = exp.rank
if exp.distributed:
model = DDP(exp.model, device_ids=[rank])
sampler = DistributedSampler(RandomDataset(), num_replicas=exp.world_size, rank=rank)
else:
model = exp.model.to(rank)
sampler = None
rand_loader = DataLoader(dataset=RandomDataset(),
batch_size=8, shuffle=False, pin_memory=True, sampler=sampler, num_workers=0)
mse_loss = nn.MSELoss()
for step, (batch, label) in enumerate(rand_loader):
for modality in batch.keys():
label = label.to(rank)
batch = {k: v.to(rank) for k, v in batch.items()}
output = model(batch[modality]).mean
loss = mse_loss(output, label)
exp.logger.add_scalars(f'train/loss',
{'train_loss': loss.item()},
step)
def validate(exp):
model = exp.model.eval()
rank = exp.rank
with torch.no_grad():
if exp.distributed:
sampler = DistributedSampler(RandomDataset(), num_replicas=exp.world_size, rank=rank)
else:
sampler = None
rand_loader = DataLoader(dataset=RandomDataset(),
batch_size=8, shuffle=False, pin_memory=True, sampler=sampler, num_workers=0)
mse_loss = nn.MSELoss()
for step, (batch, label) in enumerate(rand_loader):
for modality in batch.keys():
label = label.to(rank)
batch = {k: v.to(rank) for k, v in batch.items()}
output = model(batch[modality]).mean
loss = mse_loss(output, label)
exp.logger.add_scalars(f'val/loss',
{'val_loss': loss.item()},
step)
def run_epochs(rank, exp: Experiment):
print(f"Running basic DDP example on rank {rank}.")
exp.rank = rank
if exp.distributed:
setup(rank, exp.world_size)
for epoch in range(5):
train(exp)
validate(exp)
if exp.distributed:
cleanup()
print('done!')
if __name__ == '__main__':
log_dir = 'temp_dir'
use_distributed_training = True
ex = Experiment(use_distributed_training, log_dir)
if ex.distributed:
mp.spawn(run_epochs,
args=(ex,),
nprocs=ex.world_size,
join=True)
else:
run_epochs(torch.device('cuda'), ex) |
st176475 | Solved by cryptopic in post #4. |
st176476 | The only option seems to be to log from only one process. This code runs fine:
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from tensorboardX import SummaryWriter
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler
from torch.distributions import Laplace
class ToyNet(nn.Module):
def __init__(self):
super().__init__()
self.dens1 = nn.Linear(in_features=16, out_features=3)
def forward(self, x):
x = self.dens1(x)
x = Laplace(x, torch.tensor([1.0]))
return x
class RandomDataset(Dataset):
def __init__(self):
pass
def __getitem__(self, index):
sample = {'mod1': torch.rand(1, 16).float(),
'mod2': torch.rand(1, 16).float(),
'mod3': torch.rand(1, 16).float()}
label = torch.randint(0, 1, (3,)).float()
return sample, label
def __len__(self):
return 20
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("gloo", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class Experiment:
def __init__(self, distributed: bool, dir_logs: str):
self.logger = None
self.dir_logs = dir_logs
self.model = ToyNet()
self.rank = None
self.distributed = distributed
if distributed:
self.world_size = torch.cuda.device_count()
assert self.world_size > 1, 'More than 1 GPU need to be accessible to use distributed training'
else:
self.world_size = 1
def setup_logger(self):
# initialize summary writer
if not os.path.exists(self.dir_logs):
os.makedirs(self.dir_logs)
self.logger = SummaryWriter(self.dir_logs)
def train(exp: Experiment, rand_loader: DataLoader):
rank = exp.rank
model = exp.model.to(rank)
if exp.distributed:
model = DDP(exp.model, device_ids=[rank])
mse_loss = nn.MSELoss()
for step, (batch, label) in enumerate(rand_loader):
for modality in batch.keys():
label = label.to(rank)
batch = {k: v.to(rank) for k, v in batch.items()}
output = model(batch[modality]).mean
loss = mse_loss(output, label)
if exp.logger:
exp.logger.add_scalars(f'train/loss',
{'train_loss': loss.item()},
step)
loss.backward()
def validate(exp, rand_loader: DataLoader):
rank = exp.rank
model = exp.model.eval()
with torch.no_grad():
mse_loss = nn.MSELoss()
for step, (batch, label) in enumerate(rand_loader):
for modality in batch.keys():
label = label.to(rank)
batch = {k: v.to(rank) for k, v in batch.items()}
output = model(batch[modality]).mean
loss = mse_loss(output, label)
if exp.logger:
exp.logger.add_scalars(f'val/loss',
{'val_loss': loss.item()},
step)
def run_epochs(rank: any, exp: Experiment):
print(f"Running basic DDP example on rank {rank}.")
exp.rank = rank
if not exp.distributed or (rank % exp.world_size == 0):
print(f'setting up logger for rank {rank}')
exp.setup_logger()
if exp.distributed:
setup(rank, exp.world_size)
sampler = DistributedSampler(RandomDataset(), num_replicas=exp.world_size, rank=rank)
else:
sampler = None
rand_loader = DataLoader(dataset=RandomDataset(),
batch_size=8, shuffle=False, pin_memory=True, sampler=sampler, num_workers=0)
for epoch in range(5):
if exp.distributed:
sampler.set_epoch(epoch)
train(exp, rand_loader)
validate(exp, rand_loader)
if exp.distributed:
cleanup()
if exp.logger:
exp.logger.close()
print('done!')
if __name__ == '__main__':
log_dir = 'temp_dir'
use_distributed_training = True
ex = Experiment(use_distributed_training, log_dir)
if ex.distributed:
mp.spawn(run_epochs,
args=(ex,),
nprocs=ex.world_size,
join=True)
else:
run_epochs(torch.device('cuda'), ex) |
st176477 | @Jimmy2027: I was able to make logging work by moving SummaryWriter creation from main process to child process, specifically remove
self.logger = SummaryWriter(dir_logs)
And add in run_epochs
exp.logger = SummaryWriter(exp.dir_logs)
This way we don’t have to fork the lock inside SummaryWriter (in _AsyncWriter, https://github.com/tensorflow/tensorboard/blob/master/tensorboard/summary/writer/event_file_writer.py#L163). In general, each child process should create its own SummaryWriter instead of forking it from the parent process.
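Concretely, a minimal sketch of the change based on the second code listing above (the per-rank subdirectory is just one way to keep the event files apart, not something the original code does):

# In Experiment.__init__: store only the path, do not construct the writer here
self.logger = None
self.dir_logs = dir_logs

# In run_epochs, inside the spawned child process (e.g. right after setup()):
exp.logger = SummaryWriter(os.path.join(exp.dir_logs, f"rank_{rank}"))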
Also unrelated to your issue, tensorboardX has long been deprecated and no longer actively maintained, being replaced by pytorch native support for TensorBoard since Pytorch 1.2. To use it simply replace
from tensorboardX import SummaryWriter
With
from torch.utils.tensorboard import SummaryWriter |
st176478 | Thanks @cryptopic works fine. I am surprised that there is no need to to a post-processing of the logged data, does tensorboard joins traumatically the data from all processes? |
st176479 | does tensorboard joins automatically the data from all processes?
Yes, different processes will write to different log files, and TensorBoard will aggregate all log files during visualization |
st176480 | Yes you are right. Just as a note here, when using this we have to bare in mind that, as you say, different processes will write different log files and TB will aggregate all for visualization. So here the problem is if you write multiple values for the same variables which gives you crazy charts like this:
This can be solved by using RANK variable, so that only one process will write a log file. For example, I have done something like:
if args.rank == 0:
    dir = os.path.join(args.output_dir, 'logs')
    logger = SummaryWriter(dir)
    print('writing on {}'.format(dir))
    logger.add_scalar('error', value, 0)
    logger.add_scalar('error', value, 1)
    logger.add_scalar('error', value, 2)
    logger.add_scalar('error', value, 3)
    logger.close()
I am curious how you deal with this. Is there a better way of doing it? |
st176481 | Hey Nicolas,
Wondering if you have made any progress on this. I am also facing the same issue here. |
st176482 | Hello, I have a piece of code that uses a torch.utils.data.DataLoader with a custom BatchSampler to sample batches with the same amount of objects in each class.
github.com
roman-vygon/triplet_loss_kws/blob/master/layers/datalayer.py

from functools import partial
from typing import Any, Dict, List, Optional, Union
import numpy as np
import torch
from nemo import logging
from nemo.backends.pytorch import DataLayerNM
from nemo.collections.asr.parts.dataset import AudioLabelDataset, seq_collate_fn
from nemo.collections.asr.parts.features import WaveformFeaturizer
from nemo.collections.asr.parts.perturb import AudioAugmentor
from nemo.collections.asr.parts.perturb import perturbation_types
from nemo.core.neural_types import *
from torch.utils.data.sampler import BatchSampler

class BalancedBatchSampler(BatchSampler):
    """
    BatchSampler - from a MNIST-like dataset, samples n_classes and within these classes samples n_samples.
    Returns batches of size n_classes * n_samples
    """
(file truncated)
I’m trying to use it in a multi-GPU scenario with the NeMo framework. By default, in multi-GPU mode it should be something like this:
if self._placement == DeviceType.AllGpu:
    sampler = torch.utils.data.distributed.DistributedSampler(self._dataset)
self._dataloader = torch.utils.data.DataLoader(
    dataset=self._dataset,
    sampler=sampler,
    num_workers=num_workers,
)
I’ve found some tricks to implement a custom distributed sampler, but none of them work for a custom distributed batch sampler. What can I do? (One direction I am considering is sketched below.) |
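A rough sketch of that idea (my own, not from NeMo): keep the BalancedBatchSampler as-is and wrap it so that each rank only keeps every world_size-th batch, then pass the wrapper as batch_sampler to the DataLoader.

import torch.distributed as dist

class DistributedBatchSamplerWrapper:
    """Sketch: assigns the batches of an existing batch sampler round-robin to ranks."""
    def __init__(self, batch_sampler, rank=None, world_size=None):
        self.batch_sampler = batch_sampler
        self.rank = dist.get_rank() if rank is None else rank
        self.world_size = dist.get_world_size() if world_size is None else world_size

    def __iter__(self):
        for i, batch in enumerate(self.batch_sampler):
            if i % self.world_size == self.rank:
                yield batch

    def __len__(self):
        return len(self.batch_sampler) // self.world_size

# usage: DataLoader(dataset, batch_sampler=DistributedBatchSamplerWrapper(balanced_sampler), num_workers=...)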
st176483 | I have defined some buffer variables in __init__ of the model, implementing a similar idea to MoCo.
Model:
def __init__(self, config):
    <other modules>
    self.K = 66536
    self.register_buffer("queue", torch.randn(768, config.NETWORK.MEMORY_BANK_SIZE))
    self.queue = nn.functional.normalize(self.queue, dim=0)
    self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))
During the forward pass, I gather a tensor “a” using torch.distributed. After doing that, I am able to print its shape and type, but I am unable to use or print the buffer tensors or the gathered tensor; the training script freezes there.
def forward(self, batch_input):
    with torch.no_grad():
        a = some_module(batch_input)  # compute tensor a by passing batch_input through some module
        self.concat_all_gather(a)

@torch.no_grad()
def concat_all_gather(self, tensor):
    print("PRE_{}_{}".format(torch.distributed.get_rank(), self.queue_ptr))
    tensors_gather = [torch.ones_like(tensor) for _ in range(torch.distributed.get_world_size())]:
    torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
    # Unable to access tensors_gather and buffer variables. Variables like self.K
    print(type(self.queue_ptr), self.queue_ptr.shape)
    print("POST_{}_{}".format(torch.distributed.get_rank(), self.queue_ptr))
    output = torch.cat(tensors_gather, dim=0)
    return output
Output:
PRE_3_tensor([0], device='cuda:3')
<class 'torch.Tensor'> torch.Size([1])
PRE_0_tensor([0], device='cuda:0')
<class 'torch.Tensor'> torch.Size([1])
PRE_2_tensor([0], device='cuda:2')
<class 'torch.Tensor'> torch.Size([1])
PRE_1_tensor([0], device='cuda:1')
<class 'torch.Tensor'> torch.Size([1])
When I comment out torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
Output:
PRE_2_tensor([0], device='cuda:2')
<class 'torch.Tensor'> torch.Size([1])
POST_2_tensor([0], device='cuda:2')
PRE_0_tensor([0], device='cuda:0')
<class 'torch.Tensor'> torch.Size([1])
POST_0_tensor([0], device='cuda:0')
PRE_1_tensor([0], device='cuda:1')
<class 'torch.Tensor'> torch.Size([1])
POST_1_tensor([0], device='cuda:1')
PRE_3_tensor([0], device='cuda:3')
<class 'torch.Tensor'> torch.Size([1])
POST_3_tensor([0], device='cuda:3')
Also, there is no problem when a single process/GPU is used.
What might be the problem here? Is the implementation for gathering the tensor incorrect?
NCCL_DEBUG=INFO output:
cv03:25267:25267 [0] NCCL INFO Bootstrap : Using [0]enp1s0f0:128.59.8.153<0>
cv03:25267:25267 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
cv03:25267:25267 [0] NCCL INFO NET/IB : No device found.
cv03:25267:25267 [0] NCCL INFO NET/Socket : Using [0]enp1s0f0:128.59.8.153<0>
cv03:25267:25267 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.2
cv03:25271:25271 [2] NCCL INFO Bootstrap : Using [0]enp1s0f0:128.59.8.153<0>
cv03:25269:25269 [1] NCCL INFO Bootstrap : Using [0]enp1s0f0:128.59.8.153<0>
cv03:25272:25272 [3] NCCL INFO Bootstrap : Using [0]enp1s0f0:128.59.8.153<0>
cv03:25271:25271 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
cv03:25272:25272 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
cv03:25271:25271 [2] NCCL INFO NET/IB : No device found.
cv03:25271:25271 [2] NCCL INFO NET/Socket : Using [0]enp1s0f0:128.59.8.153<0>
cv03:25271:25271 [2] NCCL INFO Using network Socket
cv03:25269:25269 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
cv03:25269:25269 [1] NCCL INFO NET/IB : No device found.
cv03:25269:25269 [1] NCCL INFO NET/Socket : Using [0]enp1s0f0:128.59.8.153<0>
cv03:25269:25269 [1] NCCL INFO Using network Socket
cv03:25272:25272 [3] NCCL INFO NET/IB : No device found.
cv03:25272:25272 [3] NCCL INFO NET/Socket : Using [0]enp1s0f0:128.59.8.153<0>
cv03:25272:25272 [3] NCCL INFO Using network Socket
cv03:25271:25457 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
cv03:25269:25458 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
cv03:25271:25457 [2] NCCL INFO Trees [0] 3/-1/-1->2->1|1->2->3/-1/-1 [1] 3/-1/-1->2->1|1->2->3/-1/-1
cv03:25267:25456 [0] NCCL INFO Channel 00/02 : 0 1 2 3
cv03:25269:25458 [1] NCCL INFO Trees [0] 2/-1/-1->1->0|0->1->2/-1/-1 [1] 2/-1/-1->1->0|0->1->2/-1/-1
cv03:25271:25457 [2] NCCL INFO Setting affinity for GPU 2 to 3ff003ff
cv03:25272:25460 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
cv03:25267:25456 [0] NCCL INFO Channel 01/02 : 0 1 2 3
cv03:25269:25458 [1] NCCL INFO Setting affinity for GPU 1 to 3ff003ff
cv03:25272:25460 [3] NCCL INFO Trees [0] -1/-1/-1->3->2|2->3->-1/-1/-1 [1] -1/-1/-1->3->2|2->3->-1/-1/-1
cv03:25272:25460 [3] NCCL INFO Setting affinity for GPU 3 to 3ff003ff
cv03:25267:25456 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
cv03:25267:25456 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
cv03:25267:25456 [0] NCCL INFO Setting affinity for GPU 0 to 3ff003ff
cv03:25269:25458 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
cv03:25271:25457 [2] NCCL INFO Could not enable P2P between dev 2(=60000) and dev 1(=1b000)
cv03:25267:25456 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 3(=61000)
cv03:25272:25460 [3] NCCL INFO Could not enable P2P between dev 3(=61000) and dev 2(=60000)
cv03:25269:25458 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 2(=60000)
cv03:25269:25458 [1] NCCL INFO Channel 00 : 1[1b000] -> 2[60000] via direct shared memory
cv03:25271:25457 [2] NCCL INFO Could not enable P2P between dev 2(=60000) and dev 3(=61000)
cv03:25267:25456 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
cv03:25271:25457 [2] NCCL INFO Channel 00 : 2[60000] -> 3[61000] via direct shared memory
cv03:25267:25456 [0] NCCL INFO Channel 00 : 0[1a000] -> 1[1b000] via direct shared memory
cv03:25272:25460 [3] NCCL INFO Could not enable P2P between dev 3(=61000) and dev 0(=1a000)
cv03:25272:25460 [3] NCCL INFO Channel 00 : 3[61000] -> 0[1a000] via direct shared memory
cv03:25269:25458 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 2(=60000)
cv03:25267:25456 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
cv03:25271:25457 [2] NCCL INFO Could not enable P2P between dev 2(=60000) and dev 3(=61000)
cv03:25272:25460 [3] NCCL INFO Could not enable P2P between dev 3(=61000) and dev 2(=60000)
cv03:25272:25460 [3] NCCL INFO Channel 00 : 3[61000] -> 2[60000] via direct shared memory
cv03:25269:25458 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
cv03:25269:25458 [1] NCCL INFO Channel 00 : 1[1b000] -> 0[1a000] via direct shared memory
cv03:25271:25457 [2] NCCL INFO Could not enable P2P between dev 2(=60000) and dev 1(=1b000)
cv03:25267:25456 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 3(=61000)
cv03:25271:25457 [2] NCCL INFO Channel 00 : 2[60000] -> 1[1b000] via direct shared memory
cv03:25272:25460 [3] NCCL INFO Could not enable P2P between dev 3(=61000) and dev 2(=60000)
cv03:25269:25458 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
cv03:25271:25457 [2] NCCL INFO Could not enable P2P between dev 2(=60000) and dev 1(=1b000)
cv03:25267:25456 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
cv03:25267:25456 [0] NCCL INFO Channel 01 : 0[1a000] -> 1[1b000] via direct shared memory
cv03:25271:25457 [2] NCCL INFO Could not enable P2P between dev 2(=60000) and dev 3(=61000)
cv03:25272:25460 [3] NCCL INFO Could not enable P2P between dev 3(=61000) and dev 0(=1a000)
cv03:25269:25458 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 2(=60000)
cv03:25271:25457 [2] NCCL INFO Channel 01 : 2[60000] -> 3[61000] via direct shared memory
cv03:25272:25460 [3] NCCL INFO Channel 01 : 3[61000] -> 0[1a000] via direct shared memory
cv03:25269:25458 [1] NCCL INFO Channel 01 : 1[1b000] -> 2[60000] via direct shared memory
cv03:25271:25457 [2] NCCL INFO Could not enable P2P between dev 2(=60000) and dev 3(=61000)
cv03:25267:25456 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
cv03:25272:25460 [3] NCCL INFO Could not enable P2P between dev 3(=61000) and dev 2(=60000)
cv03:25269:25458 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 2(=60000)
cv03:25272:25460 [3] NCCL INFO Channel 01 : 3[61000] -> 2[60000] via direct shared memory
cv03:25271:25457 [2] NCCL INFO Could not enable P2P between dev 2(=60000) and dev 1(=1b000)
cv03:25271:25457 [2] NCCL INFO Channel 01 : 2[60000] -> 1[1b000] via direct shared memory
cv03:25269:25458 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
cv03:25272:25460 [3] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
cv03:25272:25460 [3] NCCL INFO comm 0x7fe118001060 rank 3 nranks 4 cudaDev 3 busId 61000 - Init COMPLETE
cv03:25269:25458 [1] NCCL INFO Channel 01 : 1[1b000] -> 0[1a000] via direct shared memory
cv03:25267:25456 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
cv03:25267:25456 [0] NCCL INFO comm 0x7f144c001060 rank 0 nranks 4 cudaDev 0 busId 1a000 - Init COMPLETE
cv03:25267:25267 [0] NCCL INFO Launch mode Parallel
cv03:25271:25457 [2] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
cv03:25271:25457 [2] NCCL INFO comm 0x7fd448001060 rank 2 nranks 4 cudaDev 2 busId 60000 - Init COMPLETE
cv03:25269:25458 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
cv03:25269:25458 [1] NCCL INFO comm 0x7f58d4001060 rank 1 nranks 4 cudaDev 1 busId 1b000 - Init COMPLETE
native distributed, size: 4, rank: 3, local rank: 3
native distributed, size: 4, rank: 2, local rank: 2
native distributed, size: 4, rank: 1, local rank: 1
native distributed, size: 4, rank: 0, local rank: 0
This is how the distributed group is initialized:
distributed.init_process_group(
    backend='nccl',
    init_method='tcp://{}:{}'.format(master_address, master_port),
    world_size=world_size,
    rank=rank,
    group_name='mtorch') |
st176484 | The all_gather call here does look okay to me. Could you share a minimal repro script that we can try on our end (a sketch of such a script is included below)? A few possibilities leading to a freeze could be:
1. The order of all_gather calls is not the same on all ranks.
2. There could be some sort of CUDA synchronization causing a deadlock with all_gather (e.g., all_gather on node1 waiting for node2, but all_gather on node2 waiting for a CUDA synchronize or something). |
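For reference, a minimal standalone script along these lines (a sketch; adjust the backend, rendezvous settings, and device placement to your setup) can help isolate whether all_gather itself hangs:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def run(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    t = torch.full((4,), float(rank), device=torch.device("cuda", rank))
    gathered = [torch.zeros_like(t) for _ in range(world_size)]
    dist.all_gather(gathered, t)  # every rank should end up with [0., 1., 2., ...]
    print(rank, [g.mean().item() for g in gathered])
    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)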
st176485 | amogh112:
tensors_gather = [torch.ones_like(tensor) for _ in range(torch.distributed.get_world_size())]:
There is a colon after this line, could it be the cause? |
st176486 | Hey,
This post may be very much related to this post. However, there is no solution currently available so here goes:
Problem: When I run the following training routine it sometimes finishes with and sometimes without a SIGSEGV error.
Environment:
python3.9.1
torch1.7.1
cuda11.0
cluster with nodes containing 8 GPUs (used by multiple users), jobs are submitted via LSF batch system
Code:
import os
import torch
import socket
import random
import argparse
from datetime import date
from contextlib import closing
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader, Dataset
import numpy as np
import torch.nn as nn
import torch.distributed as dist
import torch.multiprocessing as mp

class my_dataset(Dataset):
    def __init__(self, n):
        self.data = torch.rand((n, 10), dtype=torch.float32)
        self.labels = torch.randint(0, 2, (n, 1))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return {'data': self.data[idx, :], 'labels': self.labels[idx]}

def find_free_port():
    """ https://stackoverflow.com/questions/1365265/on-localhost-how-do-i-pick-a-free-port-number """
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.bind(('', 0))
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        return str(s.getsockname()[1])

def arg_parse():
    desc = "Program to train a segmentation model."
    parser = argparse.ArgumentParser(description=desc)
    parser.add_argument('--devices',
                        type=str,
                        nargs='+',
                        default=['0'],
                        help='Devices to use for model training. Can be GPU IDs as in default or "cpu".')
    return parser.parse_args()

def train_model_distributed(rank_gpu, world_size, dataset, **kwargs):
    # initialize process group
    dist.init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank_gpu)
    torch.cuda.set_device(rank_gpu)
    lr = 1e-3
    batch_size = 8
    batch_size_distr = int(batch_size / world_size)
    # set up dataloader
    train_sampler = torch.utils.data.distributed.DistributedSampler(dataset, num_replicas=world_size, rank=rank_gpu)
    train_loader = DataLoader(dataset, batch_size=batch_size_distr, shuffle=False, num_workers=2, prefetch_factor=2,
                              pin_memory=True, sampler=train_sampler)
    # set up model
    model = nn.Sequential(
        nn.Linear(10, 1, bias=True),
        nn.BatchNorm1d(1),
        nn.LeakyReLU(negative_slope=0.3, inplace=True)
    )
    model.cuda(rank_gpu)
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = nn.parallel.DistributedDataParallel(model, device_ids=[rank_gpu])
    # set up training
    criterion = nn.MSELoss().cuda(rank_gpu)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.5, 0.999))
    start_epoch = 0
    train_sampler.set_epoch(start_epoch)
    model.train()
    for batch in train_loader:
        data = batch['data']
        labels = batch['labels']
        data = data.to(device=torch.device(rank_gpu), dtype=torch.float32, non_blocking=True)
        labels = labels.to(device=torch.device(rank_gpu), dtype=torch.float32, non_blocking=True)
        # forward pass
        preds = model(data)
        loss = criterion(preds, labels)
        batch_loss = loss.item()
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f'end of training loop rank: {rank_gpu}')
    dist.destroy_process_group()

def main():
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = find_free_port()
    args = arg_parse()
    # devices are submitted as $CUDA_VISIBLE_DEVICES
    devices = args.devices[0].split(',')
    # random seeding
    seed = 7612873
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    dataset = my_dataset(80)
    world_size = len(devices)
    mp.spawn(train_model_distributed, nprocs=world_size, args=(world_size, dataset))
    print('finished ddp training!')

if __name__ == '__main__':
    main()
Edit: An excerpt from the log and the error message:
end of training loop rank: 0
end of training loop rank: 1
Traceback (most recent call last):
File "/cluster/home/USER/projects/project1/run_debugging.py", line 174, in <module>
¦ main()
File "/cluster/home/USER/projects/project1/run_debugging.py", line 169, in main
¦ mp.spawn(train_model_distributed, nprocs=world_size, args=(world_size, dataset))
File "/cluster/home/USER/.pyenv/versions/torch17/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
¦ return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/cluster/home/USER/.pyenv/versions/torch17/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
¦ while not context.join():
File "/cluster/home/USER/.pyenv/versions/torch17/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 105, in join
¦ raise Exception(
Exception: process 1 terminated with signal SIGSEGV
I train on two GPUs on a single node. As mentioned above, the code sometimes executes correctly and sometimes crashes with a SIGSEGV error. This holds true even if the code is run repeatedly on the same node (both in parallel and sequentially).
I went through all the torch.distributed tutorials, many forum posts, GitHub issues, and exemplary implementations of DistributedDataParallel(). Nothing really caught my eye that helps.
I am looking forward to any potential solution, hint, advice,…
Cheers |
st176487 | I wasn’t able to reproduce this issue on my end, unfortunately. Could you ensure dist.destroy_process_group() is exiting cleanly and the SIGSEGV does not come from there?
Do your processes always exit cleanly if you remove the distributed init/DDP from your spawned subprocess? This may help narrow down the issue a bit more. |
st176488 | Thanks for the fast reply @rvarm1, I appreciate it.
I think the SIGSEGV is coming from dist.destroy_process_group() since the print(f'end of training loop rank: {rank_gpu}') is logged, but not the print('finished ddp training!').
I removed dist.init_process_group(), DistributedSampler(), SyncBatchNorm, and DistributedDataParallel() in train_model_distributed(), but kept the mp.spawn() (if that is what you meant). My jobs have been queued for a while now; I will get back to you as soon as they complete.
UPDATE: SIGSEGV occurs if I remove all the distributed training parts (see 2.). |
st176489 | Hey @rvarm1,
So I have found a workaround that seems to work:
Running the above script my_script.py via
python -X dev my_script.py --devices $CUDA_VISIBLE_DEVICES
yielded the following error:
Fatal Python error: PyEval_SaveThread: the function must be called with the GIL held, but the GIL is released (the current Python thread state is NULL)
Searching for that error, I found the following bug report. I do not really understand the details of the bug report referenced therein, but the way Python 3.9 handles the GIL seems to cause the SIGSEGV when running mp.spawn().
Workaround: Downgrading to python 3.8.7 got rid of the SIGSEGV and my_script.py runs without any errors.
If you have further suggestions how to make the script work with python 3.9, I would be curious to test it. |
st176490 | Hi there
I’m trying to run the following simple script on two machines:
import torch.distributed
import time
import argparse

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int)
    parser.add_argument("--global_rank", type=int)
    args = parser.parse_args()
    torch.cuda.set_device(args.local_rank)
    print("init")
    torch.distributed.init_process_group(
        backend="nccl",
        init_method="tcp://10.10.10.22:1191",
        world_size=2,
        rank=args.global_rank,
    )
    time.sleep(5)
    print("barrier")
    torch.distributed.barrier()  # HANGS HERE

if __name__ == "__main__":
    main()
I have two machines with the IPs 10.10.10.22 (master) and 10.10.10.25 with port 1191 open on both.
On master:
export NCCL_SOCKET_IFNAME=eno1
export NCCL_DEBUG_SUBSYS=ALL
export NCCL_DEBUG=INFO
export NCCL_IB_DISABLE=1
python test.py --local_rank 0 --global_rank 0
lambda-server4:1793990:1793990 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eno1
lambda-server4:1793990:1793990 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eno1
lambda-server4:1793990:1793990 [0] NCCL INFO Bootstrap : Using [0]eno1:10.10.10.22<0>
lambda-server4:1793990:1793990 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
lambda-server4:1793990:1793990 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
lambda-server4:1793990:1793990 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eno1
lambda-server4:1793990:1793990 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eno1
lambda-server4:1793990:1793990 [0] NCCL INFO NET/Socket : Using [0]eno1:10.10.10.22<0>
lambda-server4:1793990:1793990 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda11.1
On the other machine:
export NCCL_SOCKET_IFNAME=enp49s0f1
export NCCL_DEBUG_SUBSYS=ALL
export NCCL_DEBUG=INFO
export NCCL_IB_DISABLE=1
python test.py --local_rank 0 --global_rank 1
hyperplane1:1255526:1255526 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to enp49s0f1
hyperplane1:1255526:1255526 [1] NCCL INFO NCCL_SOCKET_IFNAME set to enp49s0f1
hyperplane1:1255526:1255526 [1] NCCL INFO Bootstrap : Using [0]enp49s0f1:10.10.10.25<0>
hyperplane1:1255526:1255526 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
hyperplane1:1255526:1255526 [1] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
hyperplane1:1255526:1255526 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to enp49s0f1
hyperplane1:1255526:1255526 [1] NCCL INFO NCCL_SOCKET_IFNAME set to enp49s0f1
hyperplane1:1255526:1255526 [1] NCCL INFO NET/Socket : Using [0]enp49s0f1:10.10.10.25<0>
hyperplane1:1255526:1255526 [1] NCCL INFO Using network Socket
hyperplane1:1266304:1266392 [0] NCCL INFO Call to connect returned Connection timed out, retrying
hyperplane1:1266304:1266392 [0] NCCL INFO Call to connect returned Connection timed out, retrying
hyperplane1:1266304:1266392 [0] include/socket.h:403 NCCL WARN Connect to 10.10.10.22<49177> failed : Connection timed out
hyperplane1:1266304:1266392 [0] NCCL INFO bootstrap.cc:95 -> 2
hyperplane1:1266304:1266392 [0] NCCL INFO bootstrap.cc:309 -> 2
hyperplane1:1266304:1266392 [0] NCCL INFO init.cc:555 -> 2
hyperplane1:1266304:1266392 [0] NCCL INFO init.cc:840 -> 2
hyperplane1:1266304:1266392 [0] NCCL INFO group.cc:73 -> 2 [Async thread]
The program hangs at the barrier call and I don’t know why. I get past the init_process_group call on both machines so I assume the connection between the two servers is fine, but at the barrier it times out.
Does anyone see the problem here? I have probably missed a configuration step, but I don’t know what.
PyTorch 1.8.1
NCCL version 2.7.8+cuda11.1 |
st176491 | Hi, what is the output of ifconfig on both your machines? If you’re using the ifconfig output to set the NCCL_SOCKET_IFNAME variables on each node, you could try setting NCCL_SOCKET_IFNAME=eno1, enp49s0f1 as per the comments on this issue: How to set NCCL_SOCKET_IFNAME · Issue #286 · NVIDIA/nccl · GitHub, and in general make sure that the two interfaces can talk to each other according to your network setup.
In addition, can you try changing barrier() to allreduce some tensors (allocated on the appropriate GPU) and check whether that works as expected? |
st176492 | Hi, thanks for your input.
ifconfig returns multiple interfaces. I picked the one that has the IP address I use to log in (10.10.10.22 and 10.10.10.25).
First machine:
br-5443622090a7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:1aff:fec8:448c prefixlen 64 scopeid 0x20<link>
ether 02:42:1a:c8:44:8c txqueuelen 0 (Ethernet)
RX packets 178271 bytes 4991588 (4.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 39 bytes 5694 (5.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:e5ff:fe0a:b382 prefixlen 64 scopeid 0x20<link>
ether 02:42:e5:0a:b3:82 txqueuelen 0 (Ethernet)
RX packets 76931 bytes 4131399 (4.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 86761 bytes 625271597 (625.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.10.22 netmask 255.255.255.0 broadcast 10.10.10.255
inet6 fe80::3eec:efff:fe03:ed46 prefixlen 64 scopeid 0x20<link>
ether 3c:ec:ef:03:ed:46 txqueuelen 1000 (Ethernet)
RX packets 206957342 bytes 291080003217 (291.0 GB)
RX errors 0 dropped 3340746 overruns 0 frame 0
TX packets 47119321 bytes 7465744617 (7.4 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc1320000-c133ffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 31035316 bytes 3161891159 (3.1 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 31035316 bytes 3161891159 (3.1 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth212871c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::e400:bff:feb2:acb0 prefixlen 64 scopeid 0x20<link>
ether e6:00:0b:b2:ac:b0 txqueuelen 0 (Ethernet)
RX packets 339096 bytes 21260368 (21.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 359220 bytes 21820500 (21.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth406ee85: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::d827:24ff:fe8c:3c0f prefixlen 64 scopeid 0x20<link>
ether da:27:24:8c:3c:0f txqueuelen 0 (Ethernet)
RX packets 1807 bytes 131066 (131.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2265 bytes 13311970 (13.3 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethb0ca500: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::384f:23ff:fe37:a7b9 prefixlen 64 scopeid 0x20<link>
ether 3a:4f:23:37:a7:b9 txqueuelen 0 (Ethernet)
RX packets 338768 bytes 21241184 (21.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 359348 bytes 21826112 (21.8 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Second machine:
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:e5ff:fe55:c1a2 prefixlen 64 scopeid 0x20<link>
ether 02:42:e5:55:c1:a2 txqueuelen 0 (Ethernet)
RX packets 990861 bytes 50900887 (50.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1872198 bytes 4152860383 (4.1 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp49s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.10.25 netmask 255.255.255.0 broadcast 10.10.10.255
inet6 fe80::3eec:efff:fe1e:dd5b prefixlen 64 scopeid 0x20<link>
ether 3c:ec:ef:1e:dd:5b txqueuelen 1000 (Ethernet)
RX packets 82711207 bytes 110825052923 (110.8 GB)
RX errors 0 dropped 489968 overruns 0 frame 0
TX packets 25754173 bytes 2401860983 (2.4 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 36992814 bytes 2445844459 (2.4 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 36992814 bytes 2445844459 (2.4 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Tried setting NCCL_SOCKET_IFNAME=eno1, enp49s0f1 on both servers but it didn’t help unfortunately.
I replaced the barrier with an allreduce like so:
x = torch.tensor([args.global_rank], dtype=torch.float, device=torch.device("cuda", 0))
torch.distributed.all_reduce(x)
print(x)
but it hangs the same way as with barrier.
The two network interfaces can talk to each other, I verified that I can listen on one machine and send a message through telnet to the other machine:
# on one server
nc -l 1191
# on the other server
telnet 10.10.10.22 1191
and this works both ways. |
st176493 | Hi All,
I’m facing this strange issue. I’m trying to make my CNN (PINet, a lane detection CNN) compatible with DistributedDataParallel training.
My problem:
The data loader fails when I use num_worker>0 and spawn my script from torch.multiprocessing.spawn().
Without multiprocessing, I do not have any issue with num_worker being > 0.
Some info on my setup:
I have one node, a 2-GPU machine.
Some info on essential config:
batch-size: 6
num-worker:2
Here is the complete stack trace.
Traceback (most recent call last):
File "/snap/pycharm-community/233/plugins/python-ce/helpers/pydev/pydevd.py", line 1477, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-community/233/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/mvish7/PINet/core/train.py", line 222, in <module>
mp.spawn(training, nprocs=world_size, join=True)
File "/home/anaconda3/envs/cam_section/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/anaconda3/envs/cam_section/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/home/anaconda3/envs/cam_section/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 119, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/anaconda3/envs/cam_section/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/mvish7/PINet/core/train.py", line 127, in training
for t_batch, sample in enumerate(train_loader):
File "/home/anaconda3/envs/cam_section/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 291, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "/home/anaconda3/envs/cam_section/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 737, in __init__
w.start()
File "/home/anaconda3/envs/cam_section/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/home/anaconda3/envs/cam_section/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/home/anaconda3/envs/cam_section/lib/python3.6/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/anaconda3/envs/cam_section/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/anaconda3/envs/cam_section/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/anaconda3/envs/cam_section/lib/python3.6/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/home/anaconda3/envs/cam_section/lib/python3.6/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: 'NoneType' object is not callable
Here is a demo code (not sufficient to reproduce the error; apologies, as I can’t make some parts of the code public):
def find_free_port():
    """ https://stackoverflow.com/questions/1365265/on-localhost-how-do-i-pick-a-free-port-number """
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.bind(('', 0))
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        return str(s.getsockname()[1])

def init_process(rank, world_size):
    # IP address of machine on which process 0 is located
    # free port on the machine on which process 0 is located
    os.environ['MASTER_ADDR'] = '192.168.178.26'
    os.environ['MASTER_PORT'] = find_free_port()
    dist.init_process_group(
        backend="nccl", init_method='env://', world_size=world_size, rank=rank)
    dist.barrier()

def setup_train_loader(train_dataset, train_sampler, world_size, rank):
    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=int(cfg.batch_size / world_size),
        num_workers=cfg.num_worker,
        shuffle=(train_sampler is None),
        sampler=train_sampler,
        drop_last=True,
        pin_memory=True,
    )
    return train_loader

def setup_val_loader(val_dataset, val_sampler, world_size, rank):
    val_loader = torch.utils.data.DataLoader(
        val_dataset,
        batch_size=int(cfg.batch_size / world_size),
        num_workers=cfg.num_worker,
        shuffle=(val_sampler is None),
        sampler=val_sampler,
        drop_last=True,
        pin_memory=True,
    )
    return val_loader

def train(rank):
    world_size = torch.cuda.device_count()
    init_process(rank, world_size)
    if dist.is_initialized():
        print(f"Rank {rank + 1}/{world_size} process initialized.\n")
    else:
        sys.exit()
    torch.manual_seed(0)
    torch.cuda.set_device(rank)
    print('Getting dataset')
    train_dataset = Generator(cfg, mode='Train')
    val_dataset = Generator(cfg, mode='Validate')
    # setting up model
    Model = PINet()
    # setting up optimizer
    # setting up scheduler
    print('Setting up dataloader')
    if dist.is_initialized():
        train_sampler = DistributedSampler(train_dataset, num_replicas=world_size, rank=rank)
    else:
        train_sampler = None
    train_loader = setup_train_loader(train_dataset, train_sampler, world_size, rank)
    if dist.is_initialized():
        val_sampler = DistributedSampler(val_dataset, num_replicas=world_size, rank=rank)
    else:
        val_sampler = None
    val_loader = setup_val_loader(val_dataset, val_sampler, world_size, rank)
    start_time = time.time()
    for epoch in range(cfg.n_epoch):
        Model.training_mode()
        if dist.is_initialized():
            train_sampler.set_epoch(epoch)
        for t_batch, sample in enumerate(train_loader):
            imgs = sample['imgs']
            labels = sample['labels']
            # regular training loop

if __name__ == '__main__':
    # training()
    # considering 1 machine and N GPUs. for multiple machines and multiple GPUs, this process gets complicated.
    if torch.cuda.is_available():
        world_size = torch.cuda.device_count()
        mp.spawn(training, nprocs=world_size, join=True)
The configurations I’m using:
Ubuntu 18.04
CUDA 10.02
Pytorch 1.6.0
Pycharm 2020.0.3
Can someone guide me here??
Best,
mvish7 |
st176494 | Solved by Vishal_Mhasawade in post #4. |
st176495 | Hi,
I managed to solve the problem. I still don’t know what went wrong with the torch data loader internally but here is what worked for me.
When I was creating the dataset class instance, I was doing something like this:
train_dataset = Generator(cfg, mode='Train')
In the __init__() function of the Generator class, I was doing something like this:
self.cfg = cfg
self.actual_batchsize = None
self.mode = mode
self.dataset_size = None
self.train_data = []
self.test_data = []
self.val_data = []
self.process_data = ProcessData(cfg)
Here ‘cfg’ was an instance of a class holding some configurations, and ‘ProcessData(cfg)’ was also creating an instance of a class.
When I removed the ‘cfg’ and ‘ProcessData’ class instances from __init__(), the data loader started working fine even with num_worker>0 and torch multiprocessing.
I don’t understand this behavior and would like to know the root cause of this problem.
best,
mvish7 |
st176496 | I am not sure about the implementation of your ProcessData. But, based on the traceback and code, I suspect some object in either cfg or process_data has a problem with being pickled by Python.
In the DataLoader, we pass dataset into worker_loop to fetch data.
github.com
pytorch/pytorch/blob/727c1d69d72b00acd7cf7aafd79923ae8caf550c/torch/utils/data/dataloader.py#L904-L918
w = multiprocessing_context.Process(
    target=_utils.worker._worker_loop,
    args=(self._dataset_kind, self._dataset, index_queue,
          self._worker_result_queue, self._workers_done_event,
          self._auto_collation, self._collate_fn, self._drop_last,
          self._base_seed, self._worker_init_fn, i, self._num_workers,
          self._persistent_workers))
w.daemon = True
# NB: Process.start() actually take some time as it needs to
# start a process and pass the arguments over via a pipe.
# Therefore, we only add a worker to self._workers list after
# it started, so that we do not call .join() if program dies
# before it starts, and __del__ tries to join but will get:
#     AssertionError: can only join a started process.
w.start()
Since the error happens when kicking off the workers, I suspect there is something within your process_data that cannot be pickled.
To verify the problem, can you try to pickle-dump one process_data object and see whether there is a problem? |
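For example, something along these lines (a sketch; the file name is arbitrary) will raise immediately if anything inside the object is unpicklable:

import pickle

with open('process_data.pkl', 'wb') as f:
    pickle.dump(ProcessData(cfg), f)  # raises if ProcessData holds an unpicklable member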
st176497 | Hey everyone. I have installed CUDA 11 + cuDNN 8.2 globally on my machine, but I need exactly PyTorch 1.4.0 for some repo to run, so I created an environment and installed:
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
When running some code in this environment I get some weird cuDNN errors (e.g. RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED or CUBLAS_STATUS_EXECUTION_FAILED). It seems the reason could be interference between the environment’s cuDNN version and the global cuDNN. The result of python -m torch.utils.collect_env in the newly created environment:
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 20.04.2 LTS
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
CMake version: Could not collect
Python version: 3.8
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 3070
Nvidia driver version: 460.73.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
Versions of relevant libraries:
[pip3] numpy==1.20.1
[pip3] torch==1.4.0
[pip3] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.3.0 py38h27cfd23_1
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.1 py38ha9443f7_2
[conda] pytorch 1.4.0 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] torchvision 0.5.0 py38_cu101 pytorch
As you can see, PyTorch is installed with cuDNN 7.6.3, but the cuDNN version reported is 8.2.0. Is this the reason for the cuDNN errors, or is the reason in the repo itself? |
st176498 | Solved by ptrblck in post #2. |
st176499 | The global cudnn and CUDA installations won’t be used, if you install the binaries.
However, since you are using an Ampere GPU, you would need to use CUDA>=11.0 with cudnn>=8, which aren’t used in the PyTorch 1.4.0 binaries. |
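To double-check which CUDA and cuDNN versions the installed binary actually uses at runtime (as opposed to the system-wide installation), something like this can be run inside the environment:

import torch
print(torch.__version__)                 # the installed PyTorch version
print(torch.version.cuda)                # CUDA version the binary was built with
print(torch.backends.cudnn.version())    # cuDNN version bundled with the binary
print(torch.cuda.get_device_name(0))     # the detected GPU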
st176500 | I see, thanks for the quick answer. Just to make sure, it is possible to build Pytorch 1.4.0 with latest CUDA from source, right? |
st176501 | No, I don’t think it will work directly, since a few changes were needed to be able to build with CUDA>=11.0 and cudnn>=8, so you could run into build issues. |
st176502 | I ran into a problem where subprocesses in Python compete for pipe resources when one subprocess tries to receive PyTorch tensors while another subprocess polls a different pipe connection.
Here is my code (run on Ubuntu 18.04):
import time
from multiprocessing import Process, Pipe
import torch

def send(conn):
    a, b = Pipe()
    data = [torch.rand(1) for _ in range(20)]
    print('start sending', time.time())
    conn.send(data)
    print('finish sending', time.time())
    while True:
        if a.poll():
            pass
        # else:
        #     time.sleep(0.1)

def recv(conn):
    while True:
        if conn.poll():
            print('start receiving', time.time())
            conn.recv()
            print('finish receiving', time.time())
            break
        else:
            time.sleep(0.1)

if __name__ == '__main__':
    recv_conn, send_conn = Pipe()
    send_proc = Process(target=send, args=(send_conn,))
    recv_proc = Process(target=recv, args=(recv_conn,))
    send_proc.start()
    recv_proc.start()
    recv_proc.join()
    send_proc.terminate()
This code takes several seconds to receive the tensors. But if I uncomment the last two lines of the send method, the tensors are received almost instantly.
The problem only appears when sending torch tensors.
Does anyone know the reason for this problem? |
st176503 | Hi Rohan, thanks for your reply. I’ve solved this problem, and I just want to know the reason for it. Can you give me some advice? |
st176504 | Consider a model trained with DDP where all parameters are updated by gradients, except one that only keeps a running (moving) average of a value generated after each forward pass during training.
Is there an easy way to implement this in DDP?
Thanks! |
st176505 | Hi,
Do you mean a separate parameter that is not part of model.parameters() which needs to keep track of a moving average across all workers?
If so, you should be able to just use all_reduce to get the average value of this variable across all workers, and use any typical running mean algorithm to update this on the desired worker. |
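A minimal sketch of that pattern, assuming the tracked value is a scalar produced on every rank after the forward pass and stored in a registered buffer (so it lands in the state_dict but receives no gradient); all names here are illustrative:
import torch
import torch.distributed as dist

@torch.no_grad()
def update_running_stat(running_stat, local_value, momentum=0.9):
    # average the per-rank scalar across all workers
    value = torch.as_tensor(local_value, dtype=torch.float32, device=running_stat.device)
    dist.all_reduce(value, op=dist.ReduceOp.SUM)
    value /= dist.get_world_size()
    # identical exponential-moving-average update on every rank
    running_stat.mul_(momentum).add_(value, alpha=1.0 - momentum)

# in the model: self.register_buffer("running_stat", torch.zeros(1))
# in the training loop: update_running_stat(ddp_model.module.running_stat, stat)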
st176506 | Hi,
I’m using DistributedDataParallel to train a simple classification model. I have some experience with distributed training, but I can’t seem to wrap my head around one specific detail.
Let me refer you to an example provided by PyTorch: examples/main.py at master · pytorch/examples · GitHub 11
Here, you will see that the accuracy is calculated by a accuracy() function, and the average accuracy is updated using the AverageMeter in the following lines.
From my understanding, this calculates the accuracy for the samples that each GPU receives, not the accuracy of samples across all GPUs. So, this function returns top1.avg, and now we head over to L248 2, where the model is saved if the accuracy from GPU rank 0 is larger than the best accuracy.
Am I going crazy, or is this intended behavior? Are we assuming that all GPUs receive the same samples, or that the accuracy on GPU0 is somehow representative of entire accuracy?
To show that my interpretation is correct, I wrote a short sandbox code that mimics the code attached above. The AverageMeter and accuracy() functions were all copied from the linked code base. This assumes a 2-class classification scenario, with batch_size=5, and I ran it on 4 GPUs:
acc_meter = AverageMeter()
model = nn.Linear(10, 2).cuda(gpu)
model = DistributedDataParallel(model, device_ids=[gpu])
a = torch.randn(5, 10).cuda(gpu)
gt = torch.randint(0, 2, (5, 1)).cuda(gpu)
outputs = model(a)
acc = accuracy(outputs, gt, topk=(1,))
acc_meter.update(acc[0], a.size(0))
print("Avg: ", acc_meter.avg)
print("Sum: ", acc_meter.sum)
print("Count: ", acc_meter.count)
return acc_meter.avg
There are two issues:
As suspected, returning acc.avg will only return the accuracy for current GPU. This means that saving a checkpoint or logging from rank=0 will only checkpoint or log the accuracy from rank=0.
The accuracy calculation is wrong. The accuracy() function divides by the batch_size, so returning acc_meter.avg divides by the batch_size again. The return value should be acc_meter.sum.
Ultimately, I would like to write code that uses DistributedDataParallel but can compute the accuracy correctly. For now, I have resorted to the following method:
Compute num_correct for all GPUs
all reduce num_correct, as well as num_samples as such: dist.all_reduce(num_correct), dist.all_reduce(num_samples). For this step, num_samples must be cast to GPU first.
Cast back CPU, then update the average meter.
To me, this does not seem like an elegant solution. Perhaps this could mean creating an AverageMeter that can handle Distributed updates? In search of a more elegant solution, I’ve looked at multiple code bases, but they all seem to do it incorrectly, as shown above. Am I completely missing something big here? If anyone has suggestions/solutions, I would love to hear from you. |
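For reference, here is a minimal sketch of steps 1-3 above, assuming a standard classification output and integer class targets of shape (N,):
import torch
import torch.distributed as dist

def global_accuracy(output, target):
    # per-rank counts
    correct = (output.argmax(dim=1) == target).sum()
    total = torch.tensor(target.size(0), device=output.device)
    # sum the counts over all ranks, then divide only once
    dist.all_reduce(correct, op=dist.ReduceOp.SUM)
    dist.all_reduce(total, op=dist.ReduceOp.SUM)
    return 100.0 * correct.item() / total.item()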
st176507 | Thanks for your question!
In the code you linked, a DistributedSampler is used to ensure that each GPU is fed different data which is the typical scenario in DDP training.
or that the accuracy on GPU0 is somehow representative of entire accuracy
In general, we take the model on an arbitrary GPU and use that to measure the accuracy, in this case, GPU 0. The actual model is the same across all GPUs though since gradients are synchronized.
The accuracy() function divides by the batch_size , so returning acc_meter.avg divides by the batch_size again. The return value should be acc_meter.sum .
Doesn't acc_meter.avg take the average accuracy across all the updates? Or does it actually also divide by the batch size? If so, this seems incorrect and an issue should be filed in the pytorch/examples repo.
Your approach if you would like to get the accuracy across all ranks seems correct. In general, maybe it would be good to have something like DistributedAverageMeter. Although in most training scenarios I’ve come across, it’s enough to evaluate the model on only one rank during training and then the entire dataset during eval. This of course assumes the data is shuffled properly and one rank doesn’t get all positive/negative samples, for example. |
st176508 | Hi, thanks for the reply.
Regarding DistributedSampler:
What you mentioned is exactly the reason why I raised the question. Since each GPU is fed different data, the accuracy on GPU0 is not necessarily the accuracy on the entire dataset. But as you mentioned, if it is enough to evaluate the model on a single rank, this shouldn’t be an issue.
Regarding AverageMeter:
The acc_meter.avg should take the average accuracy across all updates, but it actually divides by the batch size again. If you look on L340 1, images.size(0) is passed as a second parameter. On the update() function at L376, the variable n represents the number of values (the count), and the sum is divided by the count to compute the average. Thus, we end up dividing by the batch size twice: once in the accuracy function, and once in the average meter. The second parameter in L340 should be left blank. But again, this is assuming that all batch sizes are same (sometimes the last batch is smaller).
I’ve raised an issue 7 on the repo. I could open a pull request as well. The accuracy issue looks like a minor fix, but adding a DistributedAverageMeter may need some work |
st176509 | Hi, I'm working on building a model via weakly-supervised learning.
In order to generate data in a weakly-supervised manner during the training process, I'd like to dedicate one GPU to data processing and pass the processed data to the other GPUs, while those GPUs learn from it in parallel.
Is this possible? Could you give me an example or snippet?
I think data synchronization between the data-dedicated GPU and the other GPUs is the important part here.
I'm also curious whether it's possible to select the GPU device in the data loader. |
st176510 | Hello Sangyeon_Kim,
This might help you with your implementation.
Sangyeon_Kim:
I’d like to make a GPU dedicated to data processing
You can use multiple processes and assign one process to be the data processor. The data processor would use a GPU device.
Sangyeon_Kim:
pass them to other GPUs
You can communicate the processed data over RPC.
Sangyeon_Kim:
while other GPUs learn using the processed data in parallel.
Each trainer will be a process. GPU devices will be assigned to the process. Each trainer will create a DDP instance.
ddp doc Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.8.1+cu102 documentation
distributed rpc doc Distributed RPC Framework — PyTorch 1.8.1 documentation 1 |
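A rough sketch of the data-passing part over RPC; process_raw and raw_batches are placeholders for your own preprocessing function and input source, and note that without TensorPipe device maps the tensors returned over RPC should be moved to CPU first:
import torch.distributed.rpc as rpc

def process_raw(raw_batch):
    # runs in the "data_proc" process, on its dedicated GPU
    processed = raw_batch.cuda(0).float() / 255.0  # placeholder preprocessing
    return processed.cpu()

def run(rank, world_size):
    if rank == 0:
        rpc.init_rpc("data_proc", rank=rank, world_size=world_size)
    else:
        rpc.init_rpc(f"trainer{rank}", rank=rank, world_size=world_size)
        for raw_batch in raw_batches:  # placeholder iterable of raw CPU tensors
            batch = rpc.rpc_sync("data_proc", process_raw, args=(raw_batch,))
            # feed `batch` to the DDP-wrapped model here
    rpc.shutdown()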
st176511 | I have used find_unused_parameters=True, but it still reports this error.
The model structure: I implemented a model (model1) that learns weights (a mask) for the output of a ResNet model (model2), with a margin softmax loss for classification; the two models are nested (model1 contains the ResNet as model2). model1 also uses an MSE loss for feature-similarity comparison.
model1 is a siamese network, so it has two inputs, img1 and img2.
At first it reported an input/output size mismatch, but after some time-consuming changes to the input/output sizes that error no longer appears.
Now it reports the unused-parameters problem, and I have no idea what I should do.
the complete error report:
Traceback (most recent call last):
File "train.py", line 159, in <module>
main(args_)
File "train.py", line 111, in main
f_clean_masked, f_occ_masked, fc, fc_occ = backbone(img1, img2)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 606, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Traceback (most recent call last):
File "train.py", line 159, in <module>
main(args_)
File "train.py", line 111, in main
f_clean_masked, f_occ_masked, fc, fc_occ = backbone(img1, img2)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 606, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Traceback (most recent call last):
File "train.py", line 159, in <module>
main(args_)
File "train.py", line 111, in main
f_clean_masked, f_occ_masked, fc, fc_occ = backbone(img1, img2)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 606, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Traceback (most recent call last):
File "train.py", line 159, in <module>
main(args_)
File "train.py", line 111, in main
f_clean_masked, f_occ_masked, fc, fc_occ = backbone(img1, img2)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 606, in forward
if self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
terminate called without an active exception
Traceback (most recent call last):
File "/home/user1/miniconda3/envs/py377/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/home/user1/miniconda3/envs/py377/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/user1/miniconda3/envs/py377/bin/python3', '-u', 'train.py', '--local_rank=3']' died with <Signals.SIGABRT: 6>.
the weight model (model1) structure:
class MODEL1(nn.Module):
def __init__(self,network,embedding_size,batch_size,dropout,fp16):
super(MODEL1, self).__init__()
self.batch_size = batch_size
self.resnet = eval(network)(pretrained=False, num_features=embedding_size, dropout=dropout, fp16=fp16)
self.features_shape = embedding_size
# mask generator
self.sia = nn.Sequential(
# nn.BatchNorm2d(filter_list[4]),
# conv1x1(self.inplanes, planes * block.expansion, stride),
nn.Conv2d(self.features_shape, 512, kernel_size=3, stride=1, padding=1, bias=False),
nn.PReLU(self.features_shape),
nn.BatchNorm2d(self.features_shape),
nn.Sigmoid(),
)
self.fcMG = nn.Sequential(
# nn.BatchNorm1d(self.features_shape * 7 * 7),
nn.BatchNorm1d(self.features_shape),
# nn.Dropout(p=0),
# nn.Linear(self.features_shape * 7 * 7, self.features_shape),
nn.Linear(self.features_shape, self.features_shape),
nn.BatchNorm1d(self.features_shape),
)
# Weight initialization
for m in self.modules():
if (isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear)):
nn.init.xavier_uniform_(m.weight)
if m.bias is not None:
nn.init.constant_(m.bias, 0.0)
elif (isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d)):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def getFeatures(self,batch):
return self.resnet.getEmbedding(batch)
def forward(self,soruce,target):
# MG
f_clean = self.getFeatures(soruce)
f_occ = self.getFeatures(target)
f_diff = torch.add(f_clean, f_occ, alpha=-1.0)
f_diff = torch.abs(f_diff) # [batch_size, 25088]
# f_diff shape should be 4d tensor
f_diff = f_diff.unsqueeze(2).unsqueeze(3) # (batch_size, 512, 1, 1);
# f_diff = f_diff.reshape(self.batch_size, self.features_shape, 7, 7)
mask = self.sia(f_diff) # (batch_size, 512, 1, 1)
# End Siamese branch
mask = mask.reshape(self.batch_size, -1)
f_clean_masked = f_clean * mask # [batch_size, 512, batch_size, 512]
f_occ_masked = f_occ * mask
fc = f_clean_masked.view(f_clean_masked.size(0), -1) # 256*(512*7*6)
fc_occ = f_occ_masked.view(f_occ_masked.size(0), -1) # (batch_size, 1048576),(batch_size, 512)
fc = self.fcMG(fc) # expect input: 512 * 7 * 7, 25088
fc_occ = self.fcMG(fc_occ)
return f_clean_masked, f_occ_masked, fc, fc_occ
the resnet (model2) structure:
def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes,
out_planes,
kernel_size=3,
stride=stride,
padding=dilation,
groups=groups,
bias=False,
dilation=dilation)
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes,
out_planes,
kernel_size=1,
stride=stride,
bias=False)
class IBasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None,
groups=1, base_width=64, dilation=1):
super(IBasicBlock, self).__init__()
if groups != 1 or base_width != 64:
raise ValueError('BasicBlock only supports groups=1 and base_width=64')
if dilation > 1:
raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
self.bn1 = nn.BatchNorm2d(inplanes, eps=1e-05,)
self.conv1 = conv3x3(inplanes, planes)
self.bn2 = nn.BatchNorm2d(planes, eps=1e-05,)
self.prelu = nn.PReLU(planes)
self.conv2 = conv3x3(planes, planes, stride)
self.bn3 = nn.BatchNorm2d(planes, eps=1e-05,)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.bn1(x)
out = self.conv1(out)
out = self.bn2(out)
out = self.prelu(out)
out = self.conv2(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
return out
class IResNet(nn.Module):
fc_scale = 7 * 7
def __init__(self,
block, layers, num_features, dropout=0, zero_init_residual=False,
groups=1, width_per_group=64, replace_stride_with_dilation=None, fp16=False):
super(IResNet, self).__init__()
self.fp16 = fp16
self.inplanes = 64
self.dilation = 1
if replace_stride_with_dilation is None:
replace_stride_with_dilation = [False, False, False]
if len(replace_stride_with_dilation) != 3:
raise ValueError("replace_stride_with_dilation should be None "
"or a 3-element tuple, got {}".format(replace_stride_with_dilation))
self.groups = groups
self.base_width = width_per_group
self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05)
self.prelu = nn.PReLU(self.inplanes)
self.layer1 = self._make_layer(block, 64, layers[0], stride=2)
self.layer2 = self._make_layer(block,
128,
layers[1],
stride=2,
dilate=replace_stride_with_dilation[0])
self.layer3 = self._make_layer(block,
256,
layers[2],
stride=2,
dilate=replace_stride_with_dilation[1])
self.layer4 = self._make_layer(block,
512,
layers[3],
stride=2,
dilate=replace_stride_with_dilation[2])
self.bn2 = nn.BatchNorm2d(512 * block.expansion, eps=1e-05,)
self.dropout = nn.Dropout(p=dropout, inplace=True)
self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features)
self.features = nn.BatchNorm1d(num_features, eps=1e-05)
nn.init.constant_(self.features.weight, 1.0)
self.features.weight.requires_grad = False
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.normal_(m.weight, 0, 0.1)
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
if zero_init_residual:
for m in self.modules():
if isinstance(m, IBasicBlock):
nn.init.constant_(m.bn2.weight, 0)
def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
downsample = None
previous_dilation = self.dilation
if dilate:
self.dilation *= stride
stride = 1
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
nn.BatchNorm2d(planes * block.expansion, eps=1e-05, ),
)
layers = []
layers.append(
block(self.inplanes, planes, stride, downsample, self.groups,
self.base_width, previous_dilation))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(
block(self.inplanes,
planes,
groups=self.groups,
base_width=self.base_width,
dilation=self.dilation))
return nn.Sequential(*layers)
def forward(self, x):
with torch.cuda.amp.autocast(self.fp16):
x = self.conv1(x)
x = self.bn1(x)
x = self.prelu(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.bn2(x)
x = torch.flatten(x, 1)
x = self.dropout(x) # [128, 25088]
x = self.fc(x.float() if self.fp16 else x) # [128, 512]
x = self.features(x)
# axis = 1
# norm = torch.norm(x,2,axis,True)
# x = torch.div(x,norm)
return x
def getEmbedding(self,batch):
features = self.forward(batch)
return features
i have used the distributed computing in pytorch, the train.py:
def main(args):
world_size = int(os.environ['WORLD_SIZE'])
rank = int(os.environ['RANK'])
dist_url = "tcp://{}:{}".format(os.environ["MASTER_ADDR"], os.environ["MASTER_PORT"])
dist.init_process_group(backend='nccl', init_method=dist_url, rank=rank, world_size=world_size)
local_rank = args.local_rank
torch.cuda.set_device(local_rank)
if not os.path.exists(cfg.output) and rank is 0:
os.makedirs(cfg.output)
else:
time.sleep(2)
log_root = logging.getLogger()
init_logging(log_root, rank, cfg.output)
trainset = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank)
train_sampler = torch.utils.data.distributed.DistributedSampler(
trainset, shuffle=True)
train_loader = DataLoaderX(
local_rank=local_rank, dataset=trainset, batch_size=cfg.batch_size,
sampler=train_sampler, num_workers=0, pin_memory=True, drop_last=True)
dropout = 0.4 if cfg.dataset is "webface" else 0
backbone = MODEL1(network=args.network, embedding_size=cfg.embedding_size, batch_size=cfg.batch_size,
dropout=dropout, fp16=cfg.fp16).to(local_rank)
for ps in backbone.parameters():
dist.broadcast(ps, 0)
backbone = torch.nn.parallel.DistributedDataParallel(
module=backbone, broadcast_buffers=False, device_ids=[local_rank], find_unused_parameters=True)
backbone.train()
margin_softmax = eval("losses.{}".format(args.loss))()
module_partial_fc = PartialFC(
rank=rank, local_rank=local_rank, world_size=world_size, resume=args.resume,
batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes,
sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output)
opt_backbone = torch.optim.SGD(
params=[{'params': backbone.parameters()}],
lr=cfg.lr / 512 * cfg.batch_size * world_size,
momentum=0.9, weight_decay=cfg.weight_decay)
opt_pfc = torch.optim.SGD(
params=[{'params': module_partial_fc.parameters()}],
lr=cfg.lr / 512 * cfg.batch_size * world_size,
momentum=0.9, weight_decay=cfg.weight_decay)
scheduler_backbone = torch.optim.lr_scheduler.LambdaLR(
optimizer=opt_backbone, lr_lambda=cfg.lr_func)
scheduler_pfc = torch.optim.lr_scheduler.LambdaLR(
optimizer=opt_pfc, lr_lambda=cfg.lr_func)
start_epoch = 0
total_step = int(len(trainset) / cfg.batch_size / world_size * cfg.num_epoch)
if rank is 0: logging.info("Total Step is: %d" % total_step)
callback_verification = CallBackVerification(1000, rank, cfg.val_targets, cfg.rec) # 150 for debug, self.frequent = 1000
callback_logging = CallBackLogging(50, rank, total_step, cfg.batch_size, world_size, None) # verbose = 50
callback_checkpoint = CallBackModelCheckpoint(1000, rank, cfg.output)
loss = AverageMeter()
global_step = 0
grad_scaler = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None
mmd_loss = nn.MSELoss(reduction="none")
kl_loss = DistillationLoss(temp=3)
for epoch in range(start_epoch, cfg.num_epoch):
train_sampler.set_epoch(epoch)
for step, (img, label) in enumerate(train_loader):
img1, img2 = img[:,:,:,:112], img[:,:,:,112:]
global_step += 1
f_clean_masked, f_occ_masked, fc, fc_occ = backbone(img1, img2)
features1 = F.normalize(f_clean_masked)
features2 = F.normalize(f_occ_masked)
mmdLoss_v = mmd_loss(features1, features2)
mmdLoss_v = torch.mean(mmdLoss_v)
# fc7
loss_v1 = module_partial_fc.forward_backward(label, fc, opt_pfc)
loss_v2 = module_partial_fc.forward_backward(label, fc_occ, opt_pfc)
# fc1
lossAll = (loss_v1 + loss_v2 + mmdLoss_v).mean()
lossAll.backward()
clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2)
opt_backbone.step()
opt_pfc.step()
module_partial_fc.update()
opt_backbone.zero_grad()
opt_pfc.zero_grad()
loss.update(lossAll, 1)
callback_logging(global_step, loss, epoch, cfg.fp16, grad_scaler)
callback_verification(global_step, backbone)
callback_checkpoint(global_step, backbone, module_partial_fc)
scheduler_backbone.step()
scheduler_pfc.step()
dist.destroy_process_group()
Does anyone have any clue? Really appreciated.
Dear PyTorch master, my old friend @ptrblck, do you have any suggestions?
Maybe the code is too long? |
st176512 | karl7:
making sure all forward function outputs participate in calculating loss
My first suspicion was probably this, but it does look like all outputs are participating in loss computation. Although, to double check this can you share the code for PartialFC since it is used in the loss computation? |
st176513 | yes, of course sure.
Thank you so much for paying attention to my post.
The partial fc in my code was just copied from another popular open-source repository 7 for face recognition.
In fact, the major part of my code is based on this PyTorch implementation of the ArcFace face recognition method. But friends who are not familiar with ArcFace can ignore the name, since it just implements a particular type of loss function.
Looking forward to finding the cause of this problem together, and thanks again for your time! |
st176514 | Could you temporarily get rid of loss_v1 and loss_v2, and only call backward on mmdLoss_v, and skip all lines related to module_partial_fc and opt_pfc, and see if still problem persists? |
st176515 | I think the problem might be coming from this line: insightface/partial_fc.py at master · deepinsight/insightface · GitHub 22. dist.all_gather doesn’t perform autograd recording, so from an autograd point of view features is not used to produce loss and this might be causing the issue. One way to validate this would be to initialize total_features with copies of features (so its recorded as part of autograd) and see if that resolves the issue? |
st176516 | Good insight!
May I ask how exactly to "initialize total_features with copies of features"? Simply using copy.deepcopy causes an error:
RuntimeError("Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment")
Another question: do all the distributed operators skip autograd recording, e.g. dist.all_reduce and so on? Where can I find information about which collectives record autograd and which don't? I didn't see it in the official doc 1.
Thank you! |
st176517 | Looking at it again, I am a bit confused about your code. The forward_backward function returns the pair (grad, loss), but you have treated the results (loss_v1 and loss_v2) as simple tensors. Maybe you pointed to the wrong commit of the insightface repository? |
st176518 | Dear @mrzzd, thanks for your careful check. It's my fault, sorry (I forgot that I had made some modifications to the original partial_fc.py). I have pasted my partial_fc.py below:
If you discover anything new, please tell me. Thank you!
import logging
import os
import torch
import torch.distributed as dist
from torch.nn import Module
from torch.nn.functional import normalize, linear
from torch.nn.parameter import Parameter
class PartialFC(Module):
"""
Author: {Xiang An, Yang Xiao, XuHan Zhu} in DeepGlint,
Partial FC: Training 10 Million Identities on a Single Machine
See the original paper:
https://arxiv.org/abs/2010.05222
"""
@torch.no_grad()
def __init__(self, rank, local_rank, world_size, batch_size, resume,
margin_softmax, num_classes, sample_rate=1.0, embedding_size=512, prefix="./"):
super(PartialFC, self).__init__()
#
self.num_classes: int = num_classes
self.rank: int = rank
self.local_rank: int = local_rank
self.device: torch.device = torch.device("cuda:{}".format(self.local_rank))
self.world_size: int = world_size
self.batch_size: int = batch_size
self.margin_softmax: callable = margin_softmax
self.sample_rate: float = sample_rate
self.embedding_size: int = embedding_size
self.prefix: str = prefix
self.num_local: int = num_classes // world_size + int(rank < num_classes % world_size)
self.class_start: int = num_classes // world_size * rank + min(rank, num_classes % world_size)
self.num_sample: int = int(self.sample_rate * self.num_local)
self.weight_name = os.path.join(self.prefix, "rank:{}_softmax_weight.pt".format(self.rank))
self.weight_mom_name = os.path.join(self.prefix, "rank:{}_softmax_weight_mom.pt".format(self.rank))
if resume:
try:
self.weight: torch.Tensor = torch.load(self.weight_name)
logging.info("softmax weight resume successfully!")
except (FileNotFoundError, KeyError, IndexError):
self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device)
logging.info("softmax weight resume fail!")
try:
self.weight_mom: torch.Tensor = torch.load(self.weight_mom_name)
logging.info("softmax weight mom resume successfully!")
except (FileNotFoundError, KeyError, IndexError):
self.weight_mom: torch.Tensor = torch.zeros_like(self.weight)
logging.info("softmax weight mom resume fail!")
else:
self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device)
self.weight_mom: torch.Tensor = torch.zeros_like(self.weight)
logging.info("softmax weight init successfully!")
logging.info("softmax weight mom init successfully!")
self.stream: torch.cuda.Stream = torch.cuda.Stream(local_rank)
self.index = None
if int(self.sample_rate) == 1:
self.update = lambda: 0
self.sub_weight = Parameter(self.weight)
self.sub_weight_mom = self.weight_mom
else:
self.sub_weight = Parameter(torch.empty((0, 0)).cuda(local_rank))
def save_params(self):
torch.save(self.weight.data, self.weight_name)
torch.save(self.weight_mom, self.weight_mom_name)
@torch.no_grad()
def sample(self, total_label):
index_positive = (self.class_start <= total_label) & (total_label < self.class_start + self.num_local)
total_label[~index_positive] = -1
total_label[index_positive] -= self.class_start
if int(self.sample_rate) != 1:
positive = torch.unique(total_label[index_positive], sorted=True)
if self.num_sample - positive.size(0) >= 0:
perm = torch.rand(size=[self.num_local], device=self.device)
perm[positive] = 2.0
index = torch.topk(perm, k=self.num_sample)[1]
index = index.sort()[0]
else:
index = positive
self.index = index
total_label[index_positive] = torch.searchsorted(index, total_label[index_positive])
self.sub_weight = Parameter(self.weight[index])
self.sub_weight_mom = self.weight_mom[index]
def forward(self, total_features, norm_weight):
torch.cuda.current_stream().wait_stream(self.stream)
logits = linear(total_features, norm_weight)
return logits
@torch.no_grad()
def update(self):
self.weight_mom[self.index] = self.sub_weight_mom
self.weight[self.index] = self.sub_weight
def prepare(self, label, optimizer):
with torch.cuda.stream(self.stream):
total_label = torch.zeros(
size=[self.batch_size * self.world_size], device=self.device, dtype=torch.long)
dist.all_gather(list(total_label.chunk(self.world_size, dim=0)), label)
self.sample(total_label)
optimizer.state.pop(optimizer.param_groups[-1]['params'][0], None)
optimizer.param_groups[-1]['params'][0] = self.sub_weight
optimizer.state[self.sub_weight]['momentum_buffer'] = self.sub_weight_mom
norm_weight = normalize(self.sub_weight)
return total_label, norm_weight
def forward_backward(self, label, features, optimizer):
total_label, norm_weight = self.prepare(label, optimizer)
total_features = torch.zeros(
size=[self.batch_size * self.world_size, self.embedding_size], device=self.device)
dist.all_gather(list(total_features.chunk(self.world_size, dim=0)), features.data)
total_features.requires_grad = True
logits = self.forward(total_features, norm_weight)
logits = self.margin_softmax(logits, total_label)
with torch.no_grad():
max_fc = torch.max(logits, dim=1, keepdim=True)[0]
dist.all_reduce(max_fc, dist.ReduceOp.MAX)
# calculate exp(logits) and all-reduce
logits_exp = torch.exp(logits - max_fc)
logits_sum_exp = logits_exp.sum(dim=1, keepdims=True)
dist.all_reduce(logits_sum_exp, dist.ReduceOp.SUM)
# calculate prob
logits_exp.div_(logits_sum_exp)
# get one-hot
grad = logits_exp
index = torch.where(total_label != -1)[0]
one_hot = torch.zeros(size=[index.size()[0], grad.size()[1]], device=grad.device)
one_hot.scatter_(1, total_label[index, None], 1)
# calculate loss
loss = torch.zeros(grad.size()[0], 1, device=grad.device)
loss[index] = grad[index].gather(1, total_label[index, None])
dist.all_reduce(loss, dist.ReduceOp.SUM)
loss_v = loss.clamp_min_(1e-30).log_().mean() * (-1)
loss_v.requires_grad = True
return loss_v |
st176519 | Hello everyone,
I’m looking for an identifier that specifies a neural net’s layer in this 2.
Also, I would like to know if this 1 loop iterates over all layers?
Cheers, |
st176520 | Solved by rvarm1 in post #4
bucket.get_index() does not necessarily correspond to layer indices.
We use GradBuckets to store gradients in DDP’s reducer which is passed to the gradient communication hook. Buckets can contain gradients from 1 or more parameters that correspond to 1 or more layers, and the index is just the inde… |
st176521 | Hello hamidreza_ramezani
I’m looking for an identifier that specifies a neural net’s layer in this.
That is ddp communication hook. It is used during the backward pass.
Also, I would like to know if this 2 loop iterates over all layers?
That loop iterates over all the parameters stored in the bucket. The parameters in the bucket are determined at construction time.
ddp doc Distributed Data Parallel — PyTorch 1.8.1 documentation 1
ddp communication hook doc DDP Communication Hooks — PyTorch 1.8.1 documentation 1 |
st176522 | Hey Garrett,
Thanks for the reply.
“That is ddp communication hook. It is used during the backward pass.”
I see. I guess there should be a variable that represents a layer like this 1. I’m not sure if bucket_index represents a layer though. Are bucket and layer the same thing in this context? I printed the value of bucket_index and noticed that it only takes two values 0 and 1 (the application was training RN20 on CIFAR10). |
st176523 | bucket.get_index() does not necessarily correspond to layer indices.
We use GradBuckets to store gradients in DDP’s reducer which is passed to the gradient communication hook. Buckets can contain gradients from 1 or more parameters that correspond to 1 or more layers, and the index is just the index of this bucket in the list of all buckets.
In addition, there is an API bucket.get_per_parameter_tensors() (pytorch/powerSGD_hook.py at master · pytorch/pytorch · GitHub 1) that will allow you to get the tensors for a given parameter. |
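As a concrete illustration (assuming the 1.8-era GradBucket methods mentioned above; the names differ in later releases), a hook can log the bucket index and per-parameter shapes and delegate the actual communication to the stock allreduce hook:
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

def logging_allreduce_hook(process_group, bucket):
    # one bucket may hold the gradients of several parameters (i.e. several layers)
    shapes = [t.shape for t in bucket.get_per_parameter_tensors()]
    print(f"bucket {bucket.get_index()}: {len(shapes)} params, shapes={shapes}")
    # let the built-in hook perform the gradient averaging
    return default_hooks.allreduce_hook(process_group, bucket)

# ddp_model.register_comm_hook(state=None, hook=logging_allreduce_hook)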
st176524 | Hello,
I am new to GPU training, especially training in parallel on multiple GPU’s. I sometimes get lost moving data around devices and figuring out which model is where. Right now I am working with 4 V100 GPUs and training using parallel GPU training.
My issue currently is using an autoencoder for inference (i.e. generating reduced dimensionality data) after training on multiple GPUs in parallel. I need to first train or load an autoencoder, then use the ‘encode’ method of this autoencoder to generate data to train a second model. The code for my autoencoder is here:
# General neural net class
class Net(nn.Module):
"""
This class implements a decoder or encoder for the autoencoder class
"""
# initialize model
def __init__(self, n_input, n_hidden_layer, n_hidden, n_output):
super(Net, self).__init__()
# dense input layer
self.input_layer = nn.Linear(n_input, n_hidden)
# leaky ReLU nonlinear activation
self.internal_act = nn.SELU()
self.output_act = nn.Tanh()
# number of hidden layer for looping operations
self.n_hidden_layer = n_hidden_layer
# dropout layer
self.drop = nn.Dropout(p=0.001)
# loop to generate uniform dense hidden layers
for i in range(n_hidden_layer):
setattr(self, "h"+str(i), nn.Linear(n_hidden, n_hidden))
# output layer with specified shape
self.output_layer = nn.Linear(n_hidden, n_output)
# feedforward calculation
def forward(self, x):
# take in input and output to hidden layer shape
x = self.input_layer(x)
# loop through nested hidden layer + LR activation
for i in range(self.n_hidden_layer):
x = getattr(self,"h"+str(i))(self.internal_act(x))
# x = self.drop(x)
# pass through final output layer
x = self.output_layer(self.internal_act(x))
# return output normalized to (-1,1) using Tanh
return self.output_act(x)
# save generated model
def save(self):
torch.save(self.state_dict(), "ED_net.pkl")
# load existing model from pickle file
def load(self):
self.load_state_dict(torch.load("ED_net.pkl"))
# autoencoder class, inherits from NN class
class AutoEncoder(nn.Module):
"""
Implements an autoencoder using the above net class
"""
# initialize autoencoder
def __init__(self, n_input, n_hidden_layer, n_hidden, n_reduced):
super(AutoEncoder,self).__init__()
# generate two Nets, for decoder and encoder operations
self.encoder = Net(n_input, n_hidden_layer, n_hidden, n_reduced)
self.decoder = Net(n_reduced, n_hidden_layer, n_hidden, n_input)
# pass to nn.Sequential object
self.train_pipeline = nn.Sequential(self.encoder, self.decoder)
# forward propagation (encode + decode)
def forward(self, x):
return self.train_pipeline(x)
# save generated model
def save(self, fname):
if fname is not None:
torch.save(self.state_dict(), "Models/"+fname+".pkl")
else:
torch.save(self.state_dict(), "Models/Autoencoder.pkl")
# load existing model from pickle file
def load(self, fname):
if fname is not None:
self.load_state_dict(torch.load("Models/"+fname+".pkl"))
else:
self.load_state_dict(torch.load("Models/Autoencoder.pkl"))
# encode data to reduced dimensionality form
def encoder_forward(self,x):
return self.encoder(x)
# decode reduced dimensionality data
def decoder_forward(self,x):
return self.decoder(x)
###
# GENERATE AUTOENCODER MODEL
###
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# store feature number constant
NUM_FEATURES = 54
NUM_REDUCED = 20
NUM_HIDDEN = 2
SIZE_HIDDEN = 40
# instantiate model and send to device
auto = AutoEncoder(NUM_FEATURES,NUM_HIDDEN,SIZE_HIDDEN,NUM_REDUCED).to(DEVICE)
# use all GPU's of available
if torch.cuda.device_count() > 1 and not args.loadauto:
auto = nn.DataParallel(auto, device_ids=[0,1,2,3])
After I train this model, I try to use it for inference. I was having trouble with trying to use this parallel GPU model for inference, mainly because I need to call the encoder_forward function so I only use the encoder, but I can’t access that because my function is wrapped in ‘DataParallel’. What I’ve been trying to do is save the model parameters and reinitialize the model by loading those saved parameters. It was the easiest solution since I need to do lots of inference later.
# save autoencoder from CPU or GPU training
if isinstance(auto, DataParallel):
auto.module.save(args.autoencfname)
else:
auto.save(args.autoencfname)
###
# GENERATE INFERENCE MODE MODEL FOR AUTOENCODER
###
# regenerate model for inference
autoinf = AutoEncoder(NUM_FEATURES,NUM_HIDDEN,SIZE_HIDDEN,NUM_REDUCED).to(DEVICE)
print('\n### RELOADING MODEL FOR INFERENCE ###\n')
autoinf.load(args.autoencfname)
autoinf.to(DEVICE)
print('### MODEL RELOADED ###\n')
###
# CONVERT DATA TO REDUCED REPRESENTATION
###
# generate array with all timestep data
fname = os.path.join(os.getcwd(),'timestepdata_gri.npy')
timestepdata = np.load(fname)
# convert (selected) timestep data to tensor
normeddata = torch.tensor(traindata.scale_extern(timestepdata[:,3:]), dtype=torch.float32)
# infer reduced dimension data
autoinf.eval()
with torch.no_grad():
normeddata.to(DEVICE)
reduceddata = autoinf.encoder_forward(normeddata).detach().numpy()
The issue arises when I try to use autoinf.encoder_forward method with this autoencoder. I get this error:
Traceback (most recent call last):
File "/panfs/roc/groups/13/suo-yang/dikem003/DimensionReductionNLE/auto_ode/DerivativeEstimator.py", line 475, in <module>
reduceddata = autoinf.encoder_forward(normeddata).detach().numpy()
File "/panfs/roc/groups/13/suo-yang/dikem003/DimensionReductionNLE/auto_ode/AutoEncoderModels.py", line 127, in encoder_forward
return self.encoder(x)
File "/home/suo-yang/dikem003/.conda/envs/torchcombust/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/panfs/roc/groups/13/suo-yang/dikem003/DimensionReductionNLE/auto_ode/AutoEncoderModels.py", line 69, in forward
x = self.input_layer(x)
File "/home/suo-yang/dikem003/.conda/envs/torchcombust/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/suo-yang/dikem003/.conda/envs/torchcombust/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "/home/suo-yang/dikem003/.conda/envs/torchcombust/lib/python3.9/site-packages/torch/nn/functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm)
I think argument #2 means the weights, meaning the weights of my encoder model.
Here’s my thinking: I remake the autoencoder from scratch using generically saved parameters, I push this model to the GPU since my device is listed as ‘cuda’ and training worked fine anyways. Why would this not work? If my overall model is on the GPU, shouldn’t its submodules also be on the GPU? I’m having a hard time understanding why it isn’t working how I expect it to. Is this some weirdness with training a model in parallel or am I screwing up pushing the data to the GPU? |
st176525 | Solved by ptrblck in post #2
The to() operation is not an inplace operation on tensors, so you would need to reassign normeddata:
normeddata = normeddata.to(DEVICE) |
st176526 | The to() operation is not an inplace operation on tensors, so you would need to reassign normeddata:
normeddata = normeddata.to(DEVICE) |
st176527 | I want to use DDP to train my model, but I encounter this issue:
RuntimeError: initialize_buckets must NOT be called during autograd execution.
Could anyone help?
The error appears at the end of the second epoch; the first epoch runs fine.
[screenshot of the full stack trace attached] |
st176528 | Could you please share a self-contained, reproducible source file for this problem? |
st176529 | A repro would be helpful, and in addition DistributedDataParallel cannot handle multi-device runs with non-evenly divisible batch sizes · Issue #46175 · pytorch/pytorch · GitHub 22 might also provide valuable context.
Are you using single GPU per process or multiple GPUs per process? The latter mode is deprecated and no longer maintained by PyTorch and we suggest moving to single GPU per process which is the more performant and supported way going forward. |
st176530 | Hi, I’m working on a model with distributed data parallel. Referring to this 3, I got
import os
import torch
import torch.nn as nn
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
def main_worker(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12345'
dist.init_process_group('nccl', rank=rank, world_size=world_size)
...
model = DDP(model, device_ids=[rank])
...
loss = criterion(y_batch, y_pred)
dist.destroy_process_group()
if __name__ == "__main__":
mp.spawn(main_worker, args=(ngpus,), nprocs=ngpus, join=True)
And I'd like to get the returned loss from main_worker.
I tried this, but it does not work for me.
How can I get return values from a function when using distributed data parallel? |
st176531 | Solved by minsikseo in post #3
Thx!
In my case, however, I solved using torch.multiprocessing.Pipe as below:
def main_worker(rank, world_size, conn):
...
conn.send(loss)
if __name__ == "__main__":
parent_conn, child_conn = mp.Pipe()
mp.spawn(main_worker, args=(ngpus, child_conn,), nprocs=ngpus, join=True)
… |
st176532 | Hi, you can use
torch.multiprocessing.SimpleQueue to let the child processes put the results in the queue.
point-to-point communication 1 functions to send tensors between different distributed processes.
You may want to refer to this thread 3 for more explanation. |
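For the first option, a minimal sketch with a queue passed through mp.spawn (world_size and the loss variable stand in for whatever your script already defines):
import torch.multiprocessing as mp

def main_worker(rank, world_size, result_queue):
    # ... training as before ...
    result_queue.put((rank, loss.item()))  # report this rank's final loss

if __name__ == "__main__":
    result_queue = mp.SimpleQueue()
    mp.spawn(main_worker, args=(world_size, result_queue), nprocs=world_size, join=True)
    losses = dict(result_queue.get() for _ in range(world_size))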
st176533 | Thx!
In my case, however, I solved it using torch.multiprocessing.Pipe as below:
def main_worker(rank, world_size, conn):
...
conn.send(loss)
if __name__ == "__main__":
parent_conn, child_conn = mp.Pipe()
mp.spawn(main_worker, args=(ngpus, child_conn,), nprocs=ngpus, join=True)
losses = []
while parent_conn.poll():
losses.append(parent_conn.recv())
Then I can gather all losses from every worker. |
st176534 | I am wondering about the recommended approach to balancing dataset sizes across different devices while training with DDP. I have split my dataset across four GPUs, but one of them receives a single extra batch, which causes training to hang and wait indefinitely for gradient synchronization with the other devices. I have thought of a few fixes but each seems like it has a drawback:
1. Throw out the final batch to guarantee an equal number of iterations.
2. Use DDP's no_sync() context manager on the final batch. This will cause one device to have different model weights.
3. Proceed to the next epoch on the other devices and allow the first batch of epoch 2 to synchronize with this final batch from epoch 1.
I appreciate any suggestions you can give! |
st176535 | I saw people doing option 1.
People usually report this issue because their applications do not know how many batches each process will take prior to training. It seems that in your case you deterministically know which processes will take one more batch? In that case, I think we might be able to do better. For example,
option 1. randomly skipping one batch in each of the processes that takes one more input batch
option 2. using no_sync on the first batch in each of the processes that takes one more input batch. no_sync won’t lead to parameter disparities, it will just accumulate the grad in param.grad. As long as you don’t run optimizer.step() in that iteration, it should be fine. The next forward-backward pass out of the no_sync context will accumulate more grad to param.grad and consume them together. |
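To illustrate option 2, a sketch of skipping synchronization for the extra local batch (extra_inputs/extra_targets are placeholders for the batch that only this rank receives):
# extra batch on this rank only: accumulate grads locally, no allreduce, no optimizer.step()
with ddp_model.no_sync():
    loss = loss_fn(ddp_model(extra_inputs), extra_targets)
    loss.backward()

# next regular iteration: the allreduce consumes the accumulated grads together
loss = loss_fn(ddp_model(inputs), targets)
loss.backward()
optimizer.step()
optimizer.zero_grad()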
st176536 | Would something like this 56 work for you? This can be implemented in the application using allreduce. |
st176537 | I think that would accomplish it but I basically adopted approach 3 and it has been working fine. |
st176538 | Just to follow up, a feature to support uneven dataset sizes has been added natively to DDP. You can find the docs for the feature here: DistributedDataParallel — PyTorch 1.8.1 documentation 47 |
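For reference, the uneven-inputs support is exposed as a context manager on the DDP module; a minimal usage sketch (assuming a DDP-wrapped model and a per-rank dataloader that may yield different numbers of batches):
with ddp_model.join():
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(inputs), targets)
        loss.backward()
        optimizer.step()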
st176539 | I have an input, which is a list of tensors. the element of list is edge list of a graph. since the number of edge is different. I cannot convert it into a tensor. So how to do DataParallel so I can split the list. |
st176540 | Arbitrary positional and keyword inputs are allowed to be passed into DataParallel EXCEPT Tensors. All tensors will be scattered on dim specified (default 0). Primitive types will be broadcasted, but all other types will be a shallow copy and can be corrupted if written to in the model’s forward pass. |
st176541 | I have read some tutorials about DistributedDataParallel; however, I didn't find out how to correctly calculate the training loss and accuracy after each epoch.
With DataParallel we can easily calculate loss and accuracy since there is only one process. But with DDP, every GPU runs its own process and trains on its own data. The problems are:
How to evaluate the training accuracy correctly?
I follow the example here. ImageNet Example 10
Does the code redundantly calculate the same test accuracy across multiple GPUs? If so, is there any way to sample the testloader just like the trainloader and avoid the repeated computation? |
st176542 | Finally figured it out. It seems this cannot be done easily: we need to send messages from the other processes and gather the information together. |
st176543 | 111519:
We need to send message from other process and gather information together.
Yes. I use all-reduce function something like this:
import torch
import torch.distributed as dist
def global_meters_all_avg(args, *meters):
"""meters: scalar values of loss/accuracy calculated in each rank"""
tensors = [torch.tensor(meter, device=args.gpu, dtype=torch.float32) for meter in meters]
for tensor in tensors:
# each item of `tensors` is all-reduced starting from index 0 (in-place)
dist.all_reduce(tensor)
return [(tensor / args.world_size).item() for tensor in tensors] |
st176544 | For your questions:
Use the all_reduce method 23 to communicate across processes;
Yes. And if you want to run evaluation in a distributed way, just follow how the example handles the training data, e.g. create a test_sampler to distribute the data across GPUs. |
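For example (a sketch; note that DistributedSampler pads the dataset so that it splits evenly across ranks, which can slightly skew the aggregated metrics):
test_sampler = torch.utils.data.distributed.DistributedSampler(test_set, shuffle=False)
test_loader = torch.utils.data.DataLoader(
    test_set, batch_size=batch_size, sampler=test_sampler,
    num_workers=workers, pin_memory=True)
# each rank evaluates only its shard; combine the per-rank results with
# all_reduce (e.g. via global_meters_all_avg above) to get global metrics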
st176545 | Hi,
I am getting this replicas error today.
setup: windows 10, torch 1.7.1, pytorch-lightning 1.1.7 with 3 gpus.
The model training was working well with ddp and 2 gpus, on another machine (same setup w/ win10, torch 1.7.1 and pl 1.1.7)
The code crashed after printing the following error message:
self.reducer = dist.Reducer(
RuntimeError: replicas[0][0] in this process with sizes [12, 6] appears not to match sizes of the same param in process 0.
Please help! |
st176546 | This happens if the model parameters are not the same across all replicas in DDP. Have you tried printing the sizes of all the params in the model from each rank (using model.parameters())? This would be the first thing to verify mismatched sizes.
Can you also provide your code to repro? |
st176547 | Hi, thanks for the quick reply!
I am using pytorch-lightning 1.1.7 on top of torch 1.7.1. I don't call the torch APIs directly, but use lightning's Trainer and model.fit.
Lightning did print out the model parameter summary; all three report the same parameters (the counts are rounded to the nearest thousand though).
The weird thing is that training worked very well on the first machine, which has two GPUs. The problem happens on the second machine, which has 3 GPUs. But even after I removed 1 GPU from that machine, training with 2 GPUs on it still fails with the replicas error. |
st176548 | This is how the lightning Trainer is initialized and then fit is called:
"""
self.trainer = pl.Trainer(
    max_epochs=configs["max_epochs"],
    gpus=[0, 1],
    accelerator='ddp',
    weights_summary="top",
    gradient_clip_val=0.1,
    limit_train_batches=30,
    callbacks=[lr_logger, early_stop_callback, checkpoint_callback],
)
model = …
self.trainer.fit(
    model,
    train_dataloader=self.train_dataloader,
    val_dataloaders=self.val_dataloader,
)
""" |
st176549 | Are there any findings for this?
I later tried accelerator='ddp_spawn', and the replicas error seemingly disappeared.
But training with 'ddp_spawn' very easily gets stuck or crashes after a few epochs, with error messages like this:
File “D:\installed\anaconda3\envs\TorchB\lib\site-packages\pytorch_lightning\trainer\training_loop.py”, line 720, in train_step_and_backward_closure
result = self.training_step_and_backward(
File “D:\installed\anaconda3\envs\TorchB\lib\site-packages\pytorch_lightning\trainer\training_loop.py”, line 828, in training_step_and_backward
self.backward(result, optimizer, opt_idx)
File “D:\installed\anaconda3\envs\TorchB\lib\site-packages\pytorch_lightning\trainer\training_loop.py”, line 850, in backward
result.closure_loss = self.trainer.accelerator_backend.backward(
File “D:\installed\anaconda3\envs\TorchB\lib\site-packages\pytorch_lightning\accelerators\accelerator.py”, line 104, in backward
model.backward(closure_loss, optimizer, opt_idx, *args, **kwargs)
File “D:\installed\anaconda3\envs\TorchB\lib\site-packages\pytorch_lightning\core\lightning.py”, line 1158, in backward
loss.backward(*args, **kwargs)
File “D:\installed\anaconda3\envs\TorchB\lib\site-packages\torch\tensor.py”, line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “D:\installed\anaconda3\envs\TorchB\lib\site-packages\torch\autograd\__init__.py”, line 130, in backward
Variable._execution_engine.run_backward(
RuntimeError: bad allocation
Additional info: I am using the "gloo" backend, and init_method="file:/// …". |
st176550 | PyTorch DDP requires the params in all GPUs follow the same order. e.g.
GPU0: weight [4,4], bias [4]
GPU1: bias[4], weight [4,4]
This is invalid.
Try printing all the parameter names and their sizes to text files. Just make sure they follow the same order.
with open(f"params_{args.rank}.txt", "w") as fo:
    for name, param in model.named_parameters():
fo.write(f"{name}\t{param.size()}\n") |
st176551 | I want to use my custom sampler (for example, I need oversampling and I want to use this repo: https://github.com/ufoym/imbalanced-dataset-sampler 137), but I already use DistributedSampler for the DataLoader because I do multi-GPU training. How can I pass one more sampler to the DataLoader, or could I maybe do it via the Dataset? Currently I use a pretty simple ImageFolder dataset, and it would be cool if I didn't need to rewrite it. |
st176552 | Solved by sytelus in post #22
Just found DistributedSamplerWrapper from here. It allows you to wrap DistributedSampler on the top of existing sampler. Might be good feature to add in PyTorch! |
st176553 | You can implement a Wrapper class for your dataset and do the sampling there. For example, if you were to combine DistributedSampler with SubsetRandomSampler, you can implement a dataset wrapper like this:
class DistributedIndicesWrapper(torch.utils.data.Dataset):
"""
Utility wrapper so that torch.utils.data.distributed.DistributedSampler can work with train test splits
"""
def __init__(self, dataset: torch.utils.data.Dataset, indices: torch.Tensor):
self.dataset = dataset
self.indices = indices
def __len__(self):
return self.indices.size(0)
def __getitem__(self, item):
# TODO: do the sampling here ?
idx = self.indices[item]
return self.dataset[idx] |
st176554 | Thanks for the idea, danielhavir!
For everyone looking for an oversampling wrapper around a folder dataset, you can take a look at this:
class OversamplingWrapper(torch.utils.data.Dataset):
def __init__(self, folder_dataset, oversampling_size=1000):
self.folder_dataset = folder_dataset
self.oversampling_size = oversampling_size
self.num_classes = len(folder_dataset.classes)
self.class_idx_to_sample_ids = {i: [] for i in range(self.num_classes)}
for idx, (_, class_id) in enumerate(folder_dataset.samples):
self.class_idx_to_sample_ids[class_id].append(idx)
def __len__(self):
return self.num_classes * self.oversampling_size
def __getitem__(self, index):
class_id = index % self.num_classes
sample_idx = random.sample(self.class_idx_to_sample_ids[class_id], 1)
return self.folder_dataset[sample_idx[0]] |
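A short usage sketch combining the wrapper with DistributedSampler for multi-GPU training (the path, transform and batch size are placeholders):
import torchvision

folder_ds = torchvision.datasets.ImageFolder("/path/to/train", transform=train_transform)
balanced_ds = OversamplingWrapper(folder_ds, oversampling_size=1000)
sampler = torch.utils.data.distributed.DistributedSampler(balanced_ds)
loader = torch.utils.data.DataLoader(balanced_ds, batch_size=64, sampler=sampler, num_workers=4)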
st176555 | Hi,
I’ve got a similar goal for distributed training only with WeightedRandomSampler and a custom torch.utils.data.Dataset .
I have 2 classes, positive (say 100) and negative (say 1000).
Each epoch, I want all positive examples, and an equal number of random negative samples.
ds = custom_dataset(args)
weights = 1. /torch.tensor([ds.n_positive, ds.n_negative], dtype=torch.float)
samples_weights = weights[ds.all_targets]
WRsampler = WeightedRandomSampler(
weights=samples_weights,
num_samples=len(samples_weights),
replacement=True
)
But I can’t figure this out.
If I want random negative samples regardless of batch_size, wouldn’t I need a dataloader wrapper? How would I go about a dataloader wrapper?
Any hints or suggestions, much appreciated. |
st176556 | Help us @ptrblck, you’re our only hope. (and have you considered running for president 2020?) |
st176557 | Are you using nn.DistributedDataParallel as shown in this tutorial 178?
If so, I assume you are using a DistributedSampler to only use a valid subset of your dataset in each process?
In that case we should be able to add weighted sampling into the sampler, but let me know, if my assumptions are correct before diving into it.
James_Condon:
and have you considered running for president 2020?
Hahaha, president of discuss-land? |
st176558 | I’ve been using pytorch lightning with the ‘ddp’ distributed data parallel backend and torch.utils.data.distributed.DistributedSampler(ds) as the DataLoader sampler argument. To be honest, I’m unsure of the subsetting that this represents, despite having a look at the source code, but happy to learn. Also happy to refactor for a clean, robust solution. Cheers |
st176559 | I’m not familiar with lightning, but I assume it’s just using the torch.utils.data.DistributedSampler.
Based on the implementation of DistributedSampler and WeightedRandomSampler, this code might work:
import math

import torch
import torch.distributed as dist
from torch.utils.data import Sampler


class DistributedWeightedSampler(Sampler):
    def __init__(self, dataset, num_replicas=None, rank=None, replacement=True, shuffle=True):
        if num_replicas is None:
            if not dist.is_available():
                raise RuntimeError("Requires distributed package to be available")
            num_replicas = dist.get_world_size()
        if rank is None:
            if not dist.is_available():
                raise RuntimeError("Requires distributed package to be available")
            rank = dist.get_rank()

        self.dataset = dataset
        self.num_replicas = num_replicas
        self.rank = rank
        self.epoch = 0
        self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas))
        self.total_size = self.num_samples * self.num_replicas
        self.replacement = replacement
        self.shuffle = shuffle

    def calculate_weights(self, targets):
        # weight every sample by the inverse frequency of its class
        class_sample_count = torch.tensor(
            [(targets == t).sum() for t in torch.unique(targets, sorted=True)])
        weight = 1. / class_sample_count.double()
        samples_weight = torch.tensor([weight[t] for t in targets])
        return samples_weight

    def __iter__(self):
        # deterministically shuffle based on epoch
        g = torch.Generator()
        g.manual_seed(self.epoch)
        if self.shuffle:
            indices = torch.randperm(len(self.dataset), generator=g).tolist()
        else:
            indices = list(range(len(self.dataset)))

        # add extra samples to make it evenly divisible
        indices += indices[:(self.total_size - len(indices))]
        assert len(indices) == self.total_size

        # subsample: each rank keeps every num_replicas-th index
        indices = indices[self.rank:self.total_size:self.num_replicas]
        assert len(indices) == self.num_samples

        # get targets (you can alternatively pass them in __init__, if this op is expensive)
        targets = self.dataset.targets
        targets = targets[self.rank:self.total_size:self.num_replicas]
        assert len(targets) == self.num_samples

        # draw a weighted sample for this rank
        weights = self.calculate_weights(targets)
        return iter(torch.multinomial(weights, self.num_samples, self.replacement).tolist())

    def __len__(self):
        return self.num_samples

    def set_epoch(self, epoch):
        self.epoch = epoch
This DistributedWeightedSampler will get the targets of your dataset, create the weights for the current split, and use torch.multinomial to sample from them as is done in the WeightedRandomSampler.
This code is untested and I just hacked it together, so please let me know if it works at all or if you see any issues.
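For reference, a minimal usage sketch (equally untested; the dummy dataset only exists to provide the .targets attribute the sampler reads, and num_replicas/rank are hard-coded so the sketch runs outside a launched job — drop them in real training):

import torch
from torch.utils.data import DataLoader, TensorDataset


class DummyDataset(TensorDataset):
    def __init__(self, data, targets):
        super().__init__(data, targets)
        self.targets = targets  # read by DistributedWeightedSampler


data = torch.randn(1000, 10)
targets = torch.cat([torch.zeros(900, dtype=torch.long), torch.ones(100, dtype=torch.long)])
dataset = DummyDataset(data, targets)

sampler = DistributedWeightedSampler(dataset, num_replicas=2, rank=0)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(2):
    sampler.set_epoch(epoch)  # different (but deterministic) draw each epoch
    for x, y in loader:
        pass  # training step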
st176560 | Above and beyond, as usual. Just need to verify what’s going through, but pretty sure this has done the trick. Thanks a million!
st176561 | I have a slightly different but related question here. Is it possible to have a SequentialSampler followed by a DistributedSampler? I am not sure if this would work when using multi GPUs as data could have been split randomly already.
The reason I am asking this question is that I would like to create a single dataloader from multiple data sources, and I would like each mini-batch of the dataloader to contain only one kind of data. This can easily be done if I create one dataloader for every single data source (and when training go through each of them one by one), but for my purpose here I am wondering if something similar can be achieved by only using one dataloader for all data sources. |
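To make that baseline concrete, this is roughly what I mean by one dataloader per source (simplified sketch — ds_a/ds_b and the hard-coded num_replicas/rank are placeholders):

import itertools

import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

ds_a = TensorDataset(torch.randn(800, 10), torch.zeros(800, dtype=torch.long))
ds_b = TensorDataset(torch.randn(400, 10), torch.ones(400, dtype=torch.long))

loaders = []
for ds in (ds_a, ds_b):
    sampler = DistributedSampler(ds, num_replicas=2, rank=0)  # per-source, per-rank split
    loaders.append(DataLoader(ds, batch_size=32, sampler=sampler))

# every batch comes from exactly one source; sources are simply interleaved
for batch in itertools.chain.from_iterable(itertools.zip_longest(*loaders)):
    if batch is None:  # the shorter loader ran out
        continue
    x, y = batch
    pass  # training step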
st176562 | Thanks again. This is working great, but it seems to be responsible for processes hanging / getting stuck on the GPU when the main script is terminated or ‘early-stopped’.
~python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 18 leaked semaphores to clean up at shutdown
Any hints as to how this can be cleaned up?
Cheers |
st176563 | Which code hangs or yields the semaphore warning?
Is it the DistributedWeightedSampler?
st176564 | If the main training script gets early-stopped or I keyboard-interrupt it, the nvidia-smi memory usage on one of my (two) GPUs stays almost full and the volatile GPU utilization stays at 100%. The semaphore warning appears after pkill python. It doesn’t seem to happen if I’m using any other sampler.
st176565 | Thanks for the notice. As I haven’t tested the code, it might yield these side effects.
Could you take a look at this issue and see if this approach would better fit your needs?
st176566 | Cheers. Pretty sure this was a rookie error, forgot a torch.no_grad() for my val loop. Hasn’t been an issue since adding that in. Thanks. |
st176567 | Superb, thanks very much for this @ptrblck.
Maybe I’m missing something here, but in the __iter__ function, shouldn’t
targets = targets[indices]
or similar, rather than what we currently have:
targets = targets[self.rank:self.total_size:self.num_replicas]
otherwise don’t we just leave indices hanging in the breeze, not doing anything? Just realised we still need a way to map the selected targets back to the original dataset indices. I will post when I have something…
thanks again for your awesomeness |
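Edit: as a rough, untested sketch of what I mean — a drop-in replacement for the __iter__ of the DistributedWeightedSampler above that weights only this rank’s targets and maps the draws back to dataset indices:

def __iter__(self):
    # deterministically shuffle based on epoch (same as before)
    g = torch.Generator()
    g.manual_seed(self.epoch)
    if self.shuffle:
        indices = torch.randperm(len(self.dataset), generator=g).tolist()
    else:
        indices = list(range(len(self.dataset)))

    # pad and take this rank's share, as before
    indices += indices[:(self.total_size - len(indices))]
    indices = indices[self.rank:self.total_size:self.num_replicas]

    # weight the targets of *these* indices ...
    targets = torch.as_tensor(self.dataset.targets)[indices]
    weights = self.calculate_weights(targets)

    # ... and map the multinomial draws back to dataset indices
    sampled = torch.multinomial(weights, self.num_samples, self.replacement)
    return iter(torch.as_tensor(indices)[sampled].tolist())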