id | text |
---|---|
st176568 | Yes, you are probably right. indices has already the extra samples and also is already subsampled, so that it should be used instead of indexing the targets directly.
I’ll observe the linked issue on GitHub, as it should provide a cleaner way of implementing this behavior. |
st176569 | Thanks, I just looked at the linked issue and agree that it is exactly what I was looking for. It looks like it is not ready yet, so here is what I will use in the meantime:
def __iter__(self):
    # deterministically shuffle based on epoch
    g = torch.Generator()
    g.manual_seed(self.epoch)
    if self.shuffle:
        indices = torch.randperm(len(self.dataset), generator=g).tolist()
    else:
        indices = list(range(len(self.dataset)))
    # add extra samples to make it evenly divisible
    indices += indices[:(self.total_size - len(indices))]
    assert len(indices) == self.total_size
    # subsample
    indices = indices[self.rank:self.total_size:self.num_replicas]
    assert len(indices) == self.num_samples
    # get targets (you can alternatively pass them in __init__, if this op is expensive)
    targets = self.dataset.targets
    # select only the wanted targets for this subsample
    targets = torch.tensor(targets)[indices]
    assert len(targets) == self.num_samples
    # randomly sample this subset, producing balanced classes
    weights = self.calculate_weights(targets)
    subsample_balanced_indices = torch.multinomial(weights, self.num_samples, self.replacement)
    # now map these target indices back to the original dataset index...
    dataset_indices = torch.tensor(indices)[subsample_balanced_indices]
    return iter(dataset_indices.tolist()) |
st176570 | Interesting thread.
If we want to do a simple random sampler, wouldn't something like this work? We just take a random sample of our whole dataset at the point of dataset creation, then wrap our Dataset class in DistributedSampler, which would take care of splitting it among processes.
We reload the Dataset every epoch so that a new sample is drawn.
class Dataset():
    def __init__(self, seed, sample_size, *args, **kwargs):
        random.seed(seed)
        random.shuffle(self.ids)
        self.ids = self.ids[:sample_size]

dataset = Dataset()
dataset = torch.utils.data.distributed.DistributedSampler(dataset) |
st176571 | This Dataset would sample the same data points in each process, wouldn't it?
The original DistributedSampler will split the indices such that each process would draw its own samples. |
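A toy illustration of why that is the case (not the real implementation; shuffling, padding and the epoch seed are omitted):
dataset_len, world_size = 10, 2
indices = list(range(dataset_len))
per_rank = [indices[rank::world_size] for rank in range(world_size)]
# per_rank == [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]] -> each process draws a disjoint subset,
# whereas seeding random.shuffle identically in every process gives every rank the same ids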
st176572 | hi everyone,
I met a strange illegal memory access error during the evaluation step. It happens randomly. I don't think there is anything wrong in my evaluation code.
I was training on 4 GPUs (Tesla V100) with PyTorch 1.6, and at the same time I was training with mixed precision using apex.
The error is shown in the attached screenshot (error2, 1112×368; omitted here).
my evaluation code is
def evaluate(args, model, features, tag="dev"):
    dev_sampler = torch.utils.data.distributed.DistributedSampler(features)
    dataloader = DataLoader(features, batch_size=args.test_batch_size, num_workers=args.n_gpu,
                            pin_memory=True, shuffle=False, collate_fn=collate_fn, drop_last=True,
                            sampler=dev_sampler)
    preds = []
    labels = []
    tensor_list_pred = [torch.zeros([int(len(dataloader)*args.test_batch_size), args.num_class],
                                    dtype=torch.float32, device=args.device) for _ in range(args.n_gpu)]
    tensor_list_label = [torch.zeros([int(len(dataloader)*args.test_batch_size), args.num_class],
                                     dtype=torch.float32, device=args.device) for _ in range(args.n_gpu)]
    for batch in dataloader:
        model.eval()
        inputs = {'input_ids': batch[0].to(args.device),
                  'attention_mask': batch[1].to(args.device),
                  'entity_pos': batch[3],
                  'hts': batch[4],
                  }
        label = np.array(batch[2])
        label_t = torch.from_numpy(label)
        label_t = label_t.squeeze(1)
        labels.append(label_t.to(args.device))
        with torch.no_grad():
            pred, *_ = model(**inputs)
            pred = pred.cpu().numpy()
            pred[np.isnan(pred)] = 0
            pred = torch.from_numpy(pred)
            preds.append(pred.to(args.device))
    label_s = torch.cat(labels, axis=0)
    pred_s = torch.cat(preds, axis=0)
    dist.all_gather(tensor_list_pred, pred_s)
    dist.all_gather(tensor_list_label, label_s)
    labels = torch.cat(tensor_list_label, axis=0)
    preds = torch.cat(tensor_list_pred, axis=0)
    labels_c = labels.cpu()
    preds_c = preds.cpu()
    r, c = preds_c.size()
    _, index = torch.topk(preds_c, c, dim=1)
    pred_nr = index[:, 0].numpy()
    _, index_label = torch.topk(labels_c, c, dim=1)
    y_true = index_label[:, 0].numpy()
    f1_macro = f1_score(y_true, pred_nr, average='macro')
    f1_micro = f1_score(y_true, pred_nr, average='micro')
    f1_weighted = f1_score(y_true, pred_nr, average='weighted')
    output = {
        tag + "_F1_micro": f1_micro * 100,
        tag + "_F1_macro": f1_macro * 100,
        tag + "_F1_weighted": f1_weighted * 100,
        tag + "_class_report": classification_report(y_true, pred_nr),
    }
    return f1_weighted, output
I have been stuck here for several days. Can anyone tell me what causes this error? Could you give me some suggestions for fixing this problem?
Thanks in advance |
st176573 | I don't see where apex is used, but note that we recommend using the native mixed-precision implementation via torch.cuda.amp as well as the native DistributedDataParallel implementation. |
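A minimal sketch of that suggested native setup; the model, optimizer, loader, criterion and local_rank names are placeholders and the process group is assumed to be initialized already:
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

model = DDP(model.cuda(local_rank), device_ids=[local_rank])
scaler = torch.cuda.amp.GradScaler()
for inputs, targets in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = criterion(model(inputs.cuda(local_rank)), targets.cuda(local_rank))
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()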
st176574 | Instead of apex I have now used torch.cuda.amp for the mixed-precision implementation, but the same error is shown. The variable label is on the CPU. In order to accumulate the label values from the 4 different processes I used the torch API torch.distributed.all_gather, but that API must run on CUDA tensors, so I transferred the label values from CPU to GPU. After gathering all the label values, I transferred them from GPU back to CPU to calculate the metric (F1 score) with sklearn.metrics.f1_score. But when the code reaches the line labels_c = labels.cpu(), the error appears again.
I don't know what causes this. |
st176575 | Could you post an executable code snippet so that we could reproduce this error, please? |
st176576 | I have the following code below using torch.multiprocessing.spawn to parallelize over multiple GPUs:
import numpy as np
import torch
from torch.multiprocessing import Pool, set_start_method, spawn

X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X)

def X_power_func(j):
    X_power = X.cuda()**j
    return X_power

if __name__ == '__main__':
    results = spawn(X_power_func, range(4), nprocs=1)

results
But I am getting this error below when I run the code:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-9-97eb990d7396> in <module>()
12
13 if __name__ == '__main__':
---> 14 results = spawn(X_power_func, range(4), nprocs=1)
15
16 results
2 frames
/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py in join(self, timeout)
111 raise Exception(
112 "process %d terminated with exit code %d" %
--> 113 (error_index, exitcode)
114 )
115
Exception: process 0 terminated with exit code 1
What have I done wrong in my code? |
st176577 | Solved by iffiX in post #21
then you may use a for-loop to divide X_prime into smaller chunks, just as your old code was doing; just don't split them too finely, e.g. into row-by-row operations.
Time for space, or space for time. |
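A minimal sketch of that chunked accumulation idea; the chunk size and function name are illustrative and not taken from the thread's code:
import torch

def summed_densities(X_prime, chunk_size=1024):
    # accumulate the sum of per-row outer products chunk by chunk, so only
    # `chunk_size` density matrices are materialized at any one time
    cols = X_prime.size(1)
    density_sum = torch.zeros(cols, cols, dtype=X_prime.dtype, device=X_prime.device)
    for chunk in torch.split(X_prime, chunk_size):
        densities = torch.matmul(chunk.unsqueeze(2), chunk.unsqueeze(1))  # (chunk, cols, cols)
        density_sum += densities.sum(dim=0)
    return density_sum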
st176578 | First, please read the API documentation carefully:
torch.multiprocessing.spawn(fn, args=(), nprocs=1, join=True, daemon=False, start_method='spawn')
Parameters
fn (function) – …The function is called as fn(i, *args), where i is the process index and args is the passed through tuple of arguments.
args (tuple) – Arguments passed to fn.
nprocs (int) – Number of processes to spawn.
Returns
None if join is True, ProcessContext if join is False
First, you should call your function as:
if __name__ == '__main__':
    # start with 4 processes
    # your original method will invoke your function as X_power_func(0, 0, 1, 2, 3)
    spawn(X_power_func, nprocs=4)
Secondly, do not put results here:
if __name__ == '__main__':
    results = spawn(X_power_func, range(4), nprocs=1)

results
Because your subprocesses will try to access "results", but since they are not "main", "results" is not defined there.
Thirdly, spawn will not return results.
Returns
None if join is True, ProcessContext if join is False
In summary, please re-read the documentation; it takes a lot of time to write. |
st176579 | Many thanks @iffiX for input on this. I am new to torch.multiprocessing.spawn and PyTorch in general, so I guess I just got confused when I read the documentation on it.
With your first point above, are you missing args=()? I.e.:
if __name__ == '__main__':
    # start with 4 processes
    # your original method will invoke your function as X_power_func(0, 0, 1, 2, 3)
    spawn(X_power_func, args=(0, 1, 2, 3), nprocs=4)
I don't understand why we must start with 4 processes.
With your second point, if the results are kept within main, how would I then output the results outside of main?
With your third point, can I ask what is ProcessContext? If I want to return results, does that mean I need to set join=False?
I am sorry for having so many questions. Would really be grateful if you could help. |
st176580 | Well, being new is not an excuse!
Yes, I am missing args, because torch.multiprocessing will invoke your function as X_power_func(rank); the default argument is the rank of the started process.
If you want to print results outside of main, you may print them in the invoked function:
def X_power_func(j):
    X_power = X.cuda()**j
    print(X_power)
    return X_power
No, in order to properly return results, you should either use torch.multiprocessing.pool or pass a pipe object or anything else that can be used to perform inter-process communication.
Torch multiprocessing module is a very thin wrapper of the original multiprocessing module, it basically just registers some customized serializers. |
st176581 | Or you could post your detailed requirements so that we can work out a proper solution for you. |
st176582 | Yeah you’re right @iffiX, I should mention here what I actually want to do.
I am trying to figure out a way to parallelize over multiple GPUs on non-neural net computations in PyTorch. More specifically, I have developed an estimator (non-neural net) in Scikit-learn and the speed performance is slow when a certain hyperparameter is increased. To solve this, I am re-writing the estimator in PyTorch so that I can make use of GPU processing and hopefully multiple GPUs as well.
I have posted this question (1-2 days ago) here in the PyTorch Forums, on Stack Overflow, on the PyTorch GitHub and in multiple Reddit channels, and I haven't gotten a reply yet; no one seems to know the full solution. In fact, what I got is the opposite: a lot of folks over at Reddit wanted to know how this is done too. So far, I feel like you're the only one who seems to know.
Anyhow, I did initially try using torch.multiprocessing.pool. The MRE code is below:
import numpy as np
import torch
from torch.multiprocessing import Pool, set_start_method

X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X)

def X_power_func(j):
    X_power = X.cuda()**j
    return X_power

if __name__ == '__main__':
    set_start_method('spawn', force=True)
    with Pool(processes = 2) as p:  # Parallelizing over 2 GPUs
        results = p.map(X_power_func, range(4))
However, when I run this code, it hangs or keeps running forever without any errors.
When I removed set_start_method('spawn', force=True), the code ran properly and gave me the results, but this only works when I run the code once. When I ran the code again in subsequent runs, I got the error RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method.
Someone on Reddit suggested that I should use torch.multiprocessing.spawn, which led me back here to the PyTorch Forums with this post. |
st176583 | This error is here because:
X = torch.DoubleTensor(X)

def X_power_func(j):
    X_power = X.cuda()**j
    return X_power
You are referencing a global variable here. Since "fork" maps the memory of the forker into the forkee, you can pass this function to subprocesses correctly; however, "fork" is not compatible with CUDA, and therefore the error is thrown.
Could you please show the full code of your estimator? Writing parallel programs in Python can be a real pain; with your full code we can choose the most efficient and simplest solution. |
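A minimal sketch of one way around the global-variable problem, passing the tensor explicitly through spawn's args; this is only an illustration, not the solution the thread eventually settles on:
import torch
from torch.multiprocessing import spawn

def X_power_func(rank, X):
    # each spawned process receives its rank plus the explicitly passed tensor
    print(rank, (X.cuda() ** rank).sum().item())

if __name__ == '__main__':
    X = torch.ones(3, 4, dtype=torch.float64)
    spawn(X_power_func, args=(X,), nprocs=2)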
st176584 | Ok, the source code for the estimator is here:
github.com
leockl/helstrom-quantum-centroid-classifier/blob/master/hqc/hqc.py
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
from sklearn.utils.multiclass import check_classification_targets
from sklearn.preprocessing import normalize
from joblib import Parallel, delayed
class HQC(BaseEstimator, ClassifierMixin):
    """The Helstrom Quantum Centroid (HQC) classifier is a quantum-inspired supervised
    classification approach for data with binary classes (ie. data with 2 classes only).

    Parameters
    ----------
    rescale : int or float, default = 1
        The dataset rescaling factor. A parameter used for rescaling the dataset.

    encoding : str, default = 'amplit'
        The encoding method used to encode vectors into quantum densities. Possible values:
        'amplit', 'stereo'. 'amplit' means using the amplitude encoding method. 'stereo' means
        using the inverse of the standard stereographic projection encoding method. Default set
        to 'amplit'.
This file has been truncated.
It has already been parallelized for multiple CPUs but it’s still slow when the hyperparameter n_copies is increased. This hyperparameter controls the number of times a Kronecker tensor product is performed, which will result in multiplication of very large matrices when it is increased, therefore slowing down the speed performance when this hyperparameter is large. |
st176585 | You can use share_memory_() and torch.multiprocessing.SimpleQueue to implement IPC. E.g.:
import numpy as np
import torch
import torch.multiprocessing as mp

def func(rank, x, p2c, c2p):
    x_power = x.to(rank) ** rank
    c2p.put(x_power)
    # citing multiprocessing doc: Unlike CPU tensors, the
    # sending process is required to keep the original tensor
    # as long as the receiving process retains a copy of
    # the tensor. The refcounting is implemented under the
    # hood but requires users to follow the next best practices.
    p2c.get()
    print(f"child-{rank} done")

if __name__ == '__main__':
    nprocs = 2
    x = torch.ones(2, 2)
    x.share_memory_()
    ctx = mp.get_context('spawn')
    c2p, p2c = ctx.SimpleQueue(), ctx.SimpleQueue()
    ps = [ctx.Process(target=func, args=(rank, x, p2c, c2p)) for rank in range(nprocs)]
    [p.start() for p in ps]
    tensors = [c2p.get() for _ in range(nprocs)]
    print(tensors)
    del tensors
    for p in ps:
        p2c.put(0)
        p.join()
    print("parent done") |
st176586 | I have read your code:
So the first outer loop (function) is, at L148:
def X_prime_class_split_func(j):
What's the shape and data type of X_prime_class_split[j]? Maybe we could represent it as a tensor.
From L162-L166:
for k in range(m_class_split):
    # Encode vectors into quantum densities
    X_prime_class_split_each_row = X_prime_class_split_jth[k, :]
    density_each_row = np.dot(X_prime_class_split_each_row.reshape(-1, 1),
                              X_prime_class_split_each_row.reshape(1, -1))
You could definitely vectorize this inner loop.
From L171-L174:
else:
    density_each_row_copy = density_each_row
    for u in range(self.n_copies - 1):
        density_each_row = np.kron(density_each_row, density_each_row_copy)
There is no efficient way to parallelize the Kronecker product over n_copies since this code is iterative and strongly serial. But you could use PyTorch's einsum function to calculate it:
def kronecker(A, B):
    return torch.einsum("ab,cd->acbd", A, B).view(A.size(0)*B.size(0), A.size(1)*B.size(1))
The scale (computation intensity) of your code does not suit process-based parallelism, and thread-based parallelism is not good in your case either. I suggest you "vectorize" your code: use tensor operations instead of for loops; it would be at least 10 times more efficient. |
st176587 | Hi @iffiX,
What’s the shape and data type of X_prime_class_split[j] ?
X_prime_class_split[j] is a 2d numpy array. The way I have CPU-parallelized my code is two-fold. First, it parallelizes over the 2 binary classes. Secondly, for each binary class, the dataset is split into batches and parallelization is performed over the batches. X_prime_class_split[j] is just a dataset batch, therefore it is a 2d numpy array.
Maybe we could represent it as a tensor.
I have actually already converted all of the numpy functions in my code into PyTorch functions. It is here in my Github. Of course, this code is not fully working properly yet because of the issue with torch.multiprocessing.Pool. So in this code, X_prime_class_split[j] is a PyTorch tensor.
You could definetly vectorize this inner loop.
Thanks for the tip! I didn't realize chunks of code could also be vectorized (using numpy.vectorize). I googled around but couldn't find the PyTorch equivalent of numpy.vectorize. If I were to vectorize this part of the code, do you know how I could do it using PyTorch tensors?
There is no efficient way to parallelize kronecker product over n_copies since these code are iterative and strongly serial.
Agree, there is no way to efficiently parallelize the Kronecker product because of its iterative and serial nature. As mentioned above, my code is actually parallelized over the 2 binary classes and batches of the dataset, so I am not looking to parallelize the Kronecker product.
But you could use the einsum function of pytorch to calculate…
I have actually already done this in my Github link above! This gives me some comfort knowing that I am on the same page as you
I guess if I can’t parallelize my code over multiple GPUs, plan B would be to rewrite my code to just use 1 GPU processing.
Many many thanks again for having a look @iffiX. Really appreciate it heaps! |
st176588 | Many thanks @mrshenli.
Your code looks interesting and I am new to PyTorch, so I will have to investigate it line by line what is it doing to see if it helps in what I want to do. |
st176589 | Hello, I am optimizing your code, what’s the usual size of n_samples and n_features? |
st176590 | If n_samples * n_features < 1e8 and you have a medium to good GPU (>= GTX 1080), then there is no need to split by class. Below is the "vectorized" implementation, not tested.
# new implementation of kronecker
def kronecker(A, B):
    return torch.einsum('nab,ncd->nacbd', A, B)\
        .view(A.size(0), A.size(1)*B.size(1), A.size(2)*B.size(2))
Main code, no need to use pools or whatever:
# according to your code, the shape of `X_prime` should be (n_samples, n_features + 1)
# whether encoding = "amplit" or "stereo"
rows = n_samples
cols = n_features + 1

# you may keep this if "n_samples * n_features" > 1e8
# X_prime_class = X_prime[y_class_index == i]

# Number of rows/columns in density matrix
density_nrow_ncol = (n + 1)**self.n_copies

# Encode vectors into quantum densities
# density: [rows, cols, cols], each density[k] is the original `density_each_row`
density = torch.matmul(X_prime.view(rows, cols, 1),
                       X_prime.view(rows, 1, cols))

# Calculate n-fold Kronecker tensor product
if self.n_copies == 1:
    density = density
else:
    density_copy = density
    for u in range(self.n_copies - 1):
        density = kronecker(density, density_copy)

# Calculate sum of quantum densities belonging to either class, per subset split
density_sum = density.sum(dim=0)

# calculate centroid and q_hels_obs_terms
centroid = (1 / (m_class + 1e-6))*density_sum_class

if self.class_wgt == 'equi':
    q_hels_obs_terms = 0.5*centroid_class
elif self.class_wgt == 'weighted':
    q_hels_obs_terms = (m_class / m)*centroid_class
else:
    raise ValueError('class_wgt should be "equi" or "weighted"') |
st176591 | Hi @iffiX,
The usual size of n_samples and n_features can be anything really, since I have written a classifier that can be used for any general datasets. But I will take note of your rule of thumb about if “n_samples * n_features < 1e8, then no need to split by class”.
Yes you are correct, X_prime is always (n_samples, n_features + 1) regardless of encoding.
I see what you mean now by "vectorize". First, torch.matmul() itself has a "vectorization" feature where it can multiply all the vectors inside a matrix in a batched manner. Second, rather than just using 2d tensors, I can use 3d (or higher-dimensional) tensors to incorporate the 2 classes into one tensor object, and then the vectors inside this one higher-dimensional tensor object can be "vectorized".
Questions:
Can I ask how you determined the value 1e8 in n_samples * n_features < 1e8?
Can I ask why is there a need to add 1e-6 in (m_class + 1e-6)?
So for my code, it is difficult to parallelize over multiple GPUs at all? It would be nice if at most I could perhaps just parallelize over the 2 classes on 2 GPUs. |
st176592 | A general estimate from experience; it's very rough. For your algorithm it could be anywhere between 1e6 and 1e9 depending on the platform and n_features.
To prevent zero division; it makes no real difference if the denominator > 0.
For multiple GPUs, split by class; coarser-granularity splits fit GPUs better. |
st176593 | Ok many thanks once again @iffiX
For parallelizing over the 2 GPUs, I am thinking I can use this:
device = torch.device("cuda:0")
X1 = X1.to(device)
device = torch.device("cuda:1")
X2 = X2.to(device) |
st176594 | Hi @iffiX, really sorry to bother you but would be great if I could get your input on one more question.
If I were to follow the code that you suggested, where every row vector (of the dataset, or more specifically of X_prime) is converted to a matrix (i.e. a density) and then these matrices are all put into one tensor object, I think this will run into memory blow-out issues. For example, if my dataset has, say, 200,000 rows, then I will have 200,000 matrices that need to be stored at one time, and this could cause a memory blow-out.
In comparison, the code that I have calculates the running sum of the matrices/densities at every step when a new row vector is converted to a matrix, so I do not need to store all the matrices in one tensor object (and only sum them up afterwards).
Just wanted to get your thoughts on this. It’s ok too if you’re unsure.
Many thanks once again. |
st176595 | Hi there,
I'm trying to add support to spread my batches across multiple NVIDIA GPUs when training.
To do this I’ve followed PyTorch documentation outlining that I must start a process group. This is done like so:
dist.init_process_group(
    backend="gloo",
    init_method="file:///C:/Users/thefi/Voice-Cloning-App-MGPU/distributed_logging",
    world_size=2,
    timeout=datetime.timedelta(0, 180),
    rank=0
)
The distributed_logging file is generated but after 3 minutes I get the following error:
dist.init_process_group(\n File \"C:\\Users\\thefi\\MiniConda3\\envs\\vc\\lib\\site-packages\\torch\\distributed\\distributed_c10d.py\", line 439, in init_process_group\n _default_pg = _new_process_group_helper(\n File \"C:\\Users\\thefi\\MiniConda3\\envs\\vc\\lib\\site-packages\\torch\\distributed\\distributed_c10d.py\", line 517, in _new_process_group_helper\n pg = ProcessGroupGloo(\nRuntimeError: Wait timeout\n"}
Unfortunately, it does not specify why the timeout occurs.
Other things to note:
This training process is being imported as a function in a Thread and not run from the command line
This is on windows, therefore I can only use gloo as the backend |
st176596 | If you are using world_size=2, you need two processes (one with rank 0 and one with rank 1); otherwise, with a single process, you will see a timeout caused by the system waiting for the missing process. |
st176597 | Could you elaborate on what you mean? Do you mean I need to run instances of this code in different processes? |
st176598 | Do you mean I need to running instances of this code in different processes?
Yes, you can refer to these docs for concrete examples: Writing Distributed Applications with PyTorch — PyTorch Tutorials 1.8.1+cu102 documentation |
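A minimal sketch of launching two processes that together form a world_size=2 gloo group on one machine; the port number and the worker body are illustrative assumptions:
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    # every rank must call init_process_group, otherwise the others time out waiting
    dist.init_process_group(backend="gloo",
                            init_method="tcp://127.0.0.1:29500",
                            world_size=world_size, rank=rank)
    print(f"rank {rank} initialized")
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)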
st176599 | Hi
I'm experiencing an issue where distributed models using torch.distributed.launch and DistributedDataParallel hang specifically for NCCL multi-GPU multi-node training, but work fine for single-node multi-GPU and multi-node single-GPU training, and I was wondering if anyone else had experienced such an issue.
In the specific case of Multi-GPU Multi-Node, all GPU’s are loaded with models (as in, nvidia-smi reports GPU memory utilisation), but at reaching distributeddataparallel NCCL_DEBUG reports
“SECONDARY_ADDR:6582:6894 [1] NCCL INFO Call to connect returned Connection timed out, retrying
SECONDARY_ADDR:6581:6895 [0] NCCL INFO Call to connect returned Connection timed out, retrying” on the rank 1 variant running on device SECONDARY_ADDR.
But for both single-node/multi-gpu and multi-node/single-gpu, the code proceeds past distributeddataparallel without any issues, which is what is making this particularly perplexing.
Job is being run via slurm using torch 1.8.1+cu111 and nccl/2.8.3-cuda-11.1.1.
Key implementation details are as follows.
The batch script used to run the code has the key details:
export NPROCS_PER_NODE=2   # GPUs per node
export WORLD_SIZE=2        # Total nodes (total ranks are GPUs * world size)
...
RANK=0
for node in $HOSTLIST; do
    ssh $node "
        module load nccl/2.8.3-cuda-11.1.1
        python3 -m torch.distributed.launch --nproc_per_node=$NPROCS_PER_NODE \
            --nnodes=$WORLD_SIZE --node_rank=$RANK --master_addr=$MASTER_ADDR \
            --master_port=$MASTER_PORT test.py > test_$RANK" &
    RANK=$((RANK+1))
done
wait
The above is the multi-node multi-gpu configuration. For single-node multi-gpu it is modified so that NPROCS_PER_NODE=2, WORLD_SIZE=1; while multi-node single gpu is NPROCS_PER_NODE=1, WORLD_SIZE=2.
Key details of test.py are
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--local_rank", type=int, help="Local rank. Necessary for using the torch.distributed.launch utility.")
arg = parser.parse_args()
local_rank = arg.local_rank
torch.cuda.set_device(arg.local_rank)
torch.distributed.init_process_group(backend='nccl', init_method='env://')
…
model = model.cuda() #to(device)
ddp_model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank)
…
train_sampler = DistributedSampler(dataset=train_set)
…
While torch.distributed.launch has recently been deprecated and replaced with elastic_launch, moving to elastic_launch as a potential solution does not seem viable, due to the dependence on etcd, which I'm unable to install due to access privilege restrictions.
If anyone had any suggestions about how to resolve this, I would greatly appreciate your input.
Thanks |
st176600 | For anyone who comes up against this issue - for me, of the connections reported by ifconfig on each device, only one IP matches the device's host name. Forcing that interface in NCCL_SOCKET_IFNAME has fixed that, although runs are no longer consistent - with moderate probability NCCL reports
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:761, internal error, NCCL version 2.7.8
ncclInternalError: Internal check failed. This is either a bug in NCCL or due to memory corruption
That it works at all (even with a little bit of management) is an improvement, but the stability is definitely an issue. Also, on repeated tests it appears that including the change to NCCL_SOCKET_IFNAME works for multi-GPU, multi-node, fails with a moderate probability on multi-node single-GPU, and fails with high probability on single-node single-GPU.
Likely a system issue, but rather a weird one. |
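A minimal sketch of pinning the interface as described above; the interface name eth0 is an assumption and must match what ifconfig reports on each node:
import os
import torch.distributed as dist

# must be set before the NCCL communicators are created
os.environ["NCCL_SOCKET_IFNAME"] = "eth0"  # assumed interface name
dist.init_process_group(backend="nccl", init_method="env://")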
st176601 | Could you share a simple self contained repro for your test.py file that we can run locally and repro? |
st176602 | “While torch.distributed.launch has recently been depreciated and replaced with.elastic_launch, moving to elastic_launch as a potential solution does not seem viable, due to the dependence on etcd which I’m unable to install due to access privilege restrictions.” →
Hi! torch.distributed.launch is going to be deprecated, but we will still support the ability to use master-based launches. We are going to have 2 modes for this:
BC mode: nothing changes for users; they will still be able to provide ranks and all parameters as they do right now.
Dynamic mode: the new launcher will be able to automatically derive ranks and world size based on the master addr and master port.
etcd will not be required to use PyTorch launchers at all. etcd is relevant for long-running jobs that require high reliability and need to withstand node failures. |
st176603 | Sure, thanks for that. This is the most basic code-based implementation that would reproduce the error on my system without the NCCL_SOCKET_IFNAME tweaks.
import torch
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import argparse
import os

def main():
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument("--local_rank", type=int, help="Local rank. Necessary for using the torch.distributed.launch utility.")
    arg = parser.parse_args()
    local_rank = arg.local_rank
    torch.cuda.set_device(arg.local_rank)
    print(os.environ["MASTER_PORT"], os.environ["MASTER_ADDR"], arg.local_rank, os.environ["LOCAL_RANK"], os.environ["RANK"], os.environ["WORLD_SIZE"])
    torch.distributed.init_process_group(backend='nccl', init_method='env://')  # , world_size=int(os.environ["WORLD_SIZE"]), rank=int(os.environ["RANK"]))
    model = torchvision.models.resnet18(pretrained=False)
    model = model.cuda()
    ddp_model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank)
    transform = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    ])
    train_set = torchvision.datasets.CIFAR10(root="data", train=True, download=False, transform=transform)
    test_set = torchvision.datasets.CIFAR10(root="data", train=False, download=False, transform=transform)
    train_sampler = DistributedSampler(dataset=train_set)
    train_loader = DataLoader(dataset=train_set, batch_size=256, sampler=train_sampler, num_workers=8)
    test_loader = DataLoader(dataset=test_set, batch_size=128, shuffle=False, num_workers=8)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-5)
    for epoch in range(5):
        ddp_model.train()
        for data in train_loader:
            inputs, labels = data[0].cuda(), data[1].cuda()  # data[0].to(device), data[1].to(device)
            optimizer.zero_grad()
            outputs = ddp_model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

if __name__ == "__main__":
    main()
Thanks for your offer of assistance (and to @aivanou for the comments on elastic - I will have to read more into elastic to better understand it). |
st176604 | My code runs well, but it always finishes the training with an error:
...
[2021-04-24 13:45:52] -- DEBUG: val>>>[94/94-500] ips-9.1, loss-0.2466, liou-0.1080, l1-0.0061, miou-0.89
[2021-04-24 13:45:52] -- DEBUG: Training is done!
free(): invalid pointer
Traceback (most recent call last):
File "/home/space/Public/anaconda3/envs/pytorch17/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/space/Public/anaconda3/envs/pytorch17/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/space/Public/anaconda3/envs/pytorch17/lib/python3.8/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/home/space/Public/anaconda3/envs/pytorch17/lib/python3.8/site-packages/torch/distributed/launch.py", line 255, in main
raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command '['/home/space/Public/anaconda3/envs/pytorch17/bin/python', '-u', 'train_filter_s.py', '--local_rank=1']' died with <Signals.SIGABRT: 6>.
Although there is no impact on my work, I still want to remove this error message.
part of my code:
if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
    args.world_size = int(os.environ['WORLD_SIZE'])
    args.rank = int(os.environ["RANK"])
    args.local_rank = int(os.environ['LOCAL_RANK'])
elif 'SLURM_PROCID' in os.environ:
    args.rank = int(os.environ['SLURM_PROCID'])
    args.local_rank = args.rank % torch.cuda.device_count()
args.master_addr = str(os.environ['MASTER_ADDR']) if 'MASTER_ADDR' in os.environ else '????'
args.master_port = str(os.environ['MASTER_PORT']) if 'MASTER_PORT' in os.environ else '????'
print('| distributed init (rank {} local {}) -- master://{}:{}'.format(
    args.rank, args.local_rank, args.master_addr, args.master_port), flush=True)
args.distributed = True
args.device = 'cuda'
torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(backend='nccl', init_method='env://')  # env -- read from environ
torch.distributed.barrier()
setup_for_distributed(args.rank == 0)
...
...
for epoch in range(args.start_epoch, args.epochs):
    args.current_epoch = epoch
    # train
    if args.distributed:
        sampler_train.set_epoch(args.current_epoch)
    train_one_epoch(args, model, optimizer, loader_train, logger, writer_train)
    # change learning rate
    lr_scheduler.step()
    # save checkpoint
    if (args.current_epoch + 1) % args.save_interval == 0:
        save_checkpoint(args, model_without_ddp, optimizer, lr_scheduler)
    # validate
    if (args.current_epoch + 1) % args.val_interval == 0 and val_flag:
        if args.distributed:
            sampler_val.set_epoch(args.current_epoch)
        validate(args, model, loader_val, logger, writer_val)
# cleanup
if args.distributed:
    torch.distributed.destroy_process_group() |
st176605 | Do you have the complete C++ stack trace of the error for free()? If not, is it possible to run the process with gdb and share the stack trace gdb reports when it errors out? |
st176606 | Hi.
I want to concatenate lists with different lengths across different GPUs using torch.distributed.launch. Is there any API like torch.distributed.all_reduce() that can help me?
Example Code (test.py):
import random
import torch

l = []
length = random.randint(5, 8)
for i in range(length):
    l.append(i)
print(l)
Run:
python -m torch.distributed.launch \
--nproc_per_node=4 \
--use_env \
--master_port=$RANDOM \
test.py
Result:
[1, 2, ..., length in GPU 0]
[1, 2, ..., length in GPU 1]
[1, 2, ..., length in GPU 2]
[1, 2, ..., length in GPU 3]
What I want (concat/synchronize the list in 4 different gpus together):
[1, 2, ..., length in GPU 0, ..., length in GPU 1, ..., length in GPU 2, ..., length in GPU 3]
[1, 2, ..., length in GPU 0, ..., length in GPU 1, ..., length in GPU 2, ..., length in GPU 3]
[1, 2, ..., length in GPU 0, ..., length in GPU 1, ..., length in GPU 2, ..., length in GPU 3]
[1, 2, ..., length in GPU 0, ..., length in GPU 1, ..., length in GPU 2, ..., length in GPU 3]
Thanks! |
st176607 | Solved by pritamdamania87 in post #6
You can pad each tensor to the maximum size and then use allgather. If you don't know the maximum size beforehand, you can first perform an allgather to collect the size of each tensor and then calculate the max. |
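A minimal sketch of that recipe for 1D CUDA tensors; the helper name is made up and the process group is assumed to be initialized:
import torch
import torch.distributed as dist

def all_gather_variable_length(local_tensor):
    world_size = dist.get_world_size()
    # 1) gather every rank's length
    local_len = torch.tensor([local_tensor.numel()], device=local_tensor.device)
    lens = [torch.zeros_like(local_len) for _ in range(world_size)]
    dist.all_gather(lens, local_len)
    max_len = int(max(l.item() for l in lens))
    # 2) pad to the maximum length and gather the payloads
    padded = torch.zeros(max_len, dtype=local_tensor.dtype, device=local_tensor.device)
    padded[:local_tensor.numel()] = local_tensor
    gathered = [torch.zeros_like(padded) for _ in range(world_size)]
    dist.all_gather(gathered, padded)
    # 3) trim the padding away using the gathered lengths
    return [g[:int(l.item())] for g, l in zip(gathered, lens)]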
st176608 | Hi, Thanks for your nice suggestion!
Another, harder problem for me: when there are many 1D tensors with different lengths on each GPU, is there any method to gather them more easily, without a loop?
Situation:
GPU 0: [torch.Tensor(101), torch.Tensor(102), torch.Tensor(103), ..., torch.Tensor(200)]
GPU 1: [torch.Tensor(201), torch.Tensor(202), torch.Tensor(203), ..., torch.Tensor(300)]
GPU 2: [torch.Tensor(301), torch.Tensor(302), torch.Tensor(303), ..., torch.Tensor(400)]
GPU 3: [torch.Tensor(401), torch.Tensor(402), torch.Tensor(403), ..., torch.Tensor(500)]
Result:
GPU 0: [torch.Tensor(101), torch.Tensor(102), ..., torch.Tensor(200), ... torch.Tensor(201), ..., torch.Tensor(500)]
GPU 1: [torch.Tensor(101), torch.Tensor(102), ..., torch.Tensor(200), ... torch.Tensor(201), ..., torch.Tensor(500)]
GPU 2: [torch.Tensor(101), torch.Tensor(102), ..., torch.Tensor(200), ... torch.Tensor(201), ..., torch.Tensor(500)]
GPU 3: [torch.Tensor(101), torch.Tensor(102), ..., torch.Tensor(200), ... torch.Tensor(201), ..., torch.Tensor(500)]
The order of tensors in the output list doesn’t matter. |
st176609 | Hi, this is indeed what I need!
However, in my situation, the data is generated dynamically during training on each GPU. What I need to do is gather the data and then use DistributedSampler to sample it. I'm stuck at the gathering step.
Do you have any good ideas? |
st176610 | huangdi:
Another harder problem for me is that, when there are too many 1D tensors with different lengths on each gpu, is there any method to gather them easier without a loop?
You can pad each tensor to the maximum size and then use allgather. If you don’t know the maximum size before hand, you can first perform an allgather to collect the size of each tensor and then calculate the max. |
st176611 | I checked the code in detectron2 and I found that they build a new group for each machine. I haven't learned much about distributed systems and I am just curious why they do that. I tried to search for it but couldn't find an answer. Can anyone explain this? |
st176612 | I would recommend reading the following documentation: Distributed communication package - torch.distributed — PyTorch 1.8.1 documentation |
st176613 | I am attempting to use Pytorch Distributed RPC with a large number of requests in flight, using multiprocessing on a single machine. I am using the latest nightly Pytorch.
I am finding that there is a bottleneck inside a weird place in the pytorch code. The vast majority of time is spent in _recursive_compile_class. This seems wrong to me, because I shouldn’t be compiling code for every RPC (every RPC is for the same function).
Here is the most common stack trace:
__pthread_cond_timedwait (/lib/x86_64-linux-gnu/libpthread-2.31.so:0)
> exists (/usr/lib/python3.9/genericpath.py:19)
> getsourcefile (/usr/lib/python3.9/inspect.py:706)
> findsource (/usr/lib/python3.9/inspect.py:817)
> getsourcelines (/usr/lib/python3.9/inspect.py:1006)
> getsource (/usr/lib/python3.9/inspect.py:1024)
> get_type_hint_captures (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/_jit_internal.py:321)
> createResolutionCallbackForClassMethods (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/_jit_internal.py:376)
> _recursive_compile_class (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/jit/_script.py:1164)
> pybind11::detail::simple_collector<(pybind11::return_value_policy)1>::call (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> torch::jit::tryToInferType (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> c10::ivalue::ConcretePyObjectHolder::tryToInferType (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> c10::IValue::getSubValues (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so:0)
> at::cuda::CUDAFuture::extractDataPtrs (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> at::cuda::CUDAFuture::preMarkCompletedHook (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> c10::ivalue::Future::markCompleted (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> std::_Function_handler<void (), c10::ivalue::Future::then(std::function<c10::IValue()>, std::shared_ptr<c10::Type>)::{lambda()#1}>::_M_invoke (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> std::_Function_handler<void (), at::cuda::CUDAFuture::wrapCallback(std::function<void ()>)::{lambda()#1}>::_M_invoke (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> c10::ivalue::Future::markCompletedWithDataPtrs (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> std::_Function_handler<void (), torch::distributed::rpc::toPyJitFuture(std::shared_ptr<c10::ivalue::Future> const&, bool)::{lambda()#1}>::_M_invoke (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> std::_Function_handler<void (), std::function<void ()> at::wrapPropagateTLSState<void>(std::function<void ()>)::{lambda()#1}>::_M_invoke (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> std::_Function_handler<void (), at::cuda::CUDAFuture::wrapCallback(std::function<void ()>)::{lambda()#1}>::_M_invoke (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> std::_Function_handler<void (), torch::distributed::rpc::TensorPipeAgent::markFutureAsComplete(std::shared_ptr<torch::distributed::rpc::TensorPipeAgent::AtomicJitFuture>, torch::distributed::rpc::Message, std::shared_ptr<torch::distributed::rpc::LazyStreamContext>)::{lambda()#1}>::_M_invoke (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libtorch_python.so:0)
> c10::ThreadPool::main_loop (/home/jeremy/PycharmProjects/hearthstone_battlegrounds/venv/lib/python3.9/site-packages/torch/lib/libc10.so:0)
> 0x7f8b6e3eeed0 (/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28:0)
My full test case is available at stone_ground_hearth_battles/test_pytorch_distributed.py at 15534b50902c52d0be39700f783d18655083a794 · JDBumgardner/stone_ground_hearth_battles · GitHub |
st176614 | _recursive_compile_class is specific to JIT, so it would help to add the "jit" category to the question. My intuition as to why it must recompile every time is that RPC calls allow user-defined logic with conditionals/branching, so it would not be possible to compile only once, but I am not completely sure of this.
cc: @wanchaol |
st176615 | @H-Huang Thanks for the response. I should also be clear that I don’t see any reason why my code would have to be JIT’d for a RPC call. My code does not use JIT at all. So the JIT that is occurring must be an internal implementation detail of the Pytorch RPC API. I don’t find it that weird that something might be JIT’d once, but it seems really weird to me that internally it would be JITing for every RPC.
I am also unable to figure out how to add the “jit” tag to this post. |
st176616 | @jeremysalwen Thanks for reporting this. Could you create a github issue for this on GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration with a minimal repro?
cc @lcw Since it does seem related to the CudaFuture type inference. |
st176617 | This happens because when using RPC with CUDA tensors we need to inspect the values held by the Future objects in order to extract the tensors they contain (because we need to “track” those tensors across CUDA streams). Since we inspect those values from C++, we need to first convert Python objects to JIT objects in order to be able to handle them. This apparently is more expensive than we thought. Note that this only happens for CUDA tensors, and it only happens for callbacks (i.e., when you use the then method) because in those cases the value is a user-provided one and thus we need to re-inspect it every time as we cannot assume anything about it.
We were discussing possible changes to how CUDA Futures work, and this consideration will factor in, but I don’t yet know what the final outcome will be. |
st176618 | Thanks @lcw, my one question is whether there is a way around using the .then() method. In my case, I am using it to convert a Pytorch Future into an asyncio future. Is there an alternative way to do this without using .then (and without having to JIT the objects?). |
st176619 | @jeremysalwen I'm not exactly sure what you need, but you could consider using the add_done_callback method (see here). It behaves almost like then, except that it doesn't return anything (and thus the callback passed to it is also supposed to not return anything).
As a side note, I’d be curious to hear how you’re integrating PyTorch’s RPC with asyncio: we’ve been toying with some ideas to make them more compatible but weren’t sure if there was actually need/demand for it. |
st176620 | Hi @lcw. .add_done_callback() was exactly what I was looking for. Using it instead of .then() resolves my bottleneck issue completely.
I was able to successfully integrate Pytorch’s RPC with asyncio in a way that I think works well.
Basically, I have my long-running Pytorch RPCs launch their own event loop (there is one RPC per core on my machine, each to a different worker process). Then, I just have a thin non-async function to wrap the async function. The non-async function is the one that Pytorch RPC directly calls.
async def async_my_rpc_function():
    # real logic goes here...

def my_rpc_function():
    asyncio.get_event_loop().run_until_complete(async_my_rpc_function())
Once I am inside “async-land”, I can create new async tasks, wait for new async tasks, etc. To make sure things are simple, 1. I only have one active rpc to this top level function at a time, meaning each process only has one thread, and one asyncio event loop. 2. I make sure to not create any background tasks that will outlive the RPC, so that when the RPC returns, the event loop is empty, and it is ok that it will be dead until the next RPC is called.
The only other piece necessary to get concurrency working properly is a way to await a Pytorch RPC from async land, so making RPCs doesn’t block the event queue. To do this, I use the following snippet of code (note how I am using .add_done_callback() now )
loop = asyncio.get_event_loop()
f = loop.create_future()
self.inference_queue.rpc_async().my_rpc_function().add_done_callback(
    lambda fut:
        loop.call_soon_threadsafe(
            f.set_result, fut.value()
        )
)
result = await f
This allows the asyncio event loop to be safely notified by the pytorch RPC thread when the RPC is complete.
I think that the "nice" thing to do would be to implement __await__ directly on the PyTorch Future object, with the implementation doing something like the snippet above. Although I am not an expert on this, and relatively new to asyncio, so take my suggestion with a grain of salt.
asyncio itself has some support for multiprocessing worker pools executing tasks, (so something like an internal RPC), but I basically ignored that. It’s possible that a better integration would be possible by understanding how that part of asyncio works (concurrent.futures — Launching parallel tasks — Python 3.9.4 documentation). |
st176621 | Great, thanks for the detailed explanation, it’s really helpful!
You definitely seem to know what you’re doing! Here are a few comments in case you haven’t thought about it already:
The way you “bind” a PyTorch future to an asyncio future, through add_done_callback, works well for successful RPCs but it might not work in case of errors, as calling fut.value() would raise an exception, and this f.set_result would never be called. In those cases your asyncio future would never become complete and any piece of code that’s awaiting it might remain stuck forever, leading to a deadlock.
By running asyncio.get_event_loop().run_until_complete(...) (which BTW I think can be shortened to just asyncio.run(...) starting in Python 3.7) you block the calling thread until all asyncio tasks launched within that loop are complete. The thread, in this case, comes from RPC’s internal pool, which has only a limited number of threads. This means that you can only have so many active RPC calls at the same time. This may be exactly what you want, but another way of doing this could be for you to manually start a dedicated single thread for the asyncio loop, and have all incoming RPC calls schedule a task to that thread. If you use the async_execution decorator you can also create a RPC future, have your asyncio task complete that future at a later point in time, and have your RPC function return immediately. This way you could release the thread to the RPC pool while your asyncio thread continues processing the task, meaning the RPC thread could be reused for another request. At the end, it means your asyncio thread would become able to handle an arbitrarily large number of requests at the same time. |
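A minimal sketch of the async_execution pattern mentioned in the second point; completing the future from a plain worker thread is an illustrative assumption, not code from this thread:
import threading
import torch
from torch.distributed import rpc

@rpc.functions.async_execution
def slow_double(x):
    fut = torch.futures.Future()
    # the RPC thread returns this future immediately and goes back to the pool;
    # the worker thread completes it later, which triggers the response to the caller
    threading.Thread(target=lambda: fut.set_result(x * 2)).start()
    return fut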
st176622 | In the pytorch distributed training, I met a RuntimeError as following:
Traceback (most recent call last):
File "visual/distribution_train.py", line 387, in <module>
main()
File "visual/distribution_train.py", line 67, in main
spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
File "/public/home/fengm/.conda/envs/fm_pytorch_env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 167, in spawn
while not spawn_context.join():
File "/public/home/fengm/.conda/envs/fm_pytorch_env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 114, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/public/home/fengm/.conda/envs/fm_pytorch_env/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/public/home/fengm/vehicle_reid/pytorch-pose-master/visual/distribution_train.py", line 74, in main_worker
dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,world_size=args.world_size, rank=args.rank)
File "/public/home/fengm/.conda/envs/fm_pytorch_env/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 354, in init_process_group
store, rank, world_size = next(rendezvous(url))
File "/public/home/fengm/.conda/envs/fm_pytorch_env/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 95, in _tcp_rendezvous_handler
store = TCPStore(result.hostname, result.port, world_size, start_daemon)
RuntimeError: Address already in use
My PyTorch distributed initialization setting is:
torch.multiprocessing.spawn(main_worker, nprocs=8, args=(8, args))
torch.distributed.init_process_group(backend='nccl', init_method='tcp://110.2.1.101:8900',world_size=4, rank=0)
There are 10 nodes with GPUs mounted under the master node. The master node doesn't have a GPU. I used the Slurm system to submit my task, and my task is randomly assigned to a worker node. '110.2.1.101' in init_method is the master IP. I don't know whether the init_method is wrong.
Has anyone met this before? Who can help me fix this bug? |
st176623 | What do you run in main_worker, and where do the world_size=4 and rank=0 arguments to init_process_group come from? Are they hard-coded, or are you just showing a single example?
The error itself means that multiple processes try to bind to the address and port, so I assume you are trying to run multiple processes with rank=0. |
st176624 | Faced the same issue.
My solution: it simply means that the GPU is already occupied by some other DDP training. Try killing all the processes related to that GPU and run the training again. |
st176625 | Hi,
My model is too large to fit into one GPU, so I split it across two GPUs. Part of the model is as follows:
cur_layer_input = []
for t in range(seq_len):
    d1 = self.down1.cuda(self.device_ids[0])(img_seq_ring[:, t]).cuda(self.device_ids[0])
    d2 = self.down2.cuda(self.device_ids[0])(d1)
    cur_layer_input.append(d2.cuda(self.device_ids[1]))
I used DDP to wrap this model:
from torch.nn.parallel import DistributedDataParallel as DDP
model = DDP(model, device_ids=[device_ids[0]], find_unused_parameters=True)
However, errors occurred:
File "xxx.py", line 141, in train_model
loss.backward()
File "/home/anaconda3/envs/VIBE/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/anaconda3/envs/VIBE/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: grad.device() == bucket_view.device() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:206, please report a bug to PyTorch.
I do not know what bucket_view.device() is. So I do not know how to solve this problem.
Could you help me?
Thank you very much. |
st176626 | Solved by cs123951 in post #10
Hello, I have figured out what mistake I made!
I made a mistake in the init() of the model like this:
mp_model = ToyMpModel(devices_list).cuda(devices_list[0])
ddp_mp_model = DDP(mp_model, device_ids=[0], find_unused_parameters=True)
I used multi-GPU model, but when I created model, I gave the mo… |
st176627 | Hi,
I am not sure exactly how you are trying to use DDP to split your model between devices: DDP's use case is to parallelize training. Could you take a look at the model parallel tutorial instead? (You can also combine it with DDP once you set it up.) E.g.
https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html
It requires you to move the inputs between GPUs, not just the model, e.g.
x = self.relu(self.net1(x.to('cuda:0')))
return self.net2(x.to('cuda:1')) |
st176628 | Yes, I have read the tutorial several times and I have moved inputs to the same device.
I have made some modifications so that the model can fit into one GPU for testing.
The question is that when I used a single GPU for distributed data parallel, it worked well.
But when I moved part of the model to another GPU, the error occurred.
I used a BiRNN model. The code is as follows:
def forward(self, img):
    self.reverse_net = self.reverse_net.cuda(self.device_ids[1])
    img_ring = torch.cat([img, img[:, 0].unsqueeze(1)], dim=1).cuda(self.device_ids[1])
    seq_len = img_ring.shape[1]
    cur_layer_input = []
    cur_layer_input_rev = []
    for t in range(seq_len):
        d2 = F.interpolate(img_ring[:, t], scale_factor=1 / 4, mode="trilinear", align_corners=True)
        cur_layer_input.append(d2.cuda(self.device_ids[0]))
        cur_layer_input_rev.append(d2.cuda(self.device_ids[1]))
    cur_layer_input_rev = cur_layer_input_rev[::-1]
    y_out_fwd = self.forward_net(cur_layer_input)    # forward RNN
    y_out_rev = self.reverse_net(cur_layer_input_rev)  # backward RNN
    y_out_fwd = torch.stack(y_out_fwd, dim=0)
    y_out_rev = torch.stack(y_out_rev, dim=0)
    y_out_rev = torch.flip(y_out_rev, dims=[1])
    ycat = torch.cat((y_out_fwd.cuda(self.device_ids[1]), y_out_rev), dim=2)
    disp_list = []
    for t in range(1, seq_len-1):
        disp_list.append(self.outconv3.cuda(self.device_ids[1])(ycat[t]))
    for t in range(len(disp_list)):
        disp_list[t] = F.interpolate(disp_list[t], scale_factor=4, mode="trilinear", align_corners=True)
    return disp_list
The error information is:
File "xxx.py", line 141, in train_model
loss.backward()
File "/home/anaconda3/envs/VIBE/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/anaconda3/envs/VIBE/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: grad.device() == bucket_view.device() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:206, please report a bug to PyTorch.
So I want to know what is bucket_view and what may cause the above errors.
It is weird. I can train using one GPU. But when I moved part of the model to another GPU, errors occurred. The only changed thing is the device. |
st176629 | @cs123951 for multi-device module, you should not pass device_ids while wrapping your module using DDP.
Try this:
DDP(model, find_unused_parameters=True) |
st176630 | Or if it is a single-device module, just call torch.cuda.set_device(rank) before wrapping with DDP and pass device_ids=[rank]. |
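A minimal sketch of the two wrapping patterns described in the last two replies; MyModel, MyMultiGpuModel, rank and devices_list are placeholders:
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# single-device module: the whole model lives on this rank's GPU
torch.cuda.set_device(rank)
single_dev = DDP(MyModel().cuda(rank), device_ids=[rank])

# multi-device module: the module itself places layers on several GPUs,
# so no device_ids are passed to DDP
multi_dev = DDP(MyMultiGpuModel(devices_list), find_unused_parameters=True)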
st176631 | Thank you for your kind advice. I tried it, but the problem still happened.
I created two files in my github (GitHub - cs123951/temp_public_files) to explain my problem.
I constructed a bidirectional RNN model using model parallelism like this (architecture diagram, 1034×442, omitted here).
The difference is that the RNN is bidirectional so I made some modifications.
I assigned GPU 0 to the first several layers, and GPU 1 to the last layer.
The forward process is normal. But when it comes to the loss.backward(), error will happen:
RuntimeError: grad.device() == bucket_view.device() INTERNAL ASSERT FAILED at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:206, please report a bug to PyTorch.
You can reproduce my results using the simple_bug_file2.py in my github by the following command:
python -m torch.distributed.launch --nproc_per_node=2 --nnodes=1 --node_rank=0 --master_addr=xxx.xxx.xx.xx --master_port=31234 simple_bug_file2.py
Don’t forget to change the master address in the command and the visible GPUs in the file.
My environment is
PyTorch version: 1.4.0
Is debug build: False
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
GPU: GeForce GTX 1080 Ti
OS: Ubuntu 18.04.3 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.3.2
Python version: 3.7 (64-bit runtime)
Nvidia driver version: 455.28
HIP runtime version: N/A
MIOpen runtime version: N/A
pip 21.0.1
By the way, when I changed down_device = self.device_ids[1] to down_device = self.device_ids[0], it worked well.
I guess maybe bucket_view is related to the unroll operation of the RNN. But I am not familiar with it. Could you give me some more advice?
Thank you very much. |
st176632 | bucket_view is a copy of the parameter's grad. bucket_view.device = param.device is set before the training loop starts.
bucket_view.device != param.grad.device means that the grad of this param changed device during the training loop.
Would you please confirm whether any param/grad device changed during the training loop? |
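One quick way to check this (just a debugging sketch; model stands for your DDP-wrapped module) is to compare the devices right after loss.backward() and before optimizer.step():

# run once per iteration, after loss.backward()
for name, param in model.named_parameters():
    if param.grad is not None and param.grad.device != param.device:
        print(f"{name}: param on {param.device}, grad on {param.grad.device}")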
st176633 | hi, thank you for your explanation.
I have checked the logic of the model and I am sure that it is fine. Since the forward pass works well, I cannot figure out why something goes wrong in the backward pass.
The error occurs in the file reducer.cpp. I have no idea how to debug the C++ file while running a Python program.
Could you give me some advice on how to debug and how to check the param/grad devices during loss.backward() in the training loop?
Thank you very much! |
st176634 | Hello, I have figured out what mistake I made!
I made a mistake in the init() of the model, like this:
mp_model = ToyMpModel(devices_list).cuda(devices_list[0])
ddp_mp_model = DDP(mp_model, device_ids=[0], find_unused_parameters=True)
I used a multi-GPU model, but when I created the model, I gave it only one GPU.
Thus self.is_multi_device_module = len({p.device for p in module.parameters()}) > 1 is False in the distributed.py file.
I changed the GPU in the forward function, thus triggering the error.
The solution is:
I defined the GPUs in the init() function of the model, and then I changed the calling code to
mp_model = ToyMpModel(devices_list)
ddp_mp_model = DDP(mp_model, device_ids=[], output_device=[], find_unused_parameters=True)
I hope my mistakes can serve as a warning to latecomers.
I am sorry for the confusion this caused.
This problem can be closed. |
st176635 | I'm using DistributedDataParallel to train my model. If one process meets an exception and uses a try...except block to catch the exception during the forward pass, then continues training with a new batch of data, all the processes hang (I guess because the synchronization fails?). How can I handle exceptions in one process and continue training without hanging all the processes? Thanks for the help! |
st176636 | All communication done through torch.distributed is collective, meaning all processes expect all their peers to participate in all collective calls they execute. If a single process ends up not participating, the others will time out or raise an exception. The only way out of this is to let all processes time out or fail and to reinitialize the distributed module.
You can use torch.distributed.destroy_process_group to deinitialize and then make another call to torch.distributed.init_process_group to reinitialize. This can only work if you're using either the Gloo or the NCCL backend, and if the underlying initialization method can be reused. I believe this is the case for the file initialization method as well as the TCP initialization method (see https://pytorch.org/docs/stable/distributed.html#initialization 33 for more information on both).
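A rough sketch of that tear-down/re-init cycle (the backend, file path and timeout are placeholders; use whatever your job was originally initialized with):

import datetime
import torch.distributed as dist

def reinitialize_process_group(rank, world_size):
    # drop the old (possibly broken) process group
    dist.destroy_process_group()
    # re-create it with the same reusable init method (file:// or tcp://)
    dist.init_process_group(
        backend="gloo",
        init_method="file:///tmp/ddp_shared_init_file",
        rank=rank,
        world_size=world_size,
        timeout=datetime.timedelta(seconds=60),
    )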
Good luck! |
st176637 | Thanks for the help! I got the idea, but how can I "let all processes timeout or fail"? How can all the processes know that one process has met an exception and that they should all destroy_process_group and reinitialize? If one process meets an exception for one minibatch, can all the processes simply skip the current minibatch and run the next one?
I find the following snippets in pytorch repo which might be helpful, but not sure how to implement the idea in detail.
def test_barrier_timeout_global(self):
    dist.destroy_process_group()
    # Explicitly pass world size to the barrier because we've
    # just destroyed any state in torch.distributed.
    self._barrier(wait_for=int(WORLD_SIZE))
    # Reinitialize global process group
    timeout = timedelta(seconds=0.2)
    dist.init_process_group(
        init_method=INIT_METHOD,
        backend=BACKEND,
        world_size=int(WORLD_SIZE),
        rank=self.rank,
        timeout=timeout,
    )
    self._test_barrier_timeout(dist.group.WORLD, timeout)
(from test_distributed.py 14) |
st176638 | One way to do this is to use a smaller timeout. The default timeout for the distributed module is 30 minutes. You can override this by specifying the timeout keyword argument to init_process_group as a timedelta type (e.g. datetime.timedelta(seconds=10)). Then if one of the processes crashes, the others will time out. The problem with your proposed solution is that you’re not guaranteed that the crashed process will come back. Therefore you’ll have to rely on some out of band mechanism to figure out which processes are still alive, and only when you know for sure you have WORLD_SIZE machines (or after adjusting WORLD_SIZE), continue and reinitialize. |
st176639 | Is there any clean way of accomplishing this now? I’m training on images with variable sizes and every ~30k iterations there’s an OOM error. I’m having trouble understanding how to synchronize the call to init_process_group between all the processes. |
st176640 | We currently don’t support elasticity of workers, which means that if one of the processes crashes with an OOM error the user is currently responsible for spawning another process and re-initializing distributed communication if they want to continue with training.
You may want to consider the use of the PyTorch elastic framework: https://github.com/pytorch/elastic 29 |
st176641 | Is there any resolution to this now? Right now it looks like a single forward pass failing on a single GPU once during training will bring the training crashing down. If anyone has a code example for how to catch these instances and resume training, I’d love to learn of an alternative. |
st176642 | You probably want to look into using TorchElastic — PyTorch/Elastic master documentation 16 |
st176643 | Do the PyTorch 1.8 binaries support distributed data parallel on AMD?
What should I use as the communication backend, nccl or gloo? |
st176644 | If you’re using the ROCm binaries, using the “nccl” backend would work since it would transparently use rccl 6 under the hood. |
st176645 | Thank you, but I still get an NCCL error while initializing the model with model = DDP(model, …)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:825, internal error, NCCL version 2.7.8
ncclInternalError: Internal check failed. This is either a bug in NCCL or due to memory corruption |
st176646 | The error is unfortunately too generic to indicate the root cause. A few sanity checks:
What command did you use to install the pytorch 1.8 binaries?
Are you ensuring that each rank in your distributed model is being assigned to a different device ID? |
st176647 | I am trying to use a DataLoader for training. The dataset is 150 GB of .npz files. Due to memory limitations, only one sample is read from disk at a time. The following is part of the code.
class VimeoDataset(Dataset):
    def __init__(self, mode, batch_size=32, num_workers = 8, num_gpus = 4):
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.num_gpus = num_gpus
        self.mode = mode
        self.load_data()
        self.h = 256
        self.w = 448
        xx = np.arange(0, self.w).reshape(1,-1).repeat(self.h,0)
        yy = np.arange(0, self.h).reshape(-1,1).repeat(self.w,1)
        self.grid = np.stack((xx,yy),2).copy()
        self.npzs=[]
        count = self.batch_size * self.num_workers * self.num_gpus
        if self.mode == 'train':
            filelist = glob('/data/vimeoFlow2/dataset/train/*.npz')
            self.npzs = [filelist[i:i + count] for i in range(0, len(filelist), count)]
        else:
            filelist = glob('/data/vimeoFlow2/dataset/val/*.npz')
            self.npzs = [filelist[i:i + count] for i in range(0, len(filelist), count)]

    def __len__(self):
        return len(self.npzs)

    def load_data(self, index):
        self.data = []
        self.flow_data = []
        for i in range(len(self.npzs[index])):
            f = np.load(self.npzs[index][i])
            self.data.append(f['i0i1gt'])
            if self.mode == 'train':
                self.flow_data.append(f['ft0ft1'])
            else:
                self.flow_data.append(np.zeros((256, 448, 4)))

    def getimg(self, index):
        data = self.meta_data[index]
        img0 = data[0:3].transpose(1, 2, 0)
        img1 = data[3:6].transpose(1, 2, 0)
        gt = data[6:9].transpose(1, 2, 0)
        flow_gt = (self.flow_data[index]).transpose(1, 2, 0)
        return img0, gt, img1, flow_gt

    def __getitem__(self, index):
        img0, gt, img1, flow_gt = self.getimg(index)
        ...
        ...
dataset = VimeoDataset(mode = 'train', batch_size=32, num_workers = 8, num_gpus = 4)
sampler = DistributedSampler(dataset)
train_data = DataLoader(dataset, batch_size=args.batch_size, pin_memory=True, num_workers=args.num_workers, drop_last=True, sampler=sampler)
dataset_val = VimeoDataset(mode = 'val', batch_size=32, num_workers = 8, num_gpus = 4)
val_data = DataLoader(dataset_val, batch_size=args.batch_size, pin_memory=True, num_workers=args.num_workers)
However, reading the data from disk one sample at a time makes the dataloader very slow. So I want to improve this program: first load num_gpus × num_workers × batch_size samples into memory, then read the data from memory in __getitem__, and finally replace the in-memory data after each iteration. But I still don't know how to achieve it. I have tried to express my idea in the code above, but I don't know how to set up the parameters of the load_data function. |
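To make the intent above concrete, here is a rough sketch of the chunk-caching idea (purely illustrative, not a drop-in fix; it reuses the 'i0i1gt' key from the snippet above and only helps if indices inside a chunk are read more or less sequentially, which a shuffling sampler would break):

import numpy as np
from torch.utils.data import Dataset

class ChunkCachedDataset(Dataset):
    def __init__(self, files, chunk_size):
        self.files = files
        self.chunk_size = chunk_size
        self.cached_chunk = None   # index of the chunk currently held in memory
        self.cache = []

    def _load_chunk(self, chunk_idx):
        start = chunk_idx * self.chunk_size
        self.cache = [np.load(f)['i0i1gt'] for f in self.files[start:start + self.chunk_size]]
        self.cached_chunk = chunk_idx

    def __len__(self):
        return len(self.files)

    def __getitem__(self, index):
        chunk_idx, offset = divmod(index, self.chunk_size)
        if chunk_idx != self.cached_chunk:
            self._load_chunk(chunk_idx)   # replace the in-memory chunk
        return self.cache[offset]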
st176648 | DDP averages gradients by dividing by world size. Is there any mechanism (current or planned) to run a user-defined function to scale gradients instead of the default DDP behavior?
In my case, I have variable and uneven batch sizes on each replica and need to compute the average by global batch size. |
st176649 | The default for DDP is to use allreduce and average the gradients using the world size. You can choose to allreduce the gradients yourself and specify the reduction op yourself, but in this case you lose out on the better perf of DDP, and allreduce will still not accept a custom UDF for the reduction op (though you can use the SUM op and average the gradients however you want).
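For illustration, a bare-bones version of that do-it-yourself path (no DDP, so you give up its overlap of communication with backward; it also assumes the local loss is summed, not averaged, over the local batch):

import torch
import torch.distributed as dist

def allreduce_by_global_batch(model, local_batch_size):
    device = next(model.parameters()).device
    # every rank learns the global batch size
    global_bs = torch.tensor([float(local_batch_size)], device=device)
    dist.all_reduce(global_bs, op=dist.ReduceOp.SUM)
    # sum gradients across ranks, then rescale by the global batch size
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad.div_(global_bs.item())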
On a slightly different note, the DDP join 2 API can help you better handle uneven inputs across ranks. |
st176650 | Yeah, I know I could just write my own gradient averaging but I lose out on the perf wins. And it seems kind of silly to maintain a fork of DDP just to change the reduction from mean to sum. I’ve looked at the join method on DDP but it doesn’t apply to my issue. Thanks for the reply! |
st176651 | Sounds good! We are also working on a broader effort to make DDP more modular and configurable across the board. It seems like this is definitely an interesting direction to support - feel free to make an issue on our GitHub repo or add to this RFC tracking the DDP configurability effort: [RFC] Modularize DistributedDataParallel · Issue #37002 · pytorch/pytorch · GitHub 8 |
st176652 | We have landed gradient compression communication hooks to be used with pytorch DDP in 1.8: DDP Communication Hooks — PyTorch 1.8.1 documentation 10. This seems like it should support your use case quite nicely, as you would be able to run a custom UDF for the gradient reduction instead of the allreduce. |
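To make that concrete, a sketch of a custom hook that sums gradients and rescales by a global batch size (the contents of state and the exact bucket accessor are assumptions — check the comm-hook docs for your release):

import torch
import torch.distributed as dist

def sum_then_rescale_hook(state, bucket):
    # flat gradient tensor for this bucket (the accessor name differs on older releases)
    grads = bucket.buffer()
    fut = dist.all_reduce(grads, op=dist.ReduceOp.SUM, async_op=True).get_future()

    def rescale(fut):
        # divide by a global batch size instead of the world size
        return fut.value()[0].div_(state["global_batch_size"])

    return fut.then(rescale)

# illustrative registration call:
# ddp_model.register_comm_hook(state={"global_batch_size": 256}, hook=sum_then_rescale_hook)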
st176653 | (Running on the latest pytorch nightly)
I am attempting to implement a distributed RL training setup with batched inference (similar to Implementing Batch RPC Processing Using Asynchronous Executions — PyTorch Tutorials 1.8.0 documentation 1). I have a working setup with a small number of RPCs per process (12 processes, with 15 "play_game" RPCs per process active at once).
However, when I attempt to increase the number of games played simultaneously by the worker processes (from 15 to 16 RPCs), instead it freezes, eventually outputting the error
[E thread_pool.cpp:112] Exception in thread pool task: The RPC has not succeeded after the specified number of max retries (5).
hundreds of times after several minutes.
The strange thing is that 15 RPCs per process consistently succeed, while 16 RPCs per process consistently fail. Is this a limit on the number of RPCs that can be in flight?
The test I am running is available at stone_ground_hearth_battles/test_pytorch_distributed.py at master · JDBumgardner/stone_ground_hearth_battles · GitHub |
st176654 | Hey @jeremysalwen, does increasing the value of num_worker_threads help? Distributed RPC Framework — PyTorch master documentation 5
Its default value is 16. |
st176655 | @mrshenli Yes, that seems to fix the issue. However, I am still puzzled about the original behavior. Running out of worker threads should cause the execution of the RPC to be blocked until the threads are freed, but it seems like instead it is breaking other things.
This sounds like a bug, no?
To simplify my setup, I basically have two processes, process A and process B. Process A is sending 16 RPCs to process B, and all these RPCs are completely independent of each other. Each RPC from A->B internally sends a sequence of RPCs back to A. I don’t see why this should cause a deadlock, even if the thread pool in each process is size 1, because only 1 simultaneous RPC per process is required to make forward progress.
My guess is that the thread pool is being reused to both send and receive RPCs, so all 16 threads are taken by the RPCs from A to B, but now all these threads are stuck, because they are unable to make RPCs from B to A? |
st176656 | jeremysalwen:
My guess is that the thread pool is being reused to both send and receive RPCs, so all 16 threads are taken by the RPCs from A to B, but now all these threads are stuck, because they are unable to make RPCs from B to A?
This is true for temporary ProcessGroup backend, but I am not 100% if this (send/recv share the same thread pool) is the case for TensorPipe backend. @lcw could you please confirm?
The default RPC functions will block one thread on the callee side until the response is ready. If you would like the response to be processed asynchronously, you can decorate the user function with @rpc.functions.async_execution and let the user function return a Future. This should release the thread on the callee as soon as it gets the Future object, and the callback on the Future can resume running once the Future is completed. |
st176657 | From what I can remember the TensorPipe agent does indeed only have one thread pool which is shared for many “purposes”, so what you said is totally reasonable.
I don't immediately recognize the initial error message you got, as it mentions retries, but I don't think we support retries in "standard" RPC messages. I think we only support them for RRef-related messages and other internal stuff. @mrshenli Is that the case? @jeremysalwen Are you using RRefs, dist autograd, or other such things? Unfortunately the message doesn't say what the underlying failure is, but I suspect it could be a timeout?
Also note that as @mrshenli said, it’s an antipattern to synchronously block in a remote RPC function or a callback. Doing so will block a thread and eventually lead to starvation. If you must do so, please ensure you’re doing it for a limited number of RPC calls, and size the thread pool accordingly. However, it would be better to use “native” asynchronous patterns. |
st176658 | lcw:
I think we only support them for RRef-related messages and other internal stuff. @mrshenli Is that the case?
Yep, this is true. We don’t yet have a good way to retry messages with user functions. |
st176659 | @mrshenli I am using @rpc.functions.async_execution for the calls from B to A, but I don’t see an easy way to do so for the RPC from A to B. Remember, internally the RPC from A to B then calls back to B multiple times, so I would need to suspend/resume the thread in order to free it. (i.e. I would need something like asyncio, which would much further complicate things).
@lcw I am using RRefs, but not dist autograd, or any other advanced features. I’m not sure how I would transform my code to be non-blocking. Fundamentally, my RPC running on process B needs to wait until its RPC to process A completes before continuing execution. I would expect that pytorch distributed should recognize this situation, and return the thread to the thread pool while it is waiting on the RPC. What could I change in my code so that it “returns the thread” to the thread pool, but continues executing once the RPC completes? |
st176660 | I would expect that pytorch distributed should recognize this situation, and return the thread to the thread pool while it is waiting on the RPC.
Unfortunately that’s not something that can easily be achieved. It can be done through Python’s asyncio package but that’s not supported by PyTorch (for historical reasons I guess, and it’s hard now to retrofit it). Any other alternative would basically consist in reimplementing our own asyncio, and that’s just unreasonable.
What could I change in my code so that it “returns the thread” to the thread pool, but continues executing once the RPC completes?
I think your situation could be addressed by using the async_execution decorator, and futures, as @mrshenli already mentioned. More concretely, here is what such a code could look like:
# Imagine this is the original code:
def my_function(a):
    b = foo(a)
    c = rpc.rpc_sync("other_worker", my_other_function, args=(b,))
    d = bar(c)
    return d

# It could become like this:
@rpc.functions.async_execution
def my_function(a):
    b = foo(a)
    fut_c = rpc.rpc_async("other_worker", my_other_function, args=(b,))
    fut_d = fut_c.then(lambda fut: bar(fut.value()))
    return fut_d |
st176661 | For reference, I ultimately decided to directly integrate asyncio with pytorch RPC to get around this issue. I describe how I did it in this post: Pytorch Distributed RPC bottleneck in _recursive_compile_class - #9 by jeremysalwen 2
This idea of using @rpc.functions.async_execution is interesting, but to me the example looks like it’s begging to be a coroutine instead of a pair of functions chained together with a callback |
st176662 | I am running the torch.distributed.pipeline.sync.Pipe library with PyTorch 1.8 on Python 3.8 (I also tried the nightly). I have 2 visible devices. Below is the example from the docs.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed.pipeline.sync import Pipe
from torchgpipe import GPipe
# Run with Pipe
fc1 = nn.Linear(16, 8).cuda(0)
fc2 = nn.Linear(8, 4).cuda(1)
model = nn.Sequential(fc1, fc2)
model = Pipe(model, chunks=8)
input = torch.rand(16, 16).cuda(0)
output_rref = model(input)
# Run with GPipe
fc1 = nn.Linear(16, 8)
fc2 = nn.Linear(8, 4)
model = nn.Sequential(fc1, fc2)
model = GPipe(model, balance=[1,1], chunks=8)
model = nn.DataParallel(model)
input = torch.rand(16, 16).cuda(0)
output_rref = model(input)
print(output_rref)
I am getting this error:
Traceback (most recent call last):
File "test.py", line 12, in <module>
output_rref = model(input)
File "/usr0/home/ruohongz/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr0/home/ruohongz/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipe.py", line 366, in forward
return RRef(output)
RuntimeError: agent INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/distributed/rpc/rpc_agent.cpp":247, please report a bug to PyTorch. Current RPC agent is not set!
However, the GPipe code works fine. What is the problem with the pytorch assertion? |
st176663 | You need to initialize the RPC framework, see the latest master docs: Pipeline Parallelism — PyTorch master documentation 5 |
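In other words, something along these lines before constructing the Pipe (the address/port values are just examples, following the pattern in the Pipe docs):

import os
import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe

# Pipe returns RRefs, so an RPC agent must exist even in a single process
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

fc1 = nn.Linear(16, 8).cuda(0)
fc2 = nn.Linear(8, 4).cuda(1)
model = Pipe(nn.Sequential(fc1, fc2), chunks=8)
output_rref = model(torch.rand(16, 16).cuda(0))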
st176664 | We just bought a 4-way RTX 3090 (24 GB) GPU server, and I want to confirm whether there will be a total of 48 GB of memory when I use NVLink to connect two of the 3090s.
If the RTX 3090 supports this feature, how should I change my PyTorch code?
Thanks. |
st176665 | Solved by ptrblck in post #4
No, the devices should not show up as a single GPU with 48GB.
You can connect them via nvlink and use a data or model parallel approach. |
st176666 | I want to confirm whether there will be a total of 48 GB of memory when I use NVLink to connect two of the 3090s.
That sounds right, but since this is a GPU hardware spec-related question, I would ask the NVIDIA team directly.
If the RTX 3090 supports this feature, how should I change my PyTorch code?
With 2 GPUs and nvlink connecting them, I would use DistributedDataParallel (DDP) for training. At a high level, you can spawn 2 CPU processes, 1 for each GPU, and create a NCCL Process Group to have fast data transfer between the 2 GPUs. Then you can simply wrap your model with DDP and train.
Here are some references in terms of how to implement:
docs: https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel 56
Implementation: https://pytorch.org/docs/master/notes/ddp.html#example 24
Example: https://github.com/pytorch/examples/tree/master/distributed/ddp 34 |
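A minimal skeleton of that recipe (the model, port and input here are placeholders for your real setup):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = torch.nn.Linear(10, 10).cuda(rank)      # stand-in for your real model
    ddp_model = DDP(model, device_ids=[rank])
    out = ddp_model(torch.randn(8, 10, device=f"cuda:{rank}"))
    out.sum().backward()                            # gradients are all-reduced via NCCL (over NVLink)
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)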
st176667 | Nvidia has officially said that they will stop supporting SLI. The 3090 does still have the connector.
Reference: NVIDIA SLI Support Transitioning to Native Game Integrations | NVIDIA (nvidia.custhelp.com) 15
I am not sure if the loss of driver profiles also means the loss of driver support going forwards and I do not know if this will affect using the cards for computation. Hopefully someone else can chime in as I am curious to know. |