st179168 | I have a couple of questions with regard to the proper usage of DistributedDataParallel that don’t seem to be covered anywhere.
The questions are below inline with the code.
def train(device, num_epochs=10):
    model = ToyModel().to(device)
    # QUESTION: Suppose each process has a different random generator state; when
    # `DistributedDataParallel` is initialized, does each process need to have the same
    # parameter values?
    ddp_model = nn.parallel.DistributedDataParallel(model, device_ids=[device], output_device=device)
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
    loss_fn = nn.MSELoss()
    for i in range(num_epochs):
        # Training
        ddp_model.train()
        optimizer.zero_grad()
        outputs = ddp_model(torch.randn(20, 10))
        labels = torch.randn(20, 5).to(device)
        loss_fn(outputs, labels).backward()
        optimizer.step()
        # Evaluate on master
        if torch.distributed.get_rank() == 0:
            ddp_model.eval()
            # QUESTION: In order to evaluate on one GPU, can we use `ddp_model.module`?
            # QUESTION: Can we use something like `EMA` to copy new parameters to `ddp_model.module`
            # and then restore them after evaluation? Learn more:
            # http://www.programmersought.com/article/28492072406/
            outputs = ddp_model.module(torch.randn(20, 10))
            labels = torch.randn(20, 5).to(device)
            print(loss_fn(outputs, labels))
        # Save checkpoint on master
        if torch.distributed.get_rank() == 0:
            # QUESTION: In order to save the model, can we use `ddp_model.module`?
            torch.save(ddp_model.module, 'checkpoint.pt')
        # QUESTION: Do we need to use `torch.distributed.barrier` so that the other processes
        # don't continue training while the master evaluates?
Thank you for the helpful tutorial https://pytorch.org/tutorials/intermediate/ddp_tutorial.html. I reused its example code for this question. |
st179169 | Solved by mrshenli in post #2 |
st179170 | QUESTION: Suppose each process has a different random generator state, when DistributedDataParallel is initialized does each process need to have the same parameter values?
No. Rank 0 will broadcast model states to all other ranks when you construct DDP. Code for that is here 16.
In order to evaluate, on one GPU, can we use ddp_model.module?
Yes, this should work.
Can we use something like EMA to copy new parameters to ddp_model.module and then restore them after evaluation?
Yes, as long as you make sure you restore those model param values correctly afterwards. Otherwise, if this introduces inconsistency in param values across different processes, DDP will not fix that for you, as DDP only syncs grads, not params. This might be helpful to explain.
In order to save the model, can we use ddp_model.module
Yes. And when you restore from the checkpoint, it’s better to reconstruct the DDP instance using the restored module to make sure that DDP starts from a clean state.
Do we need to use torch.distributed.barrier so that the other processes don’t continue training while the master evaluates?
It’s recommended this way. But if you are not consuming the checkpoint right away and are not worried about a timeout because rank 0 is doing more work, it is not strictly necessary, since the next DDP backward will launch allreduce comm ops, which will sync anyway. Some of this is also explained here. |
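Pulling the answers above together, here is a minimal sketch of the resulting loop (same toy model and random data as in the question; saving the state_dict on rank 0 is a common variant, not something the answer prescribes):
import torch
import torch.distributed as dist

def train_loop(ddp_model, optimizer, loss_fn, device, num_epochs=10):
    for epoch in range(num_epochs):
        ddp_model.train()
        optimizer.zero_grad()
        outputs = ddp_model(torch.randn(20, 10))
        labels = torch.randn(20, 5).to(device)
        loss_fn(outputs, labels).backward()      # gradients are allreduced here
        optimizer.step()

        if dist.get_rank() == 0:
            ddp_model.eval()
            with torch.no_grad():
                outputs = ddp_model.module(torch.randn(20, 10).to(device))
                print(loss_fn(outputs, torch.randn(20, 5).to(device)))
            # save only the wrapped module's weights on rank 0
            torch.save(ddp_model.module.state_dict(), 'checkpoint.pt')

        # optional: keep the other ranks from racing ahead while rank 0 evaluates/saves
        dist.barrier()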
st179171 | Hi,
I have a question about p2p communication in torch.distributed. Suppose we set up a group with 3 processes using init_process_group(backend='gloo', init_method="tcp://10.0.0.1:8888", rank=args.rank, world_size=3) on three different nodes with IPs 10.0.0.1 to 10.0.0.3. When we are sending tensors from 10.0.0.2 to 10.0.0.3, how is the underlying network traffic routed? Is it directly from 10.0.0.2 to 10.0.0.3, or from 10.0.0.2 to 10.0.0.1 and then to 10.0.0.3? Probably the answer is obvious but I couldn’t find it based on the doc’s description. Thanks in advance!
Yijing |
st179172 | Solved by mrshenli in post #2 |
st179173 | Hey @yijing
The message will directly send from 10.0.0.2 to 10.0.0.3.
In init_process_group, the init_method="tcp://10.0.0.1:8888" is only for rendezvous, i.e., all processes will use the same ip:port to find each other. After that, communications don’t need to go through the master.
BTW, if you are using p2p comm, torch.distributed.rpc might be useful too. Here is a tutorial. |
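A minimal sketch of the setup described above, with one process per node (the rank comes from the command line, and 10.0.0.1:8888 matches the question):
import sys
import torch
import torch.distributed as dist

rank = int(sys.argv[1])  # 0, 1, or 2 (one process per node)
# 10.0.0.1:8888 is only used for rendezvous; later traffic is peer-to-peer.
dist.init_process_group(backend='gloo',
                        init_method='tcp://10.0.0.1:8888',
                        rank=rank, world_size=3)

t = torch.zeros(5)
if rank == 1:                      # process on 10.0.0.2
    dist.send(t + 42, dst=2)       # goes directly to rank 2, not via rank 0
elif rank == 2:                    # process on 10.0.0.3
    dist.recv(t, src=1)
    print(t)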
st179174 | I was trying to train my NLP model in multGPU with 2 K80s, each K80 has 2 cores, my model works fine in CPU or single GPU with DataParallel or distributedDataParallel, but when I use 2 or more cores, embarrassing things happened, it hangs always,this is the symptom
DataParallel
clothModel = myModel.cuda()
clothModel = nn.DataParallel(clothModel) # <-- it works fine
······
out, loss = clothModel(input) # <-- the program always hangs at this line; I can't even use Ctrl+C to shut it down. I got this information by using the VS Code debugger
when I turn to nvidia-smi, I found this
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.39 Driver Version: 418.39 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 On | 00000000:08:00.0 Off | Off |
| N/A 45C P0 70W / 149W | 2113MiB / 12206MiB | 100% E. Process |
+-------------------------------+----------------------+----------------------+
| 1 Tesla K80 On | 00000000:09:00.0 Off | Off |
| N/A 35C P0 70W / 149W | 322MiB / 12206MiB | 0% E. Process |
+-------------------------------+----------------------+----------------------+
| 2 Tesla K80 On | 00000000:86:00.0 Off | Off |
| N/A 39C P0 57W / 175W | 311MiB / 12206MiB | 0% E. Process |
+-------------------------------+----------------------+----------------------+
| 3 Tesla K80 On | 00000000:87:00.0 Off | Off |
| N/A 31C P0 71W / 175W | 320MiB / 12206MiB | 0% E. Process |
+-------------------------------+----------------------+----------------------+
After a night, it still remains like this.
DistributedDataParallel
After failing with DataParallel, I turned to DistributedDataParallel, which is recommended, and it hangs at clothModel = nn.parallel.DistributedDataParallel(clothModel).
nvidia-smi shows almost the same picture as when I use nn.DataParallel.
This time I can use Ctrl+C, but the processes still remain on the GPUs, so the only way to exit is kill -9 PID.
When I use Ctrl+C, it displays this:
File "/home/damaoooo/.conda/envs/test/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/damaoooo/.conda/envs/test/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/damaoooo/.conda/envs/test/lib/python3.6/site-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "/home/damaoooo/.conda/envs/test/lib/python3.6/site-packages/torch/distributed/launch.py", line 256, in main
process.wait()
File "/home/damaoooo/.conda/envs/test/lib/python3.6/subprocess.py", line 1477, in wait
(pid, sts) = self._try_wait(0)
File "/home/damaoooo/.conda/envs/test/lib/python3.6/subprocess.py", line 1424, in _try_wait
(pid, sts) = os.waitpid(self.pid, wait_flags)
More interestingly, I made a simple CNN for MNIST, ran it with DataParallel or DistributedDataParallel, and it works perfectly… I wonder, is there something wrong with my clothModel? If there is, why does it work fine when I switch to a single GPU?
And how can I solve this confusing hang? |
st179175 | damaoooo:
and it hangs in clothModel = nn.parallel.DistributedDataParallel(clothModel)
You mean DDP hangs at the constructor? Can you attach the process to gdb and check the trace to see which line is causing the hang?
Have you set CUDA_VISIBLE_DEVICES or pass in device_ids arg properly for DDP? Each DDP process should exclusively work on one GPU.
I wonder is there something wrong with my clothModel?
Given the trace, I assume you are using the launch script. With that, DDP should be constructed in the following way:
clothModel = DistributedDataParallel(clothModel, device_ids=[arg.local_rank], output_device=arg.local_rank) |
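For reference, a minimal sketch of that setup when using the launch script, so that each process works exclusively on the GPU given by --local_rank (MyModel is a placeholder for the poster's model):
import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # filled in by torch.distributed.launch
arg = parser.parse_args()

torch.cuda.set_device(arg.local_rank)                     # one process <-> one GPU
dist.init_process_group(backend="nccl", init_method="env://")

clothModel = MyModel().cuda(arg.local_rank)               # MyModel is a placeholder
clothModel = DistributedDataParallel(clothModel,
                                     device_ids=[arg.local_rank],
                                     output_device=arg.local_rank)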
st179176 | thanks a lot! that’s the key to the question, after tried that, I successed to run, but How about the DataParallel hang in the first question? |
st179177 | Not sure why DataParallel stuck. The source code is here 30. Can you attach the process/threads to GDB and backtrace the stack? |
st179178 | Hello everyone,
I’m building an app that makes calculations using CUDA (it does some optimization based on simulated annealing). I successfully followed the Custom C++ and CUDA Extensions tutorial and made a stable version on a single GPU, so now I would like to use multiple GPUs (some tasks have a huge amount of data that cannot be allocated on a single GPU, plus I’d like to speed up my calculations).
I have several tensors that I would like to split along dim=0 and process in a distributed way (all calculations follow a map pattern, so all records along dim=0 are independent). So the best choice for me would be to create a custom nn.Module class with a forward method and use the DistributedDataParallel module, but I don’t have any parameter that requires a gradient and the module crashes. (It raises AssertionError: DistributedDataParallel is not needed when a module doesn’t have any parameter that requires a gradient.)
Could you please recommend some way to solve this problem, or some other modules/ways to do distributed calculations?
Best regards, Demetry |
st179179 | Solved by ptrblck in post #2 |
st179180 | Would splitting the data and sending each chunk to a specific device work?
Something like this could already solve your use case:
data = torch.randn(4, 100)
chunks = data.chunk(4, 0)
res = []
for idx, chunk in enumerate(chunks):
    res.append(my_fun(chunk.to('cuda:{}'.format(idx))).to('cuda:0'))
res = torch.stack(res) |
st179181 | Thank you, ptrblck,
As I understand it, your approach will calculate sequentially?
I would like to calculate in parallel: my app runs about a million iterations and each one is based on the previous, so should I use threading/multiprocessing/concurrent.futures, or is there a better solution? |
st179182 | CUDA operations are asynchronous, so each device should operate on its own.
You could check the GPU utilization during the script execution, which should show that all devices are being used. |
st179183 | Thank you, it really helped)
So I’ve done something like this:
import torch
from concurrent.futures import ThreadPoolExecutor
import MyCustomCudaModule as my_module

class MyClass:
    def __init__(self, data):
        self.gpus = [0, 1]  # set devices I'd like to use
        self.executor = ThreadPoolExecutor(max_workers=len(self.gpus))
        # Split some data into chunks and allocate each on its own GPU
        self.tensor0 = torch.tensor(data[0], dtype=torch.float64).chunk(len(self.gpus))
        self.tensor0 = [self.tensor0[idx].to(f'cuda:{gpu}') for idx, gpu in enumerate(self.gpus)]
        self.tensor1 = torch.tensor(data[1], dtype=torch.float64).chunk(len(self.gpus))
        self.tensor1 = [self.tensor1[idx].to(f'cuda:{gpu}') for idx, gpu in enumerate(self.gpus)]

    def calculate(self):
        # Prepare input data for my CUDA method
        chunks = list()
        for idx in range(len(self.gpus)):
            chunk = [self.tensor0[idx], self.tensor1[idx]]
            chunks.append(chunk)
        # Start my calculations asynchronously
        futures = self.executor.map(lambda ch: my_module.calculate(*ch), chunks)
        total_result = 0.0
        for result in futures:
            total_result += result.item()  # return calculation result from GPU to CPU
        return total_result
It splits my data between GPUs and calculates correctly, but I get no speedup (the speed is the same as when I use 1 GPU).
What should I do to calculate faster? |
st179184 | How large is the data tensor? If it is not large enough, the GIL contention across threads and the extra overhead of setting this up could overshadow the speed up brought by using multiple GPUs. Another thing is how did you measure the delay? As the computation is done on CUDA, you might need to use CUDA events and elapsed_time 1 to get the accurate measure.
If elapsed_time still shows no improvement, can you try:
increase chunk size.
use multiple processes |
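A minimal sketch of timing with CUDA events as suggested above (the element-wise op is a stand-in for the user's my_module.calculate; shapes are illustrative):
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

x = torch.randn(2000, 1000, 100, device='cuda:0')

start.record()
y = (x * x).sum(dim=-1)       # stand-in for the real GPU work
end.record()

torch.cuda.synchronize()       # wait for all queued kernels to finish
print('elapsed:', start.elapsed_time(end), 'ms')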
st179185 | In average tensors are about 2000 * 1000 * 100 elements, sometimes they are could be about 15000 * 8000 * 100 elements. I split on chunks by dim=0.
Now I have a guess that they are calculating consequentially instead of in parallel (I thought that ThreadPoolExecutor.map will start first GPU thread, return to the main CPU thread, start second one GPU thread etc, then will await for results from any device that finished. But as I can see it waits until first GPU will finish and then start calculations on second one)
So what is the best practice to start my calculations asynchronous? (I could not use asyncio) |
st179186 | Multi-thread should work, and this is how DataParallel 21 is implemented (search for parallel_apply). But if my_module.calculate is composed of many CPU ops, you might see sequential execution due to GIL. |
st179187 | Shuffle BN is an important trick proposed by MoCo (Momentum Contrast for Unsupervised Visual Representation Learning 13):
We resolve this problem by shuffling BN. We train with multiple GPUs and perform BN on the samples independently for each GPU (as done in common practice). For the key encoder f_k, we shuffle the sample order in the current mini-batch before distributing it among GPUs (and shuffle back after encoding); the sample order of the mini-batch for the query encoder f_q is not altered. This ensures the batch statistics used to compute a query and its positive key come from two different subsets. This effectively tackles the cheating issue and allows training to benefit from BN.
Since the official code is not yet released, I tried to implement Shuffle BN as below (where the size of local tensor data is [32, 3, 224, 224]):
def forward(self, data):
    N = data.size(0)
    if self.training and self.shuffle_bn:
        global_data = distributed_concat_no_grad(data, 0)
        shuffle_index = torch.randperm(global_data.size(0), device=data.device)
        broadcast(shuffle_index, 0)
        recover_index = shuffle_index.argsort()
        beg = N * self.rank
        end = beg + N
        data = global_data[shuffle_index[beg: end]]
    feature = self.some_feature_extracting_network(data)
    feature = feature.view(N, -1)
    if self.training and self.shuffle_bn:
        global_feature = distributed_concat_with_grad(feature)
        feature = global_feature[recover_index[beg: end]]
    return feature
However, the first call of allgather communication makes the training much slower (0.54s/iter -> 0.84s/iter). |
st179188 | Hey @WarBean
Where is the allgather call? Do you mean the broadcast?
Is this question about how to improve the efficiency? |
st179189 | Thanks for your reply.
1. distributed_concat_no_grad all-gathers the data tensors from all GPUs.
2. Yes. |
st179190 | Looks like, if you can know the value of global_data.size(0) without communication, you then only need the real data from global_data at the end of the if statement. In this case, you can try launch an async allgather and only wait for it right before the shuffle, so that the comm can overlap with other steps in between.
Another questions is why do you need to do the shuffle this way? Can you pre-shuffle the input data for multiple batches and then run multiple iterations without communication? If this is possible, you can both 1) consolidate smaller comm into larger ones and 2) launch multiple async comm and wait for all in one shot to saturate the bandwidth. Besides, looks like the comm only applies to input data, if so, you can even align one iteration with a previous comm, e.g., always let iteration i consume comm result from iteration i - 2. In this way, the comm i-2 might have already finished before kicking off iteration i. |
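A minimal sketch of the async-allgather overlap suggested above, written with plain torch.distributed calls (the feature network and the shuffle logic follow the question's module and are assumed; the un-shuffle of features afterwards is omitted):
import torch
import torch.distributed as dist

def forward(self, data):
    N = data.size(0)
    gathered = [torch.empty_like(data) for _ in range(dist.get_world_size())]
    work = dist.all_gather(gathered, data, async_op=True)   # kick off comm early

    # work that does not need `gathered` can run here and overlap with the comm,
    # e.g. building and broadcasting the shuffle permutation
    shuffle_index = torch.randperm(N * dist.get_world_size(), device=data.device)
    dist.broadcast(shuffle_index, 0)

    work.wait()                                              # block only right before use
    global_data = torch.cat(gathered, dim=0)
    beg = N * dist.get_rank()
    data = global_data[shuffle_index[beg: beg + N]]
    # (un-shuffling the features after the backbone is omitted here)
    return self.some_feature_extracting_network(data)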
st179191 | Hi, all
I’m trying to use the distributed package for multi-gpu training. Because of the way the code is written, the master process does all the initialisations (creating model replicas, optimisers etc.). From pytorch source code, it seems like during forward pass, all model replicas will be synced with the one running in subprocess with rank 0. Does that mean I could just initialise one optimiser for subprocess 0 and only update the parameters of the first model replica?
Thanks, |
st179192 | Hi @DzReal
From pytorch source code, it seems like during forward pass, all model replicas will be synced with the one running in subprocess with rank 0.
If you are using DistributedDataParallel, the above is actually not true. The distributed sync occurs during the backward pass, and it averages all gradients instead of parameters (check torch/csrc/distributed/c10d/reducer.cpp). So when the optimizer consumes those gradients, they are already global gradients.
The sync you mentioned in the forward pass might be this 13. This only does intra-node sync when you use one DDP process to work on multiple GPUs.
Does that mean I could just initialise one optimiser for subprocess 0 and only update the parameters of the first model replica?
No. Each DDP process should have its own local optimizer. |
st179193 | So when running on a single node with multiple GPUs, could I only use one optimiser? |
st179194 | So when running on a single node with multiple GPUs, could I only use one optimiser?
You will need one optimizer per DDP process, regardless of where those DDP processes are. I hope the following note could help explain it: https://pytorch.org/docs/master/notes/ddp.html 12 |
st179195 | Hi
I have an RNN and a set of time series data organized in n sequences as (y_1, u_1), (y_2, u_2), …, (y_n, u_n), where y_i is a vector of data outputs and u_i is a matrix where each column is a signal and each row is a time sample of all signals.
The cost function to be minimized is in the form of a sum
min f_1(y_1, u_1) + f_2(y_2, u_2) + f_3(y_3, u_3) + … + f_n(y_n, u_n)
where each cost function f_i is different for each sequence i.
I was wondering if someone has any experience, or can help me find where to start looking into how to make each term f_i(y_i, u_i) in the cost function be evaluated in parallel using multiple CPUs/GPUs? |
st179196 | Hey @daner if you are using a single machine with multiple GPUs, you can try scatter + parallel_apply + gather. The implementation of DataParallel can serve as an example. [link 1] |
st179197 | I have written a module for small graphs that has a custom collate_batch function so that I have batches of 128 graphs at a time however the graphs are different sizes and so we have a variable number of nodes. I am using torch_scatter for some operations.
My aim is to parallelise my code over multiple GPUs to reduce the wall-clock time for training. Naively using nn.DataParallel(model) gives me an error, as I believe it requires the sub-tensors it splits the input into to all be the same size, which is not the case here: while each GPU gets 64 graphs, the number of nodes varies. Is there a way to do this currently across multiple GPUs, or do I need to just take the hit and train on one GPU? |
st179198 | CompRhys:
Naively using nn.DataParallel(model) gives me an error as I believe it requires the sub-tensors it splits up the tensor into to all be the same size which is not the case here as while each GPU gets 64 graphs the number of nodes varies.
I am inclined to think this shouldn’t cause a problem for DataParallel, because DataParallel would replicate your model and scatter your input on the first (batch) dimension. Even though different input elements (graphs) can contain different numbers of nodes, as long as your model’s forward function can handle it properly, it should not hit an error.
Could you please share a min repro for the error you saw? |
st179199 | Hi everyone,
I am trying to train a model with one machine, but with multi gpus. Until now, I was using the nn.DataParallel which works well, but it seems a bit slow to me so I would like to use the DistributedDataParallel instead.
However, I am not sure I clearly understand how to use this function (I get some weird results: training takes 10x more time than with DataParallel).
In fact, I am not sure which GPUs have to load the model/batch and compute the loss function.
Moreover, with the code below, my training is slower and I saw weird behavior in nvidia-smi. Instead of having ONE process on each GPU, I have two processes on each GPU (I have two GPUs, but 4 processes).
My second issue is that if I increase the number of workers in the dataloader, I get a "dataloader pid killed" error.
Am I doing something wrong ?
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from apex.parallel import DistributedDataParallel as DDP_apex
from torch.nn.parallel import DistributedDataParallel as DDP

def run(gpu, args):
    rank = gpu
    dist.init_process_group(
        backend='nccl',
        init_method='tcp://localhost:1088',  # 'env://',
        world_size=args.world_size,
        rank=rank
    )
    trainset = ...
    testset = ...
    ################################################################
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        trainset,
        num_replicas=args.world_size,
        rank=rank
    )
    ################################################################
    test_sampler = torch.utils.data.distributed.DistributedSampler(
        testset,
        num_replicas=args.world_size,
        rank=rank
    )
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=args.batch_size, shuffle=True, num_workers=args.workers, pin_memory=False, drop_last=True)
    testloader = torch.utils.data.DataLoader(testset, batch_size=args.batch_size, shuffle=False, num_workers=args.workers, pin_memory=False)
    net = 2D_CNN()
    net = net.to(args.gpus[0])
    net = DDP(net, device_ids=args.gpus)
    optim_params = list(filter(lambda p: p.requires_grad, net.parameters()))
    optimizer = optim.Adam(optim_params, lr=args.lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=args.weight_decay, amsgrad=True)
    train(net, optimizer, trainloader, testloader, args, gpu)  # function which iterates across the dataloader and does the forward/backward/step

if __name__ == "__main__":
    args.nodes = 1  # one single machine
    args.gpus = [0, 1, 2]
    #########################################################
    args.world_size = len(args.gpus) * args.nodes
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '8888'
    print(args.gpus)
    print(os.environ['MASTER_PORT'])
    mp.spawn(run, nprocs=len(args.gpus), args=(args,)) |
st179200 | One requirement for DistributedDataParallel is that, you need to set device_ids 15 properly or use CUDA_VISIBLE_DEVICES env var to configure them properly to make sure that one process only works on one GPU. Otherwise, by default, each process will try to use all visible GPUs. |
st179201 | Thanks for your replies.
@mrshenli How can I configure device_ids if I want to have one process = one GPU?
In my case, I tried changing device_ids to the GPU number (which is equal to the rank value in my case), i.e. device_ids=[gpu], and transferring the model to the first GPU with model.to(args.gpus[0]).
But unfortunately, I got this error: RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution)
P.S.: it seems to "work" now; I had forgotten to move the loss function to the right GPU. But when I increase the number of workers for the dataloader (>0), I get the error RuntimeError: DataLoader worker (pid(s) 16451) exited unexpectedly (and the training took over 24 hours). I will try with the apex distributed version and then compare with DataParallel again. |
st179202 | Please report back if you find that distributed is faster than data parallel (i.e. time it and lets us know). My intuition says that the claim is false but I haven’t seen anywhere on the docs a proper MWE on how to properly use distributed for each individual case in combination with launch.py. |
st179203 | This 78 is a minimum DDP example. Posting it here in case it is useful for future readers. |
st179204 | That doesn’t cover launch.py examples.
I would like to see examples using launch.py covering the following two separate use cases
[image: diagram of the two use cases, omitted] |
st179205 | @mrshenli Thanks for the link.
I see that my training takes exactly the same amount of time with 1 or 2 GPUs, so I was wondering: when we use distributed processes, do we have to divide the original number of iterations by the number of GPUs used? |
st179206 | Hey @kirk86, could you please create an issue on GitHub for us to track this request? Thanks! |
st179207 | @Shiro
If each process runs the same number of iterations with each iteration consuming the same amount of data, using 2 GPUs might actually take longer, because there will be additional communication overhead between the two GPUs. But in this case, your model is actually trained using 2X the number of batches.
Reducing the number of iterations should work, or you can also reduce the batch size. One thing to note is that, this might also call for additional tuning on the learning rate or other configs. Some relevant discussions are available here 24. |
st179208 | Hi,
I am running experiments on other people’s code. They used 16 GPUs and the torch.distributed library. I just want to run the code with one GPU; I know it will be slow. Is there a simple way to adapt the code to one GPU without having to learn to use torch.distributed? At this moment my priority is to see if their code helps us; if selected, then I’ll focus on that library.
Will the library pytorch.distributed automatically detect that I only have one GPU and work on it? No, because it is sending me errors.
Use GPU: 0 for training
Traceback (most recent call last):
File “train.py”, line 97, in
main()
File “train.py”, line 29, in main
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, idx_server, opt))
File “/home/ericd/anaconda/envs/myPytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 171, in spawn
while not spawn_context.join():
File “/home/ericd/anaconda/envs/myPytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 118, in join
raise Exception(msg)
Exception:
– Process 0 terminated with the following error:
Traceback (most recent call last):
File “/home/ericd/anaconda/envs/myPytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 19, in _wrap
fn(i, *args)
File “/home/ericd/tests/CC-FPSE/train.py”, line 37, in main_worker
dist.init_process_group(backend=‘nccl’, init_method=opt.dist_url, world_size=world_size, rank=rank)
File “/home/ericd/anaconda/envs/myPytorch/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py”, line 397, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File “/home/ericd/anaconda/envs/myPytorch/lib/python3.6/site-packages/torch/distributed/rendezvous.py”, line 120, in _tcp_rendezvous_handler
store = TCPStore(result.hostname, result.port, world_size, start_daemon)
TypeError: init(): incompatible constructor arguments. The following argument types are supported:
1. torch.distributed.TCPStore(arg0: str, arg1: int, arg2: int, arg3: bool)
line 37 is
torch.distributed.init_process_group(backend='nccl', init_method=opt.dist_url, world_size=world_size, rank=rank)
How can I adapt the code to my situation? Ideally there is some parameter that I can use and make things compatible.
Do I need to find certain lines and modify them? I am afraid that may be the case but I don’t know where or which.
I am working with https://github.com/xh-liu/CC-FPSE 2 but I am not sure if this helps. |
st179209 | Solved by mrshenli in post #2
It depends on how the code was written. If the model forward function has sth like:
def forward(input):
x1 = self.layer1(input.to("cuda:0"))
x2 = self.layer2(input.to("cuda:1"))
x3 = self.layer3(input.to("cuda:2"))
return x3
Then it would certainly fail as it cannot find the device… |
st179210 | It depends on how the code was written. If the model forward function has sth like:
def forward(input):
    x1 = self.layer1(input.to("cuda:0"))
    x2 = self.layer2(input.to("cuda:1"))
    x3 = self.layer3(input.to("cuda:2"))
    return x3
Then it would certainly fail as it cannot find the device.
If there is nothing like that, you probably can get around by doing sth like:
torch.distributed.init_process_group(backend='nccl', init_method=opt.dist_url, world_size=1, rank=0)
You might also need to change opt.dist_url into sth like "tcp://localhost:23456" |
st179211 | Hi @all,
I’m new to PyTorch and currently trying my hand at an MNIST model. I do not have a GPU but have 24 CPU cores and >100GB RAM (checked via torch.get_num_threads()). However, I do not observe any significant improvement in training speed when I use torch.set_num_threads(10); it seems to me that there isn’t any difference between setting the number of threads and not setting it at all.
I would like to know how I can take advantage of the multiple CPU cores available during model training. I have also tried setting num_workers of the data loader but to no avail. |
st179212 | Setting the number of threads will only make some individual operations faster (e.g. big matrix multiplication or convolution), if they work on big tensors. For your example, this might accelerate some of the big fully connected layers, if you use a batch size that’s big enough. Alternatively, you can explore running more processes, and using torch.nn.parallel.DistributedDataParallel to parallelize across processes. |
st179213 | Thanks for your direction. I have tried using torch.nn.parallel.DistributedDataParallelCPU and the forward pass is able to utilize the number of processes I set (I assume that’s the same as cpu cores in my case). I followed the tutorial here 968. However, there’s a lengthy block, for what I think is the backward pass, before any forward pass is observed.
Any suggestion on how to address this? |
st179214 | Sorry, it’s a typo. I mean a ‘lengthy block’ of all forward pass ops before the spawned processes do the next forward pass. |
st179215 | Do you mean a lengthy block of time? That you observe upon starting the processes?
It is possible the first forward pass take a bit longer than subsequent ones due to memory allocation and general initialization of all the operators/backends. |
st179216 | If this only happens in the first iteration, it’s likely memory allocation and initialization stuff. If subsequent iterations also take longer than you expect, it is possible you have started too many processes and are overloading your system. |
st179217 | Is torch.nn.parallel.DistributedDataParallel only applicable to GPU and not to CPU with multi cores? |
st179218 | It works with CPUs with multi cores. From the DistributedDataParallel doc:
For multi-device modules and CPU modules, device_ids must be None or an empty list, and input data for the forward pass must be placed on the correct device.
The thing is that as there is only one “cpu” device in PyTorch, you cannot specify which cores to run a DDP process using the device_ids arg in DistributedDataParallel constructor. However, you should still be able to set the CPU affinity for processes independently? |
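On Linux, per-process CPU affinity can be set from inside each DDP process; a minimal sketch (the cores-per-process split is illustrative):
import os

def pin_to_cores(rank, cores_per_proc=6):
    # e.g. rank 0 gets cores 0-5, rank 1 gets cores 6-11, ...
    first = rank * cores_per_proc
    os.sched_setaffinity(0, range(first, first + cores_per_proc))  # pid 0 = current process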
st179219 | I’ve been using react native for one of my module.Recently we started using Pytorch . Pytorch along with react native is crashing.
Below is the error
java.lang.UnsatisfiedLinkError: couldn’t find DSO to load: libpytorch_jni.so caused by: dlopen failed: cannot locate symbol “_ZN8facebook3jni10JByteOrder11nativeOrderEv” referenced by “/data/app/*****-4ZGDZJcLItKdhIsz3hj2rQ==/lib/arm/libpytorch_jni.so”.
Please help me. I am in great urgency. |
st179220 | Hey @Arun_Kumar1
Could you please share a snippet of the PyTorch code that is causing this error? Why do you think this relates to torch.distributed or torch.distributed.rpc? Are you using these two packages? It seems to me this question should be posted under the mobile build and Java bindings tag?
st179221 | Hi. For example, for the code snippet
a = torch.rand(3).requires_grad_()
l = a.sum()
torch.distributed.all_gather([a,b,c], a)
l.backward() will trigger an error saying
one of the variables needed for gradient computation has been modified by an in-place operation
I assume this is because all_gather does an in-place change to all of a, b, c.
But why? As a is emitted by the current process, it’s not necessary to change it and cause this issue. Is there any consideration behind this all_gather behavior?
Thx |
st179222 | We probably can add a shortcut to avoid changing a in this case, but I am not sure if that is a good idea, because that will make all_gather have different behavior depending on underlying storage. Consider two cases.
Case 1:
x = empty_like(a)
torch.distributed.all_gather([x,b,c], a)
In this case, we would still need to write data from a to x, right?
Case 2:
x = a.view(...) # say change the stride
torch.distributed.all_gather([x,b,c], a)
In this case, x will share the storage with a, but using a different element layout, so we would need to write into x.
To address the above problems, we probably can detect and only skip the in-place write if x shares the same storage and metadata with a. However, the concerns are 1) is the extra overhead worth it? 2) will the disparity in all_gather’s behavior confuse users?
This PR 3 might be relevant. It’s trying to avoid a flatten. |
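Until such a shortcut exists, a common workaround for the original example is to gather into fresh tensors and then put the local, autograd-connected tensor back into its slot; a minimal sketch (assumes the process group is already initialized):
import torch
import torch.distributed as dist

rank = dist.get_rank()
world_size = dist.get_world_size()

a = torch.rand(3).requires_grad_()
l = a.sum()

gathered = [torch.empty_like(a) for _ in range(world_size)]
dist.all_gather(gathered, a)   # writes detached copies; `a` itself is not modified in-place
gathered[rank] = a             # keep the local slot connected to autograd

l.backward()                   # no in-place error; grads flow only through the local `a`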
st179223 | I am looking for clarification on the best way to use DataParallel with attention layers. As an example, MultiheadAttention expects inputs which have shape (L,N,E) where L is the length of the sequence, N is the batchsize, and E is the embedding size. The fact that the batch size is NOT the first dimension leads to problem when using DataParallel. To work around this I am transposing the dimension, see example below:
import torch
import torch.nn as nn

class AttnParallel(nn.Module):
    def __init__(self, dim, num_heads):
        super(AttnParallel, self).__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, dropout=0, bias=False)

    def forward(self, h, mask):
        print("h has shape:", h.shape)
        print("mask has shape", mask.shape)
        h = h.transpose(0, 1).contiguous()
        h = self.attn(h, h, h, key_padding_mask=mask)[0]
        h = h.transpose(0, 1).contiguous()
        return h

# create model
dim = 4
num_head = 2
device = torch.device("cuda")
mod = AttnParallel(dim, num_head)
mod = nn.DataParallel(mod.to(device))

# create data
bsz = 16
L = 5
h = torch.rand(bsz, L, dim)
mask = torch.zeros(bsz, L).bool()
mask[0, 1] = True
mask[2, 4] = True

# forward
h = mod(h, mask)
I have a few questions:
My understanding is that when using DataParallel, whatever tensors I feed to the forward() function will be chunked over the first dimension into 8 pieces and fed to 8 replicas of my network (assuming 8 GPUs). So in this example, both the h and mask tensors will be chunked into 8 pieces. Eventually, the outputs of the 8 replicas are concatenated over the first dimension. Am I understanding this correctly?
Is transposing the input the recommended way of dealing with modules that expect input whose first dimension is not the batch dimension? Is it recommended to use contiguous() to improve performance, or is that unnecessary?
Should it be nn.DataParallel(mod.to(device)) or nn.DataParallel(mod).to(device)? Both seem to work, but the doc says: "The parallelized module must have its parameters and buffers on device_ids[0] before running this DataParallel module." So I don’t understand how nn.DataParallel(mod).to(device) works?
Thanks! |
st179224 | your understand is correct, it will chunk data based on dim=0 in default
I’m not sure transposing it recommended way, but contiguous() will cause copy
I think nn.DataParallel(mod.to(device)) is better |
st179225 | I want to save gradients of internal variables through register_hook() or retain_grad().
When I run model on single GPU, it works.
But when I run model on multiple GPUs through wrapping model into nn.DataParallel, I find that it doesn’t work.
Can anyone help me? |
st179226 | based on comments “In each forward, :attr:module is replicated on each device, so any
updates to the running module in forward will be lost. For example,
if :attr:module has a counter attribute that is incremented in each
forward, it will always stay at the initial value because the update
is done on the replicas which are destroyed after forward. However,
:class:~torch.nn.DataParallel guarantees that the replica on
device[0] will have its parameters and buffers sharing storage with
the base parallelized :attr:module. So in-place updates to the
parameters or buffers on device[0] will be recorded.”
That means gradients of internal variables cannot be updated on the replicas on multiple GPUs; they can only be updated on device[0]. If you want to sync buffers, you can try the DistributedDataParallel package. |
st179227 | With DataParallel, how can we assign exemples to GPUs manually while iterating on data loader?
My dataset contains images of highly variable sizes and we chose to use a batchsize of 1. The automatic scatter in dataParallel will use the batchsize dimension to realize the scatter and will only assign to 1 GPU in this case.
Is there a way to compute compute the backward in multi-GPU fashion in this context? |
st179228 | Do you want to try to use DistributedDataParallel API, where you can spawn each process running on one GPU |
st179229 | Hi,
I have a model which has a custom autograd Function call. The backward method of this call has mutliple torch.autograd.grad calls in a loop. Somewhat like this -
class func1(Function):
    @staticmethod
    def forward(ctx, input1, input2, *args):
        ctx.save_for_backward(input1, input2)
        return input2

    @staticmethod
    def backward(ctx, grad):
        input1, input2 = ctx.saved_tensors
        for ii in range(10):
            new = torch.autograd.grad(input2, input1, grad_outputs=grad,
                                      retain_graph=True)
        return (None, new)

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.net1 = torch.nn.Linear(10, 10)
        self.relu = torch.nn.ReLU()
        self.net2 = torch.nn.Linear(10, 5)

    def forward(self, x):
        out = self.net1(x)
        out = func1.apply(x, out)
        out2 = self.relu(out)
        out2 = func1.apply(x, out2)
        out3 = self.net2(out2)
        out3 = func1.apply(x, out3)
        return out3
This works fine when I run with single GPU. But when I run with DDP it hangs in the loss.backward() call. I see that it hangs in the for loop in func1. Peculiarly all the workers hang in the same iteration of the for loop.
Edit: CPU utilization and GPU utilization stay high (100%) when it hangs, for all processes. The code is run with torch.distributed.launch.
Any help would be appreciated!
Thanks!! |
st179230 | Hi,
This is a known limitation of DDP I’m afraid. You can see the issue tracking this here: https://github.com/pytorch/pytorch/issues/24005 98
We are planning on getting to this after the 1.5 release. |
st179231 | Thank you for your response.
After some testing, now I realised that the hang is due to syncBNs in the model - it works fine with normal BNs. The graph between input2 and input1 has syncBNs too and the many autograd.grad() calls give rise to many all_reduce calls in syncBNs’ backward which hang. I think this is what’s happening - https://github.com/pytorch/pytorch/pull/14267#discussion_r257051495 16
My model has many heads in parallel with syncBNs and those could be deadlocking too.
Do you see a solution/workaround for this?
Thanks again! |
st179232 | @alekhka as a temporary workaround, you could try doing allreduce manually after the backward pass. (see this comment 17). This will be slower as there will be no overlapping between computation and communication, but hopefully can avoid the hang problem. |
st179233 | That should be same as setting delay_reduce=True in apex right? That doesn’t fix it either.
I think the deadlock is between all_reduce calls in syncBNs’ backward() across the parallel heads and not between gradient all_reduce & syncBN all_reduce.
Removing the parallel head and having just 1 head works fine. Replacing syncBN with BN works fine for with parallel head model.
Thank you! |
st179234 | I am a deep learning beginner and running PyTorch’s Demo for study. I have several computers and laptops at home, the situation is different for each machine, some machines only have CPU, and some have powerful GPU, but I hope they all come together to speed up a training process.
My implementation idea is a client/server structure. While the server is consuming model inputs, if there is a request from a client, some number of samples are sent to the client through a socket, and the calculation of these sent samples is skipped on the server. After the client completes, it sends back the results of the model forward, and the server merges the results. Finally, once the server has completed an epoch, it blocks and waits for the results of all clients to return.
Now, what I don’t know is how to merge the results of the model forward on the client into the server. I don’t know if this is correct…
PS: Alternatively, when the client’s model calls forward(), it does not run the classifier. Before the classifier is called, the data is sent back and the server model completes the classifier. Could the main computation of the forward pass be shared this way? |
st179235 | IIUC, what you have in mind is the reverse structure of this tutorial 10. In the tutorial, there are multiple observers sending inputs to the same agent, while in your case, you would like to have the server sending inputs to different clients and run forward on the clients?
The problem with the server-client structure is that, if forward is run on client, the autograd graph and activations will also be on client, meaning that the server cannot merge the output and run the backward locally.
One possible alternative is that, instead of sending forward output from client to server, you can let the client finish forward-backward and then send the model param gradients to server. Then the server collects gradients from all clients and sum them, use the summed grads to update parameters, and then broadcast the updated model to all clients.
Another alternative is to let the client finish forward-backward-optimizer, and then send the model params to the server. Then the server calculates the weighted average of all params and broadcast them back to the clients. |
st179236 | I face this when using torch.nn.parallel.DistributedDataParallel(pytorch 1.4.0), and also using below
device = torch.device(“cuda:0” if torch.cuda.is_available() else “cpu”)
tensor = torch.zeros(*shape, device=device).scatter_add(1, segment_ids, data)
File “/home/gezi/mine/pikachu/utils/melt/eager/train.py”, line 1398, in train
loss.backward()
File “/home/gezi/env/anaconda3/lib/python3.6/site-packages/torch/tensor.py”, line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “/home/gezi/env/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py”, line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Tensors must be CUDA and dense
How to solve this? I tried many things, such as
tensor = torch.zeros(*shape).cuda().scatter_add(1, segment_ids, data)
but this only works for DataParallel, not DistributedDataParallel.
Another problem with DistributedDataParallel is that each process uses all GPUs, like below. Is this by design?
[nvidia-smi screenshot omitted] |
st179237 | Huige_Cheng:
RuntimeError: Tensors must be CUDA and dense
Are you using sparse tensors?
Another problem of DIstributedDataParallel is each process using all gpus like below, is this by design ?
How did you construct DDP? You need to either set the device_ids arg properly or use the CUDA_VISIBLE_DEVICES env var to configure that, and make sure no DDP processes share the same GPU. Otherwise, each process will try to use all visible devices, and when two DDP processes share a GPU, it could hang. |
st179238 | Yes I’m using Embedding with arg sparse=True. But seems not ok to run DDP only if I using scatter_add later.
If using 2 processes to run DDP. Then I set CUDA_VISIBLE_DEVICE=0,1 for each prcoess.
code like below
rank = dist.get_rank()
device = torch.device('cuda', rank)
model = model.to(device)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank], output_device=rank)
I tried to launch each process with CUDA_VISIBLE_DEVICES=0 / CUDA_VISIBLE_DEVICES=1, but it doesn't seem to work. |
st179239 | @mrshenli Well the second problem is due to I’m using tf dataset eager mode to read data first then convert to torch tensors, the problem has been solved.
For the first I find yes it is due to using sparse not related to scatter_add. So the problem is the same as DistributedDataParallel Sparse Embeddings 157 |
st179240 | Hi, I used to have a single GPU, but now I have two, so I tried to run my code on cuda:1 rather than cuda:0, which I normally use.
However, I ran into the following error:
File "/Hard_3rd/harry/TOF_hj_0306/train/model_trainers/trainer_CU_MixRes_scale.py", line 297, in _train_epoch
for step, data in data_loader:
File "/home/user/anaconda3/envs/TOF/lib/python3.7/site-packages/tqdm/std.py", line 1107, in __iter__
for obj in iterable:
File "/home/user/anaconda3/envs/TOF/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 582, in __next__
return self._process_next_batch(batch)
File "/home/user/anaconda3/envs/TOF/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 608, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
File "/home/user/anaconda3/envs/TOF/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/user/anaconda3/envs/TOF/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 68, in default_collate
return [default_collate(samples) for samples in transposed]
File "/home/user/anaconda3/envs/TOF/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 68, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "/home/user/anaconda3/envs/TOF/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 42, in default_collate
out = batch[0].new(storage)
RuntimeError: Attempted to set the storage of a tensor on device "cuda:1" to a storage on different device "cuda:0". This is no longer allowed; the devices must match.
I guess the issue comes from the default collate_fn trying to send data to cuda:0 when it is already on cuda:1. How can I stop this from happening? Is there a way I can still use the default collate_fn while running my code properly? |
st179241 | Solved by mrshenli in post #4 |
st179242 | Have you tried setting CUDA_VISIBLE_DEVICES env var before launching the process? It would be more clear if you share some minimum code snippet |
st179243 | As you mentioned, you can specify a custom collate_fn. Have you tried doing so? Can you provide a minimal code snippet that we could experiment to reproduce? |
st179244 | I didn’t realize I could do this, setting CUDA_VISIBLE_DEVICES to a single gpu. Thank you very much for your help!! |
st179245 | Hi, I would like to pretrain BERT by using DDP.
I saved the pretraining dataset (350GB of a large corpus) as a torch.tensor.
When I run the command below, the dataset is loaded into memory 8 times.
python -m torch.distributed.launch --nproc_per_node=8 train.py
How can I prevent it?
Thanks. |
st179246 | Solved by ptrblck in post #2 |
st179247 | Did you store the complete dataset in a single tensor?
If so, I think you might need to load it once and store smaller chunks of the data (and load only certain chunks in each process) or load the data lazily from the beginning. |
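A minimal sketch of that chunking approach (the file names and the number of shards are illustrative; each DDP rank then loads only its own shard):
import torch

# one-time preprocessing: split the single big tensor into per-rank shards
full = torch.load('pretrain_data.pt')            # hypothetical path to the big tensor
for rank, shard in enumerate(full.chunk(8, dim=0)):
    torch.save(shard.clone(), f'pretrain_data_rank{rank}.pt')

# inside train.py, each process then loads only its own file, e.g.
# shard = torch.load(f'pretrain_data_rank{args.local_rank}.pt')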
st179248 | Yes, I did
As you said, I stored smaller chunks of the data.
Thanks for your reply. |
st179249 | I thought it is expected to have dedicated data loader in each process? So that 8 processes will have 8 dataloaders and 8 DDP instances?
cc @vincentqb please correct me if I am wrong. |
st179250 | Right, depending on the details in the code is organized, I would expect 8 processes/gpus getting different chunk of data as you said. |
st179251 | Is there a way to define a remote GPU device inside our local code?
For example:
local_cpu = torch.device('cpu')
remote_device = ... (?)
model = Model().to(remote_device)
...
inputs = inputs.to(remote_device)
outputs = model(inputs)
outputs = outputs.to(local_cpu) |
st179252 | Hey @xerxex, there is a torch.distributed.rpc package for this purpose. Please refer to the following docs:
API doc: https://pytorch.org/docs/master/rpc.html 123
Tutorial: https://pytorch.org/tutorials/intermediate/rpc_tutorial.html 143
For now, we do not yet support creating remote device like torch.device('worker1/cuda0'), but this is on our roadmap and we plan to implement this as a sugar layer on top of RPC. Applications should be able to do the same thing using our raw RPC API. |
st179253 | Hi @mrshenli, thanks for the links. Unfortunately, I could not run the minimal example from the documentation. can you please correct me to run this?
Let’s say I can not proceed without the return value so I need rpc_sync.
I created two python scripts. The first one is:
# On worker 0:
import torch
import torch.distributed.rpc as rpc
rpc.init_rpc("worker0", rank=0, world_size=2)
ret = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 3))
rpc.shutdown()
and the second one is:
# On worker 1:
import torch.distributed.rpc as rpc
rpc.init_rpc("worker1", rank=1, world_size=2)
rpc.shutdown()
Then, when I execute the first script, I run into this error:
File "process_0.py", line 4, in <module>
rpc.init_rpc("worker0", rank=0, world_size=2)
File "/data/anaconda3/lib/python3.7/site-packages/torch/distributed/rpc/__init__.py", line 60, in init_rpc
init_method, rank=rank, world_size=world_size
File "/data/anaconda3/lib/python3.7/site-packages/torch/distributed/rendezvous.py", line 48, in rendezvous
raise RuntimeError("`url` must be a string. {}: {}".format(type(url), url))
RuntimeError: `url` must be a string. <class 'NoneType'>: None |
st179254 | Have you set the master address and port for Gloo ProcessGroup? Sth like:
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500' |
st179255 | mrshenli:
os.environ[‘MASTER_ADDR’] = ‘localhost’ os.environ[‘MASTER_PORT’] = ‘29500’
@mrshenli, I added as you said, but still the same error.
The error comming from this line:
rpc.init_rpc("worker0", rank=0, world_size=2) |
st179256 | Hey @xerxex, which version of PyTorch are you using?
File "/data/anaconda3/lib/python3.7/site-packages/torch/distributed/rpc/__init__.py", line 60, in init_rpc
init_method, rank=rank, world_size=world_size
Given the line above, it does not seem to be v1.4 or the current master branch.
RPC is only available since v1.4. |
st179257 | I see, that’s the commit prior to the official v1.4.0 release, and that why the code from the error message looks different from v1.4.0.
In the version you are using (Nov 22nd, 2019), the init_rpc API takes an init_method arg, which you need to set. It is the same init_method as how you would call init_process_group.
It will be easier if you switch to official v1.4 or the current master. |
st179258 | Honestly, we wouldn’t recommend using versions prior to v1.4.0, the API and behavior of RPC package are only officially announced as experimental in v1.4.0. So, even if you can get around init_rpc using your current PyTorch version by setting init_method, you might run into other issues later. |
st179259 | I tried to reproduce the results by using the code provided in the tutorial on Single Machine Model Parallel Best Practices 6, however the results were a bit different.
The model with pipelining is expected to perform better than the other two cases, but it doesn’t. What can be the possible reasons for this? |
st179260 | Hey @aniruddhadave, a lot of configurations could affect the performance, e.g., split size, hardware type, GPU interconnection bandwidth, model complexity, etc. And it does takes effort to get the best performance. One place to start with could be drawing the split size curve using your environment. I mean this figure blow. How does it look on your side? |
st179261 | @mrshenli
I also observed a similar result.
This is the thread I created in the discussion.
Pytorch Model Parallel Best Practices: Pipeline Stats distributed
I am trying to replicate the model parallel best practices tutorial.
Model Parallel Pytorch Docs
I use Tesla K80 GPUs for running the example. I didn’t plot graphs but I have the following stats.
Single Node Time: 2.1659805027768018
Model Parallel Time: 2.23040875303559
Pipeline 20 Mean: 3.496733816713095
I don’t get the best results at this split size and it could be okay, depending on the hardware, software issues this can be possible. So I went for testing what is going on. Then I ran…
I also benchmarked this using multiple configurations.
I am not sure about the concurrent run of the code. So I changed it as follows and got a sort of fine result.
class PipelineParallelResNet50(ModelParallelResNet50):
    def __init__(self, split_size=20, *args, **kwargs):
        super(PipelineParallelResNet50, self).__init__(*args, **kwargs)
        self.split_size = split_size

    def taskA(self, s_prev, ret):
        s_prev = self.seq2(s_prev)
        ret.append(self.fc(s_prev.view(s_prev.size(0), -1)))

    def taskB(self, s_next):
        s_prev = self.seq1(s_next).to('cuda:1')
        return s_prev

    def forward(self, x):
        splits = iter(x.split(self.split_size, dim=0))
        s_next = next(splits)
        s_prev = self.seq1(s_next).to('cuda:1')
        ret = []
        for s_next in splits:
            # A. s_prev runs on cuda:1
            # self.taskA(s_prev=s_prev, ret=ret)
            with concurrent.futures.ThreadPoolExecutor() as executor:
                futureA = executor.submit(self.taskA, s_prev, ret)
                futureA.result()
            # B. s_next runs on cuda:0, which can run concurrently with A
            with concurrent.futures.ThreadPoolExecutor() as executor:
                futureB = executor.submit(self.taskB, s_next)
                s_prev = futureB.result()
        s_prev = self.seq2(s_prev)
        ret.append(self.fc(s_prev.view(s_prev.size(0), -1)))
        return torch.cat(ret) |
st179262 | Hey @Vibhatha_Abeykoon
Based on the numbers you posted in that thread, looks like you get the best performance with split_size=60, which gives you around 1.8s execution time, and it is a little faster than the 2.2 single node time?
BTW are you using the same model as the tutorial? |
st179263 | Hey, Thank you for the response. The split size was indeed the reason for the difference inperformance. On changing the split size I was able to reduce the time for pipelined model. |
st179264 | @mrshenli
I am exactly using the same code. I couldn’t replicate similar results.
Correct me if I have misunderstood the concept of running two micro-batches concurrently,
for s_next in splits:
    # A. s_prev runs on cuda:1
    s_prev = self.seq2(s_prev)
    ret.append(self.fc(s_prev.view(s_prev.size(0), -1)))
    # B. s_next runs on cuda:0, which can run concurrently with A
    s_prev = self.seq1(s_next).to('cuda:1')
Here, parts A and B as shown in the comments must run concurrently to get the pipeline performance. Am I correct or wrong?
If so, since PyTorch eagerly executes the layers, isn't it non-asynchronous? Am I following this right?
If both these clauses are true, aren't threads required?
My reasoning comes from the usage of the for loop: within the loop, concurrent execution could happen if we use threads. Am I following this wrong? |
st179265 | Vibhatha_Abeykoon:
Here part A, and part B as shown in the comments must run concurrently to get the pipeline performance. Am I correct/wrong?
No. Because CUDA operations run asynchronously from CPU’s point of view, unless you explicitly call synchronize() on CPU. And they are inserted into the same CUDA stream on each device in this example, which will guarantee ops on the same device run in order, but ops on different device can run in parallel.
Vibhatha_Abeykoon:
If so, as Pytorch eagerly executes the layers, it is not asynchronous? Am I following this right?
Yes, it launches the CUDA kernel right away, but the CUDA kernel execution can be asynchronous. And CUDA will actually queue the kernels in the stream and coordinate the execution. So, "launch right away" does not mean it will wait for the kernel to finish, nor does it mean the kernel will start on the GPU immediately.
Vibhatha_Abeykoon:
If both these clauses are true, having threads is required? Isn’t it?
No, with multiple CUDA device + async CUDA kernel behavior, you could still get parallel execution without threads. And actually, even with thread, all Python code on CPU will still run in sequentially due to GIL. |
st179266 | @mrshenli
Yes, there is a part that GIL avoids. I was trying to use multiprocessing (torch version), it gave some memory issues. I understand your point.
I was trying to run this on K80 GPUs and I didn't get the graph shown in the tutorial. Could old hardware be an issue for not getting the expected graph?
Could you share the run command? I assumed it is just python script.py.
Do I have to use any specific flags, or are there CUDA env variables that need to be set?
I am just curious why the performance is not there as expected in the tutorial. |
st179267 | Could you share the script run command, I assumed it is just python scripy.py
do I have to use any specific flags or CUDA env variables that need to be set.
I was actually using the exactly same script as shown in that tutorial. It is a .rst instead of a notebook because I don’t know what the hardware spec of our tutorial servers will be. I would like to avoid tuning parameters every time the underlying server hardware changes. I probably should have highlighted in the very beginning saying that the result will be different in different envs, and each env would require efforts to explore configuration space to get the best perf.
I did that a while back, and my server env should be either this:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.69 Driver Version: 396.69 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M40 On | 00000000:0B:00.0 Off | 0 |
| 0% 27C P8 18W / 250W | 0MiB / 11448MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla M40 On | 00000000:0D:00.0 Off | 0 |
| 0% 25C P8 18W / 250W | 0MiB / 11448MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
GPU0 GPU1 CPU Affinity
GPU0 X PIX 0-11,24-35
GPU1 PIX X 0-11,24-35
or this:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.116.00 Driver Version: 418.116.00 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro GP100 On | 00000000:81:00.0 Off | 0 |
| 26% 32C P0 29W / 235W | 1MiB / 16278MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Quadro GP100 On | 00000000:82:00.0 Off | 0 |
| 26% 33C P0 30W / 235W | 1MiB / 16278MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
GPU0 GPU1 CPU Affinity
GPU0 X NV4 12-23
GPU1 NV4 X 12-23 |