id | text |
---|---|
st180000 | Hi,
I have access to 4 GPUs and would like to use 2 of them for my training task with DataParallel. From what I understand, instructing torch which GPUs to use is done by:
task = nn.DataParallel(model, device_ids=[1, 2]) to use GPUs 1 and 2.
However, this yields the following assertion error: raise AssertionError("Invalid device id").
From looking at previous answers (apologies, I'm not sure how to link them; this is my first post), this error indicates that I do not have GPUs 1 or 2 available to me. However, this does not appear to be the case.
For instance, extracting the device properties in the python interpreter (same environment) works without issue:
[Screenshot: device properties for the GPUs printed in the Python interpreter]
Is there something I'm missing with regards to assigning the model to specific device ids? Or recommendations on other methods through which I can try to accomplish this task?
Thank you very much for your time! |
st180001 | Are you able to create tensors on each device?
Could you run this code and check the device ids?
x = torch.randn(10)
for id in range(4):
    y = x.to(id)
    print(y.device)
Also, note that the device ids start with 0, in case you would like to use the “first two” GPUs. |
st180002 | Hello!!
Is it possible to run a PyTorch distributed NN using GPUs with CUDA capability < 2.0?
If it is possible, how?
Thanks |
st180003 | The default of init_method is init_method='env://' (using the nccl backend). If I want to run another code, what kind of URL can I use? Thanks. |
st180004 | What do you mean with “run another code”? Do you want to use another distributed backend or different initialization method? |
st180005 | For example: (ignore the export NGPUS)
python -m torch.distributed.launch --nproc_per_node=$NGPUS run1.py
python -m torch.distributed.launch --nproc_per_node=$NGPUS run2.py
After trying, just using a form like "env://tmp" works. Thank you. |
st180006 | I see. The env:// initialization method pulls all information it needs from the environment, so it will be isolated to a single run. If you use the file:// initialization method, and any of the processes crashes, it may leave a stale file that prevents you from running something else until you delete it. |
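For reference, the file:// variant looks roughly like the sketch below (the path is only an example and must live on a filesystem every process can reach; rank and world_size are assumed to be supplied per process):
import torch.distributed as dist

# every process of the same job must point at the same file;
# delete it manually if a crashed run leaves it behind
dist.init_process_group(backend='gloo',
                        init_method='file:///tmp/my_job_sharedfile',
                        rank=rank,
                        world_size=world_size)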
st180007 | My institution uses one server to store many different datasets and another one to conduct computation. How should we load the data from the data server so that loading the data does not become the bottleneck? Does DataLoader account for this scenario? |
st180008 | Welcome!
Check out the documentation for the torch.utils.data.DataLoader 1. There is a keyword argument for the number of worker processes to spawn to load and preprocess data. You can use this to try and fix any data loading bottlenecks. |
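For example (the dataset, batch size and worker count below are placeholders, not recommendations):
from torch.utils.data import DataLoader

loader = DataLoader(dataset,          # your Dataset that fetches from the remote data server
                    batch_size=64,
                    shuffle=True,
                    num_workers=8,    # worker processes load and preprocess batches in parallel
                    pin_memory=True)  # speeds up the later copy to the GPU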
st180009 | I used the command watch -n0.1 nvidia-smi to check the behavior of the GPUs and
I found that GPU utilization drops to 0% at the beginning of each epoch for a short period. I am just wondering if it is common for training to hang for a second at the beginning of each epoch? Maybe the reason is that the dataloader has to re-prepare the data at the beginning of each epoch? |
st180010 | That’s likely, yes. You can add some timing code in the body of your trainer to confirm this. For example, printing some output as soon as you get the first batch of input data can be used to prove/disprove the data loader being the cause of the slow start. |
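A rough timing sketch of that idea (loader and the training step are placeholders):
import time

end = time.time()
for i, (inputs, targets) in enumerate(loader):
    data_time = time.time() - end   # time spent waiting for the DataLoader
    # ... forward / backward / optimizer step ...
    end = time.time()
    if i % 10 == 0:
        print('batch {}: waited {:.3f}s for data'.format(i, data_time))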
st180011 | Thanks Pieter. I believe there is some slowdown at the beginning. I am just wondering what causes such a delay. |
st180012 | Hi, I just switched from TF and I'm loving PyTorch. I would like to use parallel GPU computation for basic operations like matmul and torch.randn (I'm doing evolution strategies). Is there any way to implement this in PyTorch? I've only seen examples that involve using the nn.DataParallel wrapper on models. |
st180013 | Could you explain your use case a bit more?
If you have separate computations, you could use each GPU to perform a single op:
res0 = torch.matmul(x.to('cuda:0'), y.to('cuda:0'))
res1 = torch.matmul(x.to('cuda:1'), y.to('cuda:1'))
Another approach would be to use nn.DataParallel in case you would like to send data chunks to each GPU and perform the same operations. |
st180014 | Most of my operations are sequential; what I would like to do is split up my arrays along the population dimension across multiple GPUs. Can I use nn.DataParallel without using a Model class? |
st180015 | Pataki_Marton:
Can I use nn.DataParallel without using a Model class?
Yes, you can. See Uneven GPU utilization during training backpropagation for an example of wrapping the loss function with DataParallel |
st180016 | I'm using evolution methods, I don't have backpropagation and my loss function is not differentiable. I want to parallelize sequential basic operations like torch.matmul. |
st180017 | Can you elaborate, please?
I linked a post describing how to wrap an arbitrary loss function via DataParallel so that you can compute it by scattering the dataset onto multiple GPUs. Then you responded:
I'm using evolution methods, I don't have backpropagation and my loss function is not differentiable. I want to parallelize sequential basic operations like torch.matmul.
and I mentioned that in that case, if you don’t want to use backpropagation, you don’t need to call .backward() on your loss function. In other words, I also wasn’t sure how your answer was related to wrapping a loss function in DataParallel. I mean, you can use it whether or not you want to do backpropagation, because DataParallel is not tied to backpropagation as far as I know. |
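To make that concrete, here is a minimal sketch (the module name, shapes and fitness formula are made up for illustration) of wrapping a non-differentiable fitness computation in nn.DataParallel so that the population dimension is scattered across GPUs:
import torch
import torch.nn as nn

class Fitness(nn.Module):
    def __init__(self, weights):
        super().__init__()
        # buffers are replicated to every GPU instead of being scattered
        self.register_buffer('weights', weights)

    def forward(self, population):
        # population: (pop_size, dim); DataParallel splits it along dim 0
        return torch.matmul(population, self.weights).pow(2).mean(dim=1)

fitness = nn.DataParallel(Fitness(torch.randn(128, 10))).cuda()
scores = fitness(torch.randn(1024, 128).cuda())  # each GPU scores a slice of the population
No .backward() is involved anywhere, which is fine: DataParallel only handles the scatter/replicate/gather in the forward pass.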
st180018 | So the minimal example to produce the error:
import torch
import torch.nn as nn

device = torch.device('cuda')
lstm = nn.DataParallel(nn.LSTM(1, 5, batch_first=False), dim=1).to(device)
batch_size = 30
max_length = 20
lengths = torch.tensor([max_length] * batch_size, device=device)
inputs = torch.zeros(max_length, batch_size, 1, device=device)
inputs_pack = torch.nn.utils.rnn.pack_padded_sequence(inputs, lengths, batch_first=False)
outputs, hidden = lstm(inputs_pack)
which ends up with an exception:
Dimension out of range (expected to be in range of [-1, 0], but got 1) |
st180019 | Solved by shaform in post #3 |
st180020 | Hi, the trace is quite long.
But now I kind of know what the problem is. Basically, the packed sequence could not be parallelized because it could not be divided along a batch dimension.
The solution is to put LSTM inside another module and pack the sequence inside the forward function of that module.
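A minimal sketch of that idea (the wrapper below is illustrative, not the exact code from this thread):
import torch
import torch.nn as nn

class PackedLSTM(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

    def forward(self, inputs, lengths):
        # inputs: (batch, max_len, input_size); lengths: (batch,)
        # both are scattered along dim 0, so packing happens per GPU
        packed = nn.utils.rnn.pack_padded_sequence(inputs, lengths.cpu(), batch_first=True)
        outputs, _ = self.lstm(packed)
        outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs, batch_first=True)
        return outputs  # gathered back along the batch dimension

lstm = nn.DataParallel(PackedLSTM(1, 5)).cuda()
inputs = torch.zeros(30, 20, 1, device='cuda')
lengths = torch.full((30,), 20, dtype=torch.long, device='cuda')
outputs = lstm(inputs, lengths)
For reference, this was the original traceback: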
RuntimeError Traceback (most recent call last)
<ipython-input-5-4bf0d856e1ee> in <module>
6 inputs = torch.zeros(max_length, batch_size, 1, device=device)
7 inputs_pack = torch.nn.utils.rnn.pack_padded_sequence(inputs, lengths, batch_first=False)
----> 8 outputs, hidden = lstm(inputs_pack)
/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
137 if not self.device_ids:
138 return self.module(*inputs, **kwargs)
--> 139 inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
140 if len(self.device_ids) == 1:
141 return self.module(*inputs[0], **kwargs[0])
/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in scatter(self, inputs, kwargs, device_ids)
148
149 def scatter(self, inputs, kwargs, device_ids):
--> 150 return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
151
152 def parallel_apply(self, replicas, inputs, kwargs):
/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in scatter_kwargs(inputs, kwargs, target_gpus, dim)
33 def scatter_kwargs(inputs, kwargs, target_gpus, dim=0):
34 r"""Scatter with support for kwargs dictionary"""
---> 35 inputs = scatter(inputs, target_gpus, dim) if inputs else []
36 kwargs = scatter(kwargs, target_gpus, dim) if kwargs else []
37 if len(inputs) < len(kwargs):
/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in scatter(inputs, target_gpus, dim)
26 # None, clearing the cell
27 try:
---> 28 return scatter_map(inputs)
29 finally:
30 scatter_map = None
/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in scatter_map(obj)
13 return Scatter.apply(target_gpus, None, dim, obj)
---> 15 return list(zip(*map(scatter_map, obj)))
16 if isinstance(obj, list) and len(obj) > 0:
17 return list(map(list, zip(*map(scatter_map, obj))))
/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in scatter_map(obj)
13 return Scatter.apply(target_gpus, None, dim, obj)
14 if isinstance(obj, tuple) and len(obj) > 0:
---> 15 return list(zip(*map(scatter_map, obj)))
16 if isinstance(obj, list) and len(obj) > 0:
17 return list(map(list, zip(*map(scatter_map, obj))))
/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in scatter_map(obj)
11 def scatter_map(obj):
12 if isinstance(obj, torch.Tensor):
---> 13 return Scatter.apply(target_gpus, None, dim, obj)
14 if isinstance(obj, tuple) and len(obj) > 0:
15 return list(zip(*map(scatter_map, obj)))
/lib/python3.6/site-packages/torch/nn/parallel/_functions.py in forward(ctx, target_gpus, chunk_sizes, dim, input)
87 # Perform CPU to GPU copies in a background stream
88 streams = [_get_stream(device) for device in target_gpus]
---> 89 outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
90 # Synchronize with the copy stream
91 if streams is not None:
/lib/python3.6/site-packages/torch/cuda/comm.py in scatter(tensor, devices, chunk_sizes, dim, streams)
146 ``devices``.
147 """
--> 148 return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
149
150
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1) |
st180021 | Traceback (most recent call last):
File "./pretraining/run_pretraining.py", line 440, in <module>
main()
File "./pretraining/run_pretraining.py", line 384, in main
loss.backward()
File "/data/anaconda/envs/bzheng_env/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/data/anaconda/envs/bzheng_env/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
File "/data/anaconda/envs/bzheng_env/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/parallel/distributed.py", line 310, in overlapping_backward_epilogue
"This probably indicates some buckets were not allreduced.")
RuntimeError: ('In epilogue, next_bucket (0) != num_buckets (1). ', 'This probably indicates some buckets were not allreduced.')
This error occurred when executing loss.backward() in pytorch with distributed training.
It occurs even using only a single gpu while the program runs normally without distributed training.
Have anyone met the same problem with me?
The command I use to start the program is shown follows:
python -u -m torch.distributed.launch --nproc_per_node=1 ./pretraining/run_pretraining.py ****** |
st180022 | Do you call DistributedDataParallel(model) in your program?
You need to change some code in your program when using torch.distributed.launch |
st180023 | @11116 I see you’re using Apex. The error message in the epilogue means that not all learnable parameters in your model had their gradients computed (i.e. they didn’t participate in the forward pass). This is possible if you use any type of control flow in your forward pass that excludes use of certain parameters. You can fix this in Apex by delaying all reduction until the very end of the backwards pass by using the delay_allreduce option. See https://nvidia.github.io/apex/parallel.html 16 for more details. |
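Per the linked Apex docs, that looks roughly like the following (assuming the usual Apex import path):
from apex.parallel import DistributedDataParallel as DDP

# delay the all-reduce until the whole backward pass has finished, so parameters
# skipped by control flow in forward do not trigger the bucket assertion
model = DDP(model, delay_allreduce=True)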
st180024 | When I wrap my model in nn.DataParallel, it requires me to move the model to the GPU (via .cuda()). However, it also requires moving the inputs to the forward pass to CUDA (e.g., via to(torch.device('cuda'))). I saw a post from 2017 mentioning DataParallel allows CPU inputs, but I'm running into issues when passing CPU tensors to the parallelized model (specifically, it says it expected cuda tensors but didn't get cuda tensors). Maybe things have changed since then.
I want to avoid a case where I put my input on one GPU, then DataParallel has to take it off that GPU and distribute it on the rest, making it really slow.
Are there any optimizations I can do to ensure that I’m not doing any unnecessary transfer between GPU and CPU? And is it correct that I have to pass cuda tensors to a parallelized module?
Thanks! |
st180025 | Can you share an example snippet that shows this problem? Looking at the code that gets called, there is an explicit mention of copying one big input tensor to all GPU devices you want to use. |
st180026 | My network is a seq2seq net with three inputs:
Word embeddings for the input sequence
Word embeddings for the output sequence
Image-like inputs for each token in the output sequence
When I try to keep any of these three on CPU before the forward pass with DataParallel it complains that the tensors aren’t cuda tensors.
E.g., here’s a snippet of my input encoder:
# Tensor containing sequence word type indices
torch_indices: torch.Tensor = torch.zeros((len(examples), max(seq_lens)), dtype=torch.long)
for idx, (sequence, sequence_length) in enumerate(zip(batch_indices, seq_lens)):
    torch_indices[idx, :sequence_length] = torch.tensor(sequence, dtype=torch.long)
# Now embed things
batch_embeddings: torch.Tensor = self._embedder(torch_indices.to(DEVICE))
len(examples) gets me my batch size, max(seq_lens) gets me the maximum sequence length in the batch, and I iterate over indexed sequences (batch_indices) and fill in values of the indices tensor indicating the indices of word types. I then put it on the device (in the case of a single GPU, this will be DEVICE=torch.device('cuda'); in the case of CPU, this will be DEVICE=torch.device('cpu'); and when I have more than one GPU this is by default also DEVICE=torch.device('cpu')). Perhaps I shouldn't use to (and explicitly place on CPU) at all when using DataParallel? |
st180027 | I just tested removing the call to to(DEVICE) in the snippet above, and it still gives an error that it’s expecting a cuda tensor (e.g., in the call to the embedder,
RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.LongTensor for argument #3 'index'
self._embedder is an object of a class which extends nn.Module, and has an attribute of type nn.Embedding, which is on the GPU when I make this call to it. |
st180028 | I may be running into these problems given how I set my code up. I wanted the code to be adaptable for zero, one, or multiple GPUs, so I have a ModelWrapper class which keeps track of whether it’s being parallelized or not.
Internal to that I have a member model which is the actual nn.Module being parallelized. It extends both nn.Module and an abstract model class (I use an abstract model class so that I can have multiple kinds of model architectures, but the assumptions are that all models in my project have both an encoder and a decoder, and also implement forward as they are modules).
When initializing the ModelWrapper, I first create the model module. This object has attributes for the encoder and decoder (which are objects also extending nn.Module), and these attributes have attributes which are also modules, e.g., an embedding module, and so on. Once I create the model, if I have more than one GPU, I first wrap it in nn.DataParallel, and then put it on the GPU by calling model.cuda().
When I want to use the model during training, e.g., to compute the loss, I just call model(inputs) (do a forward pass), which returns a tensor.
Perhaps the call to nn.DataParallel is not actually distributing the model parameters on the GPU correctly, given how I wrapped everything in classes?
I did verify that all parameters in my model are on the GPU, and during training all three GPUs are being used by the process. |
st180029 | Sounds like this error is expected here.
The input encoder you posted earlier will always run on the CPU, since you don't pass a device kwarg to torch.zeros. I'm assuming you're calling the encoder from within the forward function. If you want inputs to be distributed to all GPUs, you need to call the wrapped module (the resulting model after wrapping it with nn.DataParallel) with the CPU-side inputs, and nn.DataParallel will make sure the inputs are distributed accordingly. If you generate the encoded input from within the forward function, there is no place where nn.DataParallel could hook in and move them around. |
st180030 | I’m assuming you’re calling the encoder from within the forward function.
Yes, this code is all in the forward function for the instruction encoder Module (the Module object is an attribute of another Module which is wrapped in DataParallel, and its forward call is called during the top-level forward call. It is very modular code!). The forward call for this model takes as input a list of string vectors seqs (List[List[str]]), and just before the call I posted, I convert them into lists of ints:
batch_indices: List[List[int]] = [[self._embedder.get_index(tok) for tok in seq] for seq in seqs]
seq_lens: List[int] = [len(instruction) for instruction in instructions]
I think I know what the issue is – does DataParallel require that the input to the forward calls be tensors so it can distribute them? |
st180031 | Another issue with assembling the batch in the forward function is that you end up doing the same work multiple times, depending on the number of GPUs you are using (the forward function is called N times).
alsuhr:
I think I know what the issue is – does DataParallel require that the input to the forward calls be tensors so it can distribute them?
Yes. If you pass the input batch (as a tensor) to nn.DataParallel, it will split along the batch dimension and distribute the smaller batches to participating GPUs. |
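In other words, something along these lines (the shapes and the model name are placeholders; the string-to-index lookup has to happen before the call):
model = torch.nn.DataParallel(Seq2SeqModel()).cuda()   # Seq2SeqModel stands in for your top-level module

# CPU-side batch tensors; DataParallel scatters each one along dim 0
input_indices = torch.zeros(32, 50, dtype=torch.long)
output_indices = torch.zeros(32, 60, dtype=torch.long)
images = torch.zeros(32, 60, 3, 64, 64)

predictions = model(input_indices, output_indices, images)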
st180032 | I have a question regarding the use of collective functions such as all_reduce().
I am working on a layer in which I would like to synchronize a value across processes. I have seen an implementation 2 of synchronized batch norm that essentially does what I am looking for. In those layers, it seems that all_reduce is called from forward, but there is also an autograd function that defines backward behavior as well.
Is that approach necessary for a Module that does not have trainable parameters? Can I just call all_reduce in the forward method of a Module or do I need to define it in an autograd function?
Btw, the layer I’m working on looks like this:
class BatchStandardDeviation(Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        batch_size, _, height, width = x.size()
        out = x - x.mean(dim=0, keepdim=True)  # Shape: B, C, H, W
        out = torch.sqrt(out.pow(2.0).mean(dim=0, keepdim=False) + 1e-8)  # Shape: 1, C, H, W
        out = out.mean().view(1, 1, 1, 1)
        out = out.repeat(batch_size, 1, height, width)  # Shape: B, 1, H, W
        return torch.cat([x, out], dim=1)
It concatenates mini-batch statistics to each feature map. I would like to get those statistics from batches across all processes rather than just the local batch in one process. |
st180033 | You can call allreduce in the forward pass, but beware that if you have multiple of these layers, or other layers that need to call any torch.distributed functions, that the order they are called in needs to be identical across all workers. If you end up with any learnable parameters, consider the concerns I expressed on this PR adding sync batch norm 7. |
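For a layer with no learnable parameters, a minimal sketch (simply averaging the per-process statistic, which may or may not be exactly the statistic you want) could look like this:
import torch
import torch.distributed as dist
from torch.nn import Module

class DistributedBatchStandardDeviation(Module):
    def forward(self, x):
        batch_size, _, height, width = x.size()
        out = x - x.mean(dim=0, keepdim=True)
        out = torch.sqrt(out.pow(2.0).mean(dim=0, keepdim=False) + 1e-8)
        stat = out.mean().view(1, 1, 1, 1)
        # every worker must reach this line the same number of times, in the same order
        dist.all_reduce(stat)                    # sums the statistic across processes
        stat = stat / dist.get_world_size()      # turn the sum into a mean
        stat = stat.repeat(batch_size, 1, height, width)
        return torch.cat([x, stat], dim=1)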
st180034 | Thanks for the reply! After reading over the PR for synced batch norm (really excited to see this functionality being baked into PyTorch btw, I think it’s a must have for proper distributed training), you seem to point out a potential deadlock when everything is running on the same process group. Is this a factor for my layer since no learnable parameters are used? I can always spin up a new process group for this layer to use if so. |
st180035 | The first time my training scripts are run, the dataset is 'compiled' in the Dataset class. For instance, I often work with medical data, and it is possible I add another dataset and want to cut this data into tiles of a specified shape. These files are then cached to the SSD, so the next time the compilation phase is skipped.
However, when I use DistributedDataParallel, all processes will do this. I could check the rank of the process and only allow rank == 0 to execute it, but then the other processes will crash because they will find an empty dataset. Is there a way I can tell the other processes to wait before they start training? |
st180036 | You could do one of two things:
Segment your input dataset into WORLD_SIZE chunks and let every process preprocess its own subset of the dataset (this also gets you parallelization of the preprocessing). You can call torch.distributed.get_rank 2 to get the rank of the current process and use this to index into your input dataset.
Like you say, force rank == 0 to perform all preprocessing, and make all other workers wait for completion. You can do this with a call to torch.distributed.barrier 100. The default timeout for this call is 30 minutes. If preprocessing takes more time, you can tune the timeout through the timeout kwarg to torch.distributed.init_process_group 19.
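A minimal sketch of option 2 (compile_tiles, dataset_dir and TiledDataset are stand-ins for your own preprocessing and Dataset; the timeout value is arbitrary):
import datetime
import torch.distributed as dist

dist.init_process_group(backend='gloo', init_method='env://',
                        timeout=datetime.timedelta(hours=2))

if dist.get_rank() == 0:
    compile_tiles(dataset_dir)       # expensive one-time preprocessing, writes the cache
dist.barrier()                       # all other ranks wait here until rank 0 is done
dataset = TiledDataset(dataset_dir)  # now every process can safely read the cache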
Are you running on a single machine or multiple machines? |
st180037 | Currently I am running on one machine, but ideally the solution would also be useful for both single and multiple machines.
Thank you for the suggestions; both seem reasonable, but it might be unclear beforehand how long the processing will take (it also depends on the network speed and such), so that would require a bit of tweaking of the timeout parameter. I also do not know beforehand precisely how large the dataset will be, as it can happen that new samples have been added, so it would be tricky to write a class which effectively splits the dataset into smaller ones, as that would require me to know how large it is.
So, perhaps, a combination of 1 and 2 would also work: I can make a rough split across WORLD_SIZE and base this on get_rank. It can then happen that some processes finish earlier than the others. If I then call torch.distributed.barrier() at the end of the processing of the dataset, this would have the effect that the preprocessing is split among the processes and all of them wait until each one is done with its part. Do I understand this correctly? |
st180038 | Yes, that’s correct. If the split is a rough split, you’ll still have to synchronize on the actual dataset size once preprocessing has completed. Distributed data parallel expects all workers to be called with an equal number of equally sized batches. |
st180039 | Hi,
I have an 8-GPU machine and successfully used DataParallel to train my network. At the end of the epoch I’m attempting to evaluate the model on a fairly large dev set, and I’d like to parallelize that operation. For some reason only one GPU is utilized during my prediction operations. Does setting model.eval() have some impact on how DataParallel works? |
st180040 | It looks like my issue is that the sample code I'm working with sets a new variable mnetwork = DataParallel(network), and the training code operates on mnetwork while the prediction code operates on network. Verifying the fix now. |
st180041 | Is there a code example of sharing a tensor among multiple processes using a process pool? The doc above does not specify how, other than saying we need to move the tensor to "shared memory" and use a "queue".
Also, I want all the processes to have access at the same time; because it is guaranteed that the processes only read and never write, using semaphores and queues seems unnecessary. Is there a workaround to disable the semaphore and queue when sharing tensors? |
st180042 | I don't know of any example code. But if you use a queue per process in the pool and just send the same shared tensor to every single queue, all processes will have an identical tensor with its storage mapped to the same memory, so they'll all see all writes to that storage. This should also address the second question you posted: if you don't want coordination between processes because the tensor is read-only, you just send it and access it, and don't do anything else (or I'm missing something here, but this is my understanding). |
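A minimal sketch of that queue-per-process pattern (sizes and the worker body are arbitrary):
import torch
import torch.multiprocessing as mp

def worker(q):
    t = q.get()        # a handle to the same shared storage, not a copy
    print(t.sum())     # read-only use needs no extra locking

if __name__ == '__main__':
    tensor = torch.randn(1000)
    tensor.share_memory_()              # move the storage into shared memory
    processes = []
    for _ in range(4):
        q = mp.Queue()
        q.put(tensor)                   # every worker receives the same shared tensor
        p = mp.Process(target=worker, args=(q,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()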
st180043 | I've noticed that, when multiple threads run GPU PyTorch code, the operations happen in serial (i.e. the speed of having 10 threads of the same process each perform a GPU task is the same as having 1 thread perform 10 tasks). However, when I spawn multiple processes, there is a significant and linear speedup for up to 3 processes.
Why do you think this could be? I hypothesize: by default, the same CUcontext is used among multiple threads of the same process, whereas different processes use different CUcontexts.
If my hypothesis in (1) is correct, then how can I manually create a CUcontext for each thread in PyTorch? |
st180044 | Are you using different CUDA streams for every thread?
By default they will all use the same CUDA stream, which will serialize all operations.
See c10::Stream 5 if you’re developing against master, or this one 5 if you’re developing on 1.0.1. |
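In Python, a per-thread stream sketch (whether you actually see overlap depends on the GPU and the kernel sizes) might look like:
import threading
import torch

def work(results, i):
    stream = torch.cuda.Stream()          # a dedicated stream for this thread
    with torch.cuda.stream(stream):
        x = torch.randn(4096, 4096, device='cuda')
        results[i] = torch.matmul(x, x)
    stream.synchronize()

results = [None] * 4
threads = [threading.Thread(target=work, args=(results, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()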
st180045 | In replacing DataParallel with DistributedDataParallel, I noticed that each epoch computes a number of metrics. While using DataParallel this was still fine, as everything was running in the same process.
However, I would like to collect the metrics on the first GPU as well. The metric could either be computed on the GPU or the CPU. As I understand, with DistributedDataParallel in case there are 2 GPUs, 3 processes are started (one to collect things). Then, loss.backward() should work as expected. But how do I do this for my metrics? |
st180046 | Let me answer my own question. I think I got it right now. So I store my metrics in a dictionary {'batch_metric_0': etc}. Initially these were numpy arrays, but I converted the code to torch, I assume if this is not possible, you could otherwise dump to a pickle and use ByteTensor.
Then you can collect them together by iterating over the dictionary like (you can use torch.no_grad()):
for k in sorted(tensors_dict.keys()):
    tensor_names.append(k)
    all_tensors.append(tensors_dict[k])
all_tensors = torch.stack(all_tensors, dim=0)
Then, torch.distributed.reduce(all_tensors, dst=0) collects everything on device 0. You then need to divide by WORLD_SIZE only on device 0 (the other processes are not collected, and we are not interested in those).
Hope this is helpful to someone. |
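Condensed into a helper, the pattern from this post looks roughly like this:
import torch
import torch.distributed as dist

def collect_metrics(tensors_dict):
    names = sorted(tensors_dict.keys())
    stacked = torch.stack([tensors_dict[k] for k in names], dim=0)
    dist.reduce(stacked, dst=0)                    # sum every process's metrics onto rank 0
    if dist.get_rank() == 0:
        stacked = stacked / dist.get_world_size()  # turn the sum into an average
        return dict(zip(names, stacked))
    return None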
st180047 | jteuwen:
As I understand, with DistributedDataParallel in case there are 2 GPUs, 3 processes are started (one to collect things). Then, loss.backward() should work as expected. But how do I do this for my metrics?
There is no separate process to collect things. You are responsible for starting all processes, either through running them yourself from a terminal, through torch.distributed.launch, or with some other runner such as mpirun. The typical mode of execution is to use 1 GPU per process. |
st180048 | Traceback (most recent call last):
File "/snap/pycharm-professional/121/helpers/pydev/pydevd.py", line 1741, in <module>
main()
File "/snap/pycharm-professional/121/helpers/pydev/pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/snap/pycharm-professional/121/helpers/pydev/pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-professional/121/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/wen/PycharmProjects/Attention-Echino/train.py", line 217, in <module>
main()
File "/home/wen/PycharmProjects/Attention-Echino/train.py", line 59, in main
dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=args.world_size)
File "/home/wen/anaconda3/lib/python3.6/site-packages/torch/distributed/__init__.py", line 94, in init_process_group
group_name, rank)
RuntimeError: Permission denied at /pytorch/torch/lib/THD/process_group/General.cpp:17 |
st180049 | Please include more details, such as PyTorch version, environment, what you’re trying to do, and any code you can share to reproduce this issue. |
st180050 | (Apologies if there is an existing similar topic. The search doesn’t seem to be working properly. The topics I could find via searching on Google did not seem to answer my question.)
I am parallelizing across multiple GPUs. I want to parallelize across non-contiguous device IDs (e.g., 0 and 2), as I am running another process on device 1. I can set CUDA_VISIBLE_DEVICES=0,2 and I can wrap my model in nn.DataParallel (torch.cuda.device_count() correctly returns 2). However, the DataParallel constructor by default assigns to list(range(# available devices)) (see https://pytorch.org/docs/stable/_modules/torch/nn/parallel/data_parallel.html), which means that it will try to assign using IDs [0, 1]. This means it will never try to allocate to device 2 (and consequently runs out of memory).
If I pass the DataParallel constructor the list of devices [0, 2], it throws an error as device ID 2 == device_count() (this is in cuda/__init__.py:292). Thus, it won't let me assign to any GPU IDs >= the device count; if I want to use all available GPUs, their IDs must be contiguous.
Is there any way for me to force DataParallel to use device 2?
Thanks! |
st180051 | Hi,
GPU ids always go from 0 to the number of GPUs - 1.
When you do CUDA_VISIBLE_DEVICES=0,2, the ids are remapped to 0 and 1: setting 0 in your program will use physical 0 and setting 1 in your program will use physical 2.
Note that you could do CUDA_VISIBLE_DEVICES=2,0, and then 0 would map to physical 2 and 1 would map to physical 0.
So your code already works and will use the devices specified by the environment variable |
st180052 | Ok, thanks! Then I guess the problem is that it's not mapping to the 2nd GPU at all. When I didn't specify device IDs (so by default, DataParallel uses both 0 and 1, i.e., 0 and 2 globally), it threw an out-of-memory exception as the memory use of GPU 0 reached its maximum, rather than also using GPU 2 (I was monitoring with nvidia-smi).
I will look into this further and see if I can figure out why it’s not mapping to the 2nd GPU. |
st180053 | To update, my model (Module) was wrapped in another class with functions like get_loss which called the forward function. I didn’t realize DataParallel requires outputs of forward to be tensors, so I had to write a higher-level wrapper which can handle use of the model class when it’s being parallelized and when it’s not. So I solved this problem. |
st180054 | I have a question regarding the “preferred” setup for training a more complex model in parallel.
Let’s assume I have a GAN model with an additional encoder and some additional losses (VGG, L1, L2) as shown in the illustration here:
I saw two main patterns on how to use such a setup with torch.nn.DataParallel
Pattern 1:
One has been used in the pix2pixHD implementation from Nvidia 40.
As you can see in models->pix2pixHD_model.py, they wrapped all the networks, losses and even optimizers in a module. To parallelize it across multiple GPUs they then call torch.nn.DataParallel on the whole model.
Pattern 2:
The other option is to call torch.nn.DataParallel on each of the networks individually as you show for example in the DCGAN tutorial. This setup is much more convenient since you don’t have to deal with issues regarding multiple inputs/outputs and separating model from logging using tensorboardX etc.
My question is now. Would you suggest to use one or the other pattern when using let’s say 8 GPUs (V100) on a single instance on AWS/ Google?
I played around with both setups and didn’t see any crucial performance advantage using one or the other. But pattern 1 is really annoying since it’s not how I would typically write nice PyTorch code.
From my understanding wrapping each network itself with torch.nn.DataParallel results in lots of scatter and gather operations since the intermediate results will be collected by the “host GPU”, correct? So the higher the [communication/ computation] ratio is the worse it gets having all those models wrapped in DataParallel individually?
Thanks a lot for your feedback and keep up the good work, PyTorch really rocks! |
st180055 | Just my two cents: since you didn’t see any performance advantage of one approach over the other, you could just stick to the coding style you prefer. Personally, I would prefer the second approach, too. |
st180056 | The forward function of torch.nn.parallel.DataParallel calls its member function “scatter” to replicate the input data into all of the devices:
class DataParallel(Module):
    def forward(self, replicate_model=True, gather_grad=True, *inputs, **kwargs):
        inputs, kwargs = self.scatter(gather_grad, inputs, kwargs, self.device_ids)
        ...
And the scatter function was finally implemented by the module torch.nn.parallel._functions.Scatter like this:
class Scatter(Function):
    @staticmethod
    def forward(ctx, target_gpus, chunk_sizes, dim, input):
        ...

    @staticmethod
    def backward(ctx, *grad_output):
        return None, None, None, Gather.apply(ctx.input_device, ctx.dim, *grad_output)
So I think that during the backward pass of DataParallel, the backward function of the Scatter module should be called, and in this way it could gather all of the gradients distributed to every device.
But when I try to do something in the backward function of Scatter, just like print a line:
class Scatter(Function):
    @staticmethod
    def backward(ctx, *grad_output):
        print("try to print something.")
        return None, None, None, Gather.apply(ctx.input_device, ctx.dim, *grad_output)
I always got nothing printed. It seems that Scatter::backward() function has never been called.
I wonder why the function is not being called and, in that case, how DataParallel gathers the gradients from all of the devices. Is there anything wrong with my test?
Thanks very much! |
st180057 | Solved by wish in post #2 |
st180058 | I found it was my misunderstanding.
It is Broadcast::backward() rather than Scatter::backward() that gathers the grads from all of the devices. |
st180059 | I may need some slight clarification on the way torch.distributed handles creation of the default process group.
When calling _get_default_group(), a torch.distributed.ProcessGroupNCCL object is returned. From looking over the source code for DistributedDataParallel, this is what it calls internally when no process_group argument is supplied. However, when calling torch.distributed.group.WORLD, a different group object is returned. It’s not a ProcessGroupNCCL object either. If I pass this group from group.WORLD into DistributedDataParallel as the process_group, it will hang indefinitely on initialization (Using NCCL backend. I haven’t tested others). If I pass the _get_default_group() group into DistributedDataParallel it works as expected (Because it would call this by default anyway if process_group arg is None).
Can anyone clarify the discrepancy between these two process groups? The docs are light regarding torch.distributed so I’m trying to get a more complete understanding of this. |
st180060 | The difference is in that torch.distributed.group.WORLD is a constant that identifies the global process group. All functions prefixed with an underscore are not part of the public API and should not be used if you expect version-to-version compatibility.
For now you can continue to specify process_group=None to make it pick up the default process group. I created https://github.com/pytorch/pytorch/issues/17305 57 to track fixing the issue you mention. It should be transparent, just like all functions in the torch.distributed module. |
st180061 | I used PyTorch 0.4.1, and when using distributed training I encountered a timeout problem in loss.backward(), which usually takes 1~2 seconds but sometimes 10~20 or even 30+ seconds, causing gloo to time out and thus the training to fail. Does anyone know why? |
st180062 | I ran into the same issue using 0.4.1. What makes me quite confused is that it times out as soon as the second epoch of my model training begins, while the first epoch goes very smoothly. I do not know what the reason is… |
st180063 | A model is usually composed of many tensors of parameters. When the model is being updated, e.g. via optimizers, this process is not likely to be atomic. For such a non-atomic operation with memory shared among many processes, I am normally worried about inconsistency of the model state during the update, i.e. at a given moment a process might see a partially updated model while the rest is from the last iteration.
Does Pytorch have precautions for such a scenario? Or, in fact, the inconsistency of the state is usual but is a non-issue? |
st180064 | Hi,
I'm receiving the following error when I try to train a model on multiple GPUs. Can someone let me know what's wrong?
Thanks.
–
Traceback (most recent call last):
File "train.py", line 247, in <module>
out, output_sizes = model(inputs, input_sizes)
File "/home/edward/anaconda3/envs/deepspeech2_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/edward/anaconda3/envs/deepspeech2_p36/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 360, in forward
self._sync_params()
File "/home/edward/anaconda3/envs/deepspeech2_p36/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 392, in _sync_params
param_data.set_(tensor)
RuntimeError: set_storage is not allowed on Tensor created from .data or .detach() |
st180065 | Azure ML supports Horovod, but I'd like to keep things as simple as possible (but not simpler), so I'm thinking of using DistributedDataParallel instead… has anyone done this successfully? |
st180066 | It has the same purpose and result as Horovod.
The primary difference lies in how you launch a distributed run. With Horovod you go through MPI (and launch with mpirun), whereas with torch.distributed you can launch the processes yourself, independently, and have them find each other through any one of the supported initialization methods (see https://pytorch.org/docs/stable/distributed.html#tcp-initialization 12).
Horovod only works with NCCL2 AFAIK (and therefore CUDA tensors). In torch.distributed we also have a Gloo backend in case you want to run collective operations against CPU tensors. |
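For example, TCP initialization looks roughly like this (address, port and world size are placeholders; every process passes its own rank):
import torch.distributed as dist

dist.init_process_group(backend='nccl',
                        init_method='tcp://10.1.1.20:23456',
                        rank=rank,
                        world_size=4)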
st180067 | Hello,
I am having the problem that as I scale to multiple GPUs my training is bottlenecked by the memory capacity of a single GPU. For example, I am using 8 GPUs with 16 GB of GPU memory each. However, GPU 0 uses approximately its full capacity, while the other GPUs only use 11 GB. I know that one GPU is designated as the main GPU and requires memory to coordinate, but is there any way to make this more efficient? A printout of my nvidia-smi usage can be found below. Thank you.
[Screenshot: nvidia-smi output showing uneven memory usage across the 8 GPUs] |
st180068 | You could follow @Thomas_Wolf’s blog post 62 how to use the memory more efficiently. |
st180069 | Awesome, thank you. Are there any plans to add this functionality to PyTorch itself? I am curious whether I should continue balancing loads like this in the future or whether it will become unnecessary. Thanks again |
st180070 | Although memory-wise it may seem like a bottleneck, I don't think distributing the load further across the GPUs will benefit computational performance regarding speed; it would help avoid memory bottlenecks though. If memory is a concern, instead of distributing it evenly, it would probably be even better to use a separate GPU for loss computation and gradient accumulation (or even do that step on the CPU, because copying data across GPUs is expensive). We actually had a discussion about that recently here: Uneven GPU utilization during training backpropagation |
st180071 | I am using DistributedDataParallel on a single machine with multiple GPUs, and I’m having trouble collecting loss and accuracy between GPUs.
I have each process (GPU) printing the loss and accuracy of its training, but I want to track the overall loss and accuracy. I have tried using multiprocessing.RLock as an argument to torch.multiprocessing.spawn, but this fails.
What is the best way to collect the results when training with DistributedDataParallel? |
st180072 | For anyone wondering, I solved this by using torch.multiprocessing.Manager
In main function
from torch.multiprocessing import Manager
from torch import multiprocessing as mp
from copy import deepcopy
with Manager() as manager:
    train_results = manager.list()
    # spawn
    mp.spawn(train_worker, nprocs=ngpus, args=(train_results,))
    # copy out
    results = deepcopy(train_results)
    # postprocess results to collect data
In train_worker:
def train_worker(tid, train_data):
    ...
    train_data.append((tid, epoch_num, loss, acc, time, num_correct))
    ...
Then after training you can use pandas to do some stats and collection of data. |
st180073 | I am new to PyTorch and distributed learning in general, and I'm trying to go through this tutorial: https://pytorch.org/tutorials/beginner/aws_distributed_training_tutorial.html. After setting everything up, when I run the 4 different Python processes (2 on each machine) I always get the following error:
File “/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/distributed/rendezvous.py”, line 95, in _tcp_rendezvous_handler
store = TCPStore(result.hostname, result.port, start_daemon)
RuntimeError: Address already in use
I feel like this is somehow related to the init_method being specified. I’m using the rank 0 machine ip and port for that value as specified in the tutorial. Nothing else is running on that port. Am I missing something about how to configure this properly? |
st180074 | Hello, I conducted some experiments to understand the different components of batch processing time and how they can be lowered by increasing parallelism. Two types of parallelism can be exploited – data can be loaded in parallel using multiple processes and data can be processed in parallel on multiple GPUs using data parallelism. I didn’t consider distributed data parallelism in these experiments (yet)
Dataset used is imagenet-200 consisting of 500 images of 200 classes. Thus, there are 100,000 total images to be processed in an epoch of training. For a batch size of 256, this results in roughly 390 batches to be processed in an epoch.
The time to process a batch can be split into data loading time, transfer time and processing time, as defined below.
• Data loading time: this is the time taken by the dataloader to load data from the disk into memory
• Transfer time: this is the time to transfer data from CPU RAM to GPU global memory (tensor.cuda() )
• Processing time: measures the time taken to run the forward pass, backward pass, loss calculation and parameter updates. Doesn’t include the time taken to transfer data from CPU RAM to the GPU global memory
I considered two networks – Resnet 18 and Resnet 50. I analyzed three cases:
1- base case with batch size of 64, num_workers = 4, num_gpus = 1
2- Data parallel with a batch size of 256, num_workers = 4, num_gpus = 4
3- Data parallel without pin_memory = true in the dataloader and non_blocking = true in the .cuda calls. Thus, loaded data is not copied to pinned memory by the dataloader. This allows for analyzing the importance of asynchronous data transfers.
Report describing the experiments is here:
https://drive.google.com/open?id=1hy39b5PwimJfT3fTKhgTa7FwGhXDQE_7 20
Most of the results were as expected, but there were a few surprises. I’d really appreciate it if someone from the Pytorch team could shed some light:
As shown in Table 5, the processing time per GPU for 4 GPUs with a batch size of 1024 turns out to be lower than the processing time for 1 GPU with a batch size of 256. This doesn't make sense: the processing time in data parallel mode should be strictly higher due to the data parallel overhead (transferring the gradients from the slave GPUs to the parameter server, calculating parameter updates and then updating the model on each slave GPU).
Pytorch documentation say: once you pin a tensor or storage, you can use asynchronous GPU copies. Just pass an additional non_blocking=True argument to a cuda() call. This can be used to overlap data transfers with computation.
Question: Is the converse true – i.e., when we set pin_memory = False, we need to set non_blocking = False in .cuda() calls?
With pin_memory = False and non_blocking = False, I expected the data loading time to be lower, as the data loader now only needs to copy data to local memory, not pinned memory. However, this time is now higher. Furthermore, I expected the processing time on the GPU to not be affected; however, this processing time is now lower. The transfer time is now non-negligible, which is as expected. Can anyone shed light on this? I think what may be going on is that the transfer to pinned memory is itself asynchronous, so with pin_memory = False the data is actually being written to local memory, adding to the loading time, whereas before the latency of the asynchronous write to pinned memory was hidden.
Appreciate any thoughts/feedback!
-Ankur |
st180075 | Upon reading the docs, the answer to the second question is clear: if pin_memory is set to false, then setting non_blocking = true has no effect.
Looking at the code for the data loader, setting pin_memory = true sets up an additional thread which reads from the output queue of the worker threads and puts the batch into pinned memory and the pinned memory pointer into another queue, which is then queried during the next call.
Also, I believe with asynchronous data transfer, the processing time includes the data transfer time as well. When I put a cuda.synchronize() after the .cuda() calls, now the transfer time is non-negligible and the processing time is lower by the same amount. The sum adds up to the processing time with synchronous data transfer. |
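A timing sketch that makes this explicit (model, criterion and loader are placeholders; the loader is assumed to be built with pin_memory=True):
import time
import torch

for inputs, targets in loader:
    t0 = time.time()
    inputs = inputs.cuda(non_blocking=True)
    targets = targets.cuda(non_blocking=True)
    torch.cuda.synchronize()               # force the async host-to-device copies to finish
    transfer_time = time.time() - t0

    t0 = time.time()
    loss = criterion(model(inputs), targets)
    loss.backward()
    torch.cuda.synchronize()               # kernel launches are asynchronous too
    processing_time = time.time() - t0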
st180076 | Hey everyone,
I have a model spread across a couple of GPUs:
class MicroUNet3D(nn.Module):
    def __init__(self, n_channels, n_classes):
        super(MicroUNet3D, self).__init__()
        self.inconv = InConv(n_channels, 2).to('cuda:0')
        self.down1 = Down(2, 4).to('cuda:0')
        self.down2 = Down(4, 8).to('cuda:0')
        self.up1 = Up(8, 4).to('cuda:1')
        self.up2 = Up(4, 2).to('cuda:1')
        self.outconv = OutConv(2, n_classes).to('cuda:1')

    def forward(self, x):
        x1 = self.inconv(x)
        x2, indices1 = self.down1(x1)
        x3, indices2 = self.down2(x2)
        # Transfer to next GPU.
        x2, indices1 = x2.to('cuda:1'), indices1.to('cuda:1')
        x3, indices2 = x3.to('cuda:1'), indices2.to('cuda:1')
        x4 = self.up1(x3, indices2, x2.shape)
        x5 = self.up2(x4, indices1, x1.shape)
        x6 = self.outconv(x5)
        return x6
Is there a way to determine how the communication is being handled with the to() method? I am hoping that Pytorch will use NCCL here, and I would like to make sure. |
st180077 | No, PyTorch does not use NCCL for to() (copying from one GPU to another). That’s not one of the operations provided by NCCL (https://github.com/NVIDIA/nccl#whats-inside 14)
PyTorch does use NCCL as a distributed backend and for DataParallel broadcast and reduction. |
st180078 | Ah yeah, for some reason I was thinking that NCCL could do send/recv. Does that mean that to() is coming down to main memory to then transfer over to the next GPU? Could it use NVLink? |
st180079 | Yes, it will use NVLink if available. The choice of how to communicate is made by the CUDA driver. PyTorch just calls cudaMemcpy (or launches a P2P kernel in some cases). |
st180080 | Hi ,
I am using MPI for distributed machine learning. I would like to profile the communication model. Is there a way/tool that I can use for this performance analysis?
Thanks, |
st180081 | The original neural_style code is here:
https://github.com/jcjohnson/neural-style/blob/master/neural_style.lua
require 'torch'
require 'nn'
require 'image'
require 'optim'
require 'loadcaffe'
local cmd = torch.CmdLine()
-- Basic options
cmd:option('-style_image', 'examples/inputs/seated-nude.jpg',
'Style target image')
cmd:option('-style_blend_weights', 'nil')
cmd:option('-content_image', 'examples/inputs/tubingen.jpg',
'Content target image')
cmd:option('-image_size', 512, 'Maximum height / width of generated image')
cmd:option('-gpu', '0', 'Zero-indexed ID of the GPU to use; for CPU mode set -gpu = -1')
cmd:option('-multigpu_strategy', '', 'Index of layers to split the network across GPUs')
The strategy for distributing computation across multiple GPUs is shown here:
-- neural_style.lua
local DEFAULT_STRATEGIES = {
  [2] = {3},
}
local gpu_splits = nil
if params.multigpu_strategy == '' then
  -- Use a default strategy
  gpu_splits = DEFAULT_STRATEGIES[#params.gpu]
  -- Offset the default strategy by one if we are using TV
  if params.tv_weight > 0 then
    for i = 1, #gpu_splits do gpu_splits[i] = gpu_splits[i] + 1 end
  end
else
  -- Use the user-specified multigpu strategy
  gpu_splits = params.multigpu_strategy:split(',')
  for i = 1, #gpu_splits do
    gpu_splits[i] = tonumber(gpu_splits[i])
  end
end
assert(gpu_splits ~= nil, 'Must specify -multigpu_strategy')
local gpus = params.gpu
local cur_chunk = nn.Sequential()
local chunks = {}
for i = 1, #net do
  cur_chunk:add(net:get(i))
  if i == gpu_splits[1] then
    table.remove(gpu_splits, 1)
    table.insert(chunks, cur_chunk)
    cur_chunk = nn.Sequential()
  end
end
table.insert(chunks, cur_chunk)
assert(#chunks == #gpus)
local new_net = nn.Sequential()
for i = 1, #chunks do
  local out_device = nil
  if i == #chunks then
    out_device = gpus[1]
  end
  new_net:add(nn.GPU(chunks[i], gpus[i], out_device))
end
return new_net
I don’t believe that pytorch has an nn.GPU analog. Should I be looking at torch.nn.DataParallel to achieve this same result? Is cudnn a supported backend for torch.nn.DataParallel? Has anybody seen multi-gpu neural style implemented in pytorch? Any tips would be greatly appreciated, I’m new to pytorch
Thanks! |
st180082 | Hi,
nn.GPU is just a convenient way to run a module on a given GPU. It does not do any multiprocessing.
It is doing something like this (the real implementation is more complex in order to handle non-Tensor inputs):
def forward(self, input):
    gpu_input = input.cuda(self.gpuid)
    with torch.cuda.device(self.gpuid):
        out = self.mod(gpu_input)
    output = out.cuda(self.out_device)
    return output
What the lua code is doing is actually building a DataParallel by hand I think. So yes you should be using that. |
st180083 | Okay, awesome, I’m glad to hear it’s possible. I’ll familiarize myself further with the items you mentioned and report back on how it goes. Thanks so much! |
st180084 | I followed the code in #12012 to implement a ring-allreduce algorithm, but I cannot find any improvement compared to the OpenMPI allreduce. So is there a way to do so just using the send and recv methods in PyTorch? |
st180085 | Did you expect an improvement over the MPI implementation? If so, what kind of improvement?
Different MPI implementations use different algorithms. You can look at the OpenMPI configuration parameters/tunables to figure out how to tweak which algorithm it should use.
The gloo backend implements ring allreduce in C++. You can build it yourself on top of send and recv as well of course, but this won’t be faster compared to the existing implementation. |
st180086 | Thank you for your reply.
Did you expect an improvement over the MPI implementation? If so, what kind of improvement?
Yes. I tested gloo, NCCL and MPI across nearly 32 nodes (ResNet-50 data parallel), and I found dist.all_reduce using MPI is the most time-consuming one. I think the reason is that the MPI backend has not implemented ring allreduce yet.
Different MPI implementations use different algorithms. You can look at the OpenMPI configuration parameters/tunables to figure out how to tweak which algorithm it should use.
So which one may be the fastest one? I am not familiar with MPI implementations.
The gloo backend implements ring allreduce in C++. You can build it yourself on top of send and recv as well of course, but this won’t be faster compared to the existing implementation.
gloo is good, but it cannot support gather and scatter on the GPU; that's why I chose CUDA-aware MPI. By the way, you said that just using send and recv won't be faster than the existing implementation: could you give me more details? I wonder why using send and recv couldn't reduce latency, since I used them to implement ring-allreduce. |
st180087 | Hi, I am new to PyTorch, and I am going to deploy a distributed training task on 2 nodes which have 4 GPUs each. I have followed the comments in the torch.distributed.launch code but am still confused.
Node 1
CUDA_VISIBLE_DEVICES=3,2,1,0 python2 -m torch.distributed.launch \
--nproc_per_node=4 \
--nnodes=2 \
--node_rank=0 \
--master_addr="11.7.157.133" \
--master_port=12345 \
main.py --folder ./experiments/pairwise_shangyi_fpnembed
Node 2 script
CUDA_VISIBLE_DEVICES=3,2,1,0 python2 -m torch.distributed.launch \
--nproc_per_node=4 \
--nnodes=2 \
--node_rank=1 \
--master_addr="11.7.157.133" \
--master_port=12345 \
main.py --folder ./experiments/pairwise_shangyi_fpnembed
And I always meet the error in Node 2:
Traceback (most recent call last):
File "main.py", line 33, in <module>
trainer.train()
File "/export/home/v-jianjie/net/paizhaogou/metric_learning/trainer.py", line 165, in train
self.setup_network()
File "/export/home/v-jianjie/net/paizhaogou/metric_learning/trainer.py", line 90, in setup_network
broadcast_buffers=False,)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/distributed.py", line 134, in __init__
self.broadcast_bucket_size)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/parallel/distributed.py", line 251, in _dist_broadcast_coalesced
dist.broadcast(flat_tensors, 0)
File "/usr/local/lib/python2.7/dist-packages/torch/distributed/__init__.py", line 286, in broadcast
return torch._C._dist_broadcast(tensor, src, group)
RuntimeError: NCCL error in: /export/home/v-yehl/code/caffe2/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:322, unhandled system error
The main.py script runs correctly in one single node.
Thx in advance. |
st180088 | From this post I found the solution.
If you use nvidia-docker, you need to add the --network=host parameter to the docker run command in order to let the docker container use the same IP address as the host. |
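For example (the image name and the rest of the command are placeholders):
nvidia-docker run --network=host -it pytorch/pytorch:latest bash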
st180089 | The NCCL error you post doesn’t convey any information that can help unfortunately. Take a look at https://pytorch.org/docs/stable/distributed.html#other-nccl-environment-variables 301 for some environment variables you can set that may help you in debugging this issue. |
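For example, turning on NCCL's own logging often reveals the underlying problem (the launch command below is just an example):
NCCL_DEBUG=INFO python -m torch.distributed.launch --nproc_per_node=4 main.py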
st180090 | With Python 3.6.7 + PyTorch 1.0.0, init_process_group() sometimes hangs and never returns. Any idea how to fix it? I need to run some projects under the 1.0 version. Here are the details.
Code scripts
a.py
import torch
import torch.distributed as dist
import os
def get_mpi_rank():
    return int(os.environ['RANK'])

def get_mpi_size():
    return int(os.environ.get('WORLD_SIZE', '1'))

rank = get_mpi_rank()
world_size = get_mpi_size()

init_param = {'backend': 'nccl',
              'init_method': 'env://',
              'rank': rank,
              'world_size': world_size}

from pprint import pformat
print('before {} - {}\n'.format(rank, pformat(init_param)))

dist.init_process_group(**init_param)
print('after {}'.format(rank))
When it works
python 2.7.12 + pytorch 0.4.1
$ python --version
Python 2.7.12
$ python -c 'import torch; print torch.__version__'
0.4.1
$ python -m torch.distributed.launch --nproc_per_node 2 a.py
before 1 - {'backend': 'nccl', 'init_method': 'env://', 'rank': 1, 'world_size': 2}
before 0 - {'backend': 'nccl', 'init_method': 'env://', 'rank': 0, 'world_size': 2}
after 0
after 1
If i run the scripts multiple times, it always succeeds.
When it does not work
$ python --version
Python 3.6.7 :: Anaconda, Inc.
$ python -c 'import torch; print(torch.__version__)'
1.0.0
$ python -m torch.distributed.launch --nproc_per_node 2 a.py
before 1 - {'backend': 'nccl', 'init_method': 'env://', 'rank': 1, 'world_size': 2}
before 0 - {'backend': 'nccl', 'init_method': 'env://', 'rank': 0, 'world_size': 2}
after 0
Rank 0 is able to finish the call to init_process_group, but rank 1 never returns. I then use gdb to attach to the hung process.
$ sudo gdb -p 40855
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 40855
Reading symbols from /raid/jianfw/anaconda3/bin/python3.6...done.
Reading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...Reading symbols from /usr/lib/debug/.build-id/ce/17e023542265fc11d9bc8f534bb4f070493d30.debug...done.
done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Reading symbols from /lib/x86_64-linux-gnu/libc.so.6...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libc-2.23.so...done.
done.
Reading symbols from /lib/x86_64-linux-gnu/libdl.so.2...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libdl-2.23.so...done.
done.
(gdb) where
#0 0x00007f0ce5586c00 in __nanosleep_nocancel () at ../sysdeps/unix/syscall-template.S:84
#1 0x00007f0cde561e88 in c10d::tcputil::connect(std::string const&, unsigned short, bool, std::chrono::duration<long, std::ratio<1l, 1000l> > const&) ()
from /raid/jianfw/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so
#2 0x00007f0cde55cef5 in c10d::TCPStore::TCPStore(std::string const&, unsigned short, bool) () from /raid/jianfw/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so
#3 0x00007f0cde4f09f6 in void pybind11::cpp_function::initialize<void pybind11::detail::initimpl::constructor<std::string const&, int, bool>::execute<pybind11::class_<c10d::TCPStore, std::shared_ptr<c10d::TCPStore> >, , 0>(pybind11::class_<c10d::TCPStore, std::shared_ptr<c10d::TCPStore> >&)::{lambda(pybind11::detail::value_and_holder&, std::string const&, int, bool)#1}, void, pybind11::detail::value_and_holder&, std::string const&, int, bool, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(void pybind11::detail::initimpl::constructor<std::string const&, int, bool>::execute<pybind11::class_<c10d::TCPStore, std::shared_ptr<c10d::TCPStore> >, , 0>(pybind11::class_<c10d::TCPStore, std::shared_ptr<c10d::TCPStore> >&)::{lambda(pybind11::detail::value_and_holder&, std::string const&, int, bool)#1}&&, void (*)(pybind11::detail::value_and_holder&, std::string const&, int, bool), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::detail::is_new_style_constructor const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call) () from /raid/jianfw/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so
#4 0x00007f0cddff0e36 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /raid/jianfw/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so
#5 0x000055b391cbe3d4 in _PyCFunction_FastCallDict () at /tmp/build/80754af9/python_1540319457073/work/Objects/methodobject.c:231
#6 0x000055b391cbe7ef in _PyObject_FastCallDict () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2313
#7 0x000055b391cc3303 in _PyObject_Call_Prepend () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2373
#8 0x000055b391cbe1de in PyObject_Call () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2261
#9 0x000055b391d1b78b in slot_tp_init () at /tmp/build/80754af9/python_1540319457073/work/Objects/typeobject.c:6420
#10 0x000055b391d47f57 in type_call () at /tmp/build/80754af9/python_1540319457073/work/Objects/typeobject.c:915
#11 0x000055b391cbe5bb in _PyObject_FastCallDict () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2331
#12 0x000055b391d47d6e in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4861
#13 0x000055b391d6a71a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#14 0x000055b391d4a860 in gen_send_ex (closing=0, exc=0, arg=0x0, gen=0x7f0ca53fed58) at /tmp/build/80754af9/python_1540319457073/work/Objects/genobject.c:189
#15 gen_iternext (gen=0x7f0ca53fed58) at /tmp/build/80754af9/python_1540319457073/work/Objects/genobject.c:563
#16 builtin_next () at /tmp/build/80754af9/python_1540319457073/work/Python/bltinmodule.c:1330
#17 0x000055b391cbe311 in _PyCFunction_FastCallDict () at /tmp/build/80754af9/python_1540319457073/work/Objects/methodobject.c:234
#18 0x000055b391d47c1c in call_function () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4837
#19 0x000055b391d6a71a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3335
#20 0x000055b391d42ad9 in _PyEval_EvalCodeWithName (qualname=0x0, name=0x0, closure=0x0, kwdefs=0x0, defcount=2, defs=0x7f0ca578dd60, kwstep=2, kwcount=<optimized out>,
kwargs=0x7f0ce447eae8, kwnames=0x7f0ce447eae0, argcount=<optimized out>, args=0x55b393457df0, locals=0x0, globals=<optimized out>, _co=0x7f0ca54e8270)
at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4166
#21 PyEval_EvalCodeEx () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4187
#22 0x000055b391d43a06 in function_call () at /tmp/build/80754af9/python_1540319457073/work/Objects/funcobject.c:604
#23 0x000055b391cbe1de in PyObject_Call () at /tmp/build/80754af9/python_1540319457073/work/Objects/abstract.c:2261
#24 0x000055b391d6bd9a in do_call_core (kwdict=0x7f0ce58c0678, callargs=0x7f0ce594b048, func=0x7f0ca54f0a60) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:5106
#25 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:3404
#26 0x000055b391d42ad9 in _PyEval_EvalCodeWithName (qualname=0x0, name=0x0, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwstep=2, kwcount=<optimized out>, kwargs=0x0, kwnames=0x0,
argcount=0, args=0x0, locals=0x7f0ce5901360, globals=0x7f0ce5901360, _co=0x7f0ce448e5d0) at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4166
#27 PyEval_EvalCodeEx () at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:4187
#28 0x000055b391d4387c in PyEval_EvalCode (co=co@entry=0x7f0ce448e5d0, globals=globals@entry=0x7f0ce5901360, locals=locals@entry=0x7f0ce5901360)
at /tmp/build/80754af9/python_1540319457073/work/Python/ceval.c:731
#29 0x000055b391dc4074 in run_mod () at /tmp/build/80754af9/python_1540319457073/work/Python/pythonrun.c:1025
#30 0x000055b391dc4471 in PyRun_FileExFlags () at /tmp/build/80754af9/python_1540319457073/work/Python/pythonrun.c:978
#31 0x000055b391dc4673 in PyRun_SimpleFileExFlags () at /tmp/build/80754af9/python_1540319457073/work/Python/pythonrun.c:419
#32 0x000055b391dc477d in PyRun_AnyFileExFlags () at /tmp/build/80754af9/python_1540319457073/work/Python/pythonrun.c:81
#33 0x000055b391dc81b0 in run_file (p_cf=0x7ffc8471a94c, filename=0x55b3933a6300 L"a.py", fp=0x55b393433e20) at /tmp/build/80754af9/python_1540319457073/work/Modules/main.c:340
#34 Py_Main () at /tmp/build/80754af9/python_1540319457073/work/Modules/main.c:811
#35 0x000055b391c8fb4e in main () at /tmp/build/80754af9/python_1540319457073/work/Programs/python.c:69
#36 0x00007f0ce51cc830 in __libc_start_main (main=0x55b391c8fa60 <main>, argc=4, argv=0x7ffc8471ab58, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,
stack_end=0x7ffc8471ab48) at ../csu/libc-start.c:291
#37 0x000055b391d711a8 in _start () at ../sysdeps/x86_64/elf/start.S:103
Looks like it hangs inside c10d::TCPStore::TCPStore(std::string const&, unsigned short, bool) and c10d::tcputil::connect(std::string const&, unsigned short, bool, std::chrono::duration<long, std::ratio<1l, 1000l> > const&).
Any idea how to fix it?
Thanks |
st180091 | github.com/pytorch/pytorch
[c10d] TCP init method race condition fix (by teng-li, 03 Jan 19 UTC)
has the fix |
st180092 | Hi all,
I’m trying to use torch.distributed.launch with the NCCL backend on two nodes, each of which has a single GPU. The guide here tells me to set torch.cuda.set_device(local_rank); however, each node only has device 0 available, so I’m not sure whether calling torch.cuda.set_device(0) in both processes is correct.
On either node I get an error like this:
Traceback (most recent call last):
File "batch_train.py", line 26, in <module>
m.batch_train(argv[1:])
File "/u3/jbaik/pytorch-asr/asr/models/deepspeech_ctc/train.py", line 56, in batch_train
trainer = NonSplitTrainer(model, **vars(args))
File "/u3/jbaik/pytorch-asr/asr/models/trainer.py", line 93, in __init__
self.model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank)
File "/home/jbaik/.pyenv/versions/3.7.0/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 134, in __init__
self.broadcast_bucket_size)
File "/home/jbaik/.pyenv/versions/3.7.0/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 251, in _dist_broadcast_coalesced
dist.broadcast(flat_tensors, 0)
File "/home/jbaik/.pyenv/versions/3.7.0/lib/python3.7/site-packages/torch/distributed/__init__.py", line 279, in broadcast
return torch._C._dist_broadcast(tensor, src, group)
RuntimeError: NCCL error in: /u3/setup/pytorch/pytorch/torch/lib/THD/base/data_channels/DataChannelNccl.cpp:322, unhandled system error |
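For reference, a minimal sketch of how this is usually wired up when every node runs a single process with a single GPU; under torch.distributed.launch the local_rank on each node is 0, so set_device(0) on both nodes is expected, while the global rank (the RANK env var) still differs between nodes. The model below is just a placeholder:
import argparse
import torch
import torch.distributed as dist
import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)  # injected by torch.distributed.launch
args = parser.parse_args()

# One process per node, one GPU per node: local_rank is 0 everywhere.
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')

model = nn.Linear(10, 10).cuda(args.local_rank)  # placeholder model
model = nn.parallel.DistributedDataParallel(model,
                                            device_ids=[args.local_rank],
                                            output_device=args.local_rank)
The "unhandled system error" itself usually points at connectivity between the nodes rather than at the device ids. |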
st180093 | If each node has multiple NICs, does NCCL find the proper connection between the nodes? How about the other backends? |
st180094 | In my case, the same error occurred when I used Docker.
With the '--network=host' parameter, the problem was resolved. |
st180095 | Topics related to DataLoader, Dataset, torch.utils.data, pytorch/data, and TorchArrow. |
st180096 | I am trying to apply pytorch_forecasting.TimeSeriesDataSet from PyTorch Forecasting. My difficulty is that I have two differently scaled & shaped DataFrames as input and output data.
Simply put, my input_df looks like this:
unix_timestamp[ms] value_a value_b
0 1609455600000 2 3
1 1609455600010 2 4
2 1609455600020 4 5
3 1609455600030 6 6
... ... ... ...
Where the unix_timestamp is a running integer in milliseconds and each row represents the value of a 10ms interval.
Whereas my output_df looks like this:
unix_timestamp[ms] target_value
0 1609455600000 9
1 1609455660000 8
2 1609455720000 7
3 1609455780000 6
... ... ...
In this case, each row represents the value of a 1-minute interval!
Now I would like to use a time window of 10 minutes from the input_df (so 600000 ms and therefore 60000 rows) to predict 1 minute of the output_df (therefore 1 row).
How do I use pytorch_forecasting.TimeSeriesDataSet to prepare these two DataFrames this way?
Important Note I:
The unix_timestamp values of the two DataFrames do not necessarily overlap as shown in the example above. For instance, if the input_df has a timestamp of 1601596805783, which corresponds to '2020-10-02T00:00:05.783', it does not mean that this exact timestamp exists in the output_df. It might be very close, but usually it is off by a couple of milliseconds rather than matching to the exact millisecond.
Important Note II:
I thought about just upsampling the output_df to the same scale by repeating each value within its associated time interval; however, as far as I can judge, this would distort the prediction result, wouldn't it? |
st180097 | eTuDpy:
pytorch_forecasting
Since the pytorch_forecasting library is not maintained by the PyTorch team, you may get a better response if you ask this question in their repository/forum.
DataLoader does allow you to pass in a custom batch_sampler, which lets you specify how your sampling process works and get 10 minutes of input data at a time. You can find more details on this page: torch.utils.data — PyTorch 1.10 documentation.
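If TimeSeriesDataSet turns out to be awkward for the two sampling rates, a rough sketch of doing the alignment with a plain torch.utils.data.Dataset instead is shown below; the column names are taken from the example tables above, and it assumes the 10 ms input rows are evenly spaced with no gaps:
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class WindowedDataset(Dataset):
    """Maps each 1-minute target row to the preceding 10 minutes of 10 ms input rows."""

    def __init__(self, input_df, output_df, window_ms=600000, step_ms=10):
        self.inputs = input_df
        self.window_rows = window_ms // step_ms  # 60000 input rows per sample
        self.in_ts = input_df['unix_timestamp[ms]'].to_numpy()
        # keep only target rows whose full input window is available
        self.targets = output_df[output_df['unix_timestamp[ms]'] >= self.in_ts[0] + window_ms]

    def __len__(self):
        return len(self.targets)

    def __getitem__(self, idx):
        row = self.targets.iloc[idx]
        # nearest input row at or before the target timestamp (absorbs the few-ms offsets)
        end = np.searchsorted(self.in_ts, row['unix_timestamp[ms]'], side='right')
        start = end - self.window_rows
        x = self.inputs.iloc[start:end][['value_a', 'value_b']].to_numpy(dtype=np.float32)
        y = np.float32(row['target_value'])
        return torch.from_numpy(x), torch.tensor(y)

# input_df and output_df are the two DataFrames from the question
loader = DataLoader(WindowedDataset(input_df, output_df), batch_size=32, shuffle=True)
Each batch then has inputs of shape [batch, 60000, 2] and one target per sample, so no upsampling of output_df is needed. |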
st180098 | Hi, I’m building a PyTorch binary classification model (e.g., cat vs dog).
My model’s output is
[[0.4820, 0.5180]] and my label is [1, 0], for example.
my loss is criterion = nn.CrossEntropyLoss()
loss = criterion(outputs, true_value)
#loss = criterion([[0.4820, 0.5180]] , [1,0])
I’m expecting that if the label is [1, 0], then the output should be something like [[0.99, 0.01]]…
BUT, I either get errors or the loss does not go down…
Is the data shape of my label and output not right? Please help!!
Could you give me the correct shapes of the label and outputs?
In my code, the shape of the outputs is 1x2, my label’s shape is 2, and the batch size is 1.
(I tried to find a PyTorch binary classification guide on the homepage but failed T.T)
log:
I got this error.
loss = criterion(outputs, true_value)
File "/home/hsm/anaconda3/envs/reproducibleresearch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/hsm/anaconda3/envs/reproducibleresearch/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 1048, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/hsm/anaconda3/envs/reproducibleresearch/lib/python3.6/site-packages/torch/nn/functional.py", line 2690, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/hsm/anaconda3/envs/reproducibleresearch/lib/python3.6/site-packages/torch/nn/functional.py", line 2382, in nll_loss
"Expected input batch_size ({}) to match target batch_size ({}).".format(input.size(0), target.size(0))
ValueError: Expected input batch_size (1) to match target batch_size (2). |
st180099 | Solved by thecho7 in post #3
Adding more info to @larcane's explanation:
true_value should be a LongTensor, which means the type of the elements inside the tensor is long.
Your true_value should contain only one value, in this case [0] or [1].
Please make it clear whether your output should be [0, 1] or [1, 0]. |
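To make the shapes concrete, a minimal sketch using the numbers from the question (note that nn.CrossEntropyLoss expects raw logits and applies the softmax internally):
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Model output: one row of class scores per sample -> shape [batch_size, num_classes] = [1, 2].
outputs = torch.tensor([[0.4820, 0.5180]])

# Target: one class index per sample -> shape [batch_size] = [1], dtype long.
# Use torch.tensor([1]) instead if [1, 0] was meant as a one-hot label for class 1.
true_value = torch.tensor([0])

loss = criterion(outputs, true_value)
print(loss)  # scalar loss
So the label passed to the loss is a single class index per sample, not a one-hot vector. |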