st175468 | Also, if you want detailed control over how the data is generated and consumed by processes, consider using a custom dataloader: Writing Custom Datasets, DataLoaders and Transforms — PyTorch Tutorials 1.9.1+cu102 documentation |
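For reference, a minimal sketch of the custom Dataset pattern the linked tutorial describes; the class name, field layout, and toy data below are illustrative, not taken from the tutorial:
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, samples):
        self.samples = samples                      # any indexable collection of (x, y) pairs

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        x, y = self.samples[idx]                    # full control over how one sample is produced
        return torch.as_tensor(x), torch.as_tensor(y)

data = [([0.0, 1.0], 0), ([1.0, 0.0], 1)]           # toy data for illustration only
loader = DataLoader(MyDataset(data), batch_size=2, num_workers=2, shuffle=True)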
st175469 | Hello all,
I would like to train multiple models, each on a separate GPU, using DDP. The challenge is that each model has some unique params (for each model), and some common params which are shared by all models. Any tips on how this can be done? Thanks very much in advance. |
st175470 | Hi @Blake_Camp Thanks for posting the question. You can try mixing DDP and RPC together, use RPC to hold a shared part of your model in one rank, and use DDP for the remaining part. see Combining Distributed DataParallel with Distributed RPC Framework — PyTorch Tutorials 1.9.1+cu102 documentation 1 |
st175471 | Also, if you could provide more details on your use case, it should be helpful for us to see if there’re existing solutions |
st175472 | How can I effectively sync the gradients/parameters between two tiny MLP models (3 layers each, with 256 hidden dimensions)? Note that these two MLP models are trained separately by two parallel processes.
I currently have two potential solutions:
1. Use the torch.distributed library with the gloo backend for CPU-based parameter sync. This seems to be very slow due to copying tensors back and forth between CPU and GPU.
2. Use shared global memory for GPU-based parameter sync. However, this seems hard to achieve with the current PyTorch version. |
st175473 | Solved by wanchaol in post #4
I see, so you are using one GPU but run two training in parallel. We couldn’t run all_reduce collective manually since there’s only one GPU. If we are on CPU we can leverage tensor.share_memory_(), but since we are on GPU, this is not an option. Yeah so the only option I could think of is the first … |
st175474 | @DanielWang Thanks for posting the question. Could you elaborate more on your use case? From the description I see you have two MLP models with exactly the same architecture, trained separately by two parallel processes. It looks to me like you can just use DDP with the MLP model, and it will periodically (full_sync or async) sync the model parameter gradients under the hood. Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.9.1+cu102 documentation |
st175475 | DDP is usually for models running on two GPUs; for models sharing the same GPU it probably won’t work. |
st175476 | I see, so you are using one GPU but running two trainings in parallel. We can’t run the all_reduce collective manually since there’s only one GPU. If we were on CPU we could leverage tensor.share_memory_(), but since we are on GPU, this is not an option. So the only option I can think of is the first one you mentioned: you can periodically move the two models back to CPU, sync the params/grads, then move them back and resume training. |
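A minimal sketch of that first option, assuming both training processes have already joined a gloo process group and hold models with identical architectures (the function name is illustrative):
import torch
import torch.distributed as dist

def sync_params_via_cpu(model, world_size):
    # Average parameters across the processes by staging them on CPU,
    # since the gloo backend reduces CPU tensors.
    with torch.no_grad():
        for p in model.parameters():
            cpu_buf = p.detach().cpu()             # GPU -> CPU copy
            dist.all_reduce(cpu_buf, op=dist.ReduceOp.SUM)
            cpu_buf /= world_size
            p.copy_(cpu_buf.to(p.device))          # CPU -> GPU copy back
The same pattern can be applied to p.grad instead of the parameters themselves if you prefer to average gradients before the optimizer step.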
st175477 | I found a problem when using torch.distributed.all_reduce. I want to manually reduce and sum all model parameter gradients.
This is the first solution, which gives me the correct reduce-and-sum results:
for p in params:
    dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
However, the second solution below does not do any reduction at all and returns the same values as before the reduction:
def sync_para(x):
    dist.all_reduce(x.grad, op=dist.ReduceOp.SUM)
    x.grad = grad

map(lambda x: sync_para(x), params)
Why can’t the map function be used here? |
st175478 | @DanielWang Thanks for posting the question. In the second case, did you find any errors when using it? My guess is that since map() is written in C and does some optimizations, its implied loop can be more efficient than a regular Python for loop, so the order might be different, and since all_reduce happens in SPMD fashion, a mis-ordering might incur some issues. |
st175479 | I see, curious what makes async_op=True work? Another thought: is the map really running? It seems like map() returns a lazy iterator instead of actually looping over the list. Did you try putting print statements inside to see if it really gets invoked? Or wrap it with something like list(map(lambda x: sync_para(x), params)) |
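A quick toy illustration of the laziness point above: in Python 3, map() does nothing until its iterator is consumed.
results = map(lambda x: print("side effect", x), [1, 2, 3])   # prints nothing yet
list(results)                                                  # consuming the iterator runs the calls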
st175480 | Hi, I’m training a self-pruning network on DDP, i.e. the input of the network can go through different paths towards the output. However, in DDP, this seems to result in an error in my case:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
After I set find_unused_parameters=True, the above error goes away, but it seems like some of the parameters in the prunable layers fail to synchronize (the parameters on different GPUs are no longer the same) after the update of one training step.
The desired training behavior is: as long as at least one of the processes used a parameter, the update for that parameter is performed across the processes on ALL GPUs. Is there any way to make this behavior possible with DDP? Thanks! |
st175481 | Hello,
I am new to pytorch distributed computing – using pytorch1.9.0. I have a large model that I have distributed over several processes on a single computer. The model is organized such that the master process has an input nn.Linear layer and an output nn.Linear layer that execute at the very start and the very end of the forward pass respectively. Between those two layers are many RemoteModule objects that each execute an RNN model with the overall goal of predicting some time series data. I have successfully obtained rrefs to the parameters associated with each of these RemoteModules as:
MyModules.append(RemoteModule("worker"+str(proc), MyRNN))
MyParameters = []
for module in MyModules:
    MyParameters.extend(module.remote_parameters())
MyParameters contains 1000 Parameters
Then in the forward pass running on the master process:
for batch in DataLoader:
    with dist_autograd.context() as context_id:
        input = InputLinear(batch)
        for t in range(sequenceLength):
            ids = []
            for mod in MyModules:
                ids.append(mod.forward_async(input, t))
            get_state = [id.wait() for id in ids]
            state[:, t, :] = torch.stack(get_state)
        out = OutputLinear(state)
        loss = nn.MSELoss(out, batch)
        dist_autograd.backward(context_id, [loss])
        grads = dist_autograd.get_gradients(context_id)
        dist_optim = DistributedOptimizer(optim.Adam, MyParameters, lr=0.001)
        dist_optim.step(context_id)
At this point grads contains only 4 items corresponding to the parameters associated with the InputLinear and OutputLinear layers only. It does not contain gradients for any of the remote module parameters. If I understand properly, the context_id should be able to track computations across these remote modules. Can anyone tell me what I am doing wrong or how I can properly compute the gradients over these remote modules? |
st175482 | Hi @MLbrain, thanks for posting the question. As you can see, it only has the local gradients; this is because the gradients are only visible on the rank where the remote module is located. When using RPC to do training, you can simply pass the parameter RRefs into the DistributedOptimizer, and the distributed optimizer will figure out the remote module parameter gradients and sync them properly. You can take a look at this tutorial: Getting Started with Distributed RPC Framework — PyTorch Tutorials 1.9.1+cu102 documentation |
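A minimal sketch of that pattern, reusing names from the question above (MyModules) plus placeholders (local_module, compute_loss) that stand in for the local layers and the forward/loss code: collect RRefs for both local and remote parameters, hand them to DistributedOptimizer, and let it apply updates on the owner workers.
import torch.optim as optim
import torch.distributed.rpc as rpc
import torch.distributed.autograd as dist_autograd
from torch.distributed.optim import DistributedOptimizer

param_rrefs = [rpc.RRef(p) for p in local_module.parameters()]   # local parameters
for module in MyModules:                                         # RemoteModule instances
    param_rrefs.extend(module.remote_parameters())               # RRefs to remote parameters

dist_optim = DistributedOptimizer(optim.Adam, param_rrefs, lr=0.001)

with dist_autograd.context() as context_id:
    loss = compute_loss()                         # placeholder for the forward pass + loss
    dist_autograd.backward(context_id, [loss])
    dist_optim.step(context_id)                   # no manual get_gradients() needed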
st175483 | Hello, I am using a cpp_extension function written with .cpp and .cu built with torch.utils.cpp_extension._get_build_directory. It works FINE with one GPU. But with distributed parallel training, an error always occurs:
terminate called after throwing an instance of 'std::runtime_error'
what(): NCCL error in: ../torch/lib/c10d/../c10d/NCCLUtils.hpp:155, unhandled cuda error, NCCL version 2.8.3
ncclUnhandledCudaError: Call to CUDA function failed.
Traceback (most recent call last):
File "train.py", line 557, in <module>
main() # pylint: disable=no-value-for-parameter
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "train.py", line 552, in main
torch.multiprocessing.spawn(fn=subprocess_fn, args=(args, temp_dir), nprocs=args.num_gpus)
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 247, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 205, in start_processes
while not context.join():
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 166, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/workspace/train.py", line 402, in subprocess_fn
training_loop.training_loop(rank=rank, **args)
File "/workspace/training/training_loop.py", line 289, in training_loop
loss.accumulate_gradients(phase=phase.name, real_img=real_img, real_c=real_c, gen_z=gen_z, gen_c=gen_c, sync=sync, gain=gain)
File "/workspace/training/loss.py", line 67, in accumulate_gradients
gen_img, _gen_ws = self.run_G(gen_z, gen_c, sync=(sync and not do_Gpl)) # May get synced by Gpl.
File "/workspace/training/loss.py", line 47, in run_G
img = self.G_synthesis(ws)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 684, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
result = self.forward(*input, **kwargs)
File "/workspace/training/networks.py", line 1091, in forward
x, num_voxels, sigma, voxel_interact, subs = block(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
result = self.forward(*input, **kwargs)
File "/workspace/training/networks.py", line 990, in forward
x, intersect_index, min_depth, max_depth = self.fourier_feature(center, next(w_iter), num_voxels, camera_center, ray_directions)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 744, in _call_impl
result = self.forward(*input, **kwargs)
File "/workspace/training/networks.py", line 740, in forward
intersect_index, min_depth, max_depth = intersect.intersect(self.voxel_size, n_max=self.max_intersect,
File "/workspace/torch_utils/ops/intersect.py", line 92, in intersect
return _aabb_intersect_cuda(voxelsize, n_max, points.unsqueeze(0), ray_start, ray_dir)
File "/workspace/torch_utils/ops/intersect.py", line 49, in forward
inds, min_depth, max_depth = _plugin.aabb_intersect(
RuntimeError: CUDA error: an illegal memory access was encountered
/opt/conda/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 17 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Here the _plugin.aabb_intersect is my implemented cpp_extension function.
I’m using PyTorch 1.8.0 with Docker.
Are there any specific points in writing cpp extensions that result in errors ONLY during multiple GPU training like this? Thanks! |
st175484 | Solved by ptrblck in post #2
I guess you are missing the deviceGuard via:
const at::cuda::OptionalCUDAGuard device_guard(device_of(tensor));
which would use the default device in your custom CUDA extension and will thus run into illegal memory accesses. |
st175485 | I guess you are missing the deviceGuard via:
const at::cuda::OptionalCUDAGuard device_guard(device_of(tensor));
which would use the default device in your custom CUDA extension and will thus run into illegal memory accesses. |
st175486 | hi,
I’m trying to use torch.multiprocessing.Queue to transfer torch.Tensors between processes (one consumer and many producers), and found the consumer is very, very slow. I just wonder how to fix it or whether I did something wrong.
My simplified code is below; it has one consumer and two producers. Every producer puts one torch.Tensor (of shape 720x1280x3) into a queue every 5 ms; the consumer just calls queue.get to get the tensor back without any additional work. But the consumer is too slow to get all the tensors in the queue.
import os
import time
import argparse
from torch.multiprocessing import Process, Queue, Lock
import torch

parser = argparse.ArgumentParser(description='data loader')
parser.add_argument('-j', '--workers', default=2, type=int, metavar='N',
                    help='number of workers (default: 2)')

def get_ten(tens, lock):
    #lock.acquire()
    if not tens.empty():
        ten = tens.get()
    else:
        ten = None
    #lock.release()
    return ten

def put_ten(tens, ten, lock):
    #lock.acquire()
    tens.put(ten)
    #lock.release()

def worker(tens, process_index, tenlock):
    #affinity_mask = {process_index+1}
    #os.sched_setaffinity(0, affinity_mask)
    #torch.set_num_threads(1)
    while True:
        time.sleep(0.005)
        ten = torch.ones([720, 1280, 3])
        put_ten(tens, ten, tenlock)

def main():
    args = parser.parse_args()
    #affinity_mask = {0}
    #os.sched_setaffinity(0, affinity_mask)
    #torch.set_num_threads(1)
    tens = Queue()
    process_count = args.workers
    process_list = []
    tenlock = Lock()
    for i in range(process_count):
        p = Process(target=worker, args=(tens, i, tenlock))
        p.start()
        process_list.append(p)
    ten_count = 0
    while True:
        ten = get_ten(tens, tenlock)
        if ten is not None:
            ten_count = ten_count + 1
            if ten_count % 500 == 0:
                print("in consumer process, get %d tensors, still %d left in the queue" % (ten_count, tens.qsize()))
        else:
            print("in consumer process, do not get a tensor")
            time.sleep(0.01)
    for p in process_list:
        p.join()

if __name__ == '__main__':
    main()
The output looks like:
in consumer process, do not get a tensor
in consumer process, do not get a tensor
in consumer process, do not get a tensor
in consumer process, do not get a tensor
in consumer process, get 500 tensors, still 191 left in the queue
in consumer process, get 1000 tensors, still 272 left in the queue
in consumer process, get 1500 tensors, still 284 left in the queue
in consumer process, get 2000 tensors, still 323 left in the queue
in consumer process, get 2500 tensors, still 338 left in the queue
in consumer process, get 3000 tensors, still 357 left in the queue
in consumer process, get 3500 tensors, still 385 left in the queue
in consumer process, get 4000 tensors, still 401 left in the queue
in consumer process, get 4500 tensors, still 395 left in the queue
in consumer process, get 5000 tensors, still 408 left in the queue
in consumer process, get 5500 tensors, still 415 left in the queue
in consumer process, get 6000 tensors, still 435 left in the queue
in consumer process, get 6500 tensors, still 457 left in the queue
in consumer process, get 7000 tensors, still 478 left in the queue
in consumer process, get 7500 tensors, still 513 left in the queue
in consumer process, get 8000 tensors, still 519 left in the queue
in consumer process, get 8500 tensors, still 516 left in the queue
in consumer process, get 9000 tensors, still 547 left in the queue
in consumer process, get 9500 tensors, still 555 left in the queue
in consumer process, get 10000 tensors, still 560 left in the queue
in consumer process, get 10500 tensors, still 581 left in the queue
in consumer process, get 11000 tensors, still 604 left in the queue
in consumer process, get 11500 tensors, still 633 left in the queue
...
And finally, the system is out of memory. |
st175487 | When I uncomment these lines:
#affinity_mask = {process_index+1}
#os.sched_setaffinity(0, affinity_mask)
#torch.set_num_threads(1)
I can get the following outputs w/o OOM. I think not setting CPU affinity caused the slowdown you’ve seen.
in consumer process, do not get a tensor
in consumer process, do not get a tensor
in consumer process, do not get a tensor
in consumer process, get 500 tensors, still 222 left in the queue
in consumer process, get 1000 tensors, still 223 left in the queue
in consumer process, get 1500 tensors, still 223 left in the queue
in consumer process, get 2000 tensors, still 219 left in the queue
in consumer process, get 2500 tensors, still 223 left in the queue
in consumer process, get 3000 tensors, still 223 left in the queue
in consumer process, get 3500 tensors, still 219 left in the queue
in consumer process, get 4000 tensors, still 224 left in the queue
in consumer process, get 4500 tensors, still 225 left in the queue
in consumer process, get 5000 tensors, still 231 left in the queue |
st175488 | thanks @wayi , actually the issue is still there when I uncomment these three lines.
I think you’ll still see OOM when running for longer. I verified on my machine: we always see OOM eventually whenever the number left in the queue keeps increasing.
We only put 1000/5 = 200 tensors per second into the queue in each producer, i.e. 400 tensors per second in total. Due to the async nature, it is expected to see just a few (0-5) tensors left in the queue. It is still an issue if we see 2xx left in the queue, even if there is no OOM at the end.
I tried different sizes of the tensor; there is no such issue if the tensor is very small. I just wonder why: 400 720p (1280*720) RGB24 (3-byte) images per second should not be a bottleneck even if they are shared between processes via memory copy.
thanks |
st175489 | Thanks for the explanation!
Please check the best practices here: Multiprocessing best practices — PyTorch 1.9.1 documentation 6
2 suggestions:
You can replace Queue with SimpleQueue. This seems to work for me.
You can consider reusing buffers passed through the queue to avoid additional memory copy. |
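A rough sketch of the second suggestion (reusing buffers), assuming a fixed frame shape; names and sizes below are illustrative. A small pool of shared-memory tensors is allocated up front and only buffer indices travel through the queues, so each frame is written and read in place instead of being copied on every put()/get().
import torch
from torch.multiprocessing import Process, Queue

def producer(buffers, free_q, filled_q):
    while True:
        idx = free_q.get()               # wait for a free slot
        buffers[idx].fill_(1.0)          # write the new frame in place
        filled_q.put(idx)                # hand only the index to the consumer

def consumer(buffers, free_q, filled_q):
    while True:
        idx = filled_q.get()
        frame = buffers[idx]             # no copy of the 720x1280x3 payload here
        # ... process frame ...
        free_q.put(idx)                  # return the slot to the pool

if __name__ == '__main__':
    num_buffers = 8
    buffers = [torch.empty(720, 1280, 3).share_memory_() for _ in range(num_buffers)]
    free_q, filled_q = Queue(), Queue()
    for i in range(num_buffers):
        free_q.put(i)                    # initially every slot is free
    workers = [Process(target=producer, args=(buffers, free_q, filled_q)) for _ in range(2)]
    for w in workers:
        w.start()
    consumer(buffers, free_q, filled_q)  # consume in the main process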
st175490 | thanks.
It looks like there’s a typo for SimpleQueue in the best practices. With
from torch.multiprocessing.queues import SimpleQueue
we’ll get
ModuleNotFoundError: No module named 'torch.multiprocessing.queues'
Then I tried with
from torch.multiprocessing.queue import SimpleQueue
tens = SimpleQueue()
and the error message is:
tens = SimpleQueue()
TypeError: __init__() missing 1 required keyword-only argument: 'ctx'
I just think the memory copy of 400 720p (1280*720) RGB24 (3-byte) images is not big. In other words, even if we reduce the memory copy, it still does not resolve the issue since it is not the bottleneck. |
st175491 | I actually replaced Queue in your code with SimpleQueue w/o importing any new package, and this means you need to import SimpleQueue directly from torch.multiprocessing. |
st175492 | I see the below error if I just replace Queue in the code.
tens = SimpleQueue()
NameError: name 'SimpleQueue' is not defined
And I see another error if I change the code to torch.multiprocessing.SimpleQueue()
print("in consumer process, get %d tensors, still %d left in the queue" % (ten_count, tens.qsize()))
AttributeError: 'SimpleQueue' object has no attribute 'qsize'
The best practices webpage says multiprocessing.queues.SimpleQueue. That’s the reason I think there might be a typo there. |
st175493 | tens = SimpleQueue()
NameError: name 'SimpleQueue' is not defined
You also need to import SimpleQueue:
from torch.multiprocessing import Process, Queue, Lock
→
from torch.multiprocessing import Process, SimpleQueue, Lock
print("in consumer process, get %d tensors, still %d left in the queue" % (ten_count, tens.qsize()))
AttributeError: 'SimpleQueue' object has no attribute 'qsize'
I guess you can just avoid qsize in the print statement. If you still want to print something, you can keep a separate counter instead. |
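Putting those two fixes together, a minimal sketch of only the changed parts of the original script (everything else, including tenlock and the producers, is assumed unchanged); SimpleQueue has no qsize(), so only a local counter is reported:
from torch.multiprocessing import Process, SimpleQueue, Lock

def get_ten(tens, lock):
    if not tens.empty():
        return tens.get()
    return None

# inside main():
tens = SimpleQueue()                     # instead of Queue()
ten_count = 0
while True:
    ten = get_ten(tens, tenlock)
    if ten is not None:
        ten_count += 1
        if ten_count % 500 == 0:
            print("in consumer process, got %d tensors so far" % ten_count)
    else:
        print("in consumer process, do not get a tensor")
        time.sleep(0.01)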
st175494 | tens.qsize() shows how slow the consumer process is, and we cannot mimic it with a separate counter, since it is the result of the communication between multiple processes. |
st175495 | tens.qsize() shows how slow the consumer process is, and we cannot mimic it with a separate counter, since it is the result of the communication between multiple processes.
One workaround could be calling an allreduce to sum up the counters on each process, and hence know how many tensors have been consumed in total. You can change the frequency of the allreduce. Agreed that this may not be elegant; hopefully other people can give better suggestions here. |
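A small sketch of that allreduce-based counter idea, assuming the processes have joined a process group (e.g. gloo), which the original script does not do; this only shows the collective call itself:
import torch
import torch.distributed as dist

total = torch.tensor([ten_count])                 # tensors consumed by this process
dist.all_reduce(total, op=dist.ReduceOp.SUM)      # sum the counters across all processes
print("tensors consumed in total:", total.item())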
st175496 | thanks,
but let’s come back to the original issue: why is the consumer so slow when using torch.Tensor with torch.multiprocessing.Queue? I would expect it could be resolved with a very simple fix to share tensors between processes with acceptable performance. Or we have to admit that the current implementation needs improvement, so that other people (maybe me, maybe not) can start working on that improvement when they have bandwidth. |
st175497 | I am actually not familiar with these libraries either. It can be a good idea to file a bug on Github. |
st175498 | Hey everyone,
We wanted to let you know that we are considering the deprecation of DataParallel (torch.nn.DataParallel, a.k.a. DP) module with the upcoming v1.11 release of PyTorch. Our plan is to keep DataParallel in maintenance mode for the 12 months following the v1.11 release and afterwards completely remove it from our code base. No matter whether we follow this plan or not, we still highly encourage everyone relying on DataParallel to onboard with the Distributed Data Parallel (torch.distributed, a.k.a. DDP) module as we consider it the future of PyTorch Distributed.
Your feedback is very important to us, so please let us know your thoughts on our GitHub post 30. |
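For anyone migrating, a minimal single-node sketch of the DDP setup that replaces an nn.DataParallel wrapper, assuming the script is launched with torchrun (or torch.distributed.launch with --use_env) so the usual environment variables, including LOCAL_RANK, are set; MyModel is a placeholder for your own module:
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")           # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = MyModel().cuda(local_rank)                # previously: nn.DataParallel(MyModel()).cuda()
model = DDP(model, device_ids=[local_rank])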
st175499 | hi,
I implemented an RPC distributed model like the tutorial example (rnn part) 1
I put an embedding module locally and a pretrained bert model remotely. The whole model is as follows:
class DistCPM(nn.Module):
    def __init__(self,
                 save_prompt_embedding_path,
                 device="cuda:0",
                 ps="ps"):
        super(DistCPM, self).__init__()
        # setup prompt locally (I want to add some prompt tokens to improve the accuracy)
        self.prompt_embedding_policy = PromptEmbeddingPolicy(
            prompt_embeds_path=save_prompt_embedding_path,
            device=device
        )
        # setup bert remotely
        # In addition to creating a LanguageModelService model (e.g. bert/bart), I also load its checkpoint. This will take a while.
        # And requires_grad is set to False so the remote model has no grad.
        self.language_model_service_rref = rpc.remote(ps, LanguageModelService, args=())

    def forward(self,
                word_embeds,
                pad_emnbeds,
                batch_target_sequence_token):
        # make input embeddings and so on
        model_batch, _ = self.prompt_embedding_policy(
            word_embeds, pad_emnbeds, batch_target_sequence_token
        )
        logits = _remote_method(LanguageModelService.forward,
                                self.language_model_service_rref,
                                model_batch)
        return logits

    # return all params (local + remote) just as in the example
    def parameter_rrefs(self):
        remote_params = []
        # create RRefs for local parameters
        remote_params.extend(_parameter_rrefs(self.prompt_embedding_policy))
        # get RRefs of bert (or bart and so on. Given I set bert.requires_grad=False, the following line may not be necessary)
        remote_params.extend(_remote_method(_parameter_rrefs, self.language_model_service_rref))
        return remote_params
# trainer, also like that in the example
def _run_trainer():
    r"""
    The trainer creates a distributed RNNModel and a DistributedOptimizer. Then,
    it performs training using random input data.
    """
    train_dataset_path = 'data/diagnose/train_1000.jsonl'
    test_dataset_path = 'data/diagnose/test.jsonl'
    save_prompt_embedding_path = 'data/checkpoint/embedding.pth'  # if exists, will be loaded
    device0 = "cuda:0"
    model = DistCPM(save_prompt_embedding_path, device=device0, ps="ps")
    criterion = torch.nn.CrossEntropyLoss()
    opt = DistributedOptimizer(
        optim.SGD,
        model.parameter_rrefs(),
        lr=0.05,
    )
    # load bert word embeddings
    # I will concatenate the prompt tokens embedding and bert-tokens embedding
    word_embedding_service = WordEmbeddingService(device0)
    batch_size = 4
    batch_train_dataset = load_dataset(train_dataset_path, batch_size)
    for k in range(100):
        for bix, batch in enumerate(batch_train_dataset):
            print(f"epoch-{k} batch-{bix}")
            with dist_autograd.context() as context_id:
                batch_information = {'special_vocab_string_list': ['<pad>', '<mask>'],
                                     'batch_source_sequence_string': batch['source'],
                                     'batch_target_sequence_string': batch['target']}
                # The next few lines create input embeddings.
                batch_information = word_embedding_service.get_word_embedding(batch_information)
                word_embeds = batch_information['batch_sequence_embedding']
                pad_emnbeds = batch_information['special_vocab_embedding_list'][0]
                batch_target_sequence_token = batch_information['batch_target_sequence_token']
                # logits
                logits = model(word_embeds, pad_emnbeds, batch_target_sequence_token)
                logits = logits.float()
                target = batch_target_sequence_token
                loss = 0.0
                target_token_len = []
                # calculate loss trivially
                for index in range(len(target)):
                    if logits[index].shape[0] != target[index].shape[0]:
                        pass
                    else:
                        loss += criterion(logits[index], target[index])
                        target_token_len.append(target[index].shape[0])
                loss = loss / sum(target_token_len)
                print('batch_train_loss:', loss)
                # run distributed backward pass
                dist_autograd.backward(context_id, [loss])
                # run distributed optimizer
                opt.step(context_id)
the error is: RuntimeError: RRef creation via rpc.remote() timed out, and it is possible that the RRef on the owner node does not exist.
I find that the RRef object (namely self.language_model_service_rref) can only be used once.
The first usage occurs in optimizer:
opt = DistributedOptimizer(
    optim.SGD,
    # [RRef(prompt_embedding_policy.prompt_embeds)],
    model.parameter_rrefs(),  # getting the remote params will use self.language_model_service_rref
    lr=0.05,
)
The second usage is in model(*args) where the error occurs:
logits = model(word_embeds, pad_emnbeds, batch_target_sequence_token)

# model.forward()
def forward(self,
            word_embeds,
            pad_emnbeds,
            batch_target_sequence_token):
    # make input embeddings and so on
    model_batch, _ = self.prompt_embedding_policy(
        word_embeds, pad_emnbeds, batch_target_sequence_token
    )
    logits = _remote_method(LanguageModelService.forward,  # the second usage
                            self.language_model_service_rref,
                            model_batch)
Any comment would be appreciated. |
st175500 | Solved by pbelevich in post #4
hi @111282, when you create self.language_model_service_rref it takes sometime as you mentioned, but the result rref is returned to you immediately, although the underlying remote object is not created yet.
This creation takes more time than is allowed by the default timeout and thus it fails silen… |
st175501 | Does this mean that this error won’t occur if you don’t use DistributedOptimizer?
cc: @mrshenli |
st175502 | In the tutorial, the distributedOptimizer is necessary.
On the other hand, even when the rref object (self.language_model_service_rref) is only used in the model (in model.forward() exactly), the same error occurs when the second batch begins to be handled.
# remove the rref object from the optimizer
opt = DistributedOptimizer(
    optim.SGD,
    # Just get the local params and wrap them with RRef,
    # because I need no update to the remote bert params
    [RRef(prompt_embedding_policy.params)],
    # model.parameter_rrefs(),  # getting the remote params will use self.language_model_service_rref
    lr=0.05,
)
...
# when the model begins to deal with the second batch, the same error occurs (the rref object does not exist).
for batch in data_loader:
    logits = model(batch)  # note just an example here |
st175503 | hi @111282, when you create self.language_model_service_rref it takes some time, as you mentioned, but the resulting rref is returned to you immediately, although the underlying remote object has not been created yet.
This creation takes more time than is allowed by the default timeout and thus it fails silently. But when you try to call _remote_method(LanguageModelService.forward, self.language_model_service_rref, ...) this error is shown to you. This behavior is described in the remote method’s timeout argument:
timeout in seconds for this remote call. If the creation of this RRef on worker to is not successfully processed on this worker within this timeout, then the next time there is an attempt to use the RRef (such as to_here() ), a timeout will be raised indicating this failure.
What you need to do is add a timeout greater than the default one (60 seconds) here:
self.language_model_service_rref = rpc.remote(ps, LanguageModelService, args=(), timeout=...) |
st175504 | @pbelevich You are right. But I am still wondering why the error occurs on the second use.
The above error occurs when the second usage (namely model(*args)) happens. The first usage is getting the remote params when creating the DistributedOptimizer.
Also, even if I remove the first usage (code as follows), the same error occurs when the model begins to deal with the second batch (the first batch is OK). So I thought the rref object can only be used once.
# remove the rref object from optimizer
opt = DistributedOptimizer(
    optim.SGD,
    # Just get the local params and wrap them with RRef,
    # because I need no update to the remote bert params
    # and bert.requires_grad = False
    [RRef(prompt_embedding_policy.params)],
    # model.parameter_rrefs(),  # getting the remote params will use self.language_model_service_rref
    lr=0.05,
)
...
# Now the same error occurs when the second batch begins to be dealt with.
for batch in data_loader:
    logits = model(batch)  # note just an example here |
st175505 | Hi, everyone
When I train my model with DDP, I observe that my training process gets stuck every few seconds. The device information when it is stuck is shown in the following figure.
There always seems to be one GPU stuck at 0% utilization, while the others are waiting for it to synchronize.
This issue disappears after switching to another server (with the same image).
Though it is solved, I am curious about the reasons. I would appreciate it if someone shared their ideas. |
st175506 | Hi,
My experience with distributed training so far is quite limited so others might have better answers. But what I notice straight away is that some GPUs (e.g. number 3) are almost at full capacity, i.e. 11004/11019 and this might be causing problems. I had similar issues when running so close to capacity and found that reducing the batch size (perhaps together with using gradient accumulation) helped.
The reason could be some deadlock, where the code is waiting for data to come from all GPUs before moving onto the next step but one is unresponsive. Although it’s hard to guess without looking at the code. If reducing batch size (or increasing GPUs memory) doesn’t help, feel free to give more details and ideally attach a reproducible example and someone with more experience will take a look. |
st175507 | Hi Andrea,
Thanks for your reply.
The cause does not seem to be high memory usage, because this issue happened with several models of different sizes.
I think this issue is not related to my code, so giving some code snippets might be misleading instead. I use a standard code configuration which worked fine several days ago, and it indeed works fine on another server.
It’s really tricky to figure out the reasons for such a problem without a usable running environment. Thank you anyway.
Any other comments are definitely welcome! |
st175508 | There always seems to be one GPU stuck at 0% utilization, while the others are waiting for it to synchronize.
Do you have uneven inputs? For example, you run one epoch every few seconds, and one partition on a rank has much fewer batches to process.
This issue disappears after switching to another server (with the same image).
Does the “image” here mean cuda:6 still has 0% GPU utilization for some time, but you don’t feel it gets stuck? |
st175509 | Thanks for the reply.
My data is uniformly partitioned across all GPUs.
Sorry for the lack of clarity. “Image” here refers to the docker image. The two machines share the same docker image, while only one of them works fine. |
st175510 | “Image” here refers to the docker image. The two machines share the same docker image, while only one of them works fine.
That’s really weird, so the software setup should be exactly the same, and you also ran exactly the same code. Have you tried any different programs that can run on multiple GPUs? For example, another DDP training use case or just running some collective communication APIs like torch.distributed.all_reduce?
I am afraid that I may not be able to reproduce your issue on my own setup. |
st175511 | Thanks for your suggestions, and I will have a try.
I think this issue has something to do with my environment, where I have observed some strange things. For example, another cuda error occurs on the same machine. Your comments about this cuda error are definitely welcome !! |
st175512 | The problem got more serious: several dataloaders got completely stuck, and the others are waiting for them.
I tried printing some information every time the __getitem__ function is called.
def __getitem__(self, idx):
    print(f'rank {torch.distributed.get_rank()}, fetch sample {idx}')
    # my custom transformations ...
After several iterations I got the following information:
rank 4, fetch sample 1469
rank 2, fetch sample 3282
rank 5, fetch sample 2757
rank 1, fetch sample 1355
rank 3, fetch sample 279
rank 0, fetch sample 4107
rank 7, fetch sample 2834
Rank 6 is missing, and the GPU information is as follows.
After sending an interrupt, the traceback is:
File "/root/.pyenv/versions/3.6.8/lib/python3.6/site-packages/mmdet/apis/train.py", line 170, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/root/.pyenv/versions/3.6.8/lib/python3.6/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/root/.pyenv/versions/3.6.8/lib/python3.6/site-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
for i, data_batch in enumerate(self.data_loader):
File "/root/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/root/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
idx, data = self._get_data()
File "/root/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1152, in _get_data
success, data = self._try_get_data()
File "/root/.pyenv/versions/3.6.8/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/root/.pyenv/versions/3.6.8/lib/python3.6/multiprocessing/queues.py", line 104, in get
if not self._poll(timeout):
File "/root/.pyenv/versions/3.6.8/lib/python3.6/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/root/.pyenv/versions/3.6.8/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
r = wait([self], timeout)
File "/root/.pyenv/versions/3.6.8/lib/python3.6/multiprocessing/connection.py", line 911, in wait
ready = selector.select(timeout)
File "/root/.pyenv/versions/3.6.8/lib/python3.6/selectors.py", line 376, in select
fd_event_list = self._poll.poll(timeout)
KeyboardInterrupt
Does anyone have any idea? @wayi @AndreaSottana @ptrblck |
st175513 | Looks like uneven inputs to me.
Can you add the no_sync context manager to disable allreduce in DDP?
https://pytorch.org/docs/stable/_modules/torch/nn/parallel/distributed.html#DistributedDataParallel.no_sync 5
This way you shouldn’t have any gradient synchronization, and you can check if this issue still occurs. |
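A minimal sketch of that suggestion, with ddp_model, criterion, optimizer, and the data loop standing in for the actual training code: wrapping forward/backward in no_sync() skips the gradient allreduce, which helps check whether the hang comes from gradient synchronization.
for data, target in train_loader:
    with ddp_model.no_sync():            # gradients stay local, no allreduce
        output = ddp_model(data)
        loss = criterion(output, target)
        loss.backward()
    optimizer.step()                     # updates now use only the local gradients
    optimizer.zero_grad()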
st175514 | Another suggestion is enabling debug mode by:
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL" |
st175515 | – this is no longer a pytorch issue. but you can find some helpful tips for ddp pytorch. thanks
hi,
i am using ddp.
on a single machine (node=1) w/ many gpus, it is fine.
but with many nodes w/many gpus, i find an issue with file writing.
assuming i run a job where each gpu handles a process.
if:
case 1: each gpu is located on a different node (never 2 gpus on the same node), or
case 2: every gpu is located on the same node,
there is no issue.
let’s say we want to write a tmp file that every process will need later.
in case 1, all of them can write without worrying about writing to the same file simultaneously.
in case 2, only the master writes.
but the problem is when some gpus are located on the same node.
in that case, many processes could attempt to write to the same file, which is a problem.
questions:
1- how to properly deal with the mixed case?
2- is there a way to know if the process is a node master?
for the second question, which could solve q1, one needs to check the local rank.
this is useful when writing files such as:
copies of training data
checkpoints.
because processes on a node cannot see the disks of other nodes.
on a node, only one process needs to write.
a node master needs to be designated.
also, a global master can be designated as well to handle unique operations that need to be done only once, such as logging.
thanks |
st175516 | If you need different processes on the same node to write to a combined single file, then you can first call gather or gather_object to collect all the outputs, and then write them to the file. Or you can just let all the processes collectively write to a sharded file, where each shard is a single file output by one process.
If you only want the master to write, you can specify the condition torch.distributed.get_rank() == 0. This is usually good enough for checkpointing the model, since DDP guarantees the model on each process is the same. |
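A short sketch of both points, assuming a launcher such as torchrun (or torch.distributed.launch with --use_env) that sets LOCAL_RANK; ddp_model and prepare_node_local_data() are placeholders for your model and your per-node I/O routine:
import os
import torch
import torch.distributed as dist

# only the global master (rank 0) writes the checkpoint
if dist.get_rank() == 0:
    torch.save(ddp_model.state_dict(), "checkpoint.pth")

# only one process per node (the "node master") does node-local I/O
local_rank = int(os.environ["LOCAL_RANK"])
if local_rank == 0:
    prepare_node_local_data()            # hypothetical: copy/decompress data onto the node's local disk
dist.barrier()                           # everyone waits until the node master is done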
st175517 | yes, for the sync part, that’s what i was doing (the model syncs itself using ddp, but for other objects i use torch.distributed.all_gather). so, this is fine.
for writing, i allow only the node master to write.
for the last point, this is fine on paper but it does not work… not because of ddp or pytorch but because of slurm.
i ask every node master to write/bring/decompress some data from somewhere else to the node’s local disk. but in the end, even though they ran all the instructions, the processes don’t find the decompressed files. i wrote to the server IT staff asking for some explanations. waiting…
so, i’ll face the same issue when writing checkpoints, or any file in each process… later when asked to reload the checkpoint, it will be missing.
thanks |
st175518 | I think the answer now is quite specific to the cluster and not quite relevant to PyTorch. At the high level, I think you need to write the output to a dedicated distributed/shared file system or directory instead of the local disk of each node. Better ask the cluster admins for the right output destination. |
st175519 | yes, pytorch is needed only to control which process is doing the writing
the rest is cluster dependent.
i/o on the local disk is the fastest and is the way recommended by the admins. writing to a network disk or other shared storage is slow and can be extremely slow. for multi-node jobs, they provide a way to dispatch the data across all nodes using srun in a job script, but it is not clear how to do that from inside python code. each process needs to do some i/o operations. it has to be done right in this case. i wrote to the admins. no answer yet.
so, yes, this question does not concern pytorch anymore. i will edit the question’s header to clarify.
thanks. |
st175520 | I converted the training code from DataParallel to DistributedDataParallel. It does not raise any bugs during training, but it does not print any logs or keep running.
Could you show me what is wrong with my code?
This is my code.
import torch
import argparse
import os, time
import json, tqdm
from utils import find_highest_score_answer, load_feature_from_file, create_logger
from transformers import AutoModelForQuestionAnswering
from transformers import AutoTokenizer
def inference(model, data, tokenizer):
    device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
    model.eval()
    results = []
    for _, batch in tqdm.tqdm(enumerate(data)):
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)
        token_type_ids = batch["token_type_ids"].to(device)
        bbox = batch["bbox"].to(device)
        image = batch["image"].to(device)
        outputs = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids,
                        bbox=bbox, image=image)
        start_logits = outputs.start_logits.detach().cpu().numpy()
        end_logits = outputs.end_logits.detach().cpu().numpy()
        start_indices, end_indices = find_highest_score_answer(
            start_scores=start_logits, end_scores=end_logits)
        input_ids = input_ids.cpu().numpy()
        question_ids = batch["question_id"].detach().cpu().numpy().tolist()
        for question_id, input_id, s, e in zip(question_ids, input_ids, start_indices, end_indices):
            predicted_answer = tokenizer.decode(input_id[s:e+1])
            decoding_string = tokenizer.decode(input_id)
            question = decoding_string[decoding_string.find('[CLS]')+5:decoding_string.find('[SEP]')]
            results.append({
                "questionId": question_id,
                "question": question,
                "answer": predicted_answer,
            })
    return results

def main(args):
    output_dir = os.path.join(args['output_dir'], args['weights'].split("/")[-2])
    if not os.path.exists(output_dir):
        try:
            print("Creating {} directory".format(output_dir))
            os.mkdir(output_dir)
        except:
            print("INVALID OUTPUT DIRECTORY")
            exit(0)
    logger = create_logger(file_path=os.path.join(output_dir, 'inference.log'))
    # Load dataset
    logger.info("Loading dataset from {} ...".format(args['input_dir']))
    eval_data = load_feature_from_file(path=args['input_dir'], batch_size=2)
    logger.info("The number of samples is {} ...".format(len(eval_data.dataset)))
    # Load model
    logger.info("Loading model from {} ...".format(args['weights']))
    model = AutoModelForQuestionAnswering.from_pretrained(args['model']).cuda()
    # model.load_state_dict(torch.load(args['weights'], map_location='cuda'))
    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(args['model'])
    # Inference
    logger.info("Start inference ...")
    start_time = time.time()
    results = inference(model=model, data=eval_data, tokenizer=tokenizer)
    end_time = time.time()
    logger.info("Total inference time {} seconds".format(end_time-start_time))
    # save inference results to disk
    output_file = os.path.join(output_dir, args['input_dir'].split("/")[-2] + '.json')
    with open(output_file, 'w') as f:
        json.dump(results, f, indent=4)
    logger.info("DONE ! Check the inference results at {}".format(output_file))

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Inference on DocVQA dataset.')
    parser.add_argument('--input_dir', required=True,
                        help='The input feature extracted from DocVQA dataset. It can be the train/val/test subset.',
                        )
    parser.add_argument('--model', default='microsoft/layoutlmv2-base-uncased',
                        help='The model architecture.'
                        )
    parser.add_argument('--weights', required=True,
                        help='The path to model weights',
                        )
    parser.add_argument('--output_dir', required=True,
                        help='The output directory'
                        )
    args = vars(parser.parse_args())
    main(args)
The output in the terminal looks like below.
(transformer_env) root@ae94a4e6c92d:/mlcv/WorkingSpace/NCKH/tiennv/vqa_thesis/docvqa/libs/layoutlmv2# CUDA_VISIBLE_DEVICES=1,2 python train.py --work_dir ./runs/train/test_multi-gpus --train_config default_config
2021-09-26 10:11:49,801 - INFO - Loading training configuration …
2021-09-26 10:11:49,802 - INFO - Configuration: {‘optimizer’: <class ‘torch.optim.adam.Adam’>, ‘lr’: 0.0001, ‘epochs’: 2, ‘batch_size’: 2, ‘momentum’: 0.9, ‘eval_freq’: 1, ‘save_freq’: 1, ‘num_workers’: 4}
2021-09-26 10:11:49,803 - INFO - Loading training dataset from /mlcv/Databases/DocVQA_2020-21/task_1/extracted_features/layoutlmv2/train …
2021-09-26 10:11:49,953 - INFO - Loading validation dataset from /mlcv/Databases/DocVQA_2020-21/task_1/extracted_features/layoutlmv2/val …
2021-09-26 10:11:49,977 - INFO - Training size: 39456 - Validation size: 5344
2021-09-26 10:11:49,978 - INFO - Loading pre-training model from microsoft/layoutlmv2-base-uncased checkpoint
Some weights of the model checkpoint at microsoft/layoutlmv2-base-uncased were not used when initializing LayoutLMv2ForQuestionAnswering: [‘layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.num_batches_tracked’, 
‘layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.num_batches_tracked’, 
‘layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.num_batches_tracked’, ‘layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.num_batches_tracked’]
This IS expected if you are initializing LayoutLMv2ForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing LayoutLMv2ForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LayoutLMv2ForQuestionAnswering were not initialized from the model checkpoint at microsoft/layoutlmv2-base-uncased and are newly initialized: [‘qa_outputs.weight’, ‘qa_outputs.bias’, ‘layoutlmv2.visual_segment_embedding’]
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Running DDP with model parallel example on cuda:0 device
Running DDP with model parallel example on cuda:1 device
GPUs usages for model: 6721 Mb
Epoch 1/2
GPUs usages for model: 6721 Mb
Epoch 1/2 |
st175521 | I guess this is very similar to the question Error when training AutoModelForQuesionAnswering with Distribute Data Parallel? 1
Please check my reply there. |
st175522 | I converted the training code from DataParallel to DistributedDataParallel. It does not raise any bugs during training, but it does not print any logs or keep running. Could you show me what is wrong with my code?
This is my code, and the distributed data parallel reference is Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.9.1+cu102 documentation
"""
Usages:
CUDA_VISIBLE_DEVICES=0 python train.py --train_config default_config --work_dir runs/train/layoutlmv2-base-uncased_50e/
"""
import argparse
from torch import distributed as dist
from transformers import AutoModelForQuestionAnswering
import torch.nn as nn
from utils import create_logger, get_gpu_memory_map, load_feature_from_file, setup, cleanup
from config import TRAIN_FEATURE_PATH, VAL_FEATURE_PATH, MODEL_CHECKPOINT, TRAINING_CONFIGs
import numpy as np
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp
def train(rank, model, train_data, val_data, world_size,
epochs, optimizer, lr, save_freq,
eval_freq, work_dir):
device = rank
print("Running DDP with model parallel example on cuda:{} device".format(rank))
# logger.info("Running DDP with model parallel example on cuda:{} device".format(rank))
setup(rank, world_size)
GPU_usage_before = get_gpu_memory_map()
model = model.to(rank)
model = DDP(model, device_ids=[rank], find_unused_parameters=True)
gpus_usage = np.sum(get_gpu_memory_map() - GPU_usage_before)
print("GPUs usages for model: {} Mb".format(gpus_usage))
# logger.info("GPUs usages for model: {} Mb".format(gpus_usage))
optimizer = optimizer(model.parameters(), lr=lr)
model.train()
min_valid_loss = np.inf
idx = 1
for epoch in range(1, epochs):
print("Epoch {}/{}".format(epoch, epochs))
# logger.info("Epoch {}/{}".format(epoch, epochs))
train_loss = 0.0
for _, train_batch in enumerate(train_data):
input_ids = train_batch["input_ids"].to(device)
attention_mask = train_batch["attention_mask"].to(device)
token_type_ids = train_batch["token_type_ids"].to(device)
bbox = train_batch["bbox"].to(device)
image = train_batch["image"].to(device)
start_positions = train_batch["start_positions"].to(device)
end_positions = train_batch["end_positions"].to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids,
bbox=bbox, image=image, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
optimizer.step()
train_loss += loss.item()
# Evaluate current model on entire validation dataset after each `eval_freq` iterations
if idx % eval_freq == 1:
val_loss = 0.0
model.eval()
for _, val_batch in enumerate(val_data):
# val_batch = val_batch.to(device)
input_ids = val_batch["input_ids"].to(device)
attention_mask = val_batch["attention_mask"].to(device)
token_type_ids = val_batch["token_type_ids"].to(device)
bbox = val_batch["bbox"].to(device)
image = val_batch["image"].to(device)
start_positions = val_batch["start_positions"].to(device)
end_positions = val_batch["end_positions"].to(device)
outputs = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids,
bbox=bbox, image=image, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
# Calculate Loss
val_loss += loss.item()
print("Iterations: {:<6} - epoch: {:<3} - train_loss: {:<6} - val_loss: {:<6}".format(idx, epoch, train_loss/eval_freq, val_loss/len(val_data)))
print("Iterations: {:<6} - epoch: {:<3} - train_loss: {:<6} - val_loss: {:<6}".format(idx, epoch, train_loss/eval_freq, val_loss/len(val_data)))
# logger.info("Iterations: {:<6} - epoch: {:<3} - train_loss: {:<6} - val_loss: {:<6}".format(idx, epoch, train_loss/eval_freq, val_loss/len(val_data)))
# loss_log.info("Iterations: {:<6} - epoch: {:<3} - train_loss: {:<6} - val_loss: {:<6}".format(idx, epoch, train_loss/eval_freq, val_loss/len(val_data)))
if min_valid_loss > val_loss/len(val_data):
print("Found best model !! Validation loss descreased from {} to {}".format(min_valid_loss, val_loss/len(val_data)))
# logger.info("Found best model !! Validation loss descreased from {} to {}".format(min_valid_loss, val_loss/len(val_data)))
torch.save(model.state_dict(), os.path.join(work_dir, 'best'+'.pth'))
min_valid_loss = val_loss/len(val_data)
# Save model each save_freq iteration
if idx % save_freq == 1:
print("Saving model to {}".format(os.path.join(work_dir, str(idx).zfill(5)+'.pth')))
# logger.info("Saving model to {}".format(os.path.join(work_dir, str(idx).zfill(5)+'.pth')))
torch.save(model.state_dict(), os.path.join(work_dir, str(idx).zfill(5)+'.pth'))
dist.barrier()
# Reset training loss
train_loss = 0.0
idx += 1
# logger.info("Done !")
# logger.info("The minimum on validation {}".format(min_valid_loss))
print("DONE !")
print("The minimum on validation {}".format(min_valid_loss))
cleanup()
return model
def main(args):
gpu_ids = [i for i in range(torch.cuda.device_count())]
torch.cuda.set_device(gpu_ids[0])
if not os.path.exists(args['work_dir']):
os.mkdir(args['work_dir'])
# Create logger
loss_log = create_logger(os.path.join(args["work_dir"], 'loss.log'))
logger = create_logger(os.path.join(args['work_dir'], 'log.log'))
logger.info('Loading training configuration ...')
config = TRAINING_CONFIGs[args['train_config']]
optimizer, momentum, lr, epochs, batch_size,\
eval_freq, save_freq, num_workers = config['optimizer'], config['momentum'], \
config['lr'], config['epochs'], \
config['batch_size'], config['eval_freq'], config['save_freq'], config['num_workers']
logger.info("Configuration: {}".format(config))
# Check whether feature path file existing or not
if not os.path.exists(TRAIN_FEATURE_PATH):
logger.error("Invalid training feature path")
exit(0)
if not os.path.exists(VAL_FEATURE_PATH):
logger.error("Invalid validation feature path")
exit(0)
# Load data into program
logger.info("Loading training dataset from {} ...".format(TRAIN_FEATURE_PATH))
train_dataloader = load_feature_from_file(path=TRAIN_FEATURE_PATH,
batch_size=batch_size, num_workers=num_workers)
logger.info("Loading validation dataset from {} ...".format(VAL_FEATURE_PATH))
val_dataloader = load_feature_from_file(path=VAL_FEATURE_PATH,
batch_size=batch_size, num_workers=num_workers)
logger.info("Training size: {} - Validation size: {}".format(
len(train_dataloader.dataset), len(val_dataloader.dataset)))
logger.info("Loading pre-training model from {} checkpoint".format(MODEL_CHECKPOINT))
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_CHECKPOINT)
# Fine-tuning model
# trained_model = train(model=model, train_data=train_dataloader, val_data=val_dataloader,
# epochs=epochs, optimizer=optimizer, lr=lr, loss_log=loss_log, save_freq=save_freq,
# work_dir=args['work_dir'], logger=logger, eval_freq=eval_freq, gpu_ids=gpu_ids)
mp.spawn(train,
args=(model, train_dataloader, val_dataloader, len(gpu_ids),
epochs, optimizer, lr, save_freq,
eval_freq, args['work_dir']),
nprocs=len(gpu_ids),
join=True)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Fine tuning pre-training model on DocVQA data')
parser.add_argument('--work_dir', default='runs/train/train_1/',
help='The directory store model checkpoint and log file',
)
parser.add_argument('--train_config', default='default_config',
help='The training configurations: learning rate, batch size, epochs, optimizer, ...'
)
args = vars(parser.parse_args())
main(args)
The output in the terminal looks like below.
(transformer_env)root@ae94a4e6c92d:/mlcv/WorkingSpace/NCKH/tiennv/vqa_thesis/docvqa/libs/layoutlmv2# CUDA_VISIBLE_DEVICES=1,2 python train.py --work_dir ./runs/train/test_multi-gpus --train_config default_config
2021-09-26 10:11:49,801 - INFO - Loading training configuration ...
2021-09-26 10:11:49,802 - INFO - Configuration: {'optimizer': <class 'torch.optim.adam.Adam'>, 'lr': 0.0001, 'epochs': 2, 'batch_size': 2, 'momentum': 0.9, 'eval_freq': 1, 'save_freq': 1, 'num_workers': 4}
2021-09-26 10:11:49,803 - INFO - Loading training dataset from /mlcv/Databases/DocVQA_2020-21/task_1/extracted_features/layoutlmv2/train ...
2021-09-26 10:11:49,953 - INFO - Loading validation dataset from /mlcv/Databases/DocVQA_2020-21/task_1/extracted_features/layoutlmv2/val ...
2021-09-26 10:11:49,977 - INFO - Training size: 39456 - Validation size: 5344
2021-09-26 10:11:49,978 - INFO - Loading pre-training model from microsoft/layoutlmv2-base-uncased checkpoint
Some weights of the model checkpoint at microsoft/layoutlmv2-base-uncased were not used when initializing LayoutLMv2ForQuestionAnswering: ['layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.num_batches_tracked']
- This IS expected if you are initializing LayoutLMv2ForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LayoutLMv2ForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LayoutLMv2ForQuestionAnswering were not initialized from the model checkpoint at microsoft/layoutlmv2-base-uncased and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias', 'layoutlmv2.visual_segment_embedding']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Running DDP with model parallel example on cuda:0 device
Running DDP with model parallel example on cuda:1 device
GPUs usages for model: 6721 Mb
Epoch 1/2
GPUs usages for model: 6721 Mb
Epoch 1/2
It still does not print anything more … Please help me … |
st175523 | It seems that the program got stuck before the eval phase, as you didn’t see anything printed out from the eval phase.
I guess the issue is this line: torch.cuda.set_device(gpu_ids[0]). Since gpu_ids[0] is always 0, you set the device to 0 even for non-master processes, and hence the collective communication issued by DDP will hang.
I think you can try: 1) remove torch.cuda.set_device(gpu_ids[0]) from main method, and 2) add torch.cuda.set_device(rank) at the beginning of train method. |
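To make that concrete, here is a minimal, self-contained sketch of the pattern; the ToyModel/run_worker names and the master address/port are illustrative placeholders rather than parts of the script above.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class ToyModel(nn.Module):  # placeholder model, not the LayoutLMv2 model above
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)

def run_worker(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    torch.cuda.set_device(rank)                      # 2) bind this process to GPU `rank`
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    model = ToyModel().to(rank)
    ddp_model = DDP(model, device_ids=[rank])        # DDP now uses the per-rank device
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    for _ in range(3):                               # dummy training steps
        optimizer.zero_grad()
        loss = ddp_model(torch.randn(8, 10, device=rank)).sum()
        loss.backward()
        optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()           # 1) no torch.cuda.set_device in the parent
    mp.spawn(run_worker, args=(world_size,), nprocs=world_size, join=True)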
st175524 | Hello,
I encountered a very strange problem that repeatedly happened.
I have a training started with the following command on a Linux server:
oarsub -l "host=1/gpuid=4,walltime=480:0:0" \
"/home/username/.env/py37/bin/python -m torch.distributed.launch --nproc_per_node=4 --use_env main.py --coco_path /data/coco --output_dir /home/username/code/output --resume /home/username/code/output/checkpoint.pth"
After a few hours, the training was killed. And this happened every time I restarted it. Our system admin could not figure out what was wrong.
The std error messages (content of OAR.<jobID>.stderr) are the following:
Traceback (most recent call last):
File "/home/username/.local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/username/.local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/username/.env/py37/lib/python3.7/site-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "/home/username/.env/py37/lib/python3.7/site-packages/torch/distributed/launch.py", line 259, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/username/.env/py37/bin/python', '-u', 'main.py', '--coco_path', '/data/coco', '--output_dir', '/home/username/code/output', '--resume', '/home/username/code/output/checkpoint.pth']' died with <Signals.SIGKILL: 9>.
In the std output file OAR.<jobID>.stdout, the last lines are the following:
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
This message is displayed only at the end of OAR.<jobID>.stdout, when the crash happened, so maybe it has something to do with the crash.
Could you please help? Thank you very much in advance! |
st175525 | f10w:
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
This is printed immediately after you run launch.py. See the code below:
github.com
pytorch/pytorch/blob/0fc0a9308ab1e8f6b65d2565c45f54afb449d438/torch/distributed/launch.py#L215-L222
if 'OMP_NUM_THREADS' not in os.environ and args.nproc_per_node > 1:
current_env["OMP_NUM_THREADS"] = str(1)
print("*****************************************\n"
"Setting OMP_NUM_THREADS environment variable for each process "
"to be {} in default, to avoid your system being overloaded, "
"please further tune the variable for optimal performance in "
"your application as needed. \n"
"*****************************************".format(current_env["OMP_NUM_THREADS"]))
During the few hours when the job was running, did the DistributedDataParallel training make progress as expected? You can, e.g., print some logs in every iteration to check this.
And since the command line contains a path to a checkpoint, I assume the training job would write into that checkpoint file periodically? Did the job successfully generate any checkpoint?
st175526 | Thanks for your reply. The training made progress (and saved checkpoints) for a few hours before being killed.
Here’s the beginning of the log file (head -n 20 OAR.<jobID>.stdout):
| distributed init (rank 2): env://
| distributed init (rank 3): env://
| distributed init (rank 0): env://
| distributed init (rank 1): env://
git:
sha: ae03a2d6e52a9ec1b67f85437d0a275c5abbe9ac, status: has uncommited changes, branch: master
Namespace(aux_loss=True, backbone=‘resnet50’, batch_size=2, bbox_loss_coef=5, clip_max_norm=0.1, coco_panoptic_path=None, coco_path=’/data/coco’, dataset_file=‘coco’, dec_layers=6, device=‘cuda’, dice_loss_coef=1, dilation=False, dim_feedforward=2048, dist_backend=‘nccl’, dist_url=‘env://’, distributed=True, dropout=0.1, enc_layers=6, eos_coef=0.1, epochs=300, eval=False, frozen_weights=None, giou_loss_coef=2, gpu=0, hidden_dim=256, lr=0.0001, lr_backbone=1e-05, lr_drop=200, mask_loss_coef=1, masks=False, nheads=8, num_queries=100, num_workers=2, output_dir=’/home/username/code/output’, position_embedding=‘sine’, pre_norm=False, rank=0, remove_difficult=False, resume=’/home/username/code/output/checkpoint.pth’, seed=42, set_cost_bbox=5, set_cost_class=1, set_cost_giou=2, start_epoch=0, weight_decay=0.0001, world_size=4)
number of params: 41302368
loading annotations into memory…
Done (t=22.53s)
creating index…
index created!
loading annotations into memory…
Done (t=0.75s)
creating index…
index created!
Start training
Epoch: [23] [ 0/14786] eta: 7:42:07 lr: 0.000100 class_error: 22.68 loss: 10.4300 (10.4300) loss_bbox: 0.3688 (0.3688) loss_bbox_0: 0.3812 (0.3812) loss_bbox_1: 0.4038 (0.4038) loss_bbox_2: 0.3718 (0.3718) loss_bbox_3: 0.3781 (0.3781) loss_bbox_4: 0.3690 (0.3690) loss_ce: 0.5279 (0.5279) loss_ce_0: 0.6643 (0.6643) loss_ce_1: 0.5894 (0.5894) loss_ce_2: 0.5849 (0.5849) loss_ce_3: 0.5311 (0.5311) loss_ce_4: 0.5083 (0.5083) loss_giou: 0.8055 (0.8055) loss_giou_0: 0.8359 (0.8359) loss_giou_1: 0.7730 (0.7730) loss_giou_2: 0.7711 (0.7711) loss_giou_3: 0.7646 (0.7646) loss_giou_4: 0.8013 (0.8013) cardinality_error_unscaled: 8.8750 (8.8750) cardinality_error_0_unscaled: 13.2500 (13.2500) cardinality_error_1_unscaled: 12.7500 (12.7500) cardinality_error_2_unscaled: 8.1250 (8.1250) cardinality_error_3_unscaled: 8.1250 (8.1250) cardinality_error_4_unscaled: 8.6250 (8.6250) class_error_unscaled: 22.6786 (22.6786) loss_bbox_unscaled: 0.0738 (0.0738) loss_bbox_0_unscaled: 0.0762 (0.0762) loss_bbox_1_unscaled: 0.0808 (0.0808) loss_bbox_2_unscaled: 0.0744 (0.0744) loss_bbox_3_unscaled: 0.0756 (0.0756) loss_bbox_4_unscaled: 0.0738 (0.0738) loss_ce_unscaled: 0.5279 (0.5279) loss_ce_0_unscaled: 0.6643 (0.6643) loss_ce_1_unscaled: 0.5894 (0.5894) loss_ce_2_unscaled: 0.5849 (0.5849) loss_ce_3_unscaled: 0.5311 (0.5311) loss_ce_4_unscaled: 0.5083 (0.5083) loss_giou_unscaled: 0.4027 (0.4027) loss_giou_0_unscaled: 0.4180 (0.4180) loss_giou_1_unscaled: 0.3865 (0.3865) loss_giou_2_unscaled: 0.3855 (0.3855) loss_giou_3_unscaled: 0.3823 (0.3823) loss_giou_4_unscaled: 0.4006 (0.4006) time: 1.8753 data: 0.4317 max mem: 2509
Epoch: [23] [ 10/14786] eta: 2:39:48 lr: 0.000100 class_error: 30.30 loss: 11.0897 (10.6174) loss_bbox: 0.3473 (0.3555) loss_bbox_0: 0.3888 (0.3989) loss_bbox_1: 0.3834 (0.3796) loss_bbox_2: 0.3662 (0.3772) loss_bbox_3: 0.3590 (0.3603) loss_bbox_4: 0.3520 (0.3548) loss_ce: 0.5279 (0.5271) loss_ce_0: 0.6043 (0.6137) loss_ce_1: 0.5870 (0.5653) loss_ce_2: 0.5627 (0.5542) loss_ce_3: 0.5400 (0.5402) loss_ce_4: 0.5083 (0.5214) loss_giou: 0.8325 (0.8231) loss_giou_0: 0.9057 (0.8922) loss_giou_1: 0.8793 (0.8482) loss_giou_2: 0.8800 (0.8514) loss_giou_3: 0.8392 (0.8296) loss_giou_4: 0.8540 (0.8247) cardinality_error_unscaled: 9.1250 (10.3182) cardinality_error_0_unscaled: 13.2500 (13.6477) cardinality_error_1_unscaled: 12.7500 (12.3068) cardinality_error_2_unscaled: 9.6250 (10.9091) cardinality_error_3_unscaled: 9.1250 (10.1705) cardinality_error_4_unscaled: 9.1250 (10.0341) class_error_unscaled: 22.6786 (23.9301) loss_bbox_unscaled: 0.0695 (0.0711) loss_bbox_0_unscaled: 0.0778 (0.0798) loss_bbox_1_unscaled: 0.0767 (0.0759) loss_bbox_2_unscaled: 0.0732 (0.0754) loss_bbox_3_unscaled: 0.0718 (0.0721) loss_bbox_4_unscaled: 0.0704 (0.0710) loss_ce_unscaled: 0.5279 (0.5271) loss_ce_0_unscaled: 0.6043 (0.6137) loss_ce_1_unscaled: 0.5870 (0.5653) loss_ce_2_unscaled: 0.5627 (0.5542) loss_ce_3_unscaled: 0.5400 (0.5402) loss_ce_4_unscaled: 0.5083 (0.5214) loss_giou_unscaled: 0.4162 (0.4116) loss_giou_0_unscaled: 0.4528 (0.4461) loss_giou_1_unscaled: 0.4397 (0.4241) loss_giou_2_unscaled: 0.4400 (0.4257) loss_giou_3_unscaled: 0.4196 (0.4148) loss_giou_4_unscaled: 0.4270 (0.4123) time: 0.6489 data: 0.0493 max mem: 3574
And here’s the end (tail -n 4 OAR.<jobID>.stdout):
Epoch: [23] [ 4730/14786] eta: 1:07:17 lr: 0.000100 class_error: 40.61 loss: 9.2283 (10.3881) loss_bbox: 0.3610 (0.3562) loss_bbox_0: 0.4111 (0.4107) loss_bbox_1: 0.3788 (0.3752) loss_bbox_2: 0.3783 (0.3661) loss_bbox_3: 0.3680 (0.3598) loss_bbox_4: 0.3660 (0.3571) loss_ce: 0.4747 (0.5366) loss_ce_0: 0.5628 (0.6116) loss_ce_1: 0.5367 (0.5860) loss_ce_2: 0.5133 (0.5601) loss_ce_3: 0.4722 (0.5455) loss_ce_4: 0.4595 (0.5364) loss_giou: 0.7070 (0.7790) loss_giou_0: 0.7854 (0.8533) loss_giou_1: 0.7170 (0.8021) loss_giou_2: 0.7175 (0.7903) loss_giou_3: 0.7327 (0.7818) loss_giou_4: 0.7180 (0.7802) cardinality_error_unscaled: 7.7500 (9.0408) cardinality_error_0_unscaled: 10.8750 (11.5247) cardinality_error_1_unscaled: 10.1250 (11.1548) cardinality_error_2_unscaled: 8.3750 (9.9196) cardinality_error_3_unscaled: 7.3750 (9.3645) cardinality_error_4_unscaled: 7.7500 (9.0276) class_error_unscaled: 30.4464 (32.1511) loss_bbox_unscaled: 0.0722 (0.0712) loss_bbox_0_unscaled: 0.0822 (0.0821) loss_bbox_1_unscaled: 0.0758 (0.0750) loss_bbox_2_unscaled: 0.0757 (0.0732) loss_bbox_3_unscaled: 0.0736 (0.0720) loss_bbox_4_unscaled: 0.0732 (0.0714) loss_ce_unscaled: 0.4747 (0.5366) loss_ce_0_unscaled: 0.5628 (0.6116) loss_ce_1_unscaled: 0.5367 (0.5860) loss_ce_2_unscaled: 0.5133 (0.5601) loss_ce_3_unscaled: 0.4722 (0.5455) loss_ce_4_unscaled: 0.4595 (0.5364) loss_giou_unscaled: 0.3535 (0.3895) loss_giou_0_unscaled: 0.3927 (0.4267) loss_giou_1_unscaled: 0.3585 (0.4011) loss_giou_2_unscaled: 0.3587 (0.3952) loss_giou_3_unscaled: 0.3664 (0.3909) loss_giou_4_unscaled: 0.3590 (0.3901) time: 0.4095 data: 0.0113 max mem: 7106
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
The OMP message is at the very end of the log file, so it doesn’t seem to be printed right after launch.
Please let me know if you need further information. Thanks! |
st175527 | Hmm, this is weird. One possibility could be that the print buffer from the main process wasn't full at the beginning and was only flushed on exit, leading to the message being shown at the end. But I am not sure if this is the case.
One reason for this behavior might be that some DDP process hit OOM (or some other error) after a while and crashed, causing the other DDP processes to hang. Did you try to do any try-except around DDP in main.py? If so, you might want to try https://pytorch.org/elastic, as a try-except in one process could lead to DDP communication de-sync/hang/timeout.
st175528 | In the code, the print function is tweaked to print on the master process only (and do nothing on the others), so the content of OAR.<jobID>.stdout comes from the master. The thing I’m not sure about is the file OAR.<jobID>.stderr. If there is an OOM error, on any process, then the error message should be added to OAR.<jobID>.stderr, right? Because this has nothing to do with the print function I guess. |
st175529 | Hi,
I faced the same problem as you. This error is usually caused by the compilation process of various libraries. I created a brand-new environment using Anaconda and trained again, and the error disappeared.
I hope this information helps you.
st175530 | Yes, that is very clever. I solved the problem by following your answer, thank you.
st175531 | When elastic training uses c10d as the backend store and the master node fails, will the program restart?
Will elastic choose a new master node if I use etcd ? |
st175532 | As of today we do not have a built-in failover mechanism, so if the master node fails, it will cause the training to terminate.
st175533 | Will elastic choose a new master node if I use etcd ?
Sorry, I missed the second question. Yes, if you use etcd, then if the worker on the master node fails, the agents will try to establish another round of rendezvous (up to the --max-restarts option you specified).
In summary, c10d is ideal if you don't want to deal with installing and running a third-party dependency, while etcd is ideal if you care more about fault tolerance and failover.
st175534 | Thank you very much for your reply!
After reading the source code, I understood some of the execution mechanisms. Your reply confirms that etcd is a better choice for me.
I will deploy an etcd server on a stable CPU machine, so that I can dynamically increase or decrease nodes without worrying about whether the master node fails, as long as the etcd server does not fail.
st175535 | I am curious about one more thing.
If I use c10d to run elastic training and set --nproc_per_node to zero on a CPU machine, can a similar function to the etcd backend be achieved?
st175536 | Unfortunately not. You will still have a single point of failure even if the c10d store runs on a separate host. If that host fails, you would end up with a failure of the whole job. By the way you would have the same problem if you had a single etcd instance. The advantage you have with etcd is that you can set up a small cluster (2 or more machines) of etcd servers for failover handling. |
st175537 | Oh, I see. The most important thing about etcd is its distributed reliable key-value store. When using etcd, it is meaningless to have only one etcd instance. |
st175538 | Thank you for your patient reply! It’s a pleasure to communicate with you. I hope to discuss with you more in the future. Thank you again! |
st175539 | I have written a simple net to test DistributedDataParallel(), but
CUDA_VISIBLE_DEVICES="0,1" python -m torch.distributed.launch --nproc_per_node 2 test_bert.py
with two GPUs it doesn't take less time. So under what circumstances does DistributedDataParallel() not help? Or is there something wrong with my code?
import torch.nn as nn
import transformers
import config
import pandas as pd
from sklearn.utils import shuffle
import torch.distributed as dist
from torch.utils.data.distributed import DistributedSampler
from torch.nn.parallel import DistributedDataParallel as DDP
import torch
from tqdm import tqdm
import os
LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))
class Dataset:
def __init__(self, review, target):
self.review = review
self.target = target
self.tokenizer = transformers.BertTokenizer.from_pretrained(config.BERT_PATH)
self.max_len = 512
def __len__(self):
return len(self.review)
def __getitem__(self, item):
review = str(self.review[item])
review = " ".join(review.split())
inputs = self.tokenizer.encode_plus(
review,
None,
add_special_tokens=True,
max_length=self.max_len,
padding='max_length',
truncation=True
)
ids = inputs["input_ids"]
mask = inputs["attention_mask"]
token_type_ids = inputs["token_type_ids"]
return {
"ids": torch.tensor(ids, dtype=torch.float),
"attention_mask": torch.tensor(mask, dtype=torch.float),
"token_type_ids": torch.tensor(token_type_ids, dtype=torch.float),
"review": self.review[item],
"targets": torch.tensor(self.target[item], dtype=torch.float),
}
def loss_fn(outputs, targets):
BCEcls = nn.BCEWithLogitsLoss().to(LOCAL_RANK)
return BCEcls(outputs, targets.view(-1,1))
class DKANET(nn.Module):
def __init__(self,concept_emd_glove=None,):
super(DKANET, self).__init__()
self.bert = transformers.BertModel.from_pretrained(config.BERT_PATH)
self.out = nn.Linear(768, 1)
def forward(self,sentence):
outputs = self.bert(input_ids=sentence['ids'],
attention_mask=sentence['attention_mask'],
token_type_ids=sentence['token_type_ids'])
o = self.out(outputs.pooler_output)
return o
def train(data_loader, model, optmizer):
model.train()
pbar = enumerate(data_loader)
if RANK in [-1, 0]:
pbar = tqdm(pbar, total=len(data_loader))
for index, d in pbar:
ids = d["ids"]
attention_mask = d['attention_mask']
token_type_ids = d['token_type_ids']
target = d['targets']
ids = ids.to(LOCAL_RANK, dtype=torch.long)
attention_mask = attention_mask.to(LOCAL_RANK, dtype=torch.long)
token_type_ids = token_type_ids.to(LOCAL_RANK, dtype=torch.long)
target = target.to(LOCAL_RANK, dtype=torch.float)
sentence = {
"ids": ids,
"attention_mask":attention_mask,
"token_type_ids" : token_type_ids,
}
optmizer.zero_grad()
outputs = model(sentence)
loss = loss_fn(outputs, target)
if RANK in [-1, 0]:
pbar.set_description(f'loss{loss},sent:{format(loss,".3f")}')
loss.backward()
optmizer.step()
def run():
torch.cuda.set_device(LOCAL_RANK)
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo")
path_d = 'dataset/books/books.csv'
train_s = pd.read_csv(path_d)
train_s = shuffle(train_s)
train_s = train_s.reset_index(drop=True)
train_data = Dataset(train_s.review.values, train_s.sentiment.values)
datasampler_train = DistributedSampler(train_data, num_replicas=2, rank=0)
train_data = torch.utils.data.DataLoader(
train_data, num_workers=16,batch_size=config.TRAIN_BATCH_SIZE,sampler=datasampler_train)
model = DKANET().to(LOCAL_RANK)
model = nn.parallel.DistributedDataParallel(model,device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)
#### optimizer initializer
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
#######################
for epoch in range(config.EPOCHS):
train(train_data, model, optimizer)
def main():
run()
if WORLD_SIZE > 1 and RANK == 0:
_ = [print('Destroying process group... ', end=''), dist.destroy_process_group(), print('Done.')]
if __name__=='__main__':
main() |
st175540 | Hi, one issue I see is that you are spawning 16 data loading workers on each distributed process, for a total of 32 processes just for data loading; do you need that many workers? This might be creating too many processes on your machine, resulting in thrashing/slowdown. Could you try setting num_workers=0 and comparing the training speed of local vs distributed (ensuring the distributed training splits the data, which it appears it does correctly in the script)?
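When comparing the two settings, a small timing helper like the sketch below can help keep the measurement fair; model, data_loader, and step_fn are illustrative names, and the synchronize calls make sure pending CUDA kernels are included in the measurement.
import time
import torch

def timed_epoch(model, data_loader, step_fn):
    """Run one epoch and return wall-clock seconds, including pending CUDA work."""
    model.train()
    torch.cuda.synchronize()
    start = time.time()
    for batch in data_loader:
        step_fn(model, batch)          # forward + backward + optimizer step
    torch.cuda.synchronize()
    return time.time() - start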
st175541 | I did a couple of experiments; it seems that more GPUs take more time, and setting num_workers=0 doesn't help.
batch 8
gpus 2 num_worker 16 01:57
gpus 2 num_worker 4 01:56
gpus 2 num_worker 0 02:07
batch 8
gpus 1 num_worker 4 01:32
gpus 1 num_worker 0 01:43 |
st175542 | Additionally, "gpus 2 num_worker 16 01:57" means that setting num_workers=16 and using 2 V100s takes 01:57 min.
st175543 | Hello,
I have trained a torch model for NLP tasks and would like to perform some inference using a multi GPU machine (in this case with two GPUs).
Inside the processing code, I use this
dataset = TensorDataset(encoded_dict['input_ids'], encoded_dict['attention_mask'])
sampler = DistributedSampler(
dataset, num_replicas=args.nodes * args.gpus, rank=args.node_rank * args.gpus + gpu_number, shuffle=False
)
dataloader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
For those familiar with NLP, encoded_dict is the output from the tokenizer.batch_encode_plus function where the tokenizer is an instance of transformers.BertTokenizer.
The issue I’m having is that each GPU is doing predictions (i.e. inference) on a subset of the full dataset, and saving the predictions separately; for example, if I have a dataset with 1000 samples to predict, each GPU is predicting 500 of them. As a result, I have no way of knowing which samples out of the 1000 were predicted by which GPU, as their order is not preserved, therefore the model predictions are meaningless as I cannot trace each of them back to their input sample.
I have tried to save the dataloader instance (as a pickle) together with the predictions and then extract the input_ids by using dataloader.dataset.tensors; however this requires a tokeniser decoding step which I would rather avoid, as the tokenizer will have slightly changed the text (for example double whitespaces would be removed, words with dashes will have been split, and so on).
What is the cleanest way to save the input text samples together with their predictions when doing inference in distributed mode? |
st175544 | Thanks for pointing out this issue!
If you have some sort of identifier for your input (i.e. input_ids as you mentioned) can you build a mapping of {input_id: prediction} on each distributed rank? After inference on N inputs, this will be a dict of size N on each rank and you can use all_gather_object (torch.distributed.distributed_c10d — PyTorch 1.9.0 documentation) to get the entire input → prediction mapping across the world. Would something like this satisfy your use case? |
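A minimal sketch of that merging step, assuming the process group is already initialized and local_preds holds this rank's {input_id: prediction} dict (both names are illustrative):
import torch.distributed as dist

def gather_predictions(local_preds):
    """Merge the per-rank {input_id: prediction} dicts across all ranks."""
    world_size = dist.get_world_size()
    gathered = [None] * world_size
    dist.all_gather_object(gathered, local_preds)   # every rank receives every dict
    merged = {}
    for rank_preds in gathered:
        merged.update(rank_preds)
    return merged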
st175545 | Hi @rvarm1
Many thanks for your reply. I think your solution would work, the only complication being that input_ids is not really the best identifier, as it is the tokenised input text (effectively each sentence, which is a python string, is tokenised, i.e. converted into a list of integers), whereas I'd like to keep the original string as the id, which is complicated because the splitting of the data into different GPUs happens AFTER the tokenisation.
However, I’ve done some further debugging and I’ve noticed that the data are not actually randomly split across GPUs as I thought. If I set shuffle=False in the DistributedSampler then this happens:
in the case of two GPUs, GPU 0 and GPU 1, all the samples with even index (starting from 0) will be passed to GPU 0, and all those with odd index will be passed to GPU 1.
So for example, if you have 10 samples, whose indices are [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], then samples 0, 2, 4, 6, 8 will go to GPU 0 and samples 1, 3, 5, 7, 9 will to go GPU 1. Therefore this allows me to map the predictions back to the original text string samples by just using this ordering. Not sure if this is a neat solution, as keeping the original text string next to its prediction would be ideal, but at least it works.
N.B. Special case: As the two GPUs must be passed the SAME number of inputs, if the number of inputs is an odd number, for example we have 9 samples with indices [0, 1, 2, 3, 4, 5, 6, 7, 8], then GPU 0 will be passed samples 0, 2, 4, 6, 8 and GPU 1 will be passed samples 1, 3, 5, 7, 0 (in this exact order). In other words, the first sample with index 0 is repeated at the very end of the dataset to make sure each GPU has the same number of samples, in which case we can then write some code that drops the last prediction from GPU 1 as it is redundant.
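For reference, a small sketch that reproduces this index layout is below; it mirrors what DistributedSampler with shuffle=False does, so each rank's predictions can be mapped back to the original sample indices. The helper name indices_for_rank is illustrative.
import math

def indices_for_rank(dataset_len, world_size, rank):
    """Indices a rank sees under DistributedSampler(shuffle=False), including wrap-around padding."""
    total = math.ceil(dataset_len / world_size) * world_size
    padded = list(range(dataset_len)) + list(range(total - dataset_len))
    return padded[rank::world_size]

print(indices_for_rank(10, 2, 0))  # [0, 2, 4, 6, 8]
print(indices_for_rank(10, 2, 1))  # [1, 3, 5, 7, 9]
print(indices_for_rank(9, 2, 1))   # [1, 3, 5, 7, 0]  <- sample 0 repeated as padding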
st175546 | Hi, I am curious why require_forward_param_sync is set to True in DistributedDataParallel. After I manually set it to False, multi-GPU training on one node speeds up a lot. Since the gradients of each replica have already been synchronized, why do we need to synchronize the parameters in the forward?
st175547 | Solved by rvarm1 in post #2
One of the things this flag controls is whether to broadcast model buffers in each iteration to ensure that module buffers are synchronized across processes. You can evaluate the speed up of your model with broadcast_buffers=False and remove it if model accuracy permits. |
st175548 | One of the things this flag controls is whether to broadcast model buffers in each iteration to ensure that module buffers are synchronized across processes. You can evaluate the speed up of your model with broadcast_buffers=False and remove it if model accuracy permits. |
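A minimal sketch of passing that flag is below; it assumes the process group has already been initialized in this process, and the linear layer is just a placeholder for the real model.
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# assumes dist.init_process_group(...) has already been called in this process
rank = torch.cuda.current_device()
model = nn.Linear(10, 10).to(rank)
ddp_model = DDP(model, device_ids=[rank], broadcast_buffers=False)  # skip the per-iteration buffer broadcast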
st175549 | Thanks for your reply; setting broadcast_buffers to False indeed achieves the same speedup. Therefore, broadcast_buffers can be safely set to False if any one of the following holds:
(1) there is no buffer in the model;
(2) the buffer will never be updated.
is this correct? |
st175550 | Are you sure there are no buffers in your module? If that is the case it is surprising that you’re seeing a speedup because this synchronization would be a noop (there are no buffers to synchronize).
Can you confirm this by checking the self.named_buffers() attr of your nn.Module you’re passing into ddp? |
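For example, a quick helper for listing them (a minimal sketch; pass in whatever module you hand to DDP):
import torch.nn as nn

def list_buffers(module: nn.Module):
    """Print every registered buffer; prints nothing if the module has none."""
    for name, buf in module.named_buffers():
        print(name, tuple(buf.shape), buf.dtype)

# e.g. list_buffers(nn.BatchNorm2d(16)) shows running_mean, running_var, num_batches_tracked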
st175551 | There are some small buffers in my model, but they are just constant matrices; the synchronization might still take some time.
Another interesting speedup also confuses me a lot. In the Megatron-LM (GitHub - NVIDIA/Megatron-LM: Ongoing research training transformer language models at scale, including: BERT & GPT-2) package, the model is explicitly converted to float16 by calling something like model.half() before training with fp16 enabled, and by doing so the backward time is largely reduced (e.g. the time it takes to finish loss.backward()). I think the backward time should be agnostic to whether the model is float16 or not, since the gradients are fp16 when fp16 is enabled. Do you have any hint about why this may happen?
st175552 | I’m running Distributed Data Parallel example 3 in jupyter labs, and getting an error:
process 1 terminated with exit code 1
How can I fix it? Where should I look? I tried using "nccl" or "mpi" in dist.init_process_group, with no effect. Replacing the entire body of example() with pass: no effect.
Full stack trace:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-29-703537cd0120> in <module>
33 join=True)
34
---> 35 main()
<ipython-input-29-703537cd0120> in main()
28 def main():
29 world_size = 2
---> 30 mp.spawn(example,
31 args=(world_size,),
32 nprocs=world_size,
/usr/local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py in spawn(fn, args, nprocs, join, daemon, start_method)
197 ' torch.multiprocessing.start_process(...)' % start_method)
198 warnings.warn(msg)
--> 199 return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
/usr/local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
155
156 # Loop on join until it returns True or raises an exception.
--> 157 while not context.join():
158 pass
159
/usr/local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
108 )
109 else:
--> 110 raise Exception(
111 "process %d terminated with exit code %d" %
112 (error_index, exitcode)
Exception: process 1 terminated with exit code 1 |
st175553 | Thanks for posting @dyukha. I see that you are running the code inside a Jupyter notebook. I think the error you hit here is not a problem with PyTorch distributed; it is related to the incompatibility between the python multiprocessing module and Jupyter Notebook, because the multiprocessing module pickles data to send to processes. If you try the script in a normal python file it should work fine.
st175554 | wanchaol:
python multiprocessing module and Jupyter Notebook
You could try multiprocessing.Pool and see if that works: Multiprocessing on Python 3 Jupyter - Stack Overflow
st175555 | I use only one machine with multiple GPUs to train. In init_process_group(), I set world_size==1 and rank==0. I am not sure whether this is right for multi-GPU training on one node. It seems that the GPU usage is fine (about 100% on 2 GPUs). But when I want to gather the same tensor from different GPUs, dist.gather and dist.all_gather don't work; the error is like below when running dist.gather:
ValueError: ProcessGroupGloo::gather: Incorrect output list size 2. Output list size should be 1, same as size of the process group.
The environment is Windows, torch 1.7.1, gloo backend, cuda10.2. |
st175556 | Hi, if you're aiming to do distributed training across 2 GPUs you'd want to set world_size to 2, and the corresponding ranks would be 0 and 1.
The GPU utilization is at 100% across both GPUs possibly because both GPUs are operating independently with no distributed coordination between them.
Your call to all_gather is the right approach and the error message confirms that world_size is set incorrectly. Fixing the world_size issue should unblock your use case. |
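For reference, a minimal sketch of that setup on a single machine is below: one process per GPU, world_size equal to the number of processes, and an output list whose length matches world_size. The worker name and the address/port are illustrative, and on Windows a file:// init_method may be needed instead of tcp://.
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    dist.init_process_group(
        backend="gloo",
        init_method="tcp://127.0.0.1:29501",   # swap for file:// on platforms without TCP init
        rank=rank,
        world_size=world_size,
    )
    local = torch.tensor([float(rank)])                    # gloo gathers CPU tensors
    gathered = [torch.zeros(1) for _ in range(world_size)]
    dist.all_gather(gathered, local)                       # output list size == world_size
    print(rank, gathered)                                  # every rank sees tensor([0.]) and tensor([1.])
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2                                         # one process per GPU on a 2-GPU machine
    mp.spawn(worker, args=(world_size,), nprocs=world_size, join=True)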
st175557 | I have 2 GPUs in one machine, for example. When using DistributedDataParallel, I need to call init_process_group. In the TORCH.DISTRIBUTED doc I found an example like below:
For example, if the system we use for distributed training has 2 nodes, each of which has 8 GPUs. On each of the 16 GPUs, there is a tensor that we would like to all-reduce. The following code can serve as a reference:
Code running on Node 0
import torch
import torch.distributed as dist
dist.init_process_group(backend="nccl",
                        init_method="file:///distributed_test",
                        world_size=2,
                        rank=0)
tensor_list = []
for dev_idx in range(torch.cuda.device_count()):
    tensor_list.append(torch.FloatTensor([1]).cuda(dev_idx))
dist.all_reduce_multigpu(tensor_list)
Code running on Node 1
import torch
import torch.distributed as dist
dist.init_process_group(backend="nccl",
                        init_method="file:///distributed_test",
                        world_size=2,
                        rank=1)
tensor_list = []
for dev_idx in range(torch.cuda.device_count()):
    tensor_list.append(torch.FloatTensor([1]).cuda(dev_idx))
dist.all_reduce_multigpu(tensor_list)
Following this example, I thought that if I have only one node, I should set world_size==1 and rank==0 in init_process_group. It seems to work because I can see the usage of both GPUs is about 100%. But it seems that all of the collective functions like all_reduce/all_reduce_multigpu do not work well.
If I set world_size=2 and rank=local_rank (the arg in torch.distributed.launch) in init_process_group, the collective functions work well, and I can collect tensors from different GPUs. But the GPU usage is very low, and loss.backward() runs very slowly. The GPU usage sometimes drops to 0% when running loss.backward().
The environment is cuda10.2, torch1.7.1, Windows with the gloo backend.
So which setting is right? world_size==1 with rank==0, or world_size==2 with the rank changing according to local_rank? If the first one is right, how can I collect tensor information from different GPUs?
Thanks a lot. |
st175558 | Hi, world_size = 1 and rank = 0 wouldn't really work for distributed training, as we generally want to train with > 1 GPU (i.e. world size > 1). In particular, the # of GPUs (not necessarily the # of nodes) is usually used as the world size.
Regarding slowness in loss.backward(), can you provide a repro of that (it appears that your code snippet is an allreduce above)? In general, loss.backward() will trigger additional allreduces during the backwards pass to synchronize parameter gradients, but especially for 2 GPUs we don’t expect this to add significant overhead. |
st175559 | Hi,
I am trying to train dino with 2 A6000 gpus. The code works fine when I train on a single gpu but crashes when I use 2 gpus. My python version is 3.8.11, pytorch version is 1.9.0, torch.version.cuda: 11.1.
Does anyone have any idea how to debug this error or to solve this problem? Thanks in Advance!
Command:
python -m torch.distributed.launch --nproc_per_node=2 main_dino.py --arch resnet50 --optimizer sgd --weight_decay 1e-4 --weight_decay_end 1e-4 --global_crops_scale 0.14 1 --local_crops_scale 0.05 0.14 --data_path /home/ma/uname/dataset/imagenet/train --output_dir /temp
Error message:
/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/launch.py:163: DeprecationWarning: The ‘warn’ method is deprecated, use ‘warning’ instead
logger.warn(
The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
WARNING:torch.distributed.run:–use_env is deprecated and will be removed in future releases.
Please read local_rank from os.environ('LOCAL_RANK') instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : main_dino.py
min_nodes : 1
max_nodes : 1
nproc_per_node : 2
run_id : none
rdzv_backend : static
rdzv_endpoint : 127.0.0.1:29500
rdzv_configs : {‘rank’: 0, ‘timeout’: 900}
max_restarts : 3
monitor_interval : 5
log_dir : None
metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous’ing worker group
/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=0
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1]
role_ranks=[0, 1]
global_ranks=[0, 1]
role_world_sizes=[2, 2]
global_world_sizes=[2, 2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd/attempt_0/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd/attempt_0/1/error.json
Using cache found in /home/ma/uname/.cache/torch/hub/facebookresearch_xcit_master
Using cache found in /home/ma/uname/.cache/torch/hub/facebookresearch_xcit_master
| distributed init (rank 0): env://
| distributed init (rank 1): env://
[W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1569] Rank 1 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[E ProcessGroupNCCL.cpp:566] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1807903 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:566] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1807905 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of ‘std::runtime_error’
what(): [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1807903 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of ‘std::runtime_error’
what(): [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1807905 milliseconds before timing out.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 23796) of binary: /home/ma/uname/maanaconda3/envs/mdetr/bin/python
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 3/3 attempts left; will restart worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous’ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=1
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1]
role_ranks=[0, 1]
global_ranks=[0, 1]
role_world_sizes=[2, 2]
global_world_sizes=[2, 2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd/attempt_1/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd/attempt_1/1/error.json
Using cache found in /home/ma/uname/.cache/torch/hub/facebookresearch_xcit_master
Using cache found in /home/ma/uname/.cache/torch/hub/facebookresearch_xcit_master
Traceback (most recent call last):
File “main_dino.py”, line 472, in
train_dino(args)
File “main_dino.py”, line 134, in train_dino
utils.init_distributed_mode(args)
File “/home/ma/uname/code/dino_orig/utils.py”, line 468, in init_distributed_mode
Traceback (most recent call last):
File “main_dino.py”, line 472, in
train_dino(args)
File “main_dino.py”, line 134, in train_dino
utils.init_distributed_mode(args)
File “/home/ma/uname/code/dino_orig/utils.py”, line 468, in init_distributed_mode
dist.init_process_group(
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 547, in init_process_group
dist.init_process_group(
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 219, in _store_based_barrier
_store_based_barrier(rank, store, timeout)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=2, worker_count=4, timeout=0:30:00)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 23980) of binary: /home/ma/uname/maanaconda3/envs/mdetr/bin/python
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 2/3 attempts left; will restart worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous’ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=2
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1]
role_ranks=[0, 1]
global_ranks=[0, 1]
role_world_sizes=[2, 2]
global_world_sizes=[2, 2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd/attempt_2/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd/attempt_2/1/error.json
Using cache found in /home/ma/uname/.cache/torch/hub/facebookresearch_xcit_master
Using cache found in /home/ma/uname/.cache/torch/hub/facebookresearch_xcit_master
Traceback (most recent call last):
File “main_dino.py”, line 472, in
train_dino(args)
File “main_dino.py”, line 134, in train_dino
utils.init_distributed_mode(args)
File “/home/ma/uname/code/dino_orig/utils.py”, line 468, in init_distributed_mode
dist.init_process_group(
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=2, worker_count=6, timeout=0:30:00)
Traceback (most recent call last):
File “main_dino.py”, line 472, in <module>
train_dino(args)
File “main_dino.py”, line 134, in train_dino
utils.init_distributed_mode(args)
File “/home/ma/uname/code/dino_orig/utils.py”, line 468, in init_distributed_mode
dist.init_process_group(
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=2, worker_count=6, timeout=0:30:00)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 24138) of binary: /home/ma/uname/maanaconda3/envs/mdetr/bin/python
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 1/3 attempts left; will restart worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous’ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=3
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1]
role_ranks=[0, 1]
global_ranks=[0, 1]
role_world_sizes=[2, 2]
global_world_sizes=[2, 2]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd/attempt_3/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_q72lm7ip/none_glcjvhtd/attempt_3/1/error.json
Using cache found in /home/ma/uname/.cache/torch/hub/facebookresearch_xcit_master
Using cache found in /home/ma/uname/.cache/torch/hub/facebookresearch_xcit_master
Traceback (most recent call last):
Traceback (most recent call last):
File “main_dino.py”, line 472, in <module>
File “main_dino.py”, line 472, in <module>
train_dino(args)
File “main_dino.py”, line 134, in train_dino
train_dino(args)
File “main_dino.py”, line 134, in train_dino
utils.init_distributed_mode(args)
File “/home/ma/uname/code/dino_orig/utils.py”, line 468, in init_distributed_mode
utils.init_distributed_mode(args)
File “/home/ma/uname/code/dino_orig/utils.py”, line 468, in init_distributed_mode
dist.init_process_group(
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 547, in init_process_group
dist.init_process_group(
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 219, in _store_based_barrier
_store_based_barrier(rank, store, timeout)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py”, line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=2, worker_count=8, timeout=0:30:00)
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=2, worker_count=8, timeout=0:30:00)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 24711) of binary: /home/ma/uname/maanaconda3/envs/mdetr/bin/python
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:Local worker group finished (FAILED). Waiting 300 seconds for other agents to finish
/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:70: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:Done waiting for other agents. Elapsed: 0.0004601478576660156 seconds
{“name”: “torchelastic.worker.status.FAILED”, “source”: “WORKER”, “timestamp”: 0, “metadata”: {“run_id”: “none”, “global_rank”: 0, “group_rank”: 0, “worker_id”: “24711”, “role”: “default”, “hostname”: “ma-gpu04”, “state”: “FAILED”, “total_run_time”: 7237, “rdzv_backend”: “static”, “raw_error”: “{“message”: “”}”, “metadata”: “{“group_world_size”: 1, “entry_point”: “python”, “local_rank”: [0], “role_rank”: [0], “role_world_size”: [2]}”, “agent_restarts”: 3}}
{“name”: “torchelastic.worker.status.FAILED”, “source”: “WORKER”, “timestamp”: 0, “metadata”: {“run_id”: “none”, “global_rank”: 1, “group_rank”: 0, “worker_id”: “24712”, “role”: “default”, “hostname”: “ma-gpu04”, “state”: “FAILED”, “total_run_time”: 7237, “rdzv_backend”: “static”, “raw_error”: “{“message”: “”}”, “metadata”: “{“group_world_size”: 1, “entry_point”: “python”, “local_rank”: [1], “role_rank”: [1], “role_world_size”: [2]}”, “agent_restarts”: 3}}
{“name”: “torchelastic.worker.status.SUCCEEDED”, “source”: “AGENT”, “timestamp”: 0, “metadata”: {“run_id”: “none”, “global_rank”: null, “group_rank”: 0, “worker_id”: null, “role”: “default”, “hostname”: “ma-gpu04”, “state”: “SUCCEEDED”, “total_run_time”: 7237, “rdzv_backend”: “static”, “raw_error”: null, “metadata”: “{“group_world_size”: 1, “entry_point”: “python”}”, “agent_restarts”: 3}}
/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py:354: UserWarning:
CHILD PROCESS FAILED WITH NO ERROR_FILE
CHILD PROCESS FAILED WITH NO ERROR_FILE
Child process 24711 (local_rank 0) FAILED (exitcode 1)
Error msg: Process failed with exitcode 1
Without writing an error file to <N/A>.
While this DOES NOT affect the correctness of your application,
no trace information about the error will be available for inspection.
Consider decorating your top level entrypoint function with
torch.distributed.elastic.multiprocessing.errors.record. Example:
from torch.distributed.elastic.multiprocessing.errors import record
@record
def trainer_main(args):
# do train
warnings.warn(_no_error_file_warning_msg(rank, failure))
Traceback (most recent call last):
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/runpy.py”, line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/runpy.py”, line 87, in _run_code
exec(code, run_globals)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/launch.py”, line 173, in <module>
main()
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/launch.py”, line 169, in main
run(args)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/run.py”, line 621, in run
elastic_launch(
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/launcher/api.py”, line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py”, line 348, in wrapper
return f(*args, **kwargs)
File “/home/ma/uname/maanaconda3/envs/mdetr/lib/python3.8/site-packages/torch/distributed/launcher/api.py”, line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
main_dino.py FAILED
=======================================
Root Cause:
[0]:
time: 2021-09-17_13:42:15
rank: 0 (local_rank: 0)
exitcode: 1 (pid: 24711)
error_file: <N/A>
msg: “Process failed with exitcode 1”
Other Failures:
[1]:
time: 2021-09-17_13:42:15
rank: 1 (local_rank: 1)
exitcode: 1 (pid: 24712)
error_file: <N/A>
msg: “Process failed with exitcode 1” |
st175560 | Hi,
In the error output, if you look closely, you will see a recommendation to gather more information about the root cause:
adelaide:
CHILD PROCESS FAILED WITH NO ERROR_FILE
Child process 24711 (local_rank 0) FAILED (exitcode 1)
Error msg: Process failed with exitcode 1
Without writing an error file to <N/A>.
While this DOES NOT affect the correctness of your application,
no trace information about the error will be available for inspection.
Consider decorating your top level entrypoint function with
torch.distributed.elastic.multiprocessing.errors.record. Example:
from torch.distributed.elastic.multiprocessing.errors import record
@record
def trainer_main(args):
You can find more information about the record decorator here 26. If you decorate your main function with @record you should get a full stack trace of the actual exception.
Having said that, I would also suggest running your code with the --max_restarts=0 option. The restart-on-failure behaviour you see above was a bug in v1.9 that is fixed in v1.9.1; without that option, your training will be restarted up to 3 times before ultimately failing.
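Putting both suggestions together, a rough sketch of what the decorated entrypoint could look like (the function name and the argument parsing here are just placeholders for your own main_dino.py, not the actual DINO code):

from torch.distributed.elastic.multiprocessing.errors import record

@record
def train_dino(args):
    # your existing training logic; any exception raised in here is now
    # written to an error file and shown in the elastic agent's summary
    ...

if __name__ == "__main__":
    args = parse_args()  # placeholder for your own argument parsing
    train_dino(args)

and then launch with something like python -m torch.distributed.run --nproc_per_node=2 --max_restarts=0 main_dino.py (plus your usual arguments), so the workers are not restarted and the first real failure is reported directly. |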
st175561 | Good Day
As an example:
I have two computers, both running Linux and both connected to the router with an ethernet cable. I verified with a simple Python socket script that the computers can reach each other. (Code: Send/receive data with python socket - Stack Overflow )
Can I use these Linux computers, connected over the internet, with DistributedDataParallel (DistributedDataParallel — PyTorch master documentation )?
I searched the web but found no clear answer to this question, so I am not sure whether I have to set up a special network architecture or install some programs first in order to use DistributedDataParallel.
I am grateful for every answer and for any suggestion on where I can find this information. Thank you very much for your time. |
st175562 | I would assume you could use DDP as long as both nodes can reach each other.
However, if the traffic is indeed going over an internet connection (and not a local network), I would expect to see a massive slowdown in your training, as the connection would most likely be the bottleneck of your application.
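If it helps, a minimal sketch of what a two-node CPU run could look like (the IP address, port, and backend below are placeholders/assumptions, not a definitive recipe):

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main(rank, world_size):
    # the init_method must point to an address/port on node 0 that node 1 can also reach
    dist.init_process_group(
        backend="gloo",                           # or "nccl" if both nodes have GPUs
        init_method="tcp://192.168.1.10:23456",   # placeholder address of the first machine
        rank=rank,
        world_size=world_size,
    )
    model = torch.nn.Linear(10, 10)
    ddp_model = DDP(model)
    # ... your usual training loop with ddp_model ...
    dist.destroy_process_group()

if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--rank", type=int, required=True)
    args = parser.parse_args()
    main(args.rank, world_size=2)

You would run the same script on both machines, passing --rank 0 on the first and --rank 1 on the second; apart from PyTorch itself (with the gloo or NCCL backend) no additional software should be required, as long as that port is reachable between the two machines. |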
st175563 | Thank you very much for your answer.
So it would become faster if I use an ethernet switch?
I tried this out with two Windows computers. They connected, but as soon as they did I got a “RuntimeError: Stop_waiting response is expected” generated from “init_process_group”. Do you know why that is happening? |
st175564 | I built a custom filter in pytorch as shown below.
[Screenshot of the custom filter code: Screen Shot 2021-09-17 at 5.44.56 PM, 2482×526]
When using only one CPU or GPU, the code works fine, but when the model is wrapped in DataParallel, the code does not run.
Is there any way to solve the DataParallel problem with a custom filter?
[Screenshot: Screen Shot 2021-09-17 at 5.48.31 PM, 2726×992] |
st175565 | Could you post a minimal, executable code snippet which would reproduce the issue so that we could take a look at it, please? |
st175566 | Hey, folks,
I wonder what PyTorch's expected behavior is when the data partitions are not balanced and, as a result, each training process has a drastically different number of batches. As a concrete example: two training machines, each doing 10 epochs on its local partition of the data; because the data is not balanced between partitions, one partition has 100 batches and the other has only 10. How can we make PyTorch work in this scenario? Can we have the remaining 90 batches proceed without doing an all-reduce?
One idea is probably to let each partition use a different batch_size, but for some reason it is hard to obtain that information as well. |
st175567 | We have built uneven input support in DDP for this purpose. It is designed to allow users to train DDP models when there is a different number of inputs across ranks (if the model has no communication other than that done by DDP), and to throw an exception that can be caught and recovered from if the model does have custom user communication.
Here are the docs: torch.nn.parallel.distributed — PyTorch 1.9.0 documentation
and tutorial Distributed Training with Uneven Inputs Using the Join Context Manager — PyTorch Tutorials 1.9.0+cu102 documentation 3. Feel free to follow up with any more questions!
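To make that concrete, here is a small sketch (the toy model, batch counts, and port are made up for illustration) of the join() context manager handling exactly the 100-vs-10-batches case described above:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = DDP(torch.nn.Linear(1, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    num_batches = 100 if rank == 0 else 10   # deliberately uneven partitions
    with model.join():  # the exhausted rank shadows the remaining collectives
        for _ in range(num_batches):
            x = torch.rand(20, 1)
            loss = model(x).sum()
            loss.backward()
            opt.step()
            opt.zero_grad()
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)

Inside join(), a rank that runs out of data keeps shadowing the collective calls of the ranks that are still training, so the remaining 90 batches on the larger partition can proceed without hanging on a missing all-reduce.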