id | text
---|---
st175768 | I see. Is there any plan to support ZeRO stage 2 and 3, as well as ZeRO offloading, in the future? |
st175769 | Hi,
I checked the example on github: examples/distributed/ddp at master · pytorch/examples · GitHub 1
I also pasted the example as follows for discussion.
My question is:
should I manually call some API functions to make sure the distributed functionality runs correctly?
such as:
dist.broadcast(indices, 0)
dist.all_reduce(rt, op=dist.ReduceOp.SUM)
I have seen some code call these APIs manually, while other code does not. Is there a reason why these APIs are called in some cases and not in others?
If broadcast and all_reduce are not called manually, will the gradients be reduced across all GPUs automatically?
Should I manually initialize the model with the same values on each GPU?
def spmd_main(local_world_size, local_rank):
    # These are the parameters used to initialize the process group
    env_dict = {
        key: os.environ[key]
        for key in ("MASTER_ADDR", "MASTER_PORT", "RANK", "WORLD_SIZE")
    }
    print(f"[{os.getpid()}] Initializing process group with: {env_dict}")
    dist.init_process_group(backend="nccl")
    print(
        f"[{os.getpid()}] world_size = {dist.get_world_size()}, "
        + f"rank = {dist.get_rank()}, backend={dist.get_backend()}"
    )

    demo_basic(local_world_size, local_rank)

    # Tear down the process group
    dist.destroy_process_group()


def demo_basic(local_world_size, local_rank):
    # setup devices for this process. For local_world_size = 2, num_gpus = 8,
    # rank 0 uses GPUs [0, 1, 2, 3] and
    # rank 1 uses GPUs [4, 5, 6, 7].
    n = torch.cuda.device_count() // local_world_size
    device_ids = list(range(local_rank * n, (local_rank + 1) * n))
    print(
        f"[{os.getpid()}] rank = {dist.get_rank()}, "
        + f"world_size = {dist.get_world_size()}, n = {n}, device_ids = {device_ids}"
    )

    model = ToyModel().cuda(device_ids[0])
    ddp_model = DDP(model, device_ids)

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10))
    labels = torch.randn(20, 5).to(device_ids[0])
    loss_fn(outputs, labels).backward()
    optimizer.step() |
st175770 | Hi, DDP will broadcast the model's weights to all GPUs when you construct DDP, and it will reduce the gradients when you call backward(). DDP takes care of broadcast and all_reduce so that you can treat training as if it ran on a single GPU (This is only true when you do standard network training that updates model after one forward-backward pair).
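For illustration, a minimal sketch of that standard loop (ToyModel, loader, and loss_fn here are placeholders, not code from the example): no manual dist.broadcast or dist.all_reduce calls are needed, because DDP issues them internally.
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])        # weights are broadcast from rank 0 here
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
for data, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(ddp_model(data.to(rank)), labels.to(rank))
    loss.backward()                              # gradients are all-reduced (averaged) here
    optimizer.step() |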
st175771 | @Hongkai_Zheng ,
Many thanks for your reply!
Hongkai_Zheng:
This is only true when you do standard network training that updates model after one forward-backward pair
My questions are:
What is standard network training? For example, will a modified YOLOv3 be updated by DDP automatically? If not, why couldn't the modified network be updated automatically by DDP, and how could it be updated manually?
Is there any official example code showing how to update models automatically or manually? |
st175772 | By standard training I mean forward → backward → optimizer.step() as shown in the following pseudo code. see concrete example here DDP tutorial 8.
I’m not sure how modified yolov3 works. But DDP doesn’t update your model automatically. Your optimizer takes care of update step. What DDP does is just to reduce gradient (synchronize over all devices) so that each replica of model see the same gradient. More specifically, backward() call triggers gradient reduction. See more details here DDP paper 3
# forward pass
out = model(x)
loss = loss_fn(out, y)
# backward pass
loss.backward()
# Update model
optimizer.step() |
st175773 | Just to help explain the meaning of standard training. For instance, if you do one forward() and multiple backward, then it’s not considered as standard training. It has to be forward pass followed by one single backward(). |
st175774 | Hi @Hongkai_Zheng
Thank you!
Hongkai_Zheng:
What DDP does is just to reduce gradient (synchronize over all devices) so that each replica of the model sees the same gradient
Gradient reduction is done by DDP automatically, so if I print the gradients on each GPU, the printed gradients should be identical. I saw some code call the all_reduce function to sync the loss. My questions are:
if all_reduce is called manually and DDP also syncs the gradients, what will happen?
according to the DDP tutorial, the weights on each GPU should be identical. Does that mean DDP syncs the initial weights (and how does it sync them?) on each GPU to make sure the models on all GPUs are identical?
if the gradient is broadcast to each GPU, it means that every GPU runs its optimizer separately. For example, if we trained a model on 1 GPU with batch_size=32 and lr=0.001, should we keep batch_size=32 and lr=0.001 when we train the model on 8 GPUs? |
st175775 | Hi,
It will slow down your code. You just sync the gradient twice but everything should remain the same.
initial weights are synchronized when you construct DDP class. (DDP does it you don’t have to do it manually)
Right, each process has its own optimizer. But since the optimizer sees the same gradient, the update of the weights is also the same.
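A quick sanity check along those lines (a hypothetical sketch, assuming ddp_model and dist are in scope): after optimizer.step(), every rank should print the same number, because every rank applied the same averaged gradient to identical weights.
with torch.no_grad():
    param_sum = sum(p.sum() for p in ddp_model.parameters())
print(f"rank {dist.get_rank()}: parameter sum = {param_sum.item():.6f}")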
I’m not sure what your question is in this example. But the second case (8 GPUs) in your example is equivalent to 1 GPU with batch_size= 32 * 8=256. So you may want to increase the learning rate since the batchsize is increased. |
st175776 | @Hongkai_Zheng ,
Many thanks for your kind reply!
Hongkai_Zheng:
It will slow down your code. You just sync the gradient twice but everything should remain the same.
If I broadcast the loss or something else to all GPUs manually, the loss should be identical once dist.all_reduce() is done. Besides calling dist.all_reduce(), what else should I do if I want to sync the loss manually?
Hongkai_Zheng:
I’m not sure what your question is in this example. But the second case (8 GPUs) in your example is equivalent to 1 GPU with batch_size= 32 * 8=256. So you may want to increase the learning rate since the batchsize is increased.
For this example, I am confused:
now that the averaged gradient is broadcast to each GPU and each GPU has its own optimizer and learning rate, why should we use 8x the learning rate?
Please check the code:
Line 259 of this link yolov3/train.py at master · ultralytics/yolov3 · GitHub 1
The code above broadcasts the dataset to all GPUs. My question is: is manually broadcasting the dataset to all GPUs necessary? I assumed this is done by the dataloader automatically. |
st175777 | I think calling all_reduce is enough to sync the loss manually. For this question, I encourage you to look at the documentation and try cooking up a toy example yourself to see how it works in practice; that helps with understanding.
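For example, a minimal sketch of averaging a scalar loss across ranks for logging (this assumes the default process group is already initialized, and it does not affect the gradients DDP synchronizes):
loss_avg = loss.detach().clone()
dist.all_reduce(loss_avg, op=dist.ReduceOp.SUM)   # sum the per-rank losses in place
loss_avg /= dist.get_world_size()                 # turn the sum into an average
print(f"rank {dist.get_rank()}: averaged loss = {loss_avg.item():.4f}")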
The gradient is averaged over 32 * 8 samples in the 8-GPU case, while in the single-GPU case your gradient is computed from 32 samples. I don't think there is any theory saying you have to use an 8x learning rate, but intuitively the gradient averaged over a bigger batch has a smaller variance (think of the gradient from one sample as a random variable). This paper 2 also gives some theoretical intuition and practical advice.
Regarding the code you referred to, it uses DistributedSampler (check createdataset function in the utils.dataset.py ) to divide dataset into several subsets so that each GPU only sees one subset of the whole dataset.
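The typical pattern looks roughly like this (a sketch, not the exact yolov3 code):
sampler = torch.utils.data.distributed.DistributedSampler(dataset)     # shards the dataset by rank
loader = torch.utils.data.DataLoader(dataset, batch_size=32, sampler=sampler)
for epoch in range(num_epochs):
    sampler.set_epoch(epoch)   # so each epoch gets a different shuffle
    for data, labels in loader:
        ...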
PS: I think it would be better to start from DDP tutorial and try to write the toy example yourself so that you can understand every component of the DDP through coding yourself. |
st175778 | @Hongkai_Zheng ,
Thank you!
I will cook up a toy example to validate my idea and understanding.
Hongkai_Zheng:
Regarding the code you referred to, it uses DistributedSampler (check createdataset function in the utils.dataset.py ) to divide dataset into several subsets so that each GPU only sees one subset of the whole dataset.
Do you think it is necessary for the code to broadcast the dataset to each GPU?
Or in what case should the master process broadcast the dataset to each GPU? |
st175779 | As far as I'm concerned, if you need to broadcast the whole dataset to each GPU, why even use DDP in the first place? DDP is used for data parallelism, where each GPU gets a different subset of the data. If you need model parallelism, you can look at PyTorch RPC. |
st175780 | Hi everyone!
First post here so apologies if it is unclear. I am currently facing a problem similar to what was mentioned here (Share gradients in multiprocessing). Given that no answer was provided I will try to further specify the problem.
I have implemented a Hogwild model training procedure. I am trying to modify it such that gradients calculated with local models over N cores can be accumulated/averaged onto the shared model. After aggregating the gradients from N-1 cores, I want the Nth core to update the shared model given such accumulated/averaged gradients.
An overview of what I am considering is as follows:
import torch.multiprocessing as mp
from model import MyModel

def train(shared_model, shared_optimizer, rank):
    # Construct data_loader, optimizer, etc.
    local_model = MyModel()
    for data, labels in data_loader:
        local_model.load_state_dict(shared_model.state_dict())  # Ensure local model up to date
        loss_fn(local_model(data), labels).backward()
        sync_grads(local_model, shared_model)
        if rank == 0:
            shared_optimizer.step()  # This will update the shared parameters
            shared_optimizer.zero_grad()

def sync_grads(local_model, shared_model):
    for local_param, shared_param in zip(local_model.parameters(), shared_model.parameters()):
        shared_param._grad = shared_param.grad + local_param.grad  # Accumulating in this case

if __name__ == '__main__':
    num_processes = 4
    shared_model = MyModel()
    shared_model.share_memory()
    # SharedAdam: https://github.com/ikostrikov/pytorch-a3c/blob/master/my_optim.py
    shared_optimizer = SharedAdam(shared_model.parameters())  # Note shared model parameters
    shared_optimizer.share_memory()
    processes = []
    for rank in range(num_processes):
        p = mp.Process(target=train, args=(shared_model, shared_optimizer, rank, ))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
My questions are:
Will sync_grads() upload gradients to the shared model (and thus grads will be shared among processes)?
Or will this only aggregate gradients to a local version of the shared_model, leading to the update only really occurring with gradients computed for the rank0 process?
In other words, does putting a shared_model on shared memory share parameters and gradients or only parameters?
Finally, if my fears are correct, how can I do this? I have seen this 1:
""" Gradient averaging. """
def average_gradients(model):
size = float(dist.get_world_size())
for param in model.parameters():
dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
param.grad.data /= size
However, I am skeptical of whether it will work (the last update to that tutorial is from 2017). Otherwise, I am also considering moving the grads to an mp.Queue for all processes, then querying, averaging & assigning them to the local version of shared_model in rank 0 and doing the gradient step on that process only.
Thanks a lot! |
st175781 | Solved by gcramer23 in post #2. |
st175782 | @jleguina0 Does DDP fit your need? Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.9.0+cu102 documentation. You will have one model replica per process. Instead of storing the gradients in a shared model, DDP will update all the model replicas. You can define a communication hook (DDP Communication Hooks — PyTorch 1.9.0 documentation) for the gradient communication logic.
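For reference, registering the built-in all-reduce hook is a short sketch like this (assuming ddp_model is an already constructed DistributedDataParallel instance):
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks as default
ddp_model.register_comm_hook(state=None, hook=default.allreduce_hook)   # state=None means: use the default process group |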
st175783 | I created DDP on two nodes (8 GPUs/node, DDP world_size = 16), and then for each DDP worker (GPU) I created a new process group (using the API dist.new_group()) to do all_to_all() with the DDP workers that have the same local rank on the other node. So the main API calls are as follows:
# initialize the process group
dist.init_process_group(
    init_method="tcp://" + str(self.master_addr) + ":" + str(self.master_port),
    backend=Backend.NCCL,
    rank=self.global_rank,
    world_size=self.world_size,
)
model = DDP(model)

def create_inter_node_all_to_all_process_group(self):
    self.inter_node_all_to_all_ranks = []
    n_process_per_node = int(self.world_size / self.num_nodes)
    for node_index in range(self.num_nodes):
        rank = self.local_rank + node_index * n_process_per_node
        self.inter_node_all_to_all_ranks.append(rank)
    logging.info(
        "local_rank = {}, global_rank = {}, ranks = {}".format(
            self.local_rank, self.global_rank, self.inter_node_all_to_all_ranks
        )
    )
    self.inter_node_all_to_all_process_group = dist.new_group(
        ranks=self.inter_node_all_to_all_ranks, backend=Backend.NCCL, timeout=timedelta(days=365)
    )

self.process_group = inter_node_all_to_all_process_group
expert_inputs = _AllToAll.apply(self.process_group, torch.cat(route_inputs))

class _AllToAll(torch.autograd.Function):
    @staticmethod
    def forward(ctx: Any, group: dist.ProcessGroup, input: Tensor) -> Tensor:  # type: ignore
        ctx.group = group
        input = input.contiguous()
        output = torch.empty_like(input)
        dist.all_to_all_single(output, input, group=group)
        return output

    @staticmethod
    def backward(ctx: Any, *grad_output: Tensor) -> Tuple[None, Tensor]:
        return (None, _AllToAll.apply(ctx.group, *grad_output))
When I did forward propagation for the model, the program crashed. The log is as follows:
29642 2021-08-20,04:26:30.151 - {dp_manager.py (114)} - create_inter_node_all_to_all_process_group(): local_rank = 1, global_rank = 9, ranks = [1, 9]
29642 2021-08-20,04:26:30.151 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 9
29641 2021-08-20,04:26:30.158 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 8: Completed store-based barrier for 16 nodes.
29641 2021-08-20,04:26:30.158 - {dp_manager.py (114)} - create_inter_node_all_to_all_process_group(): local_rank = 0, global_rank = 8, ranks = [0, 8]
29648 2021-08-20,04:26:30.158 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 15: Completed store-based barrier for 16 nodes.
29644 2021-08-20,04:26:30.158 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 11: Completed store-based barrier for 16 nodes.
29648 2021-08-20,04:26:30.159 - {dp_manager.py (114)} - create_inter_node_all_to_all_process_group(): local_rank = 7, global_rank = 15, ranks = [7, 15]
29641 2021-08-20,04:26:30.159 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 8
29644 2021-08-20,04:26:30.159 - {dp_manager.py (114)} - create_inter_node_all_to_all_process_group(): local_rank = 3, global_rank = 11, ranks = [3, 11]
29647 2021-08-20,04:26:30.159 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 14: Completed store-based barrier for 16 nodes.
29647 2021-08-20,04:26:30.159 - {dp_manager.py (114)} - create_inter_node_all_to_all_process_group(): local_rank = 6, global_rank = 14, ranks = [6, 14]
29648 2021-08-20,04:26:30.159 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 15
29644 2021-08-20,04:26:30.159 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 11
29643 2021-08-20,04:26:30.159 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 10: Completed store-based barrier for 16 nodes.
29643 2021-08-20,04:26:30.159 - {dp_manager.py (114)} - create_inter_node_all_to_all_process_group(): local_rank = 2, global_rank = 10, ranks = [2, 10]
29645 2021-08-20,04:26:30.159 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 12: Completed store-based barrier for 16 nodes.
29645 2021-08-20,04:26:30.159 - {dp_manager.py (114)} - create_inter_node_all_to_all_process_group(): local_rank = 4, global_rank = 12, ranks = [4, 12]
29647 2021-08-20,04:26:30.159 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 14
29643 2021-08-20,04:26:30.160 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 10
29645 2021-08-20,04:26:30.160 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 12
29646 2021-08-20,04:26:30.160 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 13: Completed store-based barrier for 16 nodes.
29646 2021-08-20,04:26:30.160 - {dp_manager.py (114)} - create_inter_node_all_to_all_process_group(): local_rank = 5, global_rank = 13, ranks = [5, 13]
29646 2021-08-20,04:26:30.160 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 13
29642 2021-08-20,04:26:30.162 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 9: Completed store-based barrier for 16 nodes.
29642 2021-08-20,04:26:30.162 - {dp_manager.py (136)} - create_inter_node_all_reduce_process_group(): local_rank = 1, global_rank = 9, ranks = [1, 9]
29642 2021-08-20,04:26:30.163 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:4 to store for rank: 9
29641 2021-08-20,04:26:30.169 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 8: Completed store-based barrier for 16 nodes.
29644 2021-08-20,04:26:30.170 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 11: Completed store-based barrier for 16 nodes.
29648 2021-08-20,04:26:30.170 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 15: Completed store-based barrier for 16 nodes.
29647 2021-08-20,04:26:30.170 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 14: Completed store-based barrier for 16 nodes.
29641 2021-08-20,04:26:30.170 - {dp_manager.py (136)} - create_inter_node_all_reduce_process_group(): local_rank = 0, global_rank = 8, ranks = [0, 8]
29643 2021-08-20,04:26:30.170 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 10: Completed store-based barrier for 16 nodes.
29645 2021-08-20,04:26:30.170 - {distributed_c10d.py (225)} - _store_based_barrier(): Rank 12: Completed store-based barrier for 16 nodes.
29644 2021-08-20,04:26:30.170 - {dp_manager.py (136)} - create_inter_node_all_reduce_process_group(): local_rank = 3, global_rank = 11, ranks = [3, 11]
29648 2021-08-20,04:26:30.170 - {dp_manager.py (136)} - create_inter_node_all_reduce_process_group(): local_rank = 7, global_rank = 15, ranks = [7, 15]
29647 2021-08-20,04:26:30.170 - {dp_manager.py (136)} - create_inter_node_all_reduce_process_group(): local_rank = 6, global_rank = 14, ranks = [6, 14]
ip-10-0-95-41:29643:29762 [2] NCCL INFO Call to connect returned Connection refused, retrying
ip-10-0-95-41:29647:29765 [6] NCCL INFO Call to connect returned Connection refused, retrying
ip-10-0-95-41:29644:29763 [3] NCCL INFO Call to connect returned Connection refused, retrying
ip-10-0-95-41:29648:29764 [7] NCCL INFO Call to connect returned Connection refused, retrying
ip-10-0-95-41:29643:29762 [2] include/socket.h:403 NCCL WARN Connect to 10.0.87.143<45579> failed : Connection refused
ip-10-0-95-41:29643:29762 [2] NCCL INFO bootstrap.cc:95 -> 2
ip-10-0-95-41:29643:29762 [2] NCCL INFO bootstrap.cc:309 -> 2
ip-10-0-95-41:29643:29762 [2] NCCL INFO init.cc:555 -> 2
ip-10-0-95-41:29643:29762 [2] NCCL INFO init.cc:840 -> 2
ip-10-0-95-41:29643:29762 [2] NCCL INFO group.cc:73 -> 2 [Async thread]
...
dist.all_to_all_single(output, input, group=group)
File "/usr/local/lib64/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 2386, in all_to_all_single
work = group.alltoall_base(output, input, output_split_sizes, input_split_sizes, opts)
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed. |
st175784 | Hey Chaoyang, does Gloo backend work? It’s a known issue that NCCL backend might hang/crash when there are concurrent communications from different NCCL communicators (PyTorch creates one NCCL communicator per ProcessGroup instance) on the same device. But I am not sure if that’s relevant here. I assume the allToAll is called in model forward? |
st175785 | In this case, the inter-node all_to_all_single() call is NOT concurrent on the SAME device. It's concurrent across multiple devices. There are 8 process groups doing all_to_all_single() in parallel using the following ranks:
create_inter_node_all_to_all_process_group(): local_rank = 0, global_rank = 8, ranks = [0, 8]
create_inter_node_all_to_all_process_group(): local_rank = 1, global_rank = 9, ranks = [1, 9]
…
create_inter_node_all_to_all_process_group(): local_rank = 7, global_rank = 15, ranks = [7, 15] |
st175786 | 10.0.87.143: 10.0.92.168: terminate called after throwing an instance of ‘gloo::EnforceNotMet’
10.0.87.143: 10.0.92.168: what(): [enforce fail at /tmp/pytorch/third_party/gloo/gloo/transport/tcp/address.cc:88] rv != -1. -1 vs -1. getpeername: Transport endpoint is not connected
10.0.87.143: 10.0.87.143: 151 2021-08-25,08:02:49.070 - {distributed_c10d.py (194)} - _store_based_barrier(): Added key: store_based_barrier_key:3 to store for rank: 7
10.0.87.143: 10.0.92.168: main()
10.0.87.143: 10.0.92.168: File “main_moe.py”, line 108, in main
10.0.87.143: 10.0.92.168: main()
10.0.87.143: 10.0.92.168: File “main_moe.py”, line 108, in main
10.0.87.143: 10.0.92.168: dp_ep.init_ddp()
10.0.87.143: 10.0.92.168: File “/home/deepspeed/.local/lib/python3.6/site-packages/moe_layer/dp/dp_manager.py”, line 68, in init_ddp
10.0.87.143: 10.0.92.168: self.create_inter_node_all_to_all_process_group()
10.0.87.143: 10.0.92.168: File “/home/deepspeed/.local/lib/python3.6/site-packages/moe_layer/dp/dp_manager.py”, line 150, in create_inter_node_all_to_all_process_group
10.0.87.143: 10.0.92.168: backend=Backend.GLOO
10.0.87.143: 10.0.92.168: File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 2700, in new_group
10.0.87.143: 10.0.92.168: timeout=timeout)
10.0.87.143: 10.0.92.168: File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 620, in _new_process_group_helper
10.0.87.143: 10.0.92.168: timeout=timeout)
10.0.87.143: 10.0.95.41: main()
10.0.87.143: 10.0.95.41: File “main_moe.py”, line 108, in main
10.0.87.143: 10.0.95.41: main()
10.0.87.143: 10.0.95.41: File “main_moe.py”, line 108, in main
10.0.87.143: 10.0.95.41: dp_ep.init_ddp()
10.0.87.143: 10.0.95.41: main()
10.0.87.143: 10.0.95.41: File “/home/deepspeed/.local/lib/python3.6/site-packages/moe_layer/dp/dp_manager.py”, line 68, in init_ddp
10.0.87.143: 10.0.92.168: RuntimeError: [/tmp/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:799] connect [10.0.87.143]:57754: Connection refused
10.0.87.143: 10.0.95.41: File “main_moe.py”, line 108, in main
10.0.87.143: 10.0.95.41: main()
10.0.87.143: 10.0.95.41: main()
10.0.87.143: 10.0.95.41: File “main_moe.py”, line 108, in main
10.0.87.143: 10.0.95.41: File “main_moe.py”, line 108, in main
10.0.87.143: 10.0.95.41: self.create_inter_node_all_to_all_process_group()
10.0.87.143: 10.0.95.41: File “/home/deepspeed/.local/lib/python3.6/site-packages/moe_layer/dp/dp_manager.py”, line 150, in create_inter_node_all_to_all_process_group
10.0.87.143: 10.0.95.41: main()
10.0.87.143: 10.0.95.41: File “main_moe.py”, line 108, in main
10.0.87.143: 10.0.95.41: backend=Backend.GLOO
10.0.87.143: 10.0.95.41: File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 2700, in new_group
10.0.87.143: 10.0.92.168: dp_ep.init_ddp()
10.0.87.143: 10.0.92.168: File “/home/deepspeed/.local/lib/python3.6/site-packages/moe_layer/dp/dp_manager.py”, line 68, in init_ddp
10.0.87.143: 10.0.92.168: self.create_inter_node_all_to_all_process_group()
10.0.87.143: 10.0.92.168: File “/home/deepspeed/.local/lib/python3.6/site-packages/moe_layer/dp/dp_manager.py”, line 150, in create_inter_node_all_to_all_process_group
10.0.87.143: 10.0.92.168: backend=Backend.GLOO
10.0.87.143: 10.0.92.168: File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 2700, in new_group
10.0.87.143: 10.0.92.168: timeout=timeout)
10.0.87.143: 10.0.92.168: File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 620, in _new_process_group_helper
10.0.87.143: 10.0.95.41: timeout=timeout)
10.0.87.143: 10.0.95.41: File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 620, in _new_process_group_helper
10.0.87.143: 10.0.95.41: timeout=timeout)
10.0.87.143: 10.0.95.41: RuntimeError: [/tmp/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:589] Read error [10.0.87.143]:31117: Connection reset by peer
10.0.87.143: 10.0.92.168: timeout=timeout)
10.0.87.143: 10.0.92.168: RuntimeError: [/tmp/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:589] Read error [10.0.87.143]:59302: Connection reset by peer
10.0.87.143: 10.0.95.41: dp_ep.init_ddp()
10.0.87.143: 10.0.95.41: File “/home/deepspeed/.local/lib/python3.6/site-packages/moe_layer/dp/dp_manager.py”, line 68, in init_ddp
10.0.87.143: 10.0.95.41: self.create_inter_node_all_to_all_process_group()
10.0.87.143: 10.0.95.41: File “/home/deepspeed/.local/lib/python3.6/site-packages/moe_layer/dp/dp_manager.py”, line 150, in create_inter_node_all_to_all_process_group
10.0.87.143: 10.0.95.41: backend=Backend.GLOO
10.0.87.143: 10.0.95.41: File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 2700, in new_group
10.0.87.143: 10.0.95.41: timeout=timeout)
10.0.87.143: 10.0.95.41: File “/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py”, line 620, in _new_process_group_helper
10.0.87.143: 10.0.95.41: timeout=timeout)
10.0.87.143: 10.0.95.41: RuntimeError: [/tmp/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:799] connect [10.0.87.143]:31117: Connection refused |
st175787 | – EDIT: some final answers 2. in short, cuda streams are very limited.
hi,
pytorch doc defines cuda streams as A CUDA stream is a linear sequence of execution that belongs to a specific device.
i assume by linear sequence, they meant a set of cuda instructions without control statements. (?)
what would happen if the sequence is not linear?
what would happen if the instructions contain mixed instructions: once for cuda, other for cpu such as print(!!!) or more cpu stuff. so, if your set of instructions are mixed and not linear, cuda streams dont seem to be a solution to speedup.
so fare, i am unable to see any speedup using streams over linear cuda sequence. any explanations? this 3 suggests that when the cuda instructions are run in a time shorter that time required for cpu to start the next stream, you wont see any speedup. i modified the code in order to slow down a stream in order to give enough time to the cpu to start the other one, but i still dont see any speedup.
thanks
this is the run time of the below code, and it uses only 3gb/16gb of gpu memory:
$ CUDA_LAUNCH_BLOCKING=1 python streamer.py
time linear: 276498.625ms
time concurrent: 277744.0625ms
code streamer.py:
import time
import torch
import torch.nn as nn
def run(iters=10, streams=False):
    device = torch.device(1)
    s1 = torch.cuda.Stream(device=device)
    s2 = torch.cuda.Stream(device=device)
    x = torch.rand(size=(1024 * 10, 1024 * 10)).to(device)
    w1 = torch.rand(size=(1024 * 10, 1024 * 10)).to(device)
    w2 = torch.rand(size=(1024 * 10, 1024 * 10)).to(device)

    def op():
        # the original post repeats these calls literally (48x and 60x); the loops are equivalent
        for _ in range(48):
            x.matmul(w1)
        for _ in range(60):
            x.matmul(w2)

    torch.cuda.synchronize()
    for i in range(iters):
        torch.cuda.nvtx.range_push('iter{}'.format(i))
        if streams:
            with torch.cuda.stream(s1):
                for _ in range(48):
                    x.matmul(w1)
            with torch.cuda.stream(s2):
                for _ in range(60):
                    x.matmul(w2)
        else:
            op()
        torch.cuda.nvtx.range_pop()
    torch.cuda.synchronize()

if __name__ == '__main__':
    # warmup
    run()
    torch.cuda.cudart().cudaProfilerStart()
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)
    start_event.record()
    run(streams=False)
    end_event.record()
    elapsed_time_ms = start_event.elapsed_time(end_event)
    print('time linear: {}ms'.format(elapsed_time_ms))
    torch.cuda.cudart().cudaProfilerStop()
    start_event.record()
    run(streams=True)
    end_event.record()
    elapsed_time_ms = start_event.elapsed_time(end_event)
    print('time concurrent: {}ms'.format(elapsed_time_ms)) |
st175788 | The docs suggest using:
To let a non-DDP model load a state dict from a DDP model, consume_prefix_in_state_dict_if_present() needs to be applied to strip the prefix “module.” in the DDP state dict before loading.
Source: DistributedDataParallel — PyTorch 1.9.0 documentation 7
However, this line:
model_state_dict=torch.nn.modules.utils.consume_prefix_in_state_dict_if_present(checkpoint['model_dict'],prefix='module.')
gives back None.
I do something ugly like this at the moment:
new_state_dict = collections.OrderedDict()
for k, v in checkpoint['model_dict'].items():
    name = k.replace("module.", '')  # remove `module.`
    new_state_dict[name] = v
but I would prefer to use the consume_prefix_in_state_dict_if_present.
Can someone elucidate the correct usage of this please? Obviously, I am not getting it! |
st175789 | Hi @John_J_Watson, sorry for the confusion. consume_prefix_in_state_dict_if_present removes the prefix in place rather than returns any value. You just use checkpoint['model_dict'] instead of creating a temporary variable here.
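In other words, the usage looks like this:
from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
consume_prefix_in_state_dict_if_present(checkpoint['model_dict'], prefix='module.')  # modifies the dict in place, returns None
model.load_state_dict(checkpoint['model_dict'])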
Example: pytorch/test_c10d_gloo.py at 34c9f5a8dad74ba23de5c2fba9d071a6c2dd1fa4 · pytorch/pytorch · GitHub 23
I created a PR to improve the documentation: Add return type hint and improve the docstring of consume_prefix_in_state_dict_if_present method by SciPioneer · Pull Req 7 |
st175790 | Thank you @wayi for this. I don't know why I didn't think that it could be in place!
Just as a follow-up question, does this wrapper basically do the same as the following?
new_state_dict = collections.OrderedDict()
for k, v in checkpoint['model_dict'].items():
    name = k.replace("module.", '')  # remove `module.`
    new_state_dict[name] = v
Is that right? Are there any advantages/disadvantages of using either approach?
Thank you again! |
st175791 | One subtle difference is that “_metadata” field (if any) is handled separately. See:
github.com
pytorch/pytorch/blob/e000dfcf976454fdadfdc556248976e6e560d155/torch/nn/modules/utils.py#L63 2
        state_dict (OrderedDict): a state-dict to be loaded to the model.
        prefix (str): prefix.
    """
    keys = sorted(state_dict.keys())
    for key in keys:
        if key.startswith(prefix):
            newkey = key[len(prefix) :]
            state_dict[newkey] = state_dict.pop(key)

    # also strip the prefix in metadata if any.
    if "_metadata" in state_dict:
        metadata = state_dict["_metadata"]
        for key in list(metadata.keys()):
            # for the metadata dict, the key can be:
            # '': for the DDP module, which we want to remove.
            # 'module': for the actual model.
            # 'module.xx.xx': for the rest.
            if len(key) == 0:
                continue
            newkey = key[len(prefix) :]
Other than that, I don’t think there is a big difference. Your own implementation is a little less memory efficient I will say, as you don’t do it in place. |
st175792 | If I raise a SystemExit, only the process that raised it exits, while the rest wait indefinitely. |
st175793 | Processes participating in a distributed data parallel job communicate with each other using collective communication calls (e.g. all_reduce, all_gather). If one of your processes fails, those calls will block the rest of your processes. Fault tolerance and job retries are not part of the DDP framework. I would suggest checking TorchElastic 2, and optionally Slurm and other OSS job schedulers for what you want to achieve. |
st175794 | If one process fails, it will exit. For example, if I raise a SystemError, the whole process exits as expected. |
st175795 | I wish to train multiple models in turn in one Python script. After training one model, I run this to release memory, so that there is enough memory for training the next model:
def destruction(self):
    torch.cuda.synchronize(device=self._get_device())
    dist.destroy_process_group(group=self.group)
    del self.optimizer
    del self.ddp_model
    del self.train_loader
    torch.cuda.set_device(device=self._get_device())
    torch.cuda.empty_cache()
    torch.cuda.synchronize(device=self._get_device())
However, from nvidia-smi I see that after calling destruction() each time, there is still some GPU memory allocated, and the amount of unreleased memory increases as I train more models. For example, after training the 3rd model and calling destruction(), the memory allocation looks like this:
[screenshot: nvidia-smi memory usage]
Then, after training the 4th model, the memory allocation is like this:
[screenshot: nvidia-smi memory usage]
Finally, this leads to OOM error in training.
Did I miss out some step to clear unused CUDA memory? Or did I forget to delete anything that remained in CUDA memory? I would really appreciate any help! |
st175796 | torch.cuda.empty_cache() would free the cached memory so that other processes could reuse it.
However, if you are using the same Python process, this won’t avoid OOM issues and will slow down the code instead.
Based on the reported issue I would assume that you haven't deleted all references to the model, activations, optimizers, etc., so some tensors are still alive.
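A minimal sketch of what deleting those references can look like between trainings (names follow the destruction() snippet above; loss and outputs are hypothetical leftover variables from the training loop, and gc.collect() is optional but makes the release more deterministic):
import gc
del self.ddp_model, self.optimizer, self.train_loader   # references held by the trainer object
del loss, outputs                                        # leftover tensors from the loop also keep memory alive
gc.collect()                                             # break reference cycles that may still hold them
torch.cuda.empty_cache()                                 # return the now-unused cached blocks to the driver |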
st175797 | Hi,
on 1 GPU, using the parameters batch_size=32 and lr=0.0001, I get a good accuracy.
I would like to use 8 GPU to retrain the model.
My question is:
should I change lr to 0.0001*8 when I use 8 GPUs? |
st175798 | Yes, of course the LR should change (follow the linear scaling rule in the paper): https://arxiv.org/pdf/1706.02677.pdf 19
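Concretely, with the per-GPU batch size kept at 32, the effective batch grows from 32 to 32 × 8 = 256, so under the linear rule lr becomes 0.0001 × 8 = 0.0008. Treat the scaled value as a starting point rather than a guarantee; the paper also recommends a gradual warmup at larger batch sizes. |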
st175799 | I have a distributed code base I am trying to work with, but with every epoch I see that my CPU memory increases almost linearly, eventually running into OOM on a large 128 GB machine :((
Without distributed, the code runs fine with no such issues. The issue is exactly described here: CPU memory gradually leaks when num_workers > 0 in the DataLoader · Issue #13246 · pytorch/pytorch · GitHub 2
I do use num_workers=16,
but the solution posted there, using pyarrow, does not solve my issue - I still have the memory leak. I am on Python 3.7, PyTorch 1.9.0.
I do have a custom dataloader, and it looks like so:
class TrainDataSet(Dataset):
    def __init__(self, data_root, mode, label_class=None):
        labels0 = []
        file_paths0 = []
        self.mode = mode
        data_path = Path(data_root)
        if self.mode == "train":
            data_path = data_path / self.mode
        else:
            raise ValueError("Mode not recognised")
        datasets = ImageFolderWithPaths(root=data_path)
        print(datasets.classes)
        print(datasets.class_to_idx)
        # sample is img here and is not used!
        for idx, (sample, target, path) in enumerate(datasets):
            if target == label_class:
                labels0.append(target)
                file_paths0.append(path)
        self.labels = pa.array(labels0)
        self.file_paths = pa.array(file_paths0)
        del labels0  # try to avoid leak?
        del file_paths0  # try to avoid leak?
        self.transform_color = transforms.Compose([transforms.ToTensor(), transforms.Resize(224),
                                                   # transforms.CenterCrop(224),
                                                   # transforms.ToTensor(),
                                                   transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, idx):
        label = self.labels[idx].as_py()
        file_path = self.file_paths[idx].as_py()
        img_rgb = read_img(file_path)  # read from opencv to try and see if PIL causes leaks.
        return self.transform_color(img_rgb), label  # , file_path
I have totally run out of ideas now :(( and would love to hear from someone with ideas.
UPDATE: I can also confirm that the model + the other code works totally fine in distributed mode when I swap the dataset to CIFAR from torch datasets and simply use ImageFolder on it, i.e. the CPU memory consumption stays constant. So, yeah, this seems like a dataloader bug |
st175800 | Looks like it's a known dataloader bug, so all possible ideas are already mentioned in CPU memory gradually leaks when num_workers > 0 in the DataLoader · Issue #13246 · pytorch/pytorch · GitHub 6 by the authors/people who are responsible for that code.
The dataloader is not part of the distributed module, and as you mentioned you don't have any issues with distributed mode itself. |
st175801 | The problem @pbelevich is that none of the solutions mentioned on the thread work! and I was hoping the community here might have a solution. |
st175802 | Hi everyone! I found that autograd.backward() doesn't trigger the reduction when I try to compute a certain Jacobian-vector product (w.r.t. the D network) in the following toy GAN case. However, if I use the same approach to compute the Jacobian-vector product w.r.t. the G network, DDP works perfectly with autograd.backward(). This is a follow-up on my previous post, where I found a way to compute a Jacobian-vector product with DDP (ddp-second-backward-accumulate-the-wrong-gradient).
Compute Jacobian-vector product w.r.t. the D network: as shown below, the results I get from backward are different across devices.
Running on rank 0
Running on rank 1
Hessian vector product of d param: tensor([3., 3., 0.], device='cuda:1')
Hessian vector product of d param: tensor([2., 2., 0.], device='cuda:0')
Done!
The block below is repro code.
import torch
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.autograd as autograd
from utils.helper import setup, cleanup
from argparse import ArgumentParser

def zero_grad(params):
    '''
    Clean the gradient of each parameter
    '''
    for p in params:
        if p.grad is not None:
            p.grad.detach()
            p.grad.zero_()

def collect_grad(params):
    '''
    Collect grads of parameters and concatenate them into a vector.
    If grad is None, it will be filled with zeros
    :param params: list of parameters
    :return: vector
    '''
    grad_list = []
    for p in params:
        if p.grad is not None:
            grad_list.append(p.grad.contiguous().view(-1))
            del p.grad
        else:
            # replace None with zeros
            grad_list.append(torch.zeros_like(p).view(-1))
    return torch.cat(grad_list)

def subprocess_fn(rank, args):
    setup(rank, args.num_gpus)
    print(f'Running on rank {rank}')
    D = nn.Linear(2, 1, bias=True).to(rank)
    G = nn.Linear(1, 2, bias=True).to(rank)
    # initialize weights
    nn.init.constant_(D.weight, 2.0)
    nn.init.constant_(D.bias, -1.0)
    nn.init.constant_(G.weight, 4.0)
    nn.init.constant_(G.bias, 1.0)
    if args.distributed:
        G = DDP(G, device_ids=[rank], broadcast_buffers=False)
        D = DDP(D, device_ids=[rank], broadcast_buffers=False)
    d_params = list(D.parameters())
    g_params = list(G.parameters())
    if not args.distributed:
        z = torch.tensor([[2.0], [1.0]]).to(rank)
    elif rank == 0:
        z = torch.tensor([[1.0]]).to(rank)
    elif rank == 1:
        z = torch.tensor([[2.0]]).to(rank)
    loss = D(G(z)).mean()
    zero_grad(d_params)
    autograd.backward(gradvec_g,
                      grad_tensors=torch.ones_like(gradvec_g),
                      inputs=d_params)
    hvp_d = collect_grad(d_params)
    print(f'Hessian vector product of d param: {hvp_d}')
    cleanup()

if __name__ == '__main__':
    torch.backends.cudnn.benchmark = True
    parser = ArgumentParser()
    parser.add_argument('--num_gpus', type=int, help='Number of GPUs', default=1)
    args = parser.parse_args()
    args.distributed = args.num_gpus > 1
    if args.distributed:
        mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
    else:
        subprocess_fn(0, args)
    print('Done!') |
Compute Jacobian-vector product w.r.t. the G network: as shown below, the Jacobian-vector product is synchronized across devices.
Running on rank 0
Running on rank 1
Hessian vector product of g param: tensor([1.5000, 1.5000, 1.0000, 1.0000], device='cuda:1')
Hessian vector product of g param: tensor([1.5000, 1.5000, 1.0000, 1.0000], device='cuda:0')
Done!
The repro code only switches the order of the derivatives, as attached below.
import torch
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.autograd as autograd
from utils.helper import setup, cleanup
from argparse import ArgumentParser

def zero_grad(params):
    '''
    Clean the gradient of each parameter
    '''
    for p in params:
        if p.grad is not None:
            p.grad.detach()
            p.grad.zero_()

def collect_grad(params):
    '''
    Collect grads of parameters and concatenate them into a vector.
    If grad is None, it will be filled with zeros
    :param params: list of parameters
    :return: vector
    '''
    grad_list = []
    for p in params:
        if p.grad is not None:
            grad_list.append(p.grad.contiguous().view(-1))
            del p.grad
        else:
            # replace None with zeros
            grad_list.append(torch.zeros_like(p).view(-1))
    return torch.cat(grad_list)

def subprocess_fn(rank, args):
    setup(rank, args.num_gpus)
    print(f'Running on rank {rank}')
    D = nn.Linear(2, 1, bias=True).to(rank)
    G = nn.Linear(1, 2, bias=True).to(rank)
    # initialize weights
    nn.init.constant_(D.weight, 2.0)
    nn.init.constant_(D.bias, -1.0)
    nn.init.constant_(G.weight, 4.0)
    nn.init.constant_(G.bias, 1.0)
    if args.distributed:
        G = DDP(G, device_ids=[rank], broadcast_buffers=False)
        D = DDP(D, device_ids=[rank], broadcast_buffers=False)
    d_params = list(D.parameters())
    g_params = list(G.parameters())
    if not args.distributed:
        z = torch.tensor([[2.0], [1.0]]).to(rank)
    elif rank == 0:
        z = torch.tensor([[1.0]]).to(rank)
    elif rank == 1:
        z = torch.tensor([[2.0]]).to(rank)
    loss = D(G(z)).mean()
    grad_d = autograd.grad(loss, d_params, create_graph=True)
    gradvec_d = torch.cat([g.contiguous().view(-1) for g in grad_d])
    zero_grad(g_params)  # clean the grad before backward
    autograd.backward(gradvec_d,
                      grad_tensors=torch.ones_like(gradvec_d),
                      inputs=g_params)  # compute d{torch.dot(gradvec_d, vec)} / d{G}
    hvp_g = collect_grad(g_params)  # gather results
    print(f'Hessian vector product of g param: {hvp_g}')
    cleanup()

if __name__ == '__main__':
    torch.backends.cudnn.benchmark = True
    parser = ArgumentParser()
    parser.add_argument('--num_gpus', type=int, help='Number of GPUs', default=1)
    args = parser.parse_args()
    args.distributed = args.num_gpus > 1
    if args.distributed:
        mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
    else:
        subprocess_fn(0, args)
    print('Done!') |
st175803 | Update: there is some typo in the first block. But my question remains the same.
The first code block should be:
import torch
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.autograd as autograd
from utils.helper import setup, cleanup
from argparse import ArgumentParser

def zero_grad(params):
    '''
    Clean the gradient of each parameter
    '''
    for p in params:
        if p.grad is not None:
            p.grad.detach()
            p.grad.zero_()

def collect_grad(params):
    '''
    Collect grads of parameters and concatenate them into a vector.
    If grad is None, it will be filled with zeros
    :param params: list of parameters
    :return: vector
    '''
    grad_list = []
    for p in params:
        if p.grad is not None:
            grad_list.append(p.grad.contiguous().view(-1))
            del p.grad
        else:
            # replace None with zeros
            grad_list.append(torch.zeros_like(p).view(-1))
    return torch.cat(grad_list)

def subprocess_fn(rank, args):
    setup(rank, args.num_gpus)
    print(f'Running on rank {rank}')
    D = nn.Linear(2, 1, bias=True).to(rank)
    G = nn.Linear(1, 2, bias=True).to(rank)
    # initialize weights
    nn.init.constant_(D.weight, 2.0)
    nn.init.constant_(D.bias, -1.0)
    nn.init.constant_(G.weight, 4.0)
    nn.init.constant_(G.bias, 1.0)
    if args.distributed:
        G = DDP(G, device_ids=[rank], broadcast_buffers=False)
        D = DDP(D, device_ids=[rank], broadcast_buffers=False)
    d_params = list(D.parameters())
    g_params = list(G.parameters())
    if not args.distributed:
        z = torch.tensor([[2.0], [1.0]]).to(rank)
    elif rank == 0:
        z = torch.tensor([[1.0]]).to(rank)
    elif rank == 1:
        z = torch.tensor([[2.0]]).to(rank)
    loss = D(G(z)).mean()
    grad_g = autograd.grad(loss, d_params, create_graph=True)
    gradvec_g = torch.cat([g.contiguous().view(-1) for g in grad_g])
    zero_grad(d_params)
    autograd.backward(gradvec_g,
                      grad_tensors=torch.ones_like(gradvec_g),
                      inputs=d_params)
    hvp_d = collect_grad(d_params)
    print(f'Hessian vector product of d param: {hvp_d}')
    cleanup()

if __name__ == '__main__':
    torch.backends.cudnn.benchmark = True
    parser = ArgumentParser()
    parser.add_argument('--num_gpus', type=int, help='Number of GPUs', default=1)
    args = parser.parse_args()
    args.distributed = args.num_gpus > 1
    if args.distributed:
        mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
    else:
        subprocess_fn(0, args)
    print('Done!') |
st175804 | Does handle.wait() block and synchronize all processes like torch.distributed.barrier()? I am having trouble understanding wait and barrier. Based on the description in the torch documentation, I thought wait would also synchronize? Am I misunderstanding? I need to synchronize the results on all processes before continuing. In particular, what are the differences between the 3 blocks of code below? |
x = func()
handle = torch.distributed.all_reduce(x, op=torch.distributed.ReduceOp.SUM, async_op=True)
handle.wait()
x = func()
handle = torch.distributed.all_reduce(x, op=torch.distributed.ReduceOp.SUM, async_op=True)
handle.wait()
torch.distributed.barrier()
x = func()
torch.distributed.all_reduce(x, op=torch.distributed.ReduceOp.SUM, async_op=False)
torch.distributed.barrier() |
st175805 | They are in a sense similar, but serve different purposes. wait() ensures that, once it returns, the async operation that it is associated with has completed. This means:
x = func()
torch.distributed.all_reduce(x, op=torch.distributed.ReduceOp.SUM, async_op=False)
and
x = func()
handle = torch.distributed.all_reduce(x, op=torch.distributed.ReduceOp.SUM, async_op=True)
handle.wait()
are in fact equivalent. The advantage of the async version is that you can have additional logic before you call wait() to overlap communication and computation.
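For example (a sketch; other_work() stands for any computation that does not depend on x):
handle = torch.distributed.all_reduce(x, op=torch.distributed.ReduceOp.SUM, async_op=True)
y = other_work()   # independent computation overlaps with the communication
handle.wait()      # x is only guaranteed to hold the reduced result after this returns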
On the other hand barrier() is a standalone collective function. You use it mostly to ensure that all processes in your job reach a certain point in execution (e.g. for coordinating checkpointing). The calls to barrier() in your second and third examples are redundant though, since collective reduce operations also implicitly represent a barrier. |
st175806 | Hi all.
I have a strange problem: I'm trying to run 2 tasks on 2 machines via the following
trivial script:
dist.init_process_group(backend = "gloo", init_method = 'tcp://192.168.0.1:29500', rank = irank, world_size = iwsize)
arg = None
if dist.get_rank() == 0:
    arg = Dist_Trainer()
run(dist.get_rank(), dist.get_world_size(), arg)
When I run them on one machine, all works fine.
But when I start process with rank = 0 on one machine,
and process with rank = 1 on another machine,
process with rank = 0 fails with the following output:
python train_dist.py 0 2
RANK: 0 wsize: 2
terminate called after throwing an instance of ‘gloo::IoException’ what(): [/opt/conda/conda-bld/pytorch_1544176307774/work/third_party/gloo/gloo/transport/tcp/pair.cc:724] connect [127.0.0.1]:45965: Connection refused
This happens only when I start the process with rank=1. If I don't start it,
the process with rank=0 keeps waiting for a connection.
i.e., I assume that the TCP connection happens, but then the process with rank = 0
tries to work with 127.0.0.1?
Update: I tried setting export GLOO_SOCKET_IFNAME=enp2s0,
but the problem still remains. |
st175807 | Looks like rank 0 is working with [127.0.0.1]:45965. Have you unset MASTER_ADDR and MASTER_PORT environment vars before launching the script? |
st175808 | Yes, they were unset.
By the way, if I swap scripts with rank=0 and rank=1 on these machines,
then the script with rank=1 crashes:
python train_dist.py 1 2
RANK: 1 wsize: 2
terminate called after throwing an instance of ‘gloo::IoException’
what(): [/opt/conda/conda-bld/pytorch_1544176307774/work/third_party/gloo/gloo/transport/tcp/pair.cc:724] connect [127.0.1.1]:3978: Connection refused
Script with rank=0 still waiting for connection |
st175809 | If you just run the following without any other code, does it fail?
import torch.distributed as dist

# on rank 0
dist.init_process_group(
    backend = "gloo",
    init_method = 'tcp://192.168.0.1:29500',
    rank = 0,
    world_size = 2
)

# on rank 1
dist.init_process_group(
    backend = "gloo",
    init_method = 'tcp://192.168.0.1:29500',
    rank = 1,
    world_size = 2
) |
st175810 | Hey @ZiyiZhu Are you trying to run this with RPC? Currently init_rpc does not work together with init_process_group. There are work around to create non-default process groups. Or we can also add a fix to init_rpc if necessary. This is the tracking issue: https://github.com/pytorch/pytorch/issues/33583 16 |
st175811 | Hi @mrshenli,
This is different from the RPC problem. Back then I was using Google Cloud VMs. The torch.distributed and RPC worked fine there.
However, we recently built new servers with GPUs in our lab and connected them using an electrical packet switch. They can ping each other using their internal IPs; for me it is 10.1.1.101 for rank 0 and 10.1.1.102 for rank 1. So I run the following:
import torch.distributed as dist

# on rank 0
dist.init_process_group(
    backend = "gloo",
    init_method = 'tcp://10.1.1.101:29500',
    rank = 0,
    world_size = 2
)

import torch.distributed as dist

# on rank 1
dist.init_process_group(
    backend = "gloo",
    init_method = 'tcp://10.1.1.101:29500',
    rank = 1,
    world_size = 2
)
However, it failed with
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-532df564c254> in <module>
6 init_method = 'tcp://10.1.1.101:29500',
7 rank = 1,
----> 8 world_size = 2
9 )
~/anaconda3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py in init_process_group(backend, init_method, timeout, world_size, rank, store, group_name)
401 store,
402 group_name=group_name,
--> 403 timeout=timeout)
404
405 _pg_group_ranks[_default_pg] = {i: i for i in range(_default_pg.size())}
~/anaconda3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py in _new_process_group_helper(world_size, rank, group_ranks, backend, store, group_name, timeout)
469 rank,
470 world_size,
--> 471 timeout=timeout)
472 _pg_map[pg] = (Backend.GLOO, store)
473 _pg_names[pg] = group_name
RuntimeError: [/opt/conda/conda-bld/pytorch_1587428398394/work/third_party/gloo/gloo/transport/tcp/pair.cc:769] connect [127.0.0.1]:31662: Connection refused
Which I guess is the same problem for @Oleg_Ivanov too. In terms of
export GLOO_SOCKET_IFNAME=eno2
Should I simply do it in any terminal? eno2 is my NIC.
Please let me know if you have any thoughts. Thank you very much for your help! |
st175812 | ZiyiZhu:
Should I simply do it in any terminal?
Yes, either set it in terminal or pass GLOO_SOCKET_IFNAME=eno2 as a prefix to the command that launches the process.
Another cause might be the hostname-to-IP mapping. IIUC, Gloo would try to resolve the IP using the hostname. What does the following command return for you?
getent hosts `hostname` |
st175813 | Hi @mrshenli,
Oh, this may be the problem. On the new servers (10.1.1.101 & 10.1.1.102), getent hosts `hostname` returns nothing.
I am also testing torch.distributed with some old servers in my lab now, and they work. On one of the old servers, it does return the IPs currently in use,
where I am using 10.0.1.101 and 10.0.1.102 for testing torch.distributed (old servers).
I will figure this out in the new servers and let you know! Thank you!
Best,
Ziyi |
st175814 | Hi @mrshenli,
Problem solved. Like in the old server, I made 10.1.1.101 as a host in /etc/hosts and updated it in /etc/hostname. Now if I run getent hosts hostname, 10.1.1.101 host1 will pop up like in the screenshot below
ZiyiZhu:
However, it happens to be one of the NIC port (eno2)'s IP. What if I want to use another ethernet port which is 10.1.2.101 for another NIC port (eno3), do I need to change the /etc/hostname every time?
Thank you, |
st175815 | Looking at the code, this is not the expected behavior. It would always first try GLOO_SOCKET_IFNAME if that’s available. Somehow, it didn’t pick up the env var.
github.com
pytorch/pytorch/blob/945d7a7408891e25bc54a65015724f6d2de644e6/torch/csrc/distributed/c10d/init.cpp#L603-L615 4
  char* ifnameEnv = getenv(GLOO_SOCKET_IFNAME_ENV);
  if (ifnameEnv) {
    for (const auto& iface : split(',', ifnameEnv)) {
      options.devices.push_back(
          ::c10d::ProcessGroupGloo::createDeviceForInterface(iface));
    }
  } else {
    // If no hostname is specified, this function looks up
    // the machine's hostname and returns a device instance
    // associated with the address that the hostname resolves to.
    options.devices.push_back(
        ::c10d::ProcessGroupGloo::createDefaultDevice());
  }
Can you try reading the GLOO_SOCKET_IFNAME env var immediately before init_process_group from Python and see if that gives the correct result?
Let me check Gloo code. |
st175816 | Gloo part looks correct to me:
github.com
facebookincubator/gloo/blob/7b58938c5d87380f88a5266035dda1041d45626e/gloo/transport/tcp/device.cc#L142-L162
std::shared_ptr<transport::Device> CreateDevice(const struct attr& src) {
  struct attr attr = src;
  if (attr.iface.size() > 0) {
    // Initialize attributes using network interface name
    lookupAddrForIface(attr);
  } else {
    // Initialize attributes using hostname/IP address
    // If not already specified, use this machine's hostname
    if (attr.hostname.size() == 0) {
      std::array<char, HOST_NAME_MAX> hostname;
      auto rv = gethostname(hostname.data(), hostname.size());
      GLOO_ENFORCE_EQ(rv, 0);
      attr.hostname = hostname.data();
    }
    lookupAddrForHostname(attr);
  }
  auto device = std::make_shared<Device>(attr);
  return std::shared_ptr<transport::Device>(device);
} |
st175817 | Another way to test, is to use some non-exist interface, e.g.
export GLOO_SOCKET_IFNAME=nonexist
And then check if init_process_group throws the follow error for you:
dist.init_process_group("gloo", rank=rank, world_size=world_size)
File "/scratch/shenli/pytorch/torch/distributed/distributed_c10d.py", line 425, in init_process_group
_default_pg = _new_process_group_helper(
File "/scratch/shenli/pytorch/torch/distributed/distributed_c10d.py", line 499, in _new_process_group_helper
pg = ProcessGroupGloo(
RuntimeError: [enforce fail at ../third_party/gloo/gloo/transport/tcp/device.cc:83] ifa != nullptr. Unable to find address for: nonexist |
st175818 | Hi @mrshenli,
After I
export GLOO_SOCKET_IFNAME=nonexist
(screenshot of the error output)
This is the error I got. Does it seem that it bypasses the non-existent interface and looks at some others? If I add the master_address then it will just hang there waiting for the second rank to come in.
Thanks, |
st175819 | Can you keep/uncomment the init_method line or set MASTER_ADDR and MASTER_PORT? It seems it failed during Python-land arg checking due to a missing master addr/port, before entering the C++ pybind methods. |
st175820 | Hi @mrshenli,
I guess I found the problem. If I export GLOO_SOCKET_IFNAME=nonexist in the terminal, it does not become an environment variable in the Jupyter Notebook, but I do see it in Python launched from that terminal directly.
So I guess I have to do it the other way around and set it in the Jupyter Notebook explicitly? Here is the result if I do what you suggested.
mrshenli:
Can you keep/uncomment the init_method line or set MASTER_ADDR and MASTER_PORT? It seems it failed during Python-land arg checking due to a missing master addr/port, before entering the C++ pybind methods.
import torch.distributed as dist
import os
print(os.environ.get('GLOO_SOCKET_IFNAME'))
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '23456'
os.environ['GLOO_SOCKET_IFNAME']='nonexist'
print(os.environ.get('GLOO_SOCKET_IFNAME'))
# on rank 0
dist.init_process_group(
backend = "gloo",
init_method = 'tcp://10.1.1.101:29500',
rank = 0,
world_size = 1
)
None
nonexist
----------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-ad5d77a63395> in <module>
14 init_method = 'tcp://10.1.1.101:29500',
15 rank = 0,
---> 16 world_size = 1
17 )
18
~/anaconda3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py in init_process_group(backend, init_method, timeout, world_size, rank, store, group_name)
401 store,
402 group_name=group_name,
--> 403 timeout=timeout)
404
405 _pg_group_ranks[_default_pg] = {i: i for i in range(_default_pg.size())}
~/anaconda3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py in _new_process_group_helper(world_size, rank, group_ranks, backend, store, group_name, timeout)
469 rank,
470 world_size,
--> 471 timeout=timeout)
472 _pg_map[pg] = (Backend.GLOO, store)
473 _pg_names[pg] = group_name
RuntimeError: [enforce fail at /opt/conda/conda-bld/pytorch_1587428398394/work/third_party/gloo/gloo/transport/tcp/device.cc:83] ifa != nullptr. Unable to find address for: nonexist
Thanks,
Ziyi |
st175821 | Hi @mrshenli,
To follow up on the issue below, I wonder if you could let me know more about why we currently cannot use "nccl" as a backend for RPC communication? We have to explicitly copy the data to CPU and then do the transmission.
What are the concerns that keep RPC from doing something similar to DDP, where the GPU has direct access to the NIC, for model-parallel algorithms?
Thank you
mrshenli:
Hey @ZiyiZhu Are you trying to run this with RPC? Currently init_rpc does not work together with init_process_group . There are work around to create non-default process groups. Or we can also add a fix to init_rpc if necessary. This is the tracking issue: |
st175822 | To follow up on the issue below, I wonder if you could let me know more about why we currently cannot use "nccl" as a backend for RPC communication?
This is because NCCL did not support p2p (send/recv) communication yet when we developed RPC. It is possible to use NCCL broadcast to mimic send/recv, but that's too hackish.
The p2p comm is coming to NCCL in v2.7. When that is ready, we can probably add it to ProcessGroupAgent or the new TensorPipeAgent (the latter is a more performant RPC agent implementation and should be able to use the best channels, e.g., IB/ETH/NVLink/etc.). See this PR: https://github.com/pytorch/pytorch/pull/35483
We have to explicitly copy the data to CPU and then do the transmission.
For the Gloo backend, even if the application doesn't copy the tensor from CUDA to CPU, Gloo would need to do that internally anyway. Hence, this explicit copy in the application is not a perf limitation when using the Gloo backend.
We used to do that GPU-to-CPU copy implicitly in v1.4, but later realized that applications could run into unexpected errors if the destination device is not available on the callee. E.g., when I do rpc.rpc_sync(...., args=(torch.zeros(2).to(3),)) and if cuda:3 is not available on the callee, it would throw an error. So, we decided to make it explicit for applications.
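For reference, a minimal sketch of that explicit copy on the caller side (the worker name and remote function are made up for illustration, and rpc.init_rpc() is assumed to have been called on both workers):
import torch
import torch.distributed.rpc as rpc

def add_one(t):
    return t + 1

# The CUDA tensor is moved to CPU explicitly before being sent; the result is
# moved back to a device by the caller afterwards.
x = torch.zeros(2, device="cuda:0")
ret = rpc.rpc_sync("worker1", add_one, args=(x.cpu(),))
ret = ret.to("cuda:0")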
What are the concerns that keep RPC from doing something similar to DDP, where the GPU has direct access to the NIC, for model-parallel algorithms?
From the API level, the difference is that DDP is supposed to run on a set of homogeneous servers, while RPC should be able to support heterogeneous clusters, so device mismatch in RPC can be common. We are adding explicit device placement support (something similar to map_location on torch.save and torch.load) to the RPC API. This is an early issue to track. @osalpekar is working on a design RFC for that. Looking forward to hearing your comments when that RFC is posted. |
st175823 | Thank you very much for your detailed explanations! I agree that being explicit can avoid lots of unexpected errors, and I really look forward to seeing the RFC design.
Best,
Ziyi |
st175824 | I have a problem running the spawn function from mp on Slurm on multiple GPUs.
Instructions To Reproduce the Issue:
Full runnable code:
import torch, os
def test_nccl_ops():
num_gpu = 2
print("NCCL init before spawn")
import torch.multiprocessing as mp
dist_url = "file:///tmp/nccl_tmp_file"
mp.spawn(_test_nccl_worker, nprocs=num_gpu, args=(num_gpu, dist_url), daemon=False)
print("NCCL init succeeded.")
def _test_nccl_worker(rank, num_gpu, dist_url):
import torch.distributed as dist
dist.init_process_group(backend="NCCL", init_method=dist_url, rank=rank, world_size=num_gpu)
dist.barrier()
print("Worker after barrier")
if __name__ == "__main__":
test_nccl_ops()
On the other hand, we implemented this Slurm script to run an experiment on 2 GPUs:
#!/bin/bash -l
#SBATCH --account=Account
#SBATCH --partition=gpu # gpu partition
#SBATCH --nodes=1 # 1 node, 4 GPUs per node
#SBATCH --time=24:00:00
#SBATCH --job-name=detectron2_demo4 # job name
module load Python/3.9.5-GCCcore-10.3.0
module load CUDA/11.1.1-GCC-10.2.0
cd /experiment_path
export NCCL_DEBUG=INFO
srun python main.py --num-gpus 2
When I ran this script I faced an error (cat slurm-xxx.out), and no error file:
The following have been reloaded with a version change:
1) GCCcore/10.3.0 => GCCcore/10.2.0
2) binutils/2.36.1-GCCcore-10.3.0 => binutils/2.35-GCCcore-10.2.0
3) zlib/1.2.11-GCCcore-10.3.0 => zlib/1.2.11-GCCcore-10.2.0
NCCL init before spawn
[W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1569] Rank 1 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
gpu04:9770:9770 [0] NCCL INFO Bootstrap : Using [0]bond0:10.10.1.4<0>
gpu04:9770:9770 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
gpu04:9770:9770 [0] NCCL INFO NET/IB : No device found.
gpu04:9770:9770 [0] NCCL INFO NET/Socket : Using [0]bond0:10.10.1.4<0>
gpu04:9770:9770 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.2
gpu04:9771:9771 [1] NCCL INFO Bootstrap : Using [0]bond0:10.10.1.4<0>
gpu04:9771:9771 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
gpu04:9771:9771 [1] NCCL INFO NET/IB : No device found.
gpu04:9771:9771 [1] NCCL INFO NET/Socket : Using [0]bond0:10.10.1.4<0>
gpu04:9771:9771 [1] NCCL INFO Using network Socket
gpu04:9771:9862 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
gpu04:9771:9862 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
gpu04:9771:9862 [1] NCCL INFO Setting affinity for GPU 1 to 3fff
gpu04:9770:9861 [0] NCCL INFO Channel 00/02 : 0 1
gpu04:9770:9861 [0] NCCL INFO Channel 01/02 : 0 1
gpu04:9770:9861 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
gpu04:9770:9861 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
gpu04:9770:9861 [0] NCCL INFO Setting affinity for GPU 0 to 3fff
gpu04:9771:9862 [1] NCCL INFO Channel 00 : 1[6000] -> 0[5000] via P2P/IPC
gpu04:9770:9861 [0] NCCL INFO Channel 00 : 0[5000] -> 1[6000] via P2P/IPC
gpu04:9771:9862 [1] NCCL INFO Channel 01 : 1[6000] -> 0[5000] via P2P/IPC
gpu04:9770:9861 [0] NCCL INFO Channel 01 : 0[5000] -> 1[6000] via P2P/IPC
gpu04:9771:9862 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
gpu04:9771:9862 [1] NCCL INFO comm 0x7f057c000e00 rank 1 nranks 2 cudaDev 1 busId 6000 - Init COMPLETE
gpu04:9770:9861 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
gpu04:9770:9861 [0] NCCL INFO comm 0x7f5210000e00 rank 0 nranks 2 cudaDev 0 busId 5000 - Init COMPLETE
gpu04:9770:9770 [0] NCCL INFO Launch mode Parallel
Expected behavior:
To run training on 2 GPUs and print more output than "NCCL init before spawn" and the NCCL debug info.
Environment:
Paste the output of the following command:
No CUDA runtime is found, using CUDA_HOME='/usr/local/software/CUDAcore/11.1.1'
--------------------- --------------------------------------------------------------------------------
sys.platform linux
Python 3.9.5 (default, Jul 9 2021, 09:35:24) [GCC 10.3.0]
numpy 1.21.1
detectron2 0.5 @/home/users/aimhigh/detectron2/detectron2
Compiler GCC 10.2
CUDA compiler CUDA 11.1
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.9.0+cu102 @/home/users/aimhigh/.local/lib/python3.9/site-packages/torch
PyTorch debug build False
GPU available No: torch.cuda.is_available() == False
Pillow 8.3.1
torchvision 0.10.0+cu102 @/home/users/aimhigh/.local/lib/python3.9/site-packages/torchvision
fvcore 0.1.5.post20210727
iopath 0.1.9
cv2 4.5.3
--------------------- --------------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
Additional note: the first time I assumed it is a detectron2 problem but it’s not. You can find my previous discussion with detectron2 developers: link. Maybe dist_url is somehow problematic, we maybe need some additional Slurm configuration |
st175825 | Some additional examples:
Here is another example that shows the same behavior:
import os
import sys
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
# On Windows platform, the torch.distributed package only
# supports Gloo backend, FileStore and TcpStore.
# For FileStore, set init_method parameter in init_process_group
# to a local file. Example as follow:
# init_method="file:///f:/libtmp/some_file"
# dist.init_process_group(
# "gloo",
# rank=rank,
# init_method=init_method,
# world_size=world_size)
# For TcpStore, same way as on Linux.
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("gloo", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10)
self.relu = nn.ReLU()
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(self.relu(self.net1(x)))
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
setup(rank, world_size)
# create model and move it to GPU with id rank
model = ToyModel().to(rank)
ddp_model = DDP(model, device_ids=[rank])
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(rank)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
def run_demo(demo_fn, world_size):
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
class ToyMpModel(nn.Module):
def __init__(self, dev0, dev1):
super(ToyMpModel, self).__init__()
self.dev0 = dev0
self.dev1 = dev1
self.net1 = torch.nn.Linear(10, 10).to(dev0)
self.relu = torch.nn.ReLU()
self.net2 = torch.nn.Linear(10, 5).to(dev1)
def forward(self, x):
x = x.to(self.dev0)
x = self.relu(self.net1(x))
x = x.to(self.dev1)
return self.net2(x)
def demo_model_parallel(rank, world_size):
print(f"Running DDP with model parallel example on rank {rank}.")
setup(rank, world_size)
# setup mp_model and devices for this process
dev0 = (rank * 2) % world_size
dev1 = (rank * 2 + 1) % world_size
mp_model = ToyMpModel(dev0, dev1)
ddp_mp_model = DDP(mp_model)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_mp_model.parameters(), lr=0.001)
optimizer.zero_grad()
# outputs will be on dev1
outputs = ddp_mp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(dev1)
loss_fn(outputs, labels).backward()
optimizer.step()
cleanup()
if __name__ == "__main__":
n_gpus = torch.cuda.device_count()
assert n_gpus >= 2, f"Requires at least 2 GPUs to run, but got {n_gpus}"
world_size = n_gpus
run_demo(demo_basic, world_size)
# run_demo(demo_model_parallel, world_size)
Output:
The following have been reloaded with a version change:
1) GCCcore/10.3.0 => GCCcore/10.2.0
2) binutils/2.36.1-GCCcore-10.3.0 => binutils/2.35-GCCcore-10.2.0
3) zlib/1.2.11-GCCcore-10.3.0 => zlib/1.2.11-GCCcore-10.2.0
/home/users/aimhigh/.local/lib/python3.9/site-packages/torch/distributed/launch.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(
The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run
INFO:torch.distributed.run:Using nproc_per_node=auto, seting to 8 since the instance has 28 gpu
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases.
Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : main.py
min_nodes : 1
max_nodes : 1
nproc_per_node : 8
run_id : none
rdzv_backend : static
rdzv_endpoint : 127.0.0.1:29500
rdzv_configs : {'rank': 0, 'timeout': 900}
max_restarts : 3
monitor_interval : 5
log_dir : None
metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_tp__4tqc/none_we2sza_6
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
/home/users/aimhigh/.local/lib/python3.9/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=0
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
role_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
global_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
role_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8]
global_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_tp__4tqc/none_we2sza_6/attempt_0/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_tp__4tqc/none_we2sza_6/attempt_0/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_tp__4tqc/none_we2sza_6/attempt_0/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_tp__4tqc/none_we2sza_6/attempt_0/3/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker4 reply file to: /tmp/torchelastic_tp__4tqc/none_we2sza_6/attempt_0/4/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker5 reply file to: /tmp/torchelastic_tp__4tqc/none_we2sza_6/attempt_0/5/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker6 reply file to: /tmp/torchelastic_tp__4tqc/none_we2sza_6/attempt_0/6/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker7 reply file to: /tmp/torchelastic_tp__4tqc/none_we2sza_6/attempt_0/7/error.json
When I check the GPUs and CPUs there is almost no activity at all, but the job continues to execute, and there is no output after what I sent above (no changes in Slurm). |
st175826 | StevanCakic:
import torch, os
def test_nccl_ops():
num_gpu = 2
print("NCCL init before spawn")
import torch.multiprocessing as mp
dist_url = "file:///tmp/nccl_tmp_file"
mp.spawn(_test_nccl_worker, nprocs=num_gpu, args=(num_gpu, dist_url), daemon=False)
print("NCCL init succeeded.")
def _test_nccl_worker(rank, num_gpu, dist_url):
import torch.distributed as dist
dist.init_process_group(backend="NCCL", init_method=dist_url, rank=rank, world_size=num_gpu)
dist.barrier()
print("Worker after barrier")
if __name__ == "__main__":
test_nccl_ops()
Hey @StevanCakic, for the above script, have you tried setting CUDA_VISIBLE_DEVICES for each spawned process before any torch operation, so that each process only sees one GPU? You can also try torch.cuda.set_device(), but I would recommend CUDA_VISIBLE_DEVICES, as with that you can know for sure that each process is exclusively using the expected device. |
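For example, a sketch of the quoted worker with that change (this is just a suggestion, not the poster's final code): the variable is set before any CUDA work so each process only sees the GPU it should use, and the barrier uses device_ids as the warning in the log suggests.
import os
import torch.distributed as dist

def _test_nccl_worker(rank, num_gpu, dist_url):
    # Restrict this process to a single GPU *before* any CUDA call, so that
    # cuda:0 inside this process maps to physical GPU `rank`.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(rank)
    dist.init_process_group(backend="nccl", init_method=dist_url,
                            rank=rank, world_size=num_gpu)
    dist.barrier(device_ids=[0])  # device 0 is the only visible GPU now
    print("Worker after barrier")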
st175827 | I am training models with DDP. I started several processes manually and each process is responsible for the training on one certain GPU. When the model is trained in the processes, I send messages to these processes. I would like these processes would stop training the current model, release GPU memory of the old model, and start training another model after receiving such a message.
However, if the training on these processes is not "synchronized", I can never successfully release the GPU memory of the "old model".
I am releasing the memory like this:
def destruction(self):
torch.cuda.synchronize()
del self.optimizer
del self.ddp_model
del self.train_loader
torch.cuda.empty_cache()
For example, when process A receives the message, it is still in the ith iteration of training; when process B receives the message, it has already entered the (i+1)th iteration. Then both processes enter destruction() (because they both received the message), and the processes hang there.
Is there any way to make sure that the training on each process is "synchronized" before they try to release the memory of the "old model"? Thanks! |
st175828 | Which data parallel API are you using? DataParallel or DistributedDataParallel (aka DDP)? With DDP you can use the torch.distributed.barrier() function to explicitly synchronize your workers. Check out our docs for further info. |
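For example, a sketch of the destruction() method from the question with an explicit barrier added (this assumes the default process group is still initialized on every worker when destruction() is called):
import torch
import torch.distributed as dist

def destruction(self):
    # Wait until every rank reaches this point, so no rank frees the model
    # while a peer is still running a collective on it.
    dist.barrier()
    torch.cuda.synchronize()
    del self.optimizer
    del self.ddp_model
    del self.train_loader
    torch.cuda.empty_cache()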
st175829 | I set the model's hidden layer a little bigger (like 256) and use load_state_dict in the subprocess.
When I give a multi-dimensional tensor to the model, the process is terminated without any exception.
I wrote a demo to reproduce the error.
import torch.multiprocessing as mp
import torch
import torch.nn as nn
class AC(nn.Module):
def __init__(self, features_n, actions_n):
super(AC, self).__init__()
# if the hidden layer cells are lower, for example, 128, no error occurs.
self.hidden_layer_cells = 256
self.l1 = nn.Linear(features_n, self.hidden_layer_cells)
self.l2 = nn.Linear(self.hidden_layer_cells, self.hidden_layer_cells)
self.actor_linear = nn.Linear(self.hidden_layer_cells, actions_n)
self.critic_linear = nn.Linear(self.hidden_layer_cells, 1)
def forward(self, inputs):
x = torch.tanh(self.l1(inputs))
x = torch.tanh(self.l2(x))
pi = self.actor_linear(x)
q = self.critic_linear(x)
return pi, q
class Worker2(mp.Process):
def __init__(self) -> None:
super(Worker2, self).__init__()
self.t = AC(10, 10)
self.tt = AC(10, 10)
# if I load state dict from exist model, it will be terminated when passing Multidimensional tensor
self.tt.load_state_dict(self.t.state_dict())
def run(self):
while True:
s = torch.ones(size=(1, 10))
a = self.t(s)
ss = torch.cat((s, s))
# this line will terminate the process
aa = self.t(ss)
w = Worker2()
w.start()
w.join() |
st175830 | Solved by cbalioglu in post #4
st175831 | I am not able to reproduce your issue. Do you mind giving a bit more detail about your execution environment? Which version of PyTorch? Which operating system? |
st175832 | @cbalioglu I’m running it in a docker container.
Operating system: (screenshot)
PyTorch version: (screenshot)
When I run it on my MacBook, indeed, no problem occurs. |
st175833 | I tested it on a Fedora machine, and although it does not get terminated, it does get stuck on the aa = self.t(ss) line.
The problem is that you are calling self.tt.load_state_dict() in the parent process and then using self.tt in the forked process. There are known issues with "implicitly" passing tensors via fork to child processes. If you move the logic of __init__() into run(), you will mitigate the issue.
Overall my advice would be to avoid forks in all circumstances. Make sure to spawn your child processes so that they have their own clean state. As you have already experienced forking can be very fragile and unpredictable (and historically it long precedes multi-threading and was never meant to be used with multithreaded processes). |
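A minimal sketch of that change applied to the posted demo (it reuses the AC module defined in the original snippet, which is not repeated here, and switches to the spawn start method as suggested):
import torch
import torch.multiprocessing as mp

class Worker2(mp.Process):
    def run(self):
        # Build both networks inside the child process instead of in __init__,
        # so no tensors are implicitly carried across the process boundary.
        self.t = AC(10, 10)
        self.tt = AC(10, 10)
        self.tt.load_state_dict(self.t.state_dict())
        s = torch.ones(size=(1, 10))
        a = self.t(s)
        ss = torch.cat((s, s))
        aa = self.t(ss)  # no longer gets stuck

if __name__ == "__main__":
    mp.set_start_method("spawn")  # prefer spawn over the default fork
    w = Worker2()
    w.start()
    w.join()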
st175834 | DataParallel and DistributedDataParallel are working with no runtime errors, and network is loaded to the correct GPUs, but then the GPU usage is at 100% forever ( I tried waiting an hour max).
GPU: RTX 8000 (50GB of Memory) and no the memory is not full.
I’m pretty sure the code isn’t the issue since I downloaded different sample codes and they all cause the same issue. This is one of the codes I’ve tried, I tried each case, the one without any distributed training is the one that worked, both DistributedDataParallel and DataParallel both have the same issue described above.
System:
uname -a: Linux lqfaris 5.4.0-58-generic #64~18.04.1-Ubuntu SMP Wed Dec 9 17:11:11 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
# pytorch installed through:
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
$ conda list|grep torch
pytorch 1.9.0 py3.8_cuda10.2_cudnn7.6.5_0 pytorch
torchaudio 0.9.0 py38 pytorch
torchvision 0.10.0 py38_cu102 pytorch
$ pip list|grep torch
torch 1.9.0
torchaudio 0.9.0a0+33b2469
torchvision 0.10.0
Code to reproduce:
import torch.nn as nn
import torch
import time
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2048, 1024)
def forward(self, x):
x = self.fc1(x)
return x
net = Net()
net = net.cuda()
net = nn.DataParallel(net, device_ids=[0, 1])
net.train()
x = torch.randn(1 * 3 * 4 * 8, 2048).cuda()
for _ in range(10):
tis = time.time()
x = x.cuda()
print('net(x)')
net(x) # <------ stuck here
print(time.time() - tis)
Example for DistributedDataParallel
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
from tqdm.auto import tqdm
def find_free_port():
import socket
from contextlib import closing
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(('', 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return s.getsockname()[1]
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=0, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = str(find_free_port())
mp.spawn(train, nprocs=args.gpus, args=(args,))
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7 * 7 * 32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def train(gpu, args):
rank = args.nr * args.gpus + gpu
dist.init_process_group(backend='nccl', init_method='env://', world_size=args.world_size, rank=rank)
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 16 * args.world_size
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Wrap the model
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
# Data loading code
train_dataset = torchvision.datasets.MNIST(root='/data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,
num_replicas=args.world_size,
rank=rank)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
start = datetime.now()
total_step = len(train_loader)
for epoch in range(args.epochs):
for i, (images, labels) in tqdm(enumerate(train_loader)):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, args.epochs, i + 1, total_step,
loss.item()))
if gpu == 0:
print("Training complete in: " + str(datetime.now() - start))
if __name__ == '__main__':
main() |
st175835 | @FarisHijazi It's recommended to use DistributedDataParallel over DataParallel; could you please share sample code to reproduce the issue using DistributedDataParallel? |
st175836 | Hello @pritamdamania87, thanks for your reply. Yes, I'm aware that DDP is better than DP, but both have the exact same issue.
I added a DDP example in my post, and here's more code that I've tried that resulted in the same issue:
GitHub - yangkky/distributed_tutorial |
st175837 | @FarisHijazi Could you share which torch version you are using? I tried the DDP script on my local box using PyTorch 1.9 and it is running fine:
3276it [00:10, 309.25it/s]Epoch [2/2], Step [3300/3750], Loss: 0.4405
3371it [00:10, 309.47it/s]Epoch [2/2], Step [3400/3750], Loss: 0.2047
3468it [00:10, 314.98it/s]Epoch [2/2], Step [3500/3750], Loss: 0.3345
3596it [00:11, 308.55it/s]Epoch [2/2], Step [3600/3750], Loss: 0.3061
3691it [00:11, 309.97it/s]Epoch [2/2], Step [3700/3750], Loss: 0.2019
3750it [00:11, 317.59it/s]
Training complete in: 0:00:23.282688
Even the DataParallel script is working as expected:
net(x)
3.030186414718628
net(x)
0.0012230873107910156
net(x)
0.0010361671447753906
net(x)
0.0010747909545898438
net(x)
0.0009708404541015625
net(x)
0.0010142326354980469
net(x)
0.0009481906890869141
net(x)
0.0009622573852539062
net(x)
0.0009491443634033203
net(x)
0.0009527206420898438 |
st175838 | hmmm, that’s interesting, then maybe I have a cuda or cudnn issue
the exact torch and pytorch versions are listed in the question
pytorch 1.9.0 py3.8_cuda10.2_cudnn7.6.5_0 pytorch |
st175839 | @FarisHijazi Does the same stuckness issue occur when training with the Gloo backend? |
st175840 | still couldn’t resolve this issue, but I did get the amp_recipe.ipynb 22 to work with me. I get almost 2x speedup on RTX8000
same exact environment, the difference is that we don’t use apex in this code |
st175841 | I was using my code without any problem until now.
I tried running more than one process on a single machine.
That caused an error saying the port is already in use, which I solved by setting
CUDA_VISIBLE_DEVICES=1,2 python3 -m torch.distributed.launch --nproc_per_node=2 --master_port=$RANDOM
As I’ve already mentioned, I had no problem… until now…
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
Child process 12190 (local_rank 0) FAILED (exitcode 1)
Error msg: Process failed with exitcode 1
Without writing an error file to <N/A>.
While this DOES NOT affect the correctness of your application,
no trace information about the error will be available for inspection.
Consider decorating your top level entrypoint function with
torch.distributed.elastic.multiprocessing.errors.record. Example:
from torch.distributed.elastic.multiprocessing.errors import record
@record
def trainer_main(args):
# do train
**********************************************************************
warnings.warn(_no_error_file_warning_msg(rank, failure))
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 173, in <module>
main()
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 169, in main
run(args)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 621, in run
elastic_launch(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
***************************************
train.py FAILED
=======================================
Root Cause:
[0]:
time: 2021-08-16_10:44:53
rank: 0 (local_rank: 0)
exitcode: 1 (pid: 12190)
error_file: <N/A>
msg: "Process failed with exitcode 1"
=======================================
Other Failures:
[1]:
time: 2021-08-16_10:44:53
rank: 1 (local_rank: 1)
exitcode: 1 (pid: 12191)
error_file: <N/A>
msg: "Process failed with exitcode 1"
***************************************
I am running my code again with the "record" decorator on my train function, as the error told me to (but should I also add it to the validation function…?).
Not sure if this will fix the error, but I don't see many threads with the same error as mine… |
st175842 | Hi,
Were you able to run your training code with more than one process before?
The record decorator should be applied to the entrypoint of your script (e.g. def main(args)). It is not meant for fixing things, but for dumping the Python stack trace of your failed subprocess(es) to the log output so you can have more information about the problem.
Right now there is no chance for us to suggest anything since your log output does not contain any meaningful information to root cause the issue.
Also, besides the record decorator, you can use the new torch.distributed.run script in place of torch.distributed.launch, and set its --log-dir, --redirects, and --tee options to dump the stdout/stderr of your worker processes to a file. You can learn more about our new launcher script here. |
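For reference, a minimal sketch of where the decorator goes (the argument handling here is just a placeholder for your own script's setup):
import argparse
from torch.distributed.elastic.multiprocessing.errors import record

@record
def main(args):
    # training / validation code goes here
    ...

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int, default=0)
    main(parser.parse_args())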
st175843 | Hi, thanks for your reply!
Yes, I was able to run multiple processes using random ports without a problem…
I don't know what the error might be… |
st175844 | Ok. I would still recommend giving torch.distributed.run a try and see what log output you get for worker processes. |
st175845 | Oh right, yes, I will try that.
I ran the processes again… strangely it works now… so I have no idea what was wrong… |
st175846 | The multiprocessing best practices in the documentation state:
“The CUDA runtime does not support the fork start method; either the spawn or forkserver start method are required to use CUDA in subprocesses”
Does this mean that I can’t write a ddp training script that works on gpus with ‘fork’?
I haven’t found a clear answer for this and I’m not sure what CUDA runtime means in the docs. In my specific use case, I kinda have to use ‘fork’ so I can pass object like data with shared memory.
If so, what are the limitations of using mp.Process with fork method? |
st175847 | Hey @amirhf,
It’s OK to fork a process as long as the parent process has not yet created a CUDA runtime/context. The CUDA context will be created lazily when the process creates CUDA tensors or run CUDA operations.
In my specific use case, I kinda have to use ‘fork’ so I can pass object like data with shared memory.
I would expect PyTorch Tensors shared_memory also works in spawn mode. Did you hit any error when doing that? |
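For reference, a minimal sketch of that ordering (assuming a Linux box with at least 2 GPUs, where fork is the default start method): the parent only touches CPU tensors, and each forked child creates its own CUDA context.
import torch
import torch.multiprocessing as mp

def worker(rank, shared_cpu_tensor):
    # The first CUDA call happens here, inside the child process.
    local = shared_cpu_tensor.to(torch.device("cuda", rank))
    print(rank, local.sum().item())

if __name__ == "__main__":
    data = torch.randn(4, 4)
    data.share_memory_()  # parent stays on CPU: no CUDA context created yet
    procs = [mp.Process(target=worker, args=(r, data)) for r in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()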
st175848 | Hi,
I’m using DDP to train on multiple GPUs and I’m using a shared server. I sometimes get errors where one GPU process will run out of memory for the dataloader and error out, but the other GPUs will keep going. Eventually, this results in the entire job freezing, but not erroring out (and resulting in large server costs without any progress being made). Is there any way to specify with DDP to kill all GPUs if one fails?
Thanks! |
st175849 | Hey @EricWiener,
TorchElastic is designed to recover from such errors, see: Torch Distributed Elastic — PyTorch 1.9.0 documentation
If you just need the non-failing processes to crash when any peer hits OOM, you can set a timeout when calling init_process_group, and set the NCCL_ASYNC_ERROR_HANDLING env var when using the NCCL backend. See the timeout argument docstring for init_process_group at Distributed communication package - torch.distributed — PyTorch master documentation |
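A rough sketch of that combination (the timeout value and the env:// rendezvous settings are just examples):
import os
from datetime import timedelta
import torch.distributed as dist

# Must be set before the process group is created.
os.environ["NCCL_ASYNC_ERROR_HANDLING"] = "1"

dist.init_process_group(
    backend="nccl",
    init_method="env://",          # assumes MASTER_ADDR / MASTER_PORT are set
    rank=int(os.environ["RANK"]),
    world_size=int(os.environ["WORLD_SIZE"]),
    timeout=timedelta(minutes=5),  # surviving ranks error out instead of hanging forever
)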
st175850 | Hello folks, I was trying to compute a Hessian-vector product by calling backward twice with DDP. However, the second backward doesn't seem to gather the gradient as desired. The same code worked on a single GPU but gives a wrong value in the multi-GPU case (in my case 2 GPUs). This is also an autograd-related issue, so maybe also invite @albanD to take a look here.
Results on a single GPU, which are correct:
Running on rank 0
Hessian vector product: tensor([1., 1., 1., 1.], device='cuda:0', grad_fn=<CatBackward>)
Done!
Results on two GPUs, which are wrong:
Running on rank 0
Running on rank 1
Hessian vector product: tensor([0., 0., 0., 0.], device='cuda:0', grad_fn=<CatBackward>)
Hessian vector product: tensor([0., 0., 0., 0.], device='cuda:1', grad_fn=<CatBackward>)
Done!
Here is the minimal repro code. I initialized the networks with constant so that everything is deterministic here.
import torch
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.autograd as autograd
from utils.helper import setup, cleanup
from argparse import ArgumentParser
def zero_grad(params):
'''
Clean the gradient of each parameter
'''
for p in params:
if p.grad is not None:
p.grad.detach()
p.grad.zero_()
def collect_grad(params):
'''
Collect grads of parameters and concatenate them into a vector.
If grad is None, it will be filled with zeros
:param params: list of parameters
:return: vector
'''
grad_list = []
for p in params:
if p.grad is not None:
grad_list.append(p.grad.contiguous().view(-1))
del p.grad
else:
# replace None with zeros
grad_list.append(torch.zeros_like(p).view(-1))
return torch.cat(grad_list)
def subprocess_fn(rank, args):
setup(rank, args.num_gpus)
print(f'Running on rank {rank}')
D = nn.Linear(2, 1, bias=True).to(rank)
G = nn.Linear(1, 2, bias=True).to(rank)
# initialize weights
nn.init.constant_(D.weight, 2.0)
nn.init.constant_(D.bias, -1.0)
nn.init.constant_(G.weight, 4.0)
nn.init.constant_(G.bias, 1.0)
if args.distributed:
G = DDP(G, device_ids=[rank], broadcast_buffers=False)
D = DDP(D, device_ids=[rank], broadcast_buffers=False)
d_params = list(D.parameters())
g_params = list(G.parameters())
z = torch.ones((2, 1)).to(rank)
loss = D(G(z)).mean()
loss.backward(create_graph=True)
gradvec_d = collect_grad(d_params) # d{loss} / d{D}
zero_grad(g_params) # clean the grad before backward
autograd.backward(gradvec_d,
grad_tensors=torch.ones_like(gradvec_d),
inputs=g_params) # compute d{torch.dot(gradvec_d, vec)} / d{G}
hvp = collect_grad(g_params) # gather results
print('Hessian vector product: ', hvp)
cleanup()
if __name__ == '__main__':
torch.backends.cudnn.benchmark = True
parser = ArgumentParser()
parser.add_argument('--num_gpus', type=int, help='Number of GPUs', default=1)
args = parser.parse_args()
args.distributed = args.num_gpus > 1
if args.distributed:
mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
else:
subprocess_fn(0, args)
print('Done!') |
st175851 | I just realized that the reason is probably that DDP does a gradient-gathering operation across all devices every time backward() is called; this does not define a grad_fn and breaks the computation graph, so the gradient will not be accumulated in the second backward().
Do you folks have any idea how to get around this? Any thought will be appreciated. |
st175852 | Update:
I eventually managed to get around this by using autograd.grad for the first backward because autograd.grad won’t trigger DDP gradient synchronization. I attached my code below for people who have the same problem.
import torch
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.autograd as autograd
from utils.helper import setup, cleanup
from argparse import ArgumentParser
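# Note: collect_grad() used below is the same helper defined in the first snippet of this thread.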
def subprocess_fn(rank, args):
setup(rank, args.num_gpus)
print(f'Running on rank {rank}')
D = nn.Linear(2, 1, bias=True).to(rank)
G = nn.Linear(1, 2, bias=True).to(rank)
# initialize weights
nn.init.constant_(D.weight, 2.0)
nn.init.constant_(D.bias, -1.0)
nn.init.constant_(G.weight, 4.0)
nn.init.constant_(G.bias, 1.0)
if args.distributed:
G = DDP(G, device_ids=[rank], broadcast_buffers=False)
D = DDP(D, device_ids=[rank], broadcast_buffers=False)
d_params = list(D.parameters())
g_params = list(G.parameters())
z = torch.ones((2, 1)).to(rank)
loss = D(G(z)).mean()
grad_d = autograd.grad(loss, d_params, create_graph=True)
gradvec_d = torch.cat([g.contiguous().view(-1) for g in grad_d])
autograd.backward(gradvec_d,
grad_tensors=torch.ones_like(gradvec_d),
inputs=g_params) # compute d{torch.dot(gradvec_d, vec)} / d{G}
hvp = collect_grad(g_params) # gather results
print('Hessian vector product: ', hvp)
cleanup()
if __name__ == '__main__':
torch.backends.cudnn.benchmark = True
parser = ArgumentParser()
parser.add_argument('--num_gpus', type=int, help='Number of GPUs', default=1)
args = parser.parse_args()
args.distributed = args.num_gpus > 1
if args.distributed:
mp.spawn(subprocess_fn, args=(args, ), nprocs=args.num_gpus)
else:
subprocess_fn(0, args)
print('Done!') |
st175853 | Hey!
Interesting investigation.
Could you actually open an issue on github about this? Asking to add support natively (or raise a nice error if it doesn’t) and document workarounds.
Thanks! |
st175854 | Hi All,
Looking at the creation of a sub group using the new_group(ranks, backend) construct, I see it goes through _new_process_group_helper().
I am interested in routing this call to a custom backend, which I see is delegated to the registered process group creation method in the same method at L724.
However, it doesn’t seem to forward the participating ranks list to the method. Any help on how to get this information in the custom backend during new_group() creation?
Thanks, |
st175855 | Perhaps I am misunderstanding the question, but the rank is passed into the method and the rank argument is there in the code snippet you sent. For a third party backend you need to register it and perform init process group on all the ranks (Distributed communication package - torch.distributed — PyTorch master documentation). Then you can use the collectives as usual. |
st175856 | H-Huang:
For a third party backend you need to register it and perform init process group on all the ranks.
Thanks, yes, I am registering a third-party backend as in the document you’ve shared.
What I am referring to is the new_group() method in a ProcessGroup.
This allows creating a subgroup of the default process group by taking a list of ranks. This list is referred to as group_ranks in the _new_process_group_helper() method I pointed to above.
My question was that this group_ranks list is not passed to the custom backend method. Instead, only the rank and world_size are passed.
Seems like a missing feature here. |
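For context, a minimal sketch of the kind of call being discussed (the backend name is just a placeholder for a registered third-party backend):
import torch.distributed as dist

# Assumption: "my_backend" was registered via dist.Backend.register_backend(...)
# and init_process_group() has already run on every rank.
subgroup = dist.new_group(ranks=[0, 1], backend="my_backend")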
st175857 | Confirmed, this is a missing feature: add c10d dynamic loading mechanism and unit test by ftian1 · Pull Request #28068 · pytorch/pytorch · GitHub |
st175858 | I compile PyTorch v1.9.0 with CUDA 11.0 and NCCL 2.10.3
NCCL 2.10.3 Upgrade
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub && add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" && apt update && apt install -y --allow-change-held-packages libnccl2=2.10.3-1+cuda11.0 libnccl-dev=2.10.3-1+cuda11.0
compile PyTorch v1.9.0 from source
git clone https://github.com/pytorch/pytorch && cd pytorch && git checkout v1.9.0 && git submodule sync && git submodule update --init --recursive && sudo USE_SYSTEM_NCCL=1 TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0" python3 setup.py install
compile AWS-OFI-NCCL to support AWS EFA (fast cross machine communication in AWS)
git clone https://github.com/aws/aws-ofi-nccl.git $HOME/aws-ofi-nccl \
&& cd $HOME/aws-ofi-nccl \
&& git checkout aws \
&& ./autogen.sh \
&& ./configure --prefix=$HOME/aws-ofi-nccl/install \
--with-libfabric=/opt/amazon/efa/ \
--with-cuda=/usr/local/cuda \
--with-nccl=/tmp/pytorch/build/nccl \
--with-mpi=/opt/amazon/openmpi/ \
&& make -j$(nproc) && make install
Got the following error when using all_to_all APIs:
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/m5_transformers/models/switch_transformers/switch_transformer_layers.py", line 310, in forward
10.4.22.101: expert_inputs = self.Shuffle(torch.cat(route_inputs))
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/m5_transformers/models/switch_transformers/switch_transformer_layers.py", line 508, in Shuffle
10.4.22.101: return _Shuffle.apply(x)
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/m5_transformers/models/switch_transformers/switch_transformer_layers.py", line 398, in forward
10.4.22.101: return _shuffle(input_)
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/m5_transformers/models/switch_transformers/switch_transformer_layers.py", line 202, in _shuffle
10.4.22.101: output_tensor_list, input_tensor_list, mpu.get_data_parallel_group()
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 2478, in all_to_all
10.4.22.101: work = group.alltoall(output_tensor_list, input_tensor_list, opts)
10.4.22.101: RuntimeError: NCCL error in: /tmp/pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:38, unhandled system error, NCCL version 21.0.3
10.4.22.101: ncclSystemError: System call (socket, malloc, munmap, etc) failed. |
st175859 | the environment is as follows:
------------nvidia-smi------------
Mon Aug 9 22:18:40 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:16.0 Off | 0 |
| N/A 43C P0 43W / 300W | 0MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:00:17.0 Off | 0 |
| N/A 42C P0 43W / 300W | 0MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... On | 00000000:00:18.0 Off | 0 |
| N/A 42C P0 45W / 300W | 0MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-SXM2... On | 00000000:00:19.0 Off | 0 |
| N/A 44C P0 45W / 300W | 0MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 Tesla V100-SXM2... On | 00000000:00:1A.0 Off | 0 |
| N/A 43C P0 43W / 300W | 0MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 Tesla V100-SXM2... On | 00000000:00:1B.0 Off | 0 |
| N/A 43C P0 44W / 300W | 0MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 6 Tesla V100-SXM2... On | 00000000:00:1C.0 Off | 0 |
| N/A 42C P0 43W / 300W | 0MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 7 Tesla V100-SXM2... On | 00000000:00:1D.0 Off | 0 |
| N/A 45C P0 44W / 300W | 0MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
------------python3 --version------------
Python 3.6.9
------------nvcc --version------------
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
------------nvcc --version------------
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
------------python3 -c import torch; print(torch.__version__)------------
1.9.0a0+gitd69c22d
2021-08-09 22:18:41,156 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
------------python3 -c import torch;print(torch.cuda.nccl.version())------------
3003
------------collect environment------------
--2021-08-09 22:18:41-- https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16993 (17K) [text/plain]
Saving to: 'collect_env.py'
0K .......... ...... 100% 64.9M=0s
2021-08-09 22:18:41 (64.9 MB/s) - 'collect_env.py' saved [16993/16993]
Collecting environment information...
PyTorch version: 1.9.0a0+gitd69c22d
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.21.1
Libc version: glibc-2.25
Python version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-4.14.200-155.322.amzn2.x86_64-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: True
CUDA runtime version: 11.0.221
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB
Nvidia driver version: 450.80.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytorch-ignite==0.4.6
[pip3] torch==1.9.0a0+gitd69c22d
[conda] Could not collect |
st175860 | all_reduce also hits this issue:
10.4.22.101: ip-10-4-22-101:744:3711 [6] NCCL INFO Channel 01 : 14[a01c0] -> 15[a01d0] via P2P/IPC/read
10.4.22.101: self.overflow = self.overflow_checker.check_using_norm(norm_groups)
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/deepspeed/runtime/utils.py", line 102, in check_using_norm
10.4.22.101: dist.all_reduce(cuda_overflow, op=torch.distributed.ReduceOp.MAX)
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 1206, in all_reduce
10.4.22.101: work = default_pg.allreduce([tensor], opts)
10.4.22.101: RuntimeError: NCCL error in: /tmp/pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 21.0.3 |
st175861 | Finally, we found that the AWS EFA NCCL library doesn't support v2.10.3 yet. It shows this error:
10.4.2.84: ip-10-4-2-84:853:853 [6] find_ofi_provider:543 NCCL WARN NET/OFI Couldn't find any optimal provider
10.4.2.84: ip-10-4-2-84:848:848 [1] NCCL INFO NET/IB : No device found.
10.4.2.84: ip-10-4-2-84:848:848 [1] NCCL INFO NET/Socket : Using [0]eth0:10.4.2.84<0> [1]eth1:10.4.8.96<0> [2]eth2:10.4.28.87<0> [3]eth3:10.4.18.220<0>
10.4.2.84: ip-10-4-2-84:848:848 [1] NCCL INFO Using network Socket
10.4.2.84: ip-10-4-2-84:852:852 [5] NCCL INFO NET/IB : No device found.
10.4.2.84: ip-10-4-2-84:852:852 [5] NCCL INFO NET/Socket : Using [0]eth0:10.4.2.84<0> [1]eth1:10.4.8.96<0> [2]eth2:10.4.28.87<0> [3]eth3:10.4.18.220<0>
10.4.2.84: ip-10-4-2-84:852:852 [5] NCCL INFO Using network Socket
10.4.2.84: ip-10-4-2-84:851:851 [4] NCCL INFO NET/IB : No device found.
10.4.2.84: ip-10-4-2-84:851:851 [4] NCCL INFO NET/Socket : Using [0]eth0:10.4.2.84<0> [1]eth1:10.4.8.96<0> [2]eth2:10.4.28.87<0> [3]eth3:10.4.18.220<0>
10.4.2.84: ip-10-4-2-84:851:851 [4] NCCL INFO Using network Socket
10.4.2.84: ip-10-4-2-84:853:853 [6] NCCL INFO NET/IB : No device found.
10.4.2.84: ip-10-4-2-84:853:853 [6] NCCL INFO NET/Socket : Using [0]eth0:10.4.2.84<0> [1]eth1:10.4.8.96<0> [2]eth2:10.4.28.87<0> [3]eth3:10.4.18.220<0>
10.4.2.84: ip-10-4-2-84:853:853 [6] NCCL INFO Using network Socket
10.4.22.101: ip-10-4-22-101:743:743 [5] NCCL INFO Bootstrap : Using eth0:10.4.22.101<0>
10.4.22.101: ip-10-4-22-101:740:740 [2] NCCL INFO Bootstrap : Using eth0:10.4.22.101<0> |
st175862 | The following combination can make allreduce() work, but alltoall() still failed:
NCCL v2.7.8 + PyTorch v1.9.0 + CUDA 11.0
NCCL_SOCKET_IFNAME is set as “eth”, but our EFA has four eth:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.4.2.84 netmask 255.255.224.0 broadcast 10.4.31.255
inet6 fe80::437:f3ff:fe3a:8529 prefixlen 64 scopeid 0x20<link>
ether 06:37:f3:3a:85:29 txqueuelen 1000 (Ethernet)
RX packets 39803458 bytes 99075257349 (92.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 19278512 bytes 35978106875 (33.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.4.8.96 netmask 255.255.224.0 broadcast 10.4.31.255
inet6 fe80::4a0:51ff:fecd:2c15 prefixlen 64 scopeid 0x20<link>
ether 06:a0:51:cd:2c:15 txqueuelen 1000 (Ethernet)
RX packets 325113 bytes 2433198989 (2.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 105789 bytes 7092566 (6.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.4.28.87 netmask 255.255.224.0 broadcast 10.4.31.255
inet6 fe80::4de:c7ff:fe3b:1595 prefixlen 64 scopeid 0x20<link>
ether 06:de:c7:3b:15:95 txqueuelen 1000 (Ethernet)
RX packets 863069 bytes 6834097406 (6.3 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 298325 bytes 19788074 (18.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.4.18.220 netmask 255.255.224.0 broadcast 10.4.31.255
inet6 fe80::461:3dff:fead:19bd prefixlen 64 scopeid 0x20<link>
ether 06:61:3d:ad:19:bd txqueuelen 1000 (Ethernet)
RX packets 860674 bytes 6832104026 (6.3 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 293308 bytes 19451440 (18.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 |
st175863 | I tried NCCL v2.9.9 + PyTorch v1.9.0 + CUDA 11.0 + AWS-OFI-NCCL (aws branch), alltoall() operation still failed
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/m5_transformers/models/switch_transformers/switch_transformer_layers.py", line 310, in forward
10.4.22.101: expert_inputs = self.Shuffle(torch.cat(route_inputs))
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/m5_transformers/models/switch_transformers/switch_transformer_layers.py", line 508, in Shuffle
10.4.22.101: return _Shuffle.apply(x)
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/m5_transformers/models/switch_transformers/switch_transformer_layers.py", line 398, in forward
10.4.22.101: return _shuffle(input_)
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/m5_transformers/models/switch_transformers/switch_transformer_layers.py", line 202, in _shuffle
10.4.22.101: output_tensor_list, input_tensor_list, mpu.get_data_parallel_group()
10.4.22.101: File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 2478, in all_to_all
10.4.22.101: work = group.alltoall(output_tensor_list, input_tensor_list, opts)
10.4.22.101: RuntimeError: NCCL error in: /tmp/pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:38, unhandled system error, NCCL version 20.9.9 |
st175864 | We used the NCCL tests (GitHub - NVIDIA/nccl-tests) and found the following failure when using alltoall():
Starting a DeepSpeed Training
+ cd /fsx/hchaoyan/m5/nccl-tests
++ which mpirun
+ /usr/local/mpi/bin/mpirun -allow-run-as-root --mca plm_rsh_no_tree_spawn 1 -x FI_PROVIDER=efa -x NCCL_SOCKET_IFNAME=eth -x FI_EFA_USE_DEVICE_RDMA=1 -x RDMAV_FORK_SAFE=1 -x LD_LIBRARY_PATH=/opt/nccl/build/lib:/usr/local/cuda/lib64:/opt/amazon/efa/lib64:/opt/amazon/openmpi/lib64:/opt/aws-ofi-nccl/lib:/usr/lib:/usr/local/lib:/usr/local/lib:/usr/local/mpi/lib:/usr/local/mpi/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 -x NCCL_DEBUG=WARN -bind-to none -x NCCL_MIN_NCHANNELS=8 -x NCCL_ALGO=Ring -x OMP_NUM_THREADS=8 -x NCCL_NSOCKS_PERTHREAD=8 -x NCCL_SOCKET_NTHREADS=8 -n 16 -N 8 --mca pml '^cm' --hostfile /job/hostfile -mca btl tcp,self --mca btl_tcp_if_exclude lo,docker0 ./build/alltoall_perf -b 0.5G -e 2G -f 2 -g 1 -c 1 -n 10
Warning: Permanently added '[10.4.22.101]:2022' (RSA) to the list of known hosts.
# nThread 1 nGpus 1 minBytes 536870912 maxBytes 2147483648 step: 2(factor) warmup iters: 5 iters: 10 validation: 1
#
# Using devices
# Rank 0 Pid 311 on ip-10-4-2-84 device 0 [0x10] A100-SXM4-40GB
# Rank 1 Pid 312 on ip-10-4-2-84 device 1 [0x10] A100-SXM4-40GB
# Rank 2 Pid 313 on ip-10-4-2-84 device 2 [0x20] A100-SXM4-40GB
# Rank 3 Pid 314 on ip-10-4-2-84 device 3 [0x20] A100-SXM4-40GB
# Rank 4 Pid 315 on ip-10-4-2-84 device 4 [0x90] A100-SXM4-40GB
# Rank 5 Pid 316 on ip-10-4-2-84 device 5 [0x90] A100-SXM4-40GB
# Rank 6 Pid 319 on ip-10-4-2-84 device 6 [0xa0] A100-SXM4-40GB
# Rank 7 Pid 321 on ip-10-4-2-84 device 7 [0xa0] A100-SXM4-40GB
# Rank 8 Pid 286 on ip-10-4-22-101 device 0 [0x10] A100-SXM4-40GB
# Rank 9 Pid 287 on ip-10-4-22-101 device 1 [0x10] A100-SXM4-40GB
# Rank 10 Pid 288 on ip-10-4-22-101 device 2 [0x20] A100-SXM4-40GB
# Rank 11 Pid 289 on ip-10-4-22-101 device 3 [0x20] A100-SXM4-40GB
# Rank 12 Pid 290 on ip-10-4-22-101 device 4 [0x90] A100-SXM4-40GB
# Rank 13 Pid 291 on ip-10-4-22-101 device 5 [0x90] A100-SXM4-40GB
# Rank 14 Pid 292 on ip-10-4-22-101 device 6 [0xa0] A100-SXM4-40GB
# Rank 15 Pid 296 on ip-10-4-22-101 device 7 [0xa0] A100-SXM4-40GB
NCCL version 2.9.9+cuda11.0
#
# out-of-place in-place
# size count type redop time algbw busbw error time algbw busbw error
# (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
ip-10-4-22-101:286:351 [0] transport/net_socket.cc:332 NCCL WARN Call to accept failed : Too many open files
ip-10-4-22-101: Test NCCL failure alltoall.cu:76 'unhandled system error'
.. ip-10-4-22-101 pid 286: Test failure common.cu:505
.. ip-10-4-22-101 pid 286: Test failure common.cu:694
.. ip-10-4-22-101 pid 286: Test failure alltoall.cu:111
.. ip-10-4-22-101 pid 286: Test failure common.cu:722
.. ip-10-4-22-101 pid 286: Test failure common.cu:1083
.. ip-10-4-22-101 pid 286: Test failure common.cu:925
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun.real detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[52734,1],8]
Exit code: 3
--------------------------------------------------------------------------
+ '[' 3 -eq 0 ']'
+ log 'Writing exit code 1 to /tmp/batch-exit-code and shutting down supervisord'
+ echo 'mpi-run.sh - Writing exit code 1 to /tmp/batch-exit-code and shutting down supervisord'
mpi-run.sh - Writing exit code 1 to /tmp/batch-exit-code and shutting down supervisord
+ echo 1
++ cat /tmp/supervisord.pid
+ kill 7
+ exit 0 |
st175865 | We finally solved this problem by raising the open-file limit ('ulimit -n') when launching the Docker containers.
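For anyone hitting the same "Too many open files" error: the limit has to be raised where the container is started, for example with Docker's --ulimit nofile option on docker run. As a sanity check, the snippet below (a sketch, not part of the original fix) prints the limits seen inside the training process and raises the soft limit up to the hard limit; it cannot go beyond whatever hard limit the container was launched with:

import resource

# Inspect the open-file limits (soft, hard) visible to this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE: soft={soft}, hard={hard}")

# Raise the soft limit as far as the hard limit allows; the hard limit itself
# is fixed when the container is launched.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))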
st175866 | Hi, I just started learning how to do distributed training in PyTorch. I went through the basic tutorial here: Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.9.0+cu102 documentation 4 and also read the DDP paper, but I still have some questions.
If I spawn 2 processes on 1 machine with 2 GPUs, then AFAIK each process holds a model replica created during DDP construction, and the all_reduce synchronization happens during loss.backward(). Which of the 2 processes actually does the reduction work? The processes clearly have to communicate during all_reduce, but is there a 'major' process that does more work, such as averaging the gradients?
I am currently using GLOO. According to Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 1, GLOO on GPU has limited functionality, with just 2 supported functions: broadcast and all_reduce. Are these 2 functions enough for a basic DDP example on GPU to work correctly?
Also, how can I check whether my all_reduce runs on the GPU rather than the CPU? And could I use the GLOO backend to launch 2 processes on CPU to do DDP?
Much appreciated. I might have misinterpreted some of the material, so please help me out.
Edit:
I know I can use nvidia-smi to check GPU usage and see whether both GPUs are working. The thing is, I have 2 different machines: one has 3 GPUs and the other has 2 GPUs. My simple DDP script is set up to use only 2 processes. On the first machine, nvidia-smi shows around 1.5 GB and 800 MB of GPU memory in use on the two GPUs while the script runs. But on the second machine, with the same script, I only see a 42 MB GPU-memory increment on each card. That is why I am wondering whether the work actually runs on the GPUs, or whether it was silently offloaded to the CPU because the GPUs were somehow not configured correctly when I built PyTorch from source.
st175867 | Solved by Yanli_Zhao in post #8
By default, gloo uses a ring algorithm (gloo/allreduce.cc at master · facebookincubator/gloo · GitHub). Gloo does the all_reduce on the host, while NCCL does the all_reduce on the devices.
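For completeness, here is a minimal sketch of DDP on CPU with the gloo backend (the address, port, and model are placeholders); since gloo reduces on the host, the gradient all_reduce triggered by backward() runs entirely on the CPU:

import os
import torch
import torch.nn as nn
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # CPU model replica; omitting device_ids makes DDP treat this as CPU training.
    ddp_model = DDP(nn.Linear(10, 5))
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    loss = ddp_model(torch.randn(8, 10)).sum()
    loss.backward()      # gloo all-reduces the gradients on the host here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)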