st175368 | Could not find anything interesting in kern.log, syslog, or dmesg. But what I did find was this GitHub issue where people have a similar problem: gpu-burn etc. cannot make the system crash and the PSU is powerful enough (on paper). They say that PyTorch causes a short but large power surge which can make the PSU fail, even if it is big enough to support the system under heavy load in other circumstances (gpu-burn etc.).
I put the two 2080 Tis into a system with a 1600 W PSU and they seemed to work fine. The PSU in the other system was 850 W. Obviously I cannot guarantee that this is the cause/solution, since many variables changed, but as other people have had similar experiences I will take it for now (and will post again here if there are new developments). Thanks for your help!
(and btw, since many people have the same problem which seems to originate in PyTorch, I think it would be good if the relevant developers could have a look into that) |
st175369 | AljoSt:
(and btw, since many people have the same problem which seems to originate in PyTorch, I think it would be good if the relevant developers could have a look into that)
Since a 2080 Ti can draw more than 300 W, an 850 W PSU might not be enough for the system.
I'm not sure if artificially limiting the GPU is a valid use case. |
st175370 | Ah interesting, I didn't know that, thank you. I have difficulty finding the max power draw (the highest I saw in some random blog post was around 330 W). Is it vendor-specific (so would it differ, say, between an MSI 2080 Ti and an EVGA 2080 Ti?) or is there a "fixed" number? |
st175371 | It should be vendor specific, as some vendors might increase the clock speeds and add a better cooling solution to the device. You won’t see much difference, but it’s not strictly a single number. |
st175372 | Hi guys,
just wanted to let you know it is not related only to the PyTorch framework; it also happens with TensorFlow.
It feels like modern power supplies can't handle 300 W + 300 W simultaneously during training. I now run my two 2080 Tis with an additional power supply (with a cheap $2 synchronizer) and everything works like a charm. |
st175373 | Hi mstrfx, may I ask which PSU you bought? I am facing the same issue with a 1200 W PSU. |
st175374 | It must be the PSU…
So I have two 2080 Ti GPUs (from EVGA) and a CORSAIR HX Series 1200 W ATX12V 2.4/EPS12V 2.92 80 Plus Platinum Modular power supply. It sometimes shuts down the PC when I run the two GPUs for training (separately or together), and it usually happens when one model reaches the epoch of that fold.
Now I have switched to an EVGA GP Series SuperNOVA 1000 W ATX 80 Plus Gold Fully Modular power supply. It works well without any PC shutdowns. I made the switch because my friend has the same setup as mine and he uses this PSU. It turns out that it was indeed the case, but it is still surprising to see that 1200 W cannot work while 1000 W can.
Not sure if it was a defect in the 1200 W PSU though. |
st175375 | The linked script describes using python -m torch.distributed.launch … to spawn the processes, but I see that the PyTorch ImageNet example does not use it and is able to spawn processes too, so what's the point of it? I see that a lot of third-party open-source training repos also call torch.distributed.launch in their bash scripts for training. I am not very clear on what it does; it seems like it sets environment variables?
PyTorch ImageNet example without the usage of the launch file: examples/imagenet at master · pytorch/examples · GitHub
It would be very nice if there were an example using the launch file in the PyTorch examples, to see the difference. |
st175376 | Can anyone confirm that what is shown here (Getting Started with Distributed Data Parallel — PyTorch Tutorials 1.10.0+cu111 documentation) is actually a different way to do the same thing we can do using torch.distributed.launch? |
st175377 | torch.distributed.launch is a CLI tool that helps you create k copies of your training script (one per process). And as you correctly pointed out, it sets certain env vars that DDP uses to get information about rank, world size, and so on. The closest analogy is what mpirun is to MPI applications. torch.distributed.launch complements DDP by making it easy for you to run your DDP training script.
It is one way to launch ddp scripts but not the only way. |
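To make the env-var handoff concrete, here is a hedged sketch (not from the post above) of a minimal script driven by the 1.x launcher; the --local_rank argument and the nccl backend are assumptions about a typical single-node, multi-GPU setup:
# launched as: python -m torch.distributed.launch --nproc_per_node=4 train.py
import argparse

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # passed in by the launcher
args = parser.parse_args()

# MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE are set as env vars by the launcher,
# so the default env:// init method picks them up without extra arguments.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(args.local_rank)

model = torch.nn.Linear(10, 10).cuda(args.local_rank)
ddp_model = DDP(model, device_ids=[args.local_rank])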
st175378 | How does it make it easier? Is there an example, beyond the PyTorch ImageNet example, that shows how torch.distributed.launch could be a better option than just passing the arguments directly, as is done for main.py in the ImageNet example? |
st175379 | Hi,
I am trying to profile an application using DistributedDataParallel Module.
Is there a specific set of guidelines to measure the communication overheads (allreduce time, broadcast time, etc)?
I used with torch.autograd.profiler.profile(use_cuda=True), but I didn't get information about these calls. It may only track basic calls, not functions like allreduce or broadcast happening in the ProcessGroup (NCCL) layer.
Please correct me if I am wrong.
Thank You,
Vibhatha. |
st175380 | Hey @Vibhatha_Abeykoon, DDP does not work with the autograd profiler yet, but this is on our roadmap. Before that, will nvprof be able to serve your use case? |
st175381 | @mrshenli Sorry for the late response. Yes, it could also be useful. I will check.
Thank You. |
st175382 | Profiling with DDP is enabled now, i.e. the collectives will be profiled. You can simply run the DDP model under the profiler as normal:
with torch.profiler.profile():
    ddp(inputs) |
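For the original question about measuring allreduce/broadcast time, a slightly fuller hedged sketch; the sort key and event names are illustrative, and ddp / inputs refer to the snippet above:
import torch

with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU,
                torch.profiler.ProfilerActivity.CUDA],
) as prof:
    out = ddp(inputs)        # the wrapped DDP model from above
    out.sum().backward()     # gradient allreduce is triggered here

# collective kernels (e.g. nccl:all_reduce) appear as rows in this table
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))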
st175383 | Hi,
I'm trying to get DistributedDataParallel working for my application, but I run into the following exception when I start the training. It happens when the backward() function of the loss is called:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1024]] is at version 5; expected version 4 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
This is my model:
class Model(nn.Module):
    def __init__(self, output_channels: int, dropout: float = 0.5):
        super().__init__()
        self.conv_layer0 = self._make_conv_layer(1, 64)
        self.conv_layer1 = self._make_conv_layer(64, 128)
        self.conv_layer2 = self._make_conv_layer(128, 256)
        self.conv_layer3 = self._make_conv_layer(256, 512)
        self.conv_layer4 = self._make_conv_layer(512, 1024)
        self.max_pooling = nn.MaxPool3d((2, 2, 2))
        self.headnet = self._make_headnet(2 * 2 * 2 * 1024, 2048, output_channels, dropout=dropout)

    @staticmethod
    def _make_conv_layer(in_c: int, out_c: int):
        conv_layer = nn.Sequential(
            nn.Conv3d(in_c, out_c, kernel_size=3),
            nn.BatchNorm3d(out_c),
            nn.LeakyReLU(),
            nn.Conv3d(out_c, out_c, kernel_size=3),
            nn.BatchNorm3d(out_c),
            nn.LeakyReLU(),
        )
        return conv_layer

    @staticmethod
    def _make_headnet(in_c1: int, out_c1: int, out_head: int, dropout: float) -> nn.Sequential:
        headnet = nn.Sequential(
            nn.Dropout(p=dropout),
            nn.Linear(in_c1, out_c1),
            nn.LeakyReLU(),
            nn.Linear(out_c1, out_c1),
            nn.LeakyReLU(),
            nn.Linear(out_c1, out_head),
        )
        return headnet

    def forward(self, inputtensor):
        """
        Forward pass through the network
        :param inputtensor: Input tensor
        """
        out = self.conv_layer0(inputtensor)
        out = self.conv_layer1(out)
        out = self.conv_layer2(out)
        out = self.max_pooling(out)
        out = self.conv_layer3(out)
        out = self.conv_layer4(out)
        out = self.max_pooling(out)
        out = out.reshape(out.size(0), -1)  # flatten
        out = self.headnet(out)
        out = F.normalize(out, p=2, dim=1)
        return out
I can't see any inplace operation, so I'm running out of ideas. I would be happy if someone could point me in a direction to look.
The model was running fine with DataParallel.
Best,
Thorsten |
st175385 | I just found out that the training starts if I set broadcast_buffers=False in DistributedDataParallel, but I’m not sure what the option does… |
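For reference, a minimal hedged sketch of what that flag changes (model and rank are placeholders for a one-process-per-GPU setup): with the default broadcast_buffers=True, DDP broadcasts module buffers such as BatchNorm running statistics from rank 0 at every forward pass, which rewrites them in place.
from torch.nn.parallel import DistributedDataParallel as DDP

ddp_model = DDP(
    model,
    device_ids=[rank],
    broadcast_buffers=False,  # each rank keeps its own local buffers (e.g. BatchNorm running stats)
)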
st175386 | Related issue: Distributed: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2048]] is at version 4; expected version 3 instead · Issue #62474 · pytorch/pytorch · GitHub
It may be related to the BatchNorm layers, and replacing those with SyncBatchNorm — PyTorch 1.9.1 documentation would help |
st175387 | Hi Howard,
thanks for helping me out!
Replacing nn.BatchNorm3d(out_channels) with nn.SyncBatchNorm(out_channels) worked.
I noticed that there is no nn.SyncBatchNorm3d. Do I need to take any extra measures here?
Best,
Thorsten |
st175388 | You can specify the dimensionality of your input to SyncBatchNorm, so as long as the dimensions are aligned it will be fine. Here is the follow-up issue to track this: BatchNorm runtimeError: one of the variables needed for gradient computation has been modified by an inplace operation · Issue #66504 · pytorch/pytorch · GitHub, as well as some additional workarounds. |
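As an illustration, a hedged sketch of the usual conversion path; Model refers to the network in the earlier post, and output_channels=10 plus the rank variable are assumptions:
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

model = Model(output_channels=10)
# convert_sync_batchnorm walks the module tree and replaces every
# BatchNorm1d/2d/3d layer (including the 3d ones above) with SyncBatchNorm
model = nn.SyncBatchNorm.convert_sync_batchnorm(model).to(rank)
ddp_model = DDP(model, device_ids=[rank])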
st175389 | When the communication graph between the processes is connected (here, a ring of sends and receives), the program gets stuck.
It seems like a deadlock, but I don't know how to figure it out.
I wrote a simple demo to reproduce it.
import os

import torch
import torch.distributed as dist
from torch.multiprocessing import Process


def run(rank, size):
    tensor = torch.zeros(1)
    rec_tensor = torch.zeros(1)
    tensor += rank
    if rank == 0:
        dist.send(tensor=tensor, dst=1)
        dist.recv(tensor=rec_tensor, src=2)
    elif rank == 1:
        dist.send(tensor=tensor, dst=2)
        dist.recv(tensor=rec_tensor, src=0)
    else:
        dist.send(tensor=tensor, dst=0)
        dist.recv(tensor=rec_tensor, src=1)
    print("Rank ", rank, " has data ", rec_tensor)


def init_processes(rank, size, fn, backend="gloo"):
    """Initialize the distributed environment."""
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)


if __name__ == "__main__":
    size = 3
    processes = []
    torch.multiprocessing.set_start_method("spawn")
    for rank in range(size):
        p = Process(target=init_processes, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join() |
st175391 | The docs point out that the send is applied synchronously, so the process would wait until the tensor is received. If you want to use the async API use isend and irecv and make sure the tensors are available before using them. |
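A hedged sketch of that async variant for the ring demo above; the rank-to-peer dictionaries are just a compact way to express the same send/receive pattern:
import torch
import torch.distributed as dist


def run(rank, size):
    tensor = torch.zeros(1) + rank
    rec_tensor = torch.zeros(1)
    send_to = {0: 1, 1: 2, 2: 0}[rank]
    recv_from = {0: 2, 1: 0, 2: 1}[rank]
    # post both operations first, then wait on both, so no rank blocks the ring
    send_req = dist.isend(tensor=tensor, dst=send_to)
    recv_req = dist.irecv(tensor=rec_tensor, src=recv_from)
    send_req.wait()
    recv_req.wait()
    print("Rank", rank, "has data", rec_tensor)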
st175392 | The Hogwild! example gives 99% accuracy, but when I upgrade it to a multi-GPU version, it gives 11% accuracy.
The difference is as follows:
# main.py
model = Net() # not to specific device any more
model.share_memory()
processes = []
for rank in range(args.num_processes):
    local_device = torch.device(rank % 2)
    p = mp.Process(target=train, args=(rank, args, model, local_device,
                                       dataset1, kwargs))
    p.start()
    processes.append(p)
And I move the model to device in the subprocesses.
# train.py
def train(rank, args, model, device, dataset, dataloader_kwargs):
    model = model.to(device)  # move to specific device in the sub-process
    torch.manual_seed(args.seed + rank)
    train_loader = torch.utils.data.DataLoader(dataset, **dataloader_kwargs)
    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
    for epoch in range(1, args.epochs + 1):
        train_epoch(epoch, args, model, device, train_loader, optimizer)
It seems the model is not shared any more.
Where are the mistakes?
What are the correct steps? |
st175393 | Answer to your questions:
1. Your model is indeed put on multiple devices, but there is no synchronization of gradients during training, which is likely causing the accuracy loss.
2. Look into DDP (Distributed Data Parallel — PyTorch 1.9.1 documentation), which provides a framework for distributed training across multiple GPUs and multiple machines.
Example modifications of your example to fit DDP (I did not test locally, may have typos):
# main.py
model = Net()  # not to specific device any more
processes = []
for rank in range(args.num_processes):
    dist.init_process_group("gloo", rank=rank, world_size=2)
    p = mp.Process(target=train, args=(rank, args, model, dataset1, kwargs))
    p.start()
    processes.append(p)

# train.py
def train(rank, args, model, dataset, dataloader_kwargs):
    ddp_model = DDP(model, device_ids=[rank])
    torch.manual_seed(args.seed + rank)
    train_loader = torch.utils.data.DataLoader(dataset, **dataloader_kwargs)
    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
    for epoch in range(1, args.epochs + 1):
        train_epoch(epoch, args, model, device, train_loader, optimizer) |
st175394 | @H-Huang Hi, thank you for your kind help. However, it got stuck at dist.init_process_group('nccl', rank=rank, world_size=2). |
st175395 | CMIIW, but I think DDP is not suitable for Hogwild training?
Because DDP basically synchronizes the gradient across all nodes/processes during the backward operation using all reduce. Hence, all of the models in different processes will have the same gradient and have the same weight at every step of the training (after optimizer step). If you indeed want to use DDP, I think you should create the model inside the train function, not being passed when creating the processes in the main function. Because in the latter case, the model tensor will be treated as a shared memory and it is a bit pointless for all processes to update a shared memory with the same value?
Meanwhile, the idea of Hogwild is that sparse gradient updates (because of multiple processes potentially trying to read and write to the shared memory at the same time) can indeed converge and improve the performance. Hence, it is expected for diff processes to update the model with diff gradients (unlike what happens when using DDP).
To reply to the main question:
I don’t think you can write model = model.to(device)? Because all of the processes are essentially sharing the same model?
What dataset are you using? And perhaps how do you implement the train_epoch function? I’m not sure about why there’s a huge drop in accuracy, but do you think it sort of follows the experiment mentioned in the original Hogwild paper?
I believe they only parallelize using multiple cores in the original Hogwild paper (but I may misremember this). If you want to use multiple GPUs, I think you should try another workaround like using parameter server, or having multiple local copies of the model (one per process) that occasionally get synced with the one in shared memory.
If you only want to increase the accuracy, I think it may be better to go ahead with DDP since synchronous training in general have a better track record (easier to train) as compared to using Hogwild and some other styles of async training. |
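For the last workaround mentioned (per-process local GPU copies that occasionally sync with a model kept in shared memory), a rough hedged sketch rather than a tested recipe; Net, args.sync_every, and the lock-free write-back are assumptions:
import torch
import torch.nn.functional as F
import torch.optim as optim


def train(rank, args, shared_model, dataset, dataloader_kwargs):
    device = torch.device(f"cuda:{rank % torch.cuda.device_count()}")
    local_model = Net().to(device)
    local_model.load_state_dict(shared_model.state_dict())  # start from the shared weights
    optimizer = optim.SGD(local_model.parameters(), lr=args.lr, momentum=args.momentum)
    loader = torch.utils.data.DataLoader(dataset, **dataloader_kwargs)
    for step, (data, target) in enumerate(loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        loss = F.nll_loss(local_model(data), target)
        loss.backward()
        optimizer.step()
        if step % args.sync_every == 0:
            # Hogwild-style: write back to the shared CPU model without locking, then re-read
            shared_model.load_state_dict(
                {k: v.cpu() for k, v in local_model.state_dict().items()})
            local_model.load_state_dict(shared_model.state_dict())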
st175396 | @stevenwjy Hi, thank you so much for your kind reply. The code is from this repo. I am trying to extend it to a multi-GPU version. I think my modification is incorrect too (as @H-Huang noted). However, I don't know what the proper way is. |
st175397 | model = model.to(device) # move to specific device in the sub-process
Hmm… I'm actually not too familiar with this, but I assume that in the beginning your model was in shared memory, and eventually you moved it onto the device, which I guess means the new model is no longer shared across processes? |
st175398 | Hi, I'm trying to run the tutorial example on two machines.
One is my local Mac (IP: 192.168.1.57); the other is a docker container (Ubuntu) on a Linux server (server IP: 192.168.60.67). I use a VPN to reach the server from my Mac.
When creating the container, I mapped its port 60000 (available) to the same port of the server. I want to first create a remote parameter-server process (rank 0) in the container, and a local worker process (rank 1) on my Mac. Then the worker will send some objects to the server through the specified address and port ("192.168.60.67:60000").
The code:
#!/usr/bin/env python
# coding:utf-8
import argparse
import os
import time
from threading import Lock
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
from torch.distributed.optim import DistributedOptimizer
from torch.distributed.rpc import BackendType
from torchvision import datasets, transforms
# --------- MNIST Network to train, from pytorch/examples -----
class Net(nn.Module):
def __init__(self, num_gpus=0):
super(Net, self).__init__()
print(f"Using {num_gpus} GPUs to train")
self.num_gpus = num_gpus
device = torch.device(
"cuda:0" if torch.cuda.is_available() and self.num_gpus > 0 else "cpu")
print(f"Putting first 2 convs on {str(device)}")
# Put conv layers on the first cuda device, or CPU if no cuda device
self.conv1 = nn.Conv2d(1, 32, 3, 1).to(device)
self.conv2 = nn.Conv2d(32, 64, 3, 1).to(device)
# Put rest of the network on the 2nd cuda device, if there is one
if "cuda" in str(device) and num_gpus > 1:
device = torch.device("cuda:1")
print(f"Putting rest of layers on {str(device)}")
self.dropout1 = nn.Dropout2d(0.25).to(device)
self.dropout2 = nn.Dropout2d(0.5).to(device)
self.fc1 = nn.Linear(9216, 128).to(device)
self.fc2 = nn.Linear(128, 10).to(device)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
# Move tensor to next device if necessary
next_device = next(self.fc1.parameters()).device
x = x.to(next_device)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
# --------- Helper Methods --------------------
def call_method(method, rref, *args, **kwargs):
return method(rref.local_value(), *args, **kwargs)
def remote_method(method, rref, *args, **kwargs):
args = [method, rref] + list(args)
return rpc.rpc_sync(rref.owner(), call_method, args=args, kwargs=kwargs)
# --------- Parameter Server --------------------
class ParameterServer(nn.Module):
def __init__(self, num_gpus=0):
super().__init__()
model = Net(num_gpus=num_gpus)
self.model = model
self.input_device = torch.device(
"cuda:0" if torch.cuda.is_available() and num_gpus > 0 else "cpu")
def forward(self, inp):
inp = inp.to(self.input_device)
out = self.model(inp)
# This output is forwarded over RPC, which as of 1.5.0 only accepts CPU tensors.
# Tensors must be moved in and out of GPU memory due to this.
out = out.to("cpu")
return out
# Use dist autograd to retrieve gradients accumulated for this model.
# Primarily used for verification.
def get_dist_gradients(self, cid):
grads = dist_autograd.get_gradients(cid)
# This output is forwarded over RPC, which as of 1.5.0 only accepts CPU tensors.
# Tensors must be moved in and out of GPU memory due to this.
cpu_grads = {}
for k, v in grads.items():
k_cpu, v_cpu = k.to("cpu"), v.to("cpu")
cpu_grads[k_cpu] = v_cpu
return cpu_grads
# Wrap local parameters in a RRef. Needed for building the
# DistributedOptimizer which optimizes paramters remotely.
def get_param_rrefs(self):
param_rrefs = [rpc.RRef(param) for param in self.model.parameters()]
return param_rrefs
# The global parameter server instance.
param_server = None
# A lock to ensure we only have one parameter server.
global_lock = Lock()
def get_parameter_server(num_gpus=0):
"""
Returns a singleton parameter server to all trainer processes
"""
global param_server
# Ensure that we get only one handle to the ParameterServer.
with global_lock:
if not param_server:
# construct it once
param_server = ParameterServer(num_gpus=num_gpus)
return param_server
def run_parameter_server(rank, world_size):
print("PS master initializing RPC")
rpc.init_rpc(name="parameter_server", rank=rank, world_size=world_size)
print("RPC initialized! Running parameter server...")
rpc.shutdown()
print("RPC shutdown on parameter server.")
# --------- Trainers --------------------
class TrainerNet(nn.Module):
def __init__(self, num_gpus=0):
super().__init__()
self.num_gpus = num_gpus
self.param_server_rref = rpc.remote(
"parameter_server", get_parameter_server, args=(num_gpus,))
def get_global_param_rrefs(self):
remote_params = remote_method(
ParameterServer.get_param_rrefs,
self.param_server_rref)
return remote_params
def forward(self, x):
model_output = remote_method(
ParameterServer.forward, self.param_server_rref, x)
return model_output
def run_training_loop(rank, num_gpus, train_loader, test_loader):
# Runs the typical nueral network forward + backward + optimizer step, but
# in a distributed fashion.
net = TrainerNet(num_gpus=num_gpus)
# Build DistributedOptimizer.
param_rrefs = net.get_global_param_rrefs()
opt = DistributedOptimizer(optim.SGD, param_rrefs, lr=0.03)
for i, (data, target) in enumerate(train_loader):
with dist_autograd.context() as cid:
model_output = net(data)
target = target.to(model_output.device)
loss = F.nll_loss(model_output, target)
if i % 5 == 0:
print(f"Rank {rank} training batch {i} loss {loss.item()}")
dist_autograd.backward(cid, [loss])
# Ensure that dist autograd ran successfully and gradients were
# returned.
assert remote_method(
ParameterServer.get_dist_gradients,
net.param_server_rref,
cid) != {}
opt.step(cid)
print("Training complete!")
print("Getting accuracy....")
get_accuracy(test_loader, net)
def get_accuracy(test_loader, model):
model.eval()
correct_sum = 0
# Use GPU to evaluate if possible
device = torch.device("cuda:0" if model.num_gpus > 0
and torch.cuda.is_available() else "cpu")
with torch.no_grad():
for i, (data, target) in enumerate(test_loader):
out = model(data, -1)
pred = out.argmax(dim=1, keepdim=True)
pred, target = pred.to(device), target.to(device)
correct = pred.eq(target.view_as(pred)).sum().item()
correct_sum += correct
print(f"Accuracy {correct_sum / len(test_loader.dataset)}")
# Main loop for trainers.
def run_worker(rank, world_size, num_gpus, train_loader, test_loader):
print(f"Worker rank {rank} initializing RPC")
options = rpc.ProcessGroupRpcBackendOptions(
num_send_recv_threads=8,
rpc_timeout=0,
init_method="tcp://192.168.60.67:60000"
)
rpc.init_rpc(
backend=BackendType.PROCESS_GROUP,
# backend=rpc.BackendType.TENSORPIPE,
name=f"trainer_{rank}",
rank=rank,
world_size=world_size,
# rpc_backend_options=options,
)
print(f"Worker {rank} done initializing RPC")
run_training_loop(rank, num_gpus, train_loader, test_loader)
rpc.shutdown()
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description="Parameter-Server RPC based training")
parser.add_argument(
"--world_size",
type=int,
default=2,
help="""Total number of participating processes. Should be the sum of
master node and all training nodes.""")
parser.add_argument(
"--rank",
type=int,
default=0,
help="Global rank of this process. Pass in 0 for master.")
parser.add_argument(
"--num_gpus",
type=int,
default=2,
help="""Number of GPUs to use for training, Currently supports between 0
and 2 GPUs. Note that this argument will be passed to the parameter servers.""")
parser.add_argument(
"--master_addr",
type=str,
default="localhost",
help="""Address of master, will default to localhost if not provided.
Master must be able to accept network traffic on the address + port.""")
parser.add_argument(
"--master_port",
type=str,
default="29500",
help="""Port that master is listening on, will default to 29500 if not
provided. Master must be able to accept network traffic on the host and port.""")
args = parser.parse_args()
assert args.rank is not None, "must provide rank argument."
assert args.num_gpus <= 3, f"Only 0-2 GPUs currently supported (got {args.num_gpus})."
os.environ['MASTER_ADDR'] = args.master_addr
os.environ["MASTER_PORT"] = args.master_port
processes = []
world_size = args.world_size
if args.rank == 0:
p = mp.Process(target=run_parameter_server, args=(0, world_size))
p.start()
processes.append(p)
else:
# Get data to train on
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('./data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=32, shuffle=True, )
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(
'./data',
train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=32,
shuffle=True,
)
# start training worker on this node
p = mp.Process(
target=run_worker,
args=(
args.rank,
world_size, args.num_gpus,
train_loader,
test_loader))
p.start()
processes.append(p)
for p in processes:
p.join()
First, I run the above code (tutorial.py) in the remote container to create a server process:
python tutorial.py --rank=0 --num_gpus=2 --master_addr="localhost" --master_port="60000" --world_size=2
Second, I run the same code on my Mac to create the worker process, with some different args:
python tutorial.py --rank=1 --num_gpus=2 --master_addr="192.168.60.67" --master_port="60000" --world_size=2
One error occurs on both nodes.
In the container:
[screenshot: error output in the container]
On my Mac:
[screenshot: error output on the Mac]
I changed the master_addr in the first command line to "127.0.0.1" or "192.168.60.67", but it's the same error.
Also I set the special environment variables, but it does not work:
# in container
export GLOO_SOCKET_IFNAME=eth0,lo
export TP_SOCKET_IFNAME=eth0,lo
# in mac
export GLOO_SOCKET_IFNAME=en0,lo
export TP_SOCKET_IFNAME=en0,lo
Any advice would be appreciated, thanks. |
st175400 | I think the error results from the two nodes being on different networks (one 192.168.1.57 and the other 192.168.60.67). So I used two other machines, with IP addresses 192.168.206.100 (as a worker) and 192.168.206.101 (as a parameter server) respectively. Similarly, I use a docker container on each machine.
This successfully solves the above mismatch error, but a new error occurs on the worker node: "… connection to [172.17.0.10]:5807 is refused". "172.17.0.10" is the IP address of the container and "5807" is a random port. Notice that the master address I set is 192.168.206.101 and the master port 60000, as above.
[screenshot: connection-refused error on the worker node]
In the discussion, @lcw says some "real" connections exist. The random port is created by one of these connections.
[screenshot: quoted explanation from the linked discussion]
As only port 60000 of the container is mapped to the same port of the host (server), a random port (such as 5807) may not be accessible, so the connection to such a random port is refused.
Can I still use a docker container on each machine?
thanks. |
st175401 | A couple of things here. All the error messages so far come from Gloo, whereas I mostly know TensorPipe, hence I’m not 100% sure about what I’m saying.
First, the “address family mismatch” error in my opinion comes from the fact that you specified localhost and 192.168.60.67 as the master address for your two nodes. Even though these two addresses resolve to the same physical machine, they correspond to different interfaces on that machine, hence the mismatch. In particular it’s possible that localhost resolved to the ::1 IPv6 address, and this caused Gloo to detect a mismatch. You should specify 192.168.60.67 as the master address on both nodes.
Second, your goal seems to be to have the server listen on port 60000 only and have all connections go through there. This, AFAIK, is not possible with Gloo or TensorPipe today. You are certainly allowed to specify the master port, and it will be honored, but that is only used for rendezvous, i.e., for processes to “discover” each other. In practice, each process will start listening on a new random arbitrary port (and it will communicate this port to the other processes using that rendezvous). And I believe there is no way to influence how that arbitrary port is selected. Hence you probably will need to map the whole range of ports, or find some other way to put the two machines on the same network, or something of that sort.
Finally, you seemed to be trying to connect a Linux machine to an OSX machine. While this might in principle be possible, and perhaps it might even work, I don’t think we ever explicitly supported this scenario. I wouldn’t be surprised if somewhere in the code we introduced the assumption that all endpoints are running on the same platform and, possibly, that they are running the same exact binary version of PyTorch. |
st175402 | @lcw thanks.
Now I use two machines in the same network with addresses 192.168.206.100(called office0) and 192.168.206.101(called office1) respectively. Each can ping the other successfully.
I set master_addr=192.168.206.100 and master_port=5024 (picked arbitrarily), then launch the master process on office0 and the worker process on office1. The commands are as follows:
# launch on office0(192.168.206.100, as master)
python tutorial.py --rank=0 --num_gpus=0 --master_addr="192.168.206.100" --master_port="5024" --world_size=2
# launch on office1(192.168.206.101, as worker)
python tutorial.py --rank=1 --num_gpus=0 --master_addr="192.168.206.100" --master_port="5024" --world_size=2
I can see that the master process and the worker process are created after the two commands, but both processes appear to be blocked until one times out.
On office0 (192.168.206.100):
[screenshot: master process output, hanging and periodically printing "@"]
On office1 (192.168.206.101):
[screenshot: worker process output, hanging and periodically printing "@"]
Each "@" indicates one or two minutes.
I don’t have a clue. |
st175403 | If you terminate the processes (e.g., Ctrl+C) you should be able to see a backtrace telling you where they are stuck. Is it at the init_rpc function?
If so, @mrshenli do you know if we have a way to get more verbose logging information from the TCPStore to see what’s going on?
There's one thing you could try in the meantime. Even if two machines can ping each other, it doesn't mean that they can connect to all the ports. You could check this by running nc -l 192.168.206.100 5024 on the server and then nc 192.168.206.100 5024 (without -l!) on the client, then typing something (+ a newline) on the client's console and seeing if it appears on the server's console. nc is netcat, which is named differently on some distributions; you should check yours. |
st175404 | yes, they are stuck at the init_rpc function. The function creates a process(found using “top” or “ps -ef” command), but can’t proceed further and can’t print the “Worker {rank} done initializing RPC”.
image720×406 32.1 KB
With Ctrl+C, the backtrace is as follows(both master and worker):
image1352×710 93.9 KB |
st175405 | I try to set “master_add=192.168.206.101” and “master_port=5024”, so the office1(192.168.206.101) will be the master and office0(192.168.206.100) the worker, which is the opposite of what I do above.
No stuck, but an error occurs. It says the master can’t route to a random port on the worker. The worker(office0) seems to have a firewall.
image1376×420 83.8 KB |
st175406 | Please try the netcat experiment I suggested. If that fails it means the issue is not in PyTorch but in your network setup, and you will have to solve it on your side (or with your system administrator if you have one). |
st175407 | office0(client)->office1(server) is successful, but not the other way around.
image1080×162 10.4 KB
image1028×142 10.8 KB
So the office0 may have a firewall. I will ask my administrator.
thanks. |
st175408 | And you could also try that with a port other than 5024, to “simulate” a randomly-chosen port. For example, in the screenshot you posted above, port 34204 was being used. |
st175409 | Consider the following MWE, where I attempt to simply sum random tensors that are generated on different GPUs. If I generate tensors of size e.g. 5,000 it still works, but for size 10,000 it times out.
I’m using CUDA 11.2 w/ 4 RTX A6000. Tried both torch-1.9.1+cu111 and the nightly one compiled directly from the repo. Note that if I use gloo as the backend, then it works.
Is this a bug, or maybe there is something wrong with my environment? Any idea of what could I try? Thanks.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def fn(rank, world_size):
    # Set up distributed job
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '11235'
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Generate random tensor and do all reduce
    x = torch.randn(10_000, device='cuda')
    dist.all_reduce(tensor=x)
    print(x[0])

    dist.destroy_process_group()


if __name__ == '__main__':
    if not torch.cuda.is_available():
        print('No GPUs available, cannot run test')
    else:
        n_gpus = torch.cuda.device_count()
        mp.spawn(
            fn,
            args=(n_gpus,),
            nprocs=n_gpus,
            join=True,
        ) |
st175410 | Maybe you could try export NCCL_DEBUG=INFO to get more information about the error? |
st175411 | There you go, this is the output when I do export NCCL_DEBUG=INFO. Note that all these messages only show up once all_reduce is called (I’ve tried adding a time.sleep just before it).
I can’t see anything wrong in the logs though…
<hostname>:3335158:3335158 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
<hostname>:3335158:3335158 [0] NCCL INFO NET/IB : No device found.
<hostname>:3335158:3335158 [0] NCCL INFO NET/Socket : Using [0]eno1np0:<ip-address><0>
<hostname>:3335158:3335158 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda11.1
<hostname>:3335159:3335159 [1] NCCL INFO Bootstrap : Using [0]eno1np0:<ip-address><0>
<hostname>:3335161:3335161 [3] NCCL INFO Bootstrap : Using [0]eno1np0:<ip-address><0>
<hostname>:3335160:3335160 [2] NCCL INFO Bootstrap : Using [0]eno1np0:<ip-address><0>
<hostname>:3335159:3335159 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
<hostname>:3335161:3335161 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
<hostname>:3335160:3335160 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
<hostname>:3335159:3335159 [1] NCCL INFO NET/IB : No device found.
<hostname>:3335161:3335161 [3] NCCL INFO NET/IB : No device found.
<hostname>:3335160:3335160 [2] NCCL INFO NET/IB : No device found.
<hostname>:3335159:3335159 [1] NCCL INFO NET/Socket : Using [0]eno1np0:<ip-address><0>
<hostname>:3335159:3335159 [1] NCCL INFO Using network Socket
<hostname>:3335161:3335161 [3] NCCL INFO NET/Socket : Using [0]eno1np0:<ip-address><0>
<hostname>:3335161:3335161 [3] NCCL INFO Using network Socket
<hostname>:3335160:3335160 [2] NCCL INFO NET/Socket : Using [0]eno1np0:<ip-address><0>
<hostname>:3335160:3335160 [2] NCCL INFO Using network Socket
<hostname>:3335158:3335216 [0] NCCL INFO Channel 00/04 : 0 1 2 3
<hostname>:3335160:3335219 [2] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
<hostname>:3335158:3335216 [0] NCCL INFO Channel 01/04 : 0 3 2 1
<hostname>:3335161:3335218 [3] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
<hostname>:3335158:3335216 [0] NCCL INFO Channel 02/04 : 0 1 2 3
<hostname>:3335160:3335219 [2] NCCL INFO Trees [0] -1/-1/-1->2->1|1->2->-1/-1/-1 [1] 3/-1/-1->2->1|1->2->3/-1/-1 [2] -1/-1/-1->2->1|1->2->-1/-1/-1 [3] 3/-1/-1->2->1|1->2->3/-1/-1
<hostname>:3335159:3335217 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
<hostname>:3335158:3335216 [0] NCCL INFO Channel 03/04 : 0 3 2 1
<hostname>:3335160:3335219 [2] NCCL INFO Setting affinity for GPU 2 to ffffffff,ffffffff,ffffffff,ffffffff
<hostname>:3335161:3335218 [3] NCCL INFO Trees [0] 1/-1/-1->3->0|0->3->1/-1/-1 [1] 0/-1/-1->3->2|2->3->0/-1/-1 [2] 1/-1/-1->3->0|0->3->1/-1/-1 [3] 0/-1/-1->3->2|2->3->0/-1/-1
<hostname>:3335159:3335217 [1] NCCL INFO Trees [0] 2/-1/-1->1->3|3->1->2/-1/-1 [1] 2/-1/-1->1->-1|-1->1->2/-1/-1 [2] 2/-1/-1->1->3|3->1->2/-1/-1 [3] 2/-1/-1->1->-1|-1->1->2/-1/-1
<hostname>:3335161:3335218 [3] NCCL INFO Setting affinity for GPU 3 to ffffffff,ffffffff,ffffffff,ffffffff
<hostname>:3335159:3335217 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff,ffffffff,ffffffff,ffffffff
<hostname>:3335158:3335216 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
<hostname>:3335158:3335216 [0] NCCL INFO Trees [0] 3/-1/-1->0->-1|-1->0->3/-1/-1 [1] -1/-1/-1->0->3|3->0->-1/-1/-1 [2] 3/-1/-1->0->-1|-1->0->3/-1/-1 [3] -1/-1/-1->0->3|3->0->-1/-1/-1
<hostname>:3335158:3335216 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff,ffffffff,ffffffff,ffffffff
<hostname>:3335160:3335219 [2] NCCL INFO Channel 00 : 2[c1000] -> 3[c2000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO Channel 00 : 1[82000] -> 2[c1000] via P2P/IPC
<hostname>:3335158:3335216 [0] NCCL INFO Channel 00 : 0[81000] -> 1[82000] via P2P/IPC
<hostname>:3335161:3335218 [3] NCCL INFO Channel 00 : 3[c2000] -> 0[81000] via P2P/IPC
<hostname>:3335160:3335219 [2] NCCL INFO Channel 00 : 2[c1000] -> 1[82000] via P2P/IPC
<hostname>:3335158:3335216 [0] NCCL INFO Channel 00 : 0[81000] -> 3[c2000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO Channel 00 : 1[82000] -> 3[c2000] via P2P/IPC
<hostname>:3335160:3335219 [2] NCCL INFO Channel 01 : 2[c1000] -> 1[82000] via P2P/IPC
<hostname>:3335161:3335218 [3] NCCL INFO Channel 00 : 3[c2000] -> 1[82000] via P2P/IPC
<hostname>:3335158:3335216 [0] NCCL INFO Channel 01 : 0[81000] -> 3[c2000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO Channel 01 : 1[82000] -> 0[81000] via P2P/IPC
<hostname>:3335161:3335218 [3] NCCL INFO Channel 01 : 3[c2000] -> 2[c1000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO Channel 01 : 1[82000] -> 2[c1000] via P2P/IPC
<hostname>:3335160:3335219 [2] NCCL INFO Channel 01 : 2[c1000] -> 3[c2000] via P2P/IPC
<hostname>:3335161:3335218 [3] NCCL INFO Channel 01 : 3[c2000] -> 0[81000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO Channel 02 : 1[82000] -> 2[c1000] via P2P/IPC
<hostname>:3335158:3335216 [0] NCCL INFO Channel 02 : 0[81000] -> 1[82000] via P2P/IPC
<hostname>:3335160:3335219 [2] NCCL INFO Channel 02 : 2[c1000] -> 3[c2000] via P2P/IPC
<hostname>:3335161:3335218 [3] NCCL INFO Channel 02 : 3[c2000] -> 0[81000] via P2P/IPC
<hostname>:3335158:3335216 [0] NCCL INFO Channel 02 : 0[81000] -> 3[c2000] via P2P/IPC
<hostname>:3335160:3335219 [2] NCCL INFO Channel 02 : 2[c1000] -> 1[82000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO Channel 02 : 1[82000] -> 3[c2000] via P2P/IPC
<hostname>:3335160:3335219 [2] NCCL INFO Channel 03 : 2[c1000] -> 1[82000] via P2P/IPC
<hostname>:3335161:3335218 [3] NCCL INFO Channel 02 : 3[c2000] -> 1[82000] via P2P/IPC
<hostname>:3335158:3335216 [0] NCCL INFO Channel 03 : 0[81000] -> 3[c2000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO Channel 03 : 1[82000] -> 0[81000] via P2P/IPC
<hostname>:3335161:3335218 [3] NCCL INFO Channel 03 : 3[c2000] -> 2[c1000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO Channel 03 : 1[82000] -> 2[c1000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer
<hostname>:3335160:3335219 [2] NCCL INFO Channel 03 : 2[c1000] -> 3[c2000] via P2P/IPC
<hostname>:3335159:3335217 [1] NCCL INFO comm 0x7f0190002e10 rank 1 nranks 4 cudaDev 1 busId 82000 - Init COMPLETE
<hostname>:3335161:3335218 [3] NCCL INFO Channel 03 : 3[c2000] -> 0[81000] via P2P/IPC
<hostname>:3335158:3335216 [0] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer
<hostname>:3335158:3335216 [0] NCCL INFO comm 0x7f7008002e10 rank 0 nranks 4 cudaDev 0 busId 81000 - Init COMPLETE
<hostname>:3335158:3335158 [0] NCCL INFO Launch mode Parallel
<hostname>:3335160:3335219 [2] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer
<hostname>:3335160:3335219 [2] NCCL INFO comm 0x7f4170002e10 rank 2 nranks 4 cudaDev 2 busId c1000 - Init COMPLETE
<hostname>:3335161:3335218 [3] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer
<hostname>:3335161:3335218 [3] NCCL INFO comm 0x7f06cc002e10 rank 3 nranks 4 cudaDev 3 busId c2000 - Init COMPLETE |
st175412 | Hi all, currently I'm trying to implement grouped convolution by dividing one convolution into 4 sub-parts and sending them to 4 different GPUs to reduce the inference time of the model. I use multiple threads, expecting them to run concurrently. However, when I check the GPU activity with the nvidia-smi command, the data still transfers from GPU 0 to GPUs 1, 2, 3 and the work runs sequentially, not in parallel. Can you help me to correct this? Thank you
Here is my code for group convolution:
class Residual(nn.Module):
    def __init__(self, in_channels, out_channels, dev0, dev1, dev2, dev3, down_sample = False, decouple = False):
        super(Residual, self).__init__()
        self.dev0 = dev0
        self.dev1 = dev1
        self.dev2 = dev2
        self.dev3 = dev3
        self.down_sample = down_sample
        self.decouple = decouple
        # Try testing with hardcode for threading case (test for conv(258, 512))
        self.y0 = torch.zeros((100, 128, 7, 7), device = self.dev0)
        self.y1 = torch.zeros((100, 128, 7, 7), device = self.dev1)
        self.y2 = torch.zeros((100, 128, 7, 7), device = self.dev2)
        self.y3 = torch.zeros((100, 128, 7, 7), device = self.dev3)
        # End hardcode (testing purpose)
        if (in_channels == out_channels):
            if (self.decouple):  # Check Grouped Convolution or NOT
                self.conv1a = nn.Conv2d(int(in_channels/4), int(out_channels/4), kernel_size = 3, stride =1, padding=1, bias = False).to(self.dev0)
                self.conv1b = nn.Conv2d(int(in_channels/4), int(out_channels/4), kernel_size = 3, stride =1, padding=1, bias = False).to(self.dev1)
                self.conv1c = nn.Conv2d(int(in_channels/4), int(out_channels/4), kernel_size = 3, stride =1, padding=1, bias = False).to(self.dev2)
                self.conv1d = nn.Conv2d(int(in_channels/4), int(out_channels/4), kernel_size = 3, stride =1, padding=1, bias = False).to(self.dev3)
            else:
                self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size = 3, stride =1, padding = 1, bias = False).to(self.dev0)
        else:
            if (self.decouple):
                self.conv1a = nn.Conv2d(int(in_channels/4), int(out_channels/4), kernel_size = 3, stride =2, padding=1, bias = False).to(self.dev0)
                self.conv1b = nn.Conv2d(int(in_channels/4), int(out_channels/4), kernel_size = 3, stride =2, padding=1, bias = False).to(self.dev1)
                self.conv1c = nn.Conv2d(int(in_channels/4), int(out_channels/4), kernel_size = 3, stride =2, padding=1, bias = False).to(self.dev2)
                self.conv1d = nn.Conv2d(int(in_channels/4), int(out_channels/4), kernel_size = 3, stride =2, padding=1, bias = False).to(self.dev3)
            else:
                self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size = 3, stride =2, padding = 1, bias = False).to(self.dev0)
        ...

    def Group_Conv(self, device, in_tensor, out_tensor):
        out_tensor = nn.Conv2d(in_channel, out_channel, kernel_size = 3, stride =2, padding =1, bias = False).to(device)

    def forward(self, x):
        if (self.decouple):
            a = torch.chunk(x, 4, dim = 1)  # Devide feature maps into 4 sub-part following the channel
            # GConv() function for 4 devices concurrently
            Thread(target = self.Group_Conv(self.dev0, a[0], self.y0)).start()
            Thread(target = self.Group_Conv(self.dev1, a[1], self.y1)).start()
            Thread(target = self.Group_Conv(self.dev2, a[2], self.y2)).start()
            Thread(target = self.Group_Conv(self.dev3, a[3], self.y3)).start()
            out = torch.cat([self.y0, self.y1.to(self.dev0), self.y2.to(self.dev0), self.y3.to(self.dev0)], dim = 1)
        else:
            out = self.conv1(x)
        ....
st175413 | Here is a problem about how to implement model parallelism in PyTorch.
I have read the tutorial, which uses very simple instructions; here is the code:
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.model_a = Model_A().to('cuda:0')
        self.model_b = Model_B().to('cuda:1')

    def forward(self, x):
        x = self.model_a(x.to('cuda:0'))
        x = self.model_b(x.to('cuda:1'))
        return x
However, I do not have many GPUs. I want to use torch.distributed.send and torch.distributed.recv in forward and backward to build model parallelism. But how can I do it? Is there any tutorial?
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, padding = 1)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(32, 64, 3, padding = 1)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.conv3 = nn.Conv2d(64, 128, 3, padding = 1)
        self.pool3 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(128*3*3, 625)
        self.fc2 = nn.Linear(625, 10)

    def forward(self, x, rank):
        if rank == 0:
            x = self.pool1(F.relu(self.conv1(x)))
            x = self.pool2(F.relu(self.conv2(x)))
            x = self.pool3(F.relu(self.conv3(x)))
            dist.send(x, 1)
        elif rank == 1:
            dist.recv(x, 0)
            x = x.view(-1, 128*3*3)
            x = F.relu(self.fc1(x))
            x = self.fc2(x)
        return x
I can use rank together with send and recv in forward. But how can I change backward? |
st175414 | I’m wanting to use pytorch for it’s tensor math and not necessarily for training a ML model. I have a machine that has 4 GPUs on them. The overall goal is to calculate the cosine similarity between pairs of embeddings utilizing all 4 GPU. Here is a walkthrough of what i have thus far:
Read the data in as a pandas dataframe
import pandas as pd
tmp_df = pd.read_pickle('./embed_pairs_df.pkl')
There are 2 columns to tmp_df: embeds_a, and embeds_b. Each column contains 1 million rows making tmp_df.shape = (100000,2). Each element in a row is a pytorch tensor that has shape (300,1). I’ve already worked out the combinations of pairs and stored them appropriately in tmp_df. As such, tmp_df['embeds_a'][i] and tmp_df['embeds_b'][i] would constitute the ith pair of embeddings that would need to be run through a cosine similarity function.
My question is, given the format of the data and the 4 GPUs available, what is the best way to distribute the cosine similarity calculation across all 4 GPUs in parallel? |
st175415 | This sounds like a data parallel problem, although the nested structure seems like it might be an issue if you need to “unpack” PyTorch tensors from Python data structures.
For this case, you might want to consider preprocessing the data into a higher dimensional tensor, e.g., 100000,2,300.
From here, it becomes a straightforward data-parallel problem that shouldn't take long on 4 GPUs (or likely even a single GPU): split the data into equal-sized parts, move the different parts to different GPUs, and aggregate the results.
At this point I’d be concerned that all of the preprocessing time (e.g., the steps to generate tmp_df and pack it into a large tensor) would be significantly greater than the calculation time for cosine similarity. |
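A hedged sketch of that suggestion, assuming the pairs have already been packed into a single float tensor of shape (N, 2, 300); the chunking scheme and device loop are illustrative:
import torch
import torch.nn.functional as F


def cosine_sims_multi_gpu(pairs: torch.Tensor) -> torch.Tensor:
    """pairs: tensor of shape (N, 2, 300); returns (N,) cosine similarities."""
    n_gpus = torch.cuda.device_count()
    chunks = torch.chunk(pairs, n_gpus, dim=0)        # one roughly equal slice per GPU
    results = []
    for i, chunk in enumerate(chunks):
        chunk = chunk.to(f"cuda:{i}", non_blocking=True)
        sims = F.cosine_similarity(chunk[:, 0, :], chunk[:, 1, :], dim=1)
        results.append(sims.cpu())
    return torch.cat(results)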
st175416 | You could write a custom network (subclass of torch.nn.Module) whose forward() function computes the cosine similarity, and then use DataParallel or DistributedDataParallel to run your data through this network.
Note: This is just a hunch, I haven’t done something similar (well, except all the training I do is in fact quite similar …) so there may be pitfalls which I don’t see. |
st175417 | @gphilip Thanks for your response. I’ve been going off the example detailed here, but I have a few questions about testing this out. Here is what I have so far:
from torch import nn
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


class CosSimNetwork(nn.Module):
    def __init__(self):
        super(CosSimNetwork, self).__init__()

    def forward(self, embeds):
        # Here do the cosine similarity calculation
        cos_sims = 12
        return cos_sims


def calc_cos_sims(rank, world_size):
    dist.init_process_group('gloo', rank=rank, world_size=world_size)
    model = CosSimNetwork()
    ddp_model = DDP(model, device_ids=[rank])
    cos_sims = ddp_model(DATA.to(rank))


def main():
    world_size = 4  # since I have 4 GPUs on a single machine
    mp.spawn(calc_cos_sims,
             args=(world_size,),
             nprocs=world_size,
             join=True)


if __name__ == 'main':
    main()
If I understand correctly, because I have 4 GPUs world_size = 4. However, I’m not entirely sure what rank should be given that I have 4 GPUs. Also, I have my data stored in a pickled pandas dataframe with two columns being the embeddings that would be used in the cosine similarity calculation. It looks something like cos_sim(embed_a, embed_b). I’m not entirely sure how to read in the data properly for DistributedDataParallel and the forward method of CosSimNetwork. Any advice is much appreciated. |
st175418 | aclifton314:
If I understand correctly, because I have 4 GPUs world_size = 4. However, I’m not entirely sure what rank should be given that I have 4 GPUs. Also, I have my data stored in a pickled pandas dataframe with two columns being the embeddings that would be used in the cosine similarity calculation. It looks something like cos_sim(embed_a, embed_b). I’m not entirely sure how to read in the data properly for DistributedDataParallel and the forward method of CosSimNetwork. Any advice is much appreciated.
For DistributedDataParallel, you need to run one process per GPU and use ranks 0 - 3 for GPUs 0 - 3. I think your example code above should already do that automatically since mp.spawn would pass in the appropriate rank. For reading the data, each process can read a chunk of the data and process it. For example, maybe create 4 chunks of the ./embed_pairs_df.pkl file or read the same file on all processes and each process only processes a part of the input. For example process 0 computes rows 0, 4, 8 and so on etc. |
st175419 | I have not used DistributedDataParallel yet, so I am not sure about the right way to use it.
I would suggest that you first write code that works on a single GPU, just to ensure that it works properly without the parallelization. You could then try using DataParallel (which involves adding just one extra line of code) to see if that speeds it up sufficiently for your needs. Maybe it will, and then you don’t have to worry about the more involved DistributedDataParallel. |
st175420 | @pritamdamania87 thank you very much for your reply. I believe I have implemented your suggestion, but if you don’t mind checking it for me to make sure it makes sense:
from torch import nn
import torch.multiprocessing as mp
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import pandas as pd


class CosSimNetwork(nn.Module):
    def __init__(self):
        super(CosSimNetwork, self).__init__()

    def forward(self, embeds):
        cos_sims = 12
        return cos_sims


def determine_subranges(fullrange: tuple, num_subranges: int):
    subranges = []
    inc = fullrange[1] // num_subranges
    for i in range(fullrange[0], fullrange[1], inc):
        subranges.append((i, min(i + inc, fullrange[1])))
    return subranges


def calc_cos_sims(rank, world_size):
    dist.init_process_group('gloo', rank=rank, world_size=world_size)
    model = CosSimNetwork()
    ddp_model = DDP(model, device_ids=[rank])
    tmp_df = pd.read_pickle('./embed_pairs_df_million.pkl')
    sub_ranges = determine_subranges((0, tmp_df.shape[0]), world_size)
    sub_range_tuple = sub_ranges[rank]
    data = tmp_df.iloc[sub_range_tuple[0]:sub_range_tuple[1]]
    cos_sims = ddp_model(data.to(rank))


def main():
    world_size = 4  # since I have 4 GPUs on a single machine
    mp.spawn(calc_cos_sims,
             args=(world_size,),
             nprocs=world_size,
             join=True)


if __name__ == 'main':
    main()
If I understand this correctly, mp.spawn() will create 4 different processes (based off of world_size) that will run calc_cos_sims(). The input data is sliced into world_size chunks and the rank determines which chunk gets sent to which GPU. Once all the processes are finished, mp.spawn() will aggregate the results. Is my understanding correct?
A few other questions come to mind:
Is the entire dataset read in 4 times with the tmp_df = pd.read_pickle('./embed_pairs_df_million.pkl') line? If so, is there a more efficient way to read it in for this setup?
The forward() method returns the cosine similarity (or it will once I write it) between two embeddings. If calc_cos_sims() is copied to each process, would I need to replace the mp.spawn() line with all_cos_sims = mp.spawn() in order to store the results from all the GPUs?
Thanks in advance for your help! |
st175421 | Thanks for providing a code example. IIUC, you want to just compute a particular math function in parallel and not really train a model in data parallel fashion where you aggregate gradients across ranks in the backward pass? If so, you don’t really need to use DistributedDataParallel.
aclifton314:
Once all the processes are finished, mp.spawn() will aggregate the results. Is my understanding correct?
No, each process would have its results local to that process. You can collect all the results in one process using a collective operation like all_gather. See: Distributed communication package - torch.distributed — PyTorch 1.9.0 documentation 1
Is the entire dataset read in 4 times with the tmp_df = pd.read_pickle('./embed_pairs_df_million.pkl') line? If so, is there a more efficient way to read it in for this setup?
Yes, it is read 4 times. I'm not very familiar with pandas, but if you can pre-split the data beforehand and read only what is required on each process, then you would avoid reading the data 4 times.
The forward() method returns the cosine similarity (or it will once I write it) between two embeddings. If calc_cos_sims() is copied to each process, would I need to replace the mp.spawn() line with all_cos_sims = mp.spawn() in order to store the results from all the GPUs?
As mentioned above, mp.spawn will not aggregate results for you. You will have to use something like all_gather to do that. |
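For example, a hedged sketch of such an aggregation at the end of calc_cos_sims; it assumes equal-sized chunks per rank and moves the result to CPU, which is the safe choice for gloo collectives:
local = cosine_tensor.cpu()                        # per-rank result from F.cosine_similarity
gathered = [torch.zeros_like(local) for _ in range(world_size)]
dist.all_gather(gathered, local)                   # every rank receives every rank's chunk
all_cos_sims = torch.cat(gathered)                 # full set of similarities on each rank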
st175422 | @pritamdamania87 thank you for your explanations. They are really helping out!
Thanks for providing a code example. IIUC, you want to just compute a particular math function in parallel and not really train a model in data parallel fashion where you aggregate gradients across ranks in the backward pass? If so, you don’t really need to use DistributedDataParallel.
Yes this is right. I’m wanting to do a calculation in parallel and collect the results. Is there a better way to do this in pytorch? |
st175423 | Any further thoughts on doing generic calculations in PyTorch using multiple GPUs on a single machine? I'm not trying to train a model or anything, just utilize PyTorch to make these calculations across the multiple GPUs available to me. |
st175424 | @aclifton314 You can perform generic calculations in PyTorch using multiple GPUs similar to the code example you provided. Basically, spawn multiple processes where each process drives a single GPU and have each GPU do part of the computation. Then you can use the PyTorch collective APIs to perform any aggregations across GPUs that you need. |
st175425 | @pritamdamania87 Here is what I now have:
import torch
import torch.multiprocessing as mp
import torch.distributed as dist
import torch.nn.functional as F
import pandas as pd


def calc_cos_sims(rank, world_size):
    dist.init_process_group('gloo', rank=rank, world_size=world_size)
    cuda_device = torch.device('cuda:' + str(rank))
    data_path = './embed_pairs_df_million_part_' + str(rank) + '.pkl'
    tmp_df = pd.read_pickle(data_path)
    embeds_a_list = [embed_a for embed_a in tmp_df['embeds_a']]
    embeds_b_list = [embed_b for embed_b in tmp_df['embeds_b']]
    embeds_a_tensor = torch.tensor(embeds_a_list, device=cuda_device)
    embeds_b_tensor = torch.tensor(embeds_b_list, device=cuda_device)
    cosine_tensor = F.cosine_similarity(embeds_a_tensor, embeds_b_tensor)


def main():
    world_size = 4  # since I have 4 GPUs on a single machine
    mp.spawn(calc_cos_sims,
             args=(world_size,),
             nprocs=world_size,
             join=True)


if __name__ == 'main':
    main()
This has changed to assume that the data has been split up into 4 different parts (1 part for each GPU) by data_path = './embed_pairs_df_million_part_' + str(rank) + '.pkl'. Since each entry in a column is a numpy array, I went ahead and converted everything to pytorch tensors. cosine_tensor is the cosine similarity between each element of the data split. I read the link you posted about aggregating, but I’m not entirely sure how to implement it. How would that be done in this case?
Also, have I called mp.spawn correctly? I have args=(world_size,) but calc_cos_sims(rank, world_size) has the rank first followed by the world size and those go into dist.init_process_group('gloo', rank=rank, world_size=world_size). |
st175426 | @pritamdamania87
One additional question. I read through the collective APIs you linked to, but I don’t quite understand them and can’t figure out which one would be applicable in this case. |
st175427 | Sorry about the delay here.
aclifton314:
Also, have I called mp.spawn correctly? I have args=(world_size,) but calc_cos_sims(rank, world_size) has the rank first followed by the world size and those go into dist.init_process_group('gloo', rank=rank, world_size=world_size).
Yes this is correct.
I read the link you posted about aggregating, but I’m not entirely sure how to implement it. How would that be done in this case?
It really depends on the type of aggregation you want to do across cosine_tensor. You basically have a total of 4 cosine_tensor (one on each rank) and you want to aggregate them. For example if you want to sum them all up and get the total sum on each rank you can do this:
dist.all_reduce(cosine_tensor)
# Now cosine_tensor would be overwritten to contain the sum of all the 4 `cosine_tensor`.
This tutorial will probably give a much better overview of how to use all of these collective APIs: Writing Distributed Applications with PyTorch — PyTorch Tutorials 1.9.1+cu102 documentation 3 |
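If the goal is to collect every rank’s cosine_tensor rather than sum them, all_gather is the closer fit. A minimal sketch, assuming every rank’s tensor has the same length (the tensors are moved to CPU first because gloo only supports a subset of collectives on CUDA tensors):
cpu_cos = cosine_tensor.cpu()
gathered = [torch.zeros_like(cpu_cos) for _ in range(world_size)]
dist.all_gather(gathered, cpu_cos)   # every rank ends up with all the chunks
all_cos_sims = torch.cat(gathered)   # full result, available on every rank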
st175428 | @pritamdamania87 Thank you for your reply, it was very helpful. Here is the code I now have:
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn.functional as F
import pandas as pd
def calc_cos_sims(rank, world_size):
group = dist.new_group([0, 2])
cuda_device = torch.device('cuda:'+str(rank))
data_path = './embed_pairs_df_8000_part' + str(rank) + '.pkl'
tmp_df = pd.read_pickle(data_path)
embeds_a_list = [embed_a for embed_a in tmp_df['embeds_a']]
embeds_b_list = [embed_b for embed_b in tmp_df['embeds_b']]
embeds_a_tensor = torch.tensor(embeds_a_list, device=cuda_device)
embeds_b_tensor = torch.tensor(embeds_b_list, device=cuda_device)
cosine_tensor = F.cosine_similarity(embeds_a_tensor, embeds_b_tensor)
cosine_tensors_concat = dist.gather(cosine_tensor, group=group)
def init_process(rank, size, fn, backend='gloo'):
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
def main():
world_size = 2
processes = []
mp.set_start_method("spawn")
for rank in range(world_size):
p = mp.Process(target=init_process, args=(rank, world_size, calc_cos_sims))
p.start()
processes.append(p)
for p in processes:
p.join()
if __name__ == '__main__':
main()
print('DONE!')
However, I am getting the following error:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
Traceback (most recent call last):
File "/home/aclifton/jca-ai/classification_embeds/pytorch_example_multi_proc.py", line 46, in init_process
fn(rank, size)
File "/home/aclifton/jca-ai/classification_embeds/pytorch_example_multi_proc.py", line 14, in calc_cos_sims
group = dist.new_group([0, 2])
File "/home/aclifton/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2682, in new_group
raise RuntimeError("The new group's rank should be within the "
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
RuntimeError: The new group's rank should be within the the world_size set by init_process_group
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/aclifton/jca-ai/classification_embeds/pytorch_example_multi_proc.py", line 46, in init_process
fn(rank, size)
File "/home/aclifton/jca-ai/classification_embeds/pytorch_example_multi_proc.py", line 14, in calc_cos_sims
group = dist.new_group([0, 2])
File "/home/aclifton/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2682, in new_group
raise RuntimeError("The new group's rank should be within the "
RuntimeError: The new group's rank should be within the the world_size set by init_process_group
Also, how would I go about assigning the gathered tensor to a variable for further processing inside the main() function? |
st175429 | What is the reason you are using new_group? new_group should be used only if you want to use a subset of the processes. Otherwise you can just do dist.gather(cosine_tensor) with the default group, which already spans the whole world_size.
The problem in your code is that world_size is 2 and you specify [0, 2] to new_group. new_group should contain a list of ranks between [0, world_size - 1]. So [0, 1], [1] or [0] are valid arguments. However, if you want to just use [0, 1] that is actually the entire world_size and you don’t really need new_group API for that and can just use the default group. |
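A sketch of the corrected collection step on the default group, assuming each rank’s cosine_tensor has the same length (moved to CPU because gloo’s gather expects CPU tensors):
cpu_cos = cosine_tensor.cpu()
if rank == 0:
    gather_list = [torch.zeros_like(cpu_cos) for _ in range(world_size)]
else:
    gather_list = None   # only the destination rank supplies a list
dist.gather(cpu_cos, gather_list=gather_list, dst=0)   # default group = all ranks
if rank == 0:
    all_cos_sims = torch.cat(gather_list)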
st175430 | aclifton314:
Also, how would I go about assigning the gathered tensor to a variable for further processing inside the main() function?
You can create a pipe in the main process and pass it to the child processes. Then the child processes can write the cosine_tensor to the pipe which the main process can read and process. You can read these docs to better understand how to use a pipe for communication: multiprocessing — Process-based parallelism — Python 3.10.0 documentation 1 |
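A rough sketch of that idea adapted to the snippet above; the extra conn argument threaded through init_process and calc_cos_sims is an assumption, not part of the original code:
# at the end of calc_cos_sims(rank, world_size, conn), after computing cosine_tensor:
#     conn.send(cosine_tensor.cpu())   # hand the per-rank result back to the parent

def init_process(rank, size, fn, conn, backend='gloo'):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size, conn)

def main():
    world_size = 2
    mp.set_start_method("spawn")
    processes, parent_conns = [], []
    for rank in range(world_size):
        parent_conn, child_conn = mp.Pipe()
        p = mp.Process(target=init_process,
                       args=(rank, world_size, calc_cos_sims, child_conn))
        p.start()
        processes.append(p)
        parent_conns.append(parent_conn)
    results = [conn.recv() for conn in parent_conns]   # one cosine tensor per rank
    for p in processes:
        p.join()
    all_cos_sims = torch.cat(results)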
st175431 | For Gloo in Pytorch distributed, as shown in this document Distributed communication package - torch.distributed — PyTorch 1.9.1 documentation 2, will the following code get performance benefits of using CUDA-aware MPI? (e.g., GPU-to-GPU transferring via PCIe while bypassing CPU)
group = dist.new_group([0, 1], backend="gloo")
dist.all_reduce(gpu_tensor_a, op=dist.ReduceOp.SUM, group=group) |
st175432 | fedml@ip-172-31-46-221:/home/ec2-user/FedML/fedml_core/distributed/test/test_rpc$ sh run_rpc.sh TRPC 0
rank - 0 - 2021-10-13,03:21:54.277 main.py[line:86] INFO Namespace(backend='TRPC', enable_cuda_rpc=False, gpu_mapping_file='gpu_mapping.yaml', gpu_mapping_key='mapping_default', grpc_ipconfig_path='grpc_ipconfig.csv', rank=0, trpc_master_config_path='trp
c_master_config.csv')
rank - 0 - 2021-10-13,03:21:54.277 trpc_comm_manager.py[line:38] INFO using TRPC backend
Worker rank 0 initializing RPC
Creating the object
rank - 0 - 2021-10-13,03:21:54.277 trpc_comm_manager.py[line:58] INFO /home/ec2-user/FedML/fedml_core/distributed/test/test_rpc
rank - 0 - 2021-10-13,03:21:54.277 trpc_comm_manager.py[line:76] INFO str_init_method = tcp://172.31.46.221:9999
terminate called after throwing an instance of 'c10::Error'
what(): device_index >= 0 && device_index < num_gpusINTERNAL ASSERT FAILED at "../c10/cuda/CUDAStream.cpp":254, please report a bug to PyTorch.
Exception raised from check_gpu at ../c10/cuda/CUDAStream.cpp:254 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fdbf4447a22 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x5f (0x7fdbf44444af in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #2: c10::cuda::getStreamFromPool(bool, signed char) + 0x177 (0x7fdbf469e187 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x8d05 (0x7fdbf46a0d05 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0xe40cc4 (0x7fdc4b3b4cc4 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0xe40e48 (0x7fdc4b3b4e48 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0xe93170 (0x7fdc4b407170 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0xe9334f (0x7fdc4b40734f in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #8: tensorpipe::PipeImpl::callReadDescriptorCallback(tensorpipe::OpsStateMachine<tensorpipe::PipeImpl, tensorpipe::ReadOperation>::Iter) + 0x209 (0x7fdc4b409ac9 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0xea2608 (0x7fdc4b416608 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #10: tensorpipe::PipeImpl::advanceReadOperation(tensorpipe::OpsStateMachine<tensorpipe::PipeImpl, tensorpipe::ReadOperation>::Iter, tensorpipe::ReadOperation::State) + 0xf3 (0x7fdc4b400d33 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorc
h_python.so)
frame #11: <unknown function> + 0xea69c2 (0x7fdc4b41a9c2 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0xe9857d (0x7fdc4b40c57d in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #13: tensorpipe::ContextImpl::deferToLoop(std::function<void ()>) + 0x154 (0x7fdc4b3e43a4 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #14: <unknown function> + 0xe8ef31 (0x7fdc4b402f31 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #15: <unknown function> + 0xf10c95 (0x7fdc4b484c95 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #16: <unknown function> + 0xf11dc3 (0x7fdc4b485dc3 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #17: tensorpipe::transport::uv::ConnectionImpl::readCallbackFromLoop(long, uv_buf_t const*) + 0x420 (0x7fdc4b502680 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #18: <unknown function> + 0xf929cf (0x7fdc4b5069cf in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #19: <unknown function> + 0x108482f (0x7fdc4b5f882f in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #20: <unknown function> + 0x1084e6c (0x7fdc4b5f8e6c in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #21: uv__io_poll + 0x356 (0x7fdc4b5fd646 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #22: uv_run + 0x107 (0x7fdc4b5f2f27 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #23: tensorpipe::transport::uv::Loop::eventLoop() + 0x1d (0x7fdc4b50bf5d in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #24: <unknown function> + 0xf8074c (0x7fdc4b4f474c in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #25: std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tensorpipe::EventLoopDeferredExecutor::*)(std::string), tensorpipe::EventLoopDeferredExecutor*, std::string> > >::_M_run() + 0x41 (0x7fdc4b4f3ff1 in /usr/local/lib/python3.6/dist-
packages/torch/lib/libtorch_python.so)
frame #26: <unknown function> + 0xbd6df (0x7fdc4e7b96df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #27: <unknown function> + 0x76db (0x7fdc630026db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #28: clone + 0x3f (0x7fdc6333ba3f in /lib/x86_64-linux-gnu/libc.so.6)
[ip-172-31-46-221:01840] *** Process received signal ***
[ip-172-31-46-221:01840] Signal: Aborted (6)
[ip-172-31-46-221:01840] Signal code: (-6)
[ip-172-31-46-221:01840] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x3efd0)[0x7fdc63258fd0]
[ip-172-31-46-221:01840] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7)[0x7fdc63258f47]
[ip-172-31-46-221:01840] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x141)[0x7fdc6325a8b1]
[ip-172-31-46-221:01840] [ 3] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8c957)[0x7fdc4e788957]
[ip-172-31-46-221:01840] [ 4] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x92ae6)[0x7fdc4e78eae6]
[ip-172-31-46-221:01840] [ 5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x92b21)[0x7fdc4e78eb21]
[ip-172-31-46-221:01840] [ 6] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x92d54)[0x7fdc4e78ed54]
[ip-172-31-46-221:01840] [ 7] /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so(_ZN3c106detail14torchCheckFailEPKcS2_jS2_+0x8a)[0x7fdbf44444da]
[ip-172-31-46-221:01840] [ 8] /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so(_ZN3c104cuda17getStreamFromPoolEba+0x177)[0x7fdbf469e187]
[ip-172-31-46-221:01840] [ 9] /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so(+0x8d05)[0x7fdbf46a0d05]
[ip-172-31-46-221:01840] [10] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xe40cc4)[0x7fdc4b3b4cc4]
[ip-172-31-46-221:01840] [11] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xe40e48)[0x7fdc4b3b4e48]
[ip-172-31-46-221:01840] [12] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xe93170)[0x7fdc4b407170]
[ip-172-31-46-221:01840] [13] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xe9334f)[0x7fdc4b40734f]
[ip-172-31-46-221:01840] [14] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(_ZN10tensorpipe8PipeImpl26callReadDescriptorCallbackENS_15OpsStateMachineIS0_NS_13ReadOperationEE4IterE+0x209)[0x7fdc4b409ac9]
[ip-172-31-46-221:01840] [15] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xea2608)[0x7fdc4b416608]
[ip-172-31-46-221:01840] [16] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(_ZN10tensorpipe8PipeImpl20advanceReadOperationENS_15OpsStateMachineIS0_NS_13ReadOperationEE4IterENS2_5StateE+0xf3)[0x7fdc4b400d33]
[ip-172-31-46-221:01840] [17] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xea69c2)[0x7fdc4b41a9c2]
[ip-172-31-46-221:01840] [18] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xe9857d)[0x7fdc4b40c57d]
[ip-172-31-46-221:01840] [19] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(_ZN10tensorpipe11ContextImpl11deferToLoopESt8functionIFvvEE+0x154)[0x7fdc4b3e43a4]
[ip-172-31-46-221:01840] [20] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xe8ef31)[0x7fdc4b402f31]
[ip-172-31-46-221:01840] [21] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xf10c95)[0x7fdc4b484c95]
[ip-172-31-46-221:01840] [22] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xf11dc3)[0x7fdc4b485dc3]
[ip-172-31-46-221:01840] [23] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(_ZN10tensorpipe9transport2uv14ConnectionImpl20readCallbackFromLoopElPK8uv_buf_t+0x420)[0x7fdc4b502680]
[ip-172-31-46-221:01840] [24] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0xf929cf)[0x7fdc4b5069cf]
[ip-172-31-46-221:01840] [25] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0x108482f)[0x7fdc4b5f882f]
[ip-172-31-46-221:01840] [26] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(+0x1084e6c)[0x7fdc4b5f8e6c]
[ip-172-31-46-221:01840] [27] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(uv__io_poll+0x356)[0x7fdc4b5fd646]
[ip-172-31-46-221:01840] [28] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(uv_run+0x107)[0x7fdc4b5f2f27]
[ip-172-31-46-221:01840] [29] /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so(_ZN10tensorpipe9transport2uv4Loop9eventLoopEv+0x1d)[0x7fdc4b50bf5d]
[ip-172-31-46-221:01840] *** End of error message ***
Aborted (core dumped) |
st175433 | Looks like it’s trying to get a stream on an invalid CUDA device? Can you share the code hitting this error? And what’s the HW setup? E.g., how many GPUs on both sides and what type of GPUs.
cc @lcw have you seen similar errors before? |
st175434 | This looks a lot like the memory corruption issue I spent an entire week chasing and fixing. The fix is in https://github.com/pytorch/pytorch/pull/60470 2, hence it’s only available in PyTorch 1.10 for now, sorry.
If my memory serves me right, this problem occurs when an object contained in an RRef is mutated in-place, in particular when one of its tensors is removed. (Like, imagine an RRef holding a dict of tensors, and one of those items being popped from the dict).
My fix ensures that in those cases we at least don’t crash, however it’s not a “full” fix for these kind of scenarios, because the desired behavior for mutating RRefs is, in my view, poorly specified. In general, I’d strongly recommend to use RRefs as immutable. If you need to modify an RRef you should always be able to extract its value, modify it as you want, and then re-wrap the new version of that value in a new RRef (and stop using the old one). This would be fully safe. |
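A rough sketch of that pattern, assuming old_rref already holds e.g. a dict of tensors and 'stale_key' is just a placeholder name:
from torch.distributed import rpc

value = old_rref.to_here()        # copy the current value out of the RRef
value.pop('stale_key', None)      # mutate the local copy, never the RRef itself
new_rref = rpc.RRef(value)        # wrap the new version in a fresh (local) RRef
# ...hand new_rref to whoever needs it and stop using old_rref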
st175435 | I am trying to use pytorch to perform simple calculations across multiple GPUs. I do not want to train a machine learning model. I’ve posted this in the distributed forum here, but I haven’t gotten a response back about a particular question. Here is the code I have thus far:
import torch
import torch.multiprocessing as mp
import torch.distributed as dist
import torch.nn.functional as F
import pandas as pd
def calc_cos_sims(rank, world_size):
dist.init_process_group('gloo', rank=rank, world_size=world_size)
cuda_device = torch.device('cuda:'+str(rank))
data_path = './embed_pairs_df_million_part_' + str(rank) + '.pkl'
tmp_df = pd.read_pickle(data_path)
embeds_a_list = [embed_a for embed_a in tmp_df['embeds_a']]
embeds_b_list = [embed_b for embed_b in tmp_df['embeds_b']]
embeds_a_tensor = torch.tensor(embeds_a_list, device=cuda_device)
embeds_b_tensor = torch.tensor(embeds_b_list, device=cuda_device)
cosine_tensor = F.cosine_similarity(embeds_a_tensor, embeds_b_tensor)
def main():
world_size = 4 #since I have 4 GPUs on a single machine
mp.spawn(calc_cos_sims,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__ == '__main__':
main()
Basically, the code calculates the cosine similarity between two different embeddings. I have 4 GPUs available to me and I have split my data into 4 slices, one to run on each GPU.
It was recommended to use the pytorch collective api to aggregate the results. I read through it, but I’m not entirely sure how to implement it. How would that be done in this case or is there a better way to do all of this? I’d like to be able to save off the aggregated results to a file or have available for use at a further point in my program.
I welcome any feedback about potential improvements. Thank you in advance! |
st175436 | Hi,
I would like to train a model on imagenet, but with the default ImageFolder, it is taking too long to train.
To speed up the training process, I want to cache the dataset in RAM. I’ve seen that one way to cache the dataset is to create a large numpy tensor or a list of tensors such as what is done in small datasets (Cifar, Mnist) by torchvision.datasets module.
However, I was wondering how to do that in multiprocessing with distributed data-parallel, because as I understand it each process will create its own dataset object and I risk duplicating the RAM usage for each process started. Is there a way to cache the list of tensors only once and then share it with each process? |
st175437 | Solved by ptrblck in post #4
Yes, I think you are right regarding the shuffling in DistributedSampler.
If you want to share arrays with a different shape you might want to check the shared_dict implementation and see if this would be a valid approach. |
st175438 | Assuming you are using a DistributedSampler in your DistributedDataParallel use case you could try to cache the sbsets only in each process.
However, note that ImageNet is a large dataset so you would need to use a lot of host memory to store it. I’m not familiar with your approach, but in case you want to create caches and directly reuse them, note that data augmentation would most likely be disabled (unless you want to transform the samples again), which might hurt your training performance. |
st175439 | Thank you for your reply. I am indeed using these tools, however, caching the subsets in each process won’t work for the next epochs I think because the distributed sampler will shuffle the indices across the replicas.
I have enough RAM to store imagenet but I agree that it is a specific use case for large enough clusters. I would like to store the dataset before doing the augmentations, maybe I should try to pass the dataset from the main process to all the processes spawned instead of loading the dataset in each process. I’m unfamiliar with the multiprocessing library but I would guess that there should be a way to share the data before or after the different processes are spawned.
I’ve seen an answer you provided a few years ago: How to share data among DataLoader processes to save memory 4. I think what I’m looking for is close to the solution you provided, except that I don’t want a numpy array that requires images of the same size, which is not the case for raw ImageNet. So I need to find a way to share a list of tensors with all the different processes. |
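One way to share a list of tensors built once in the parent process is to move each tensor’s storage into shared memory before spawning the workers. A rough sketch with placeholder data (untested at ImageNet scale):
import torch
import torch.multiprocessing as mp

def worker(rank, world_size, cached_samples):
    # cached_samples references the same shared storages in every process;
    # indexing it here does not copy the image data
    img = cached_samples[rank % len(cached_samples)]
    print(rank, img.shape)

if __name__ == '__main__':
    # decode the dataset once in the parent (random placeholder images here)
    cached_samples = [torch.randn(3, 224, 224) for _ in range(1000)]
    for t in cached_samples:
        t.share_memory_()   # move the underlying storage to shared memory
    # with many tensors, torch.multiprocessing.set_sharing_strategy('file_system')
    # may be needed to avoid running out of file descriptors
    world_size = 4
    mp.spawn(worker, args=(world_size, cached_samples), nprocs=world_size)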
st175440 | Yes, I think you are right regarding the shuffling in DistributedSampler.
If you want to share arrays with a different shape you might want to check the shared_dict 7 implementation and see if this would be a valid approach. |
st175441 | Thank you, I will definitely look at this and mark it as a solution because I should be able to do what I want using this object. |
st175442 | I’m trying to reproduce the MLPerf v0.7 NVIDIA submission for BERT on a SLURM system. In doing so I encountered an error. Below I’ve included a minimal reproducible example:
test.sh:
#!/bin/env bash
#SBATCH --gpus-per-node=T4:2
#SBATCH -N 1
#SBATCH -t 0-00:05:00
cd $TMPDIR
echo "
import os
import torch
import torch.distributed
torch.distributed.init_process_group('nccl')
for var_name in ['SLURM_LOCALID', 'SLURM_PROCID', 'SLURM_NTASKS']:
print(f'{var_name} = {os.environ.get(var_name)}')
local_rank = int(os.environ['SLURM_LOCALID'])
torch.cuda.set_device(local_rank)
seeds_tensor = torch.LongTensor(5).random_(0, 2**32 - 1).to('cuda')
torch.distributed.broadcast(seeds_tensor, 0)
print('Broadcast successful')
" > tmp.py
srun -l --mpi=none --ntasks=2 --ntasks-per-node=2 singularity exec $SLURM_SUBMIT_DIR/PyTorch-1.8.1.sif python -m torch.distributed.launch --use_env --nproc_per_node=2 tmp.py
Which I then launch with sbatch test.sh; PyTorch-1.8.1.sif is built from the official PyTorch docker image docker pull pytorch/pytorch:1.8.1-cuda10.2-cudnn7-devel
The output is:
0: File "tmp.py", line 13, in <module>
0: Traceback (most recent call last):
0: torch.distributed.broadcast(seeds_tensor, 0)
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1039, in broadcast
0: File "tmp.py", line 13, in <module>
0: torch.distributed.broadcast(seeds_tensor, 0)
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1039, in broadcast
0: work = default_pg.broadcast([tensor], opts)
0: RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1616554786529/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, invalid usage, NCCL version 2.7.8
0: ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc).
0: work = default_pg.broadcast([tensor], opts)
0: RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1616554786529/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, invalid usage, NCCL version 2.7.8
0: ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc).
0: Traceback (most recent call last):
0: File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
0: "__main__", mod_spec)
0: File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
0: exec(code, run_globals)
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
0: main()
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
0: sigkill_handler(signal.SIGTERM, None) # not coming back
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
srun: error: alvis2-04: task 0: Exited with exit code 1
0: raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
0: subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'tmp.py']' returned non-zero exit status 1.
0: *****************************************
0: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
0: *****************************************
0: Killing subprocess 206223
0: Killing subprocess 206225
0: Traceback (most recent call last):
0: File "tmp.py", line 6, in <module>
0: torch.distributed.init_process_group('nccl')
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 500, in init_process_group
0: store, rank, world_size = next(rendezvous_iterator)
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/rendezvous.py", line 190, in _env_rendezvous_handler
0: store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
0: RuntimeError: Address already in use
0: SLURM_LOCALID = 0
0: SLURM_PROCID = 0
0: SLURM_NTASKS = 2
1: SLURM_LOCALID = 1
1: SLURM_PROCID = 1
1: SLURM_NTASKS = 2
0: Traceback (most recent call last):
0: File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
0: "__main__", mod_spec)
0: File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
0: exec(code, run_globals)
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
0: main()
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
0: sigkill_handler(signal.SIGTERM, None) # not coming back
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
0: raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
0: subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'tmp.py']' returned non-zero exit status 1.
0: *****************************************
srun: error: alvis2-08: task 0: Exited with exit code 1
0: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
0: *****************************************
0: Killing subprocess 49549
0: Killing subprocess 49551
srun: Job step aborted: Waiting up to 122 seconds for job step to finish.
0: slurmstepd: error: *** STEP 111669.0 ON alvis2-08 CANCELLED AT 2021-10-11T10:23:52 DUE TO TIME LIMIT ***
slurmstepd: error: *** JOB 111669 ON alvis2-08 CANCELLED AT 2021-10-11T10:23:52 DUE TO TIME LIMIT ***
1: *****************************************
So here there are two different errors, the first error comes from torch.distributed.init_process_group('nccl') with error
0: store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
0: RuntimeError: Address already in use
and the second error is from torch.distributed.broadcast with error
0: torch.distributed.broadcast(seeds_tensor, 0)
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1039, in broadcast
0: File "tmp.py", line 13, in <module>
0: torch.distributed.broadcast(seeds_tensor, 0)
0: File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1039, in broadcast
0: work = default_pg.broadcast([tensor], opts)
0: RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1616554786529/work/torch/lib/c10d/ProcessGroupNCCL.cpp:825, invalid usage, NCCL version 2.7.8
0: ncclInvalidUsage: This usually reflects invalid usage of NCCL library (such as too many async ops, too many collectives at once, mixing streams in a group, etc). |
st175443 | Update: only the Address already in use error seems to remain if the last line is replaced with
srun -l --mpi=none --ntasks=2 --ntasks-per-node=2 singularity exec $SLURM_SUBMIT_DIR/PyTorch-1.8.1.sif python -m torch.distributed.launch --use_env tmp.py |
st175444 | Can you also add
print(f"MASTER_ADDR: ${os.environ['MASTER_ADDR']}")
print(f"MASTER_PORT: ${os.environ['MASTER_PORT']}")
before torch.distributed.init_process_group("nccl"), that may give some insight into what endpoint is being used. Once you verify the address and port, then check that there is not a process currently using that address/port combo, you can try instantiating a TCPStore via python command line, to verify that it works. It is likely a port conflict based on what you set your port number to be in the environment variables. |
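For reference, a hand-rolled TCPStore check could look roughly like this (host and port are placeholders for whatever MASTER_ADDR/MASTER_PORT resolve to; run one shell with is_master=True and a second with is_master=False):
import datetime
from torch.distributed import TCPStore

store = TCPStore("127.0.0.1", 29500, 2, True, datetime.timedelta(seconds=30))
store.set("key", "value")
print(store.get("key"))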
st175445 | Hi,
I am trying to use a Swin Transformer backbone with my modified mdetr codebase. The code runs perfectly fine on a single GPU but gives the attached error when running on multiple GPUs using torch.distributed.launch.
I tried setting torch.autograd.set_detect_anomaly(True) and it pointed to line 133 of swin_transformers.py 1. Note that the error goes away if I set requires_grad=False for the relative_position_bias_table parameter.
I am using torch 1.8.0+cu11 on NVIDIA A6000 GPUs. I will appreciate any help to resolve this issue. Thank You
(screenshot of the error traceback attached)
# --------------------------------------------------------
# Swin Transformer
# Copyright (c) 2021 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ze Liu, Yutong Lin, Yixuan Wei
# --------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.checkpoint as checkpoint
import numpy as np
from timm.models.layers import DropPath, to_2tuple, trunc_normal_
class Mlp(nn.Module):
""" Multilayer perceptron."""
def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
def window_partition(x, window_size):
"""
Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C)
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
return windows
def window_reverse(windows, window_size, H, W):
"""
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
"""
B = int(windows.shape[0] / (H * W / window_size / window_size))
x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
return x
class WindowAttention(nn.Module):
""" Window based multi-head self attention (W-MSA) module with relative position bias.
It supports both of shifted and non-shifted window.
Args:
dim (int): Number of input channels.
window_size (tuple[int]): The height and width of the window.
num_heads (int): Number of attention heads.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
proj_drop (float, optional): Dropout ratio of output. Default: 0.0
"""
def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
super().__init__()
self.dim = dim
self.window_size = window_size # Wh, Ww
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
# define a parameter table of relative position bias
# (requires_grad=False to fix the multi-gpu training issue for downstream task.)
self.relative_position_bias_table = nn.Parameter(
torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads), requires_grad=False)
# 2*Wh-1 * 2*Ww-1, nH
# get pair-wise relative position index for each token inside the window
coords_h = torch.arange(self.window_size[0])
coords_w = torch.arange(self.window_size[1])
coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] = relative_coords[:, :, 0] + self.window_size[0] - 1 # shift to start from 0
relative_coords[:, :, 1] = relative_coords[:, :, 1] + self.window_size[1] - 1
relative_coords[:, :, 0] = relative_coords[:, :, 0] * (2 * self.window_size[1] - 1)
relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
self.register_buffer("relative_position_index", relative_position_index)
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
trunc_normal_(self.relative_position_bias_table, std=.02)
self.softmax = nn.Softmax(dim=-1)
def forward(self, x, mask=None):
""" Forward function.
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
attn = attn + relative_position_bias.unsqueeze(0)
if mask is not None:
nW = mask.shape[0]
attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = self.softmax(attn)
else:
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
class SwinTransformerBlock(nn.Module):
""" Swin Transformer Block.
Args:
dim (int): Number of input channels.
num_heads (int): Number of attention heads.
window_size (int): Window size.
shift_size (int): Shift size for SW-MSA.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float, optional): Stochastic depth rate. Default: 0.0
act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, dim, num_heads, window_size=7, shift_size=0,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
act_layer=nn.GELU, norm_layer=nn.LayerNorm):
super().__init__()
self.dim = dim
self.num_heads = num_heads
self.window_size = window_size
self.shift_size = shift_size
self.mlp_ratio = mlp_ratio
assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"
self.norm1 = norm_layer(dim)
self.attn = WindowAttention(
dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
self.H = None
self.W = None
def forward(self, x, mask_matrix):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
mask_matrix: Attention mask for cyclic shift.
"""
B, L, C = x.shape
H, W = self.H, self.W
assert L == H * W, "input feature has wrong size"
shortcut = x
x = self.norm1(x)
x = x.view(B, H, W, C)
# pad feature maps to multiples of window size
pad_l = pad_t = 0
pad_r = (self.window_size - W % self.window_size) % self.window_size
pad_b = (self.window_size - H % self.window_size) % self.window_size
x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
_, Hp, Wp, _ = x.shape
# cyclic shift
if self.shift_size > 0:
shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
attn_mask = mask_matrix
else:
shifted_x = x
attn_mask = None
# partition windows
x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
# W-MSA/SW-MSA
attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
# merge windows
attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
# reverse cyclic shift
if self.shift_size > 0:
x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
else:
x = shifted_x
if pad_r > 0 or pad_b > 0:
x = x[:, :H, :W, :].contiguous()
x = x.view(B, H * W, C)
# FFN
x = shortcut + self.drop_path(x)
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
class PatchMerging(nn.Module):
""" Patch Merging Layer
Args:
dim (int): Number of input channels.
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, dim, norm_layer=nn.LayerNorm):
super().__init__()
self.dim = dim
self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
self.norm = norm_layer(4 * dim)
def forward(self, x, H, W):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
"""
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
x = x.view(B, H, W, C)
# padding
pad_input = (H % 2 == 1) or (W % 2 == 1)
if pad_input:
x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
x = self.norm(x)
x = self.reduction(x)
return x
class BasicLayer(nn.Module):
""" A basic Swin Transformer layer for one stage.
Args:
dim (int): Number of feature channels
depth (int): Depths of this stage.
num_heads (int): Number of attention head.
window_size (int): Local window size. Default: 7.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
"""
def __init__(self,
dim,
depth,
num_heads,
window_size=7,
mlp_ratio=4.,
qkv_bias=True,
qk_scale=None,
drop=0.,
attn_drop=0.,
drop_path=0.,
norm_layer=nn.LayerNorm,
downsample=None,
use_checkpoint=False):
super().__init__()
self.window_size = window_size
self.shift_size = window_size // 2
self.depth = depth
self.use_checkpoint = use_checkpoint
# build blocks
self.blocks = nn.ModuleList([
SwinTransformerBlock(
dim=dim,
num_heads=num_heads,
window_size=window_size,
shift_size=0 if (i % 2 == 0) else window_size // 2,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop,
attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
norm_layer=norm_layer)
for i in range(depth)])
# patch merging layer
if downsample is not None:
self.downsample = downsample(dim=dim, norm_layer=norm_layer)
else:
self.downsample = None
def forward(self, x, H, W):
""" Forward function.
Args:
x: Input feature, tensor size (B, H*W, C).
H, W: Spatial resolution of the input feature.
"""
# calculate attention mask for SW-MSA
Hp = int(np.ceil(H / self.window_size)) * self.window_size
Wp = int(np.ceil(W / self.window_size)) * self.window_size
img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
h_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
w_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
cnt = 0
for h in h_slices:
for w in w_slices:
img_mask[:, h, w, :] = cnt
cnt = cnt + 1
mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
for blk in self.blocks:
blk.H, blk.W = H, W
if self.use_checkpoint:
x = checkpoint.checkpoint(blk, x, attn_mask)
else:
x = blk(x, attn_mask)
if self.downsample is not None:
x_down = self.downsample(x, H, W)
Wh, Ww = (H + 1) // 2, (W + 1) // 2
return x, H, W, x_down, Wh, Ww
else:
return x, H, W, x, H, W
class PatchEmbed(nn.Module):
""" Image to Patch Embedding
Args:
patch_size (int): Patch token size. Default: 4.
in_chans (int): Number of input image channels. Default: 3.
embed_dim (int): Number of linear projection output channels. Default: 96.
norm_layer (nn.Module, optional): Normalization layer. Default: None
"""
def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
super().__init__()
patch_size = to_2tuple(patch_size)
self.patch_size = patch_size
self.in_chans = in_chans
self.embed_dim = embed_dim
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
if norm_layer is not None:
self.norm = norm_layer(embed_dim)
else:
self.norm = None
def forward(self, x):
"""Forward function."""
# padding
_, _, H, W = x.size()
if W % self.patch_size[1] != 0:
x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
if H % self.patch_size[0] != 0:
x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
x = self.proj(x) # B C Wh Ww
if self.norm is not None:
Wh, Ww = x.size(2), x.size(3)
x = x.flatten(2).transpose(1, 2)
x = self.norm(x)
x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
return x
class SwinTransformer(nn.Module):
""" Swin Transformer backbone.
A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
https://arxiv.org/pdf/2103.14030
Args:
pretrain_img_size (int): Input image size for training the pretrained model,
used in absolute postion embedding. Default 224.
patch_size (int | tuple(int)): Patch size. Default: 4.
in_chans (int): Number of input image channels. Default: 3.
embed_dim (int): Number of linear projection output channels. Default: 96.
depths (tuple[int]): Depths of each Swin Transformer stage.
num_heads (tuple[int]): Number of attention head of each stage.
window_size (int): Window size. Default: 7.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
drop_rate (float): Dropout rate.
attn_drop_rate (float): Attention dropout rate. Default: 0.
drop_path_rate (float): Stochastic depth rate. Default: 0.2.
norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
patch_norm (bool): If True, add normalization after patch embedding. Default: True.
out_indices (Sequence[int]): Output from which stages.
frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
-1 means not freezing any parameters.
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
"""
def __init__(self,
pretrain_img_size=224,
patch_size=4,
in_chans=3,
embed_dim=96,
depths=[2, 2, 6, 2],
num_heads=[3, 6, 12, 24],
window_size=7,
mlp_ratio=4.,
qkv_bias=True,
qk_scale=None,
drop_rate=0.,
attn_drop_rate=0.,
drop_path_rate=0.2,
norm_layer=nn.LayerNorm,
ape=False,
patch_norm=True,
out_indices=(0, 1, 2, 3),
frozen_stages=-1,
use_checkpoint=False):
super().__init__()
self.pretrain_img_size = pretrain_img_size
self.num_layers = len(depths)
self.embed_dim = embed_dim
self.ape = ape
self.patch_norm = patch_norm
self.out_indices = out_indices
self.frozen_stages = frozen_stages
# split image into non-overlapping patches
self.patch_embed = PatchEmbed(
patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
norm_layer=norm_layer if self.patch_norm else None)
# absolute position embedding
if self.ape:
pretrain_img_size = to_2tuple(pretrain_img_size)
patch_size = to_2tuple(patch_size)
patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
self.absolute_pos_embed = nn.Parameter(
torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
trunc_normal_(self.absolute_pos_embed, std=.02)
self.pos_drop = nn.Dropout(p=drop_rate)
# stochastic depth
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
# build layers
self.layers = nn.ModuleList()
for i_layer in range(self.num_layers):
layer = BasicLayer(
dim=int(embed_dim * 2 ** i_layer),
depth=depths[i_layer],
num_heads=num_heads[i_layer],
window_size=window_size,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
drop=drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
norm_layer=norm_layer,
downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
use_checkpoint=use_checkpoint)
self.layers.append(layer)
num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
self.num_features = num_features
# add a norm layer for each output
for i_layer in out_indices:
layer = norm_layer(num_features[i_layer])
layer_name = f'norm{i_layer}'
self.add_module(layer_name, layer)
self._freeze_stages()
def _freeze_stages(self):
if self.frozen_stages >= 0:
self.patch_embed.eval()
for param in self.patch_embed.parameters():
param.requires_grad = False
if self.frozen_stages >= 1 and self.ape:
self.absolute_pos_embed.requires_grad = False
if self.frozen_stages >= 2:
self.pos_drop.eval()
for i in range(0, self.frozen_stages - 1):
m = self.layers[i]
m.eval()
for param in m.parameters():
param.requires_grad = False
def init_weights(self, pretrained=None):
"""Initialize the weights in backbone.
Args:
pretrained (str, optional): Path to pre-trained weights.
Defaults to None.
"""
def _init_weights(m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
if isinstance(pretrained, str):
self.apply(_init_weights)
load_pretrained_weights(self, pretrained, strict=False)
elif pretrained is None:
self.apply(_init_weights)
else:
raise TypeError('pretrained must be a str or None')
def forward(self, x):
"""Forward function."""
x = self.patch_embed(x)
Wh, Ww = x.size(2), x.size(3)
if self.ape:
# interpolate the position embedding to the corresponding size
absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
else:
x = x.flatten(2).transpose(1, 2)
x = self.pos_drop(x)
outs = []
for i in range(self.num_layers):
layer = self.layers[i]
x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
if i in self.out_indices:
norm_layer = getattr(self, f'norm{i}')
x_out = norm_layer(x_out)
out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
outs.append(out)
return tuple(outs)
def train(self, mode=True):
"""Convert the model into training mode while keep layers freezed."""
super(SwinTransformer, self).train(mode)
self._freeze_stages()
def load_pretrained_weights(model, checkpoints_path, strict=False):
print(f"Loading swin backbone checkpoints from {checkpoints_path}.")
checkpoints = torch.load(checkpoints_path)
checkpoints = checkpoints["model"]
status = model.load_state_dict(checkpoints, strict=strict)
print(f"Missing Keys: {status.missing_keys}\nUnexpected Keys: {status.unexpected_keys}") |
st175446 | @Muhammad_Maaz Are you using DDP here for multiple GPUs? If so Distributed: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2048]] is at version 4; expected version 3 instead · Issue #62474 · pytorch/pytorch · GitHub 2 might be related. |
st175447 | @Muhammad_Maaz Can you set broadcast_buffers=False in DDP to see if that helps resolve the issue? Also, if you could provide us with a full repro script, would help a lot more in debugging. |
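For reference, the flag just goes into the DDP constructor, e.g.:
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[rank], output_device=rank, broadcast_buffers=False)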
st175448 | Thanks, @pritamdamania87,
Yes, I am using DDP, and setting broadcast_buffers=False worked for me. Now I am wondering if it can affect the training outputs/accuracy?
Also, all the code is available at the main branch of GitHub - mmaaz60/mdetr. You just need to set requires_grad=True at mdetr/swin_transformers.py at fe1394c67e76a6c7e521bbda77d8294714038a3a · mmaaz60/mdetr · GitHub for reproducing the issue. I know it is a big codebase but I don’t have any shorter version for reproducing this behavior. |
st175449 | I believe the error was coming from BatchNorm. I think for best accuracy you should convert all your BatchNorm layers to SyncBatchNorm 1 |
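A minimal sketch of that conversion, done before wrapping the model in DDP:
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model = torch.nn.parallel.DistributedDataParallel(
    model.to(rank), device_ids=[rank], output_device=rank)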
st175450 | Hi,
I have more of a conceptual question. When training a NN model (especially through DDP) on a single node with multiple GPUs, there is an urge to utilize the maximum GPU memory. This is usually done by increasing the training Batch_Size of input samples (e.g., from 32 to 64, 128, 256, 512 or even 1024).
So I have three questions:
1. Is there any change in terms of validation accuracy (and loss) when we do the training on more # of GPUs?
2. If I increase the Batch_size to utilize the GPU memory, is there any effect on validation accuracy and loss?
3. Do we need to tune the other hyperparameters (like learning rate, weight decay, epochs, etc.) when changing the Batch_size in the above scenario?
Thanks. |
st175451 | Response to your questions:
1. Yes, potentially: DDP keeps the loss local to each model replica and averages gradients, which might produce a different result compared to having a global loss and back-propagating from there.
2. Also potentially yes: batch_size is another hyperparameter that can be tuned.
3. As a general note, it is good to re-tune all hyperparameters when switching to DDP. These hyperparameters can all affect the accuracy of the model when compared to a single GPU. However, the difference in accuracy should not be very drastic (this is subjective); if it is, then it requires additional investigation. |
st175452 | The code is below.
import torch
from torch import nn
import torch.distributed as dist
import torch.multiprocessing as mp
import os
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.attr1 = nn.Parameter(torch.tensor([1., 2., 3.]))
self.register_buffer('attr2', torch.tensor([4., 5., 6.]))
self.attr3 = torch.tensor([7., 8., 9.])
def forward(self, x, rank):
hd = x * self.attr1
self.attr2 = self.attr2 / (rank + 1)
hd = hd * self.attr2
self.attr3 = self.attr3.to(rank)
self.attr3 = self.attr3 / (rank + 1)
y = hd * self.attr3
y = y.mean()
return y
def run(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group('nccl', rank=rank, world_size=world_size)
# torch.cuda.set_device(rank)
os.environ['CUDA_VISIBLE_DEVICES'] = f'{rank}'
my_model = MyModel().to(rank)
my_model = nn.parallel.DistributedDataParallel(my_model, device_ids=[rank], output_device=rank)
optimizer = torch.optim.SGD(my_model.parameters(), lr=0.001, momentum=0.9)
input = torch.tensor([1., 2., 3.]) * (rank + 1)
optimizer.zero_grad()
output = my_model(input, rank)
output.backward()
if rank == 0:
print(my_model.module.attr1.grad)
optimizer.step()
if rank == 0:
print(my_model.module.attr1)
print(my_model.module.attr2)
print(my_model.module.attr3)
if __name__ == '__main__':
world_size = 2
mp.spawn(run, args=(world_size, ), nprocs=2)
    print('Done')
Initially, I wrote this code in order to see how parameters and buffers are synchronized in multi-GPU training.
I found that torch.cuda.set_device(rank) works well, but os.environ['CUDA_VISIBLE_DEVICES'] does not; the latter reports an error.
The error information is below.
(screenshot of the error attached)
Hope someone can tell me why. |
st175453 | Solved by JuanFMontesinos in post #4
it shouldn’t be inside the python script but to be set as an enviroment variable in the console such as
CUDA_VISIBLE_DEVICES=0,1 python your_script.py
Note that you SHOUDLN’T t set it as a permanent enviroment variable in the bashrc as it affects the whole system. |
st175454 | you have to set it before calling the python code.
It’s not pytorch’s but nvidia’s behaviour.
Devices are assigned to the process before starting python therefore it doesn’t work once u are in. |
st175455 | it shouldn’t be inside the python script but to be set as an enviroment variable in the console such as
CUDA_VISIBLE_DEVICES=0,1 python your_script.py
Note that you SHOUDLN’T t set it as a permanent enviroment variable in the bashrc as it affects the whole system. |
st175456 | This way I only set the GPU devices to be used for all processes, not each process.
But torch.cuda.set_device() can set GPU device for each process. |
st175457 | You can manage internally (via torch commands) which gpu to use at any time.
Most of the data parallel funcs allows to set that and you can set the devices manually anyway
Just mentioning that defining cuda_visible_devices inside python won’t work no matter what u do. |
st175458 | So, os.environ['CUDA_VISIBLE_DEVICES] and torch.cuda.set_device() are not conflict.
Use CUDA_VISIBLE_DEVICES=0,1 python your_script.py to set all available GPU devices for all processes. In each process, we can also use torch.cuda.set_device() to specify the GPU device for this process.
Is this the correct understanding? |
st175459 | Use CUDA_VISIBLE_DEVICES=0,1 python your_script.py to set all available GPU devices for all processes.
I’m not aware of the internals of torch.cuda.set_device.
Just to mention, when you pass device_ids this is a list which enumerates the available GPUs from the pytorch point of view.
For example, if you set
CUDA_VISIBLE_DEVICES=5,7,9 there will be 3 GPUs numbered from 0 to 2,
so you can pass device_ids=[0,1,2]
st175460 | Hi,
I need to use multiple GPUs available in the machine in a way that each of the processes uses exactly one GPU. I modified the mnist_hogwild code https://github.com/pytorch/examples/blob/master/mnist_hogwild/main.py 19 as the following:
dataloader_kwargs = {'pin_memory': True} if use_cuda else {}
dcount = torch.cuda.device_count()
devices = []
model = Net()
for i in range(dcount):
devices.append(torch.device("cuda:"+str(i)))
torch.manual_seed(args.seed)
mp.set_start_method('spawn')
# model = Net().to(device)
for i in range(dcount):
model.to(devices[i])
model.share_memory() # gradients are allocated lazily, so they are not shared here
processes = []
for rank in range(args.num_processes):
p = mp.Process(target=train, args=(rank, args, model, devices[int(rank%dcount)], dataloader_kwargs))
# We first train the model across `num_processes` processes
p.start()
processes.append(p)
for p in processes:
p.join()
However, while running this code with num_processes = 2, as there are two GPUs in my machine, I can see only one of them engaged. Can you please suggest what exactly I need in the code here? |
st175461 | Please review my version.
github.com
aurotripathy/menace/blob/master/mnist_multigpu_hogwild/main.py 25
"""
Adding multi-gpu support to mnist w/hogwild
"""
from __future__ import print_function
import argparse
import torch
import torch.multiprocessing as mp
from model import Net
from train import train, test
from shared_optim import SharedAdam
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
help='number of epochs to train (default: 10)')
This file has been truncated; see the link above for the full version.
I’m happy to fix issues, improve readability.
This really is derived from an RL implementation by @dgriff available here 4 |
st175462 | bapi:
for i in range(dcount):
    model.to(devices[i])
This snippet will first move the model to device 0 and then to device 1. If you don’t explicitly move the model in the functions you’re running through multiprocessing, then you’ll have to make this dependent on the rank of the target process. As is, I assume you’re only ever using GPU 1, the last device the model was moved to.
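For what it’s worth, here is a rough sketch of one rank-dependent variant, keeping the shared (share_memory) model on the CPU (i.e. dropping the model.to(devices[i]) loop in the parent) and giving each process a private copy on its own GPU, roughly the A3C-style pattern; the dataset, hyperparameter names and the gradient copy-back are illustrative assumptions, not a tested drop-in:
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms


def train(rank, args, shared_model, device, dataloader_kwargs):
    torch.manual_seed(args.seed + rank)
    dataset = datasets.MNIST('../data', train=True, download=True,
                             transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(
        dataset, batch_size=args.batch_size, shuffle=True, **dataloader_kwargs)

    # private copy of the same network on this process's own GPU
    local_model = type(shared_model)().to(device)
    # the optimizer updates the *shared* CPU parameters
    optimizer = torch.optim.SGD(shared_model.parameters(), lr=args.lr,
                                momentum=args.momentum)

    for data, target in loader:
        local_model.load_state_dict(shared_model.state_dict())  # pull the latest shared weights
        local_model.zero_grad()
        data, target = data.to(device), target.to(device)
        loss = F.nll_loss(local_model(data), target)
        loss.backward()
        # copy the gradients from the GPU copy back onto the shared parameters
        for sp, lp in zip(shared_model.parameters(), local_model.parameters()):
            sp.grad = lp.grad.detach().cpu()
        optimizer.step()
This keeps the shared model on the CPU as the single source of truth while each GPU runs its own forward/backward. |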
st175463 | Hi everyone,
I am wondering how to set up multi-GPU cross-validation in PyTorch.
In the cross-validation part, my code (only the epoch part) looks like this:
for epoch in range(num_epochs):
    loss = 0
    train_total_loss = 0
    model.to(device)
    model.train()
    for batch_index, (x_batch, y_batch) in enumerate(train_loader):
        x_batch, y_batch = x_batch.to(device), y_batch.to(device)
        optimizer.zero_grad()
        out = model(x_batch)
        y_batch = y_batch.view(out.shape)
        loss = torch.nn.MSELoss()(out, y_batch)
        loss.backward()
        optimizer.step()
        train_total_loss += loss.item()
    train_total_loss = train_total_loss / len(train_loader)  # loss of each epoch
Should I add torch.nn.DataParallel(model) before model.to(device) to use multiple GPUs? And should I change how the loss is calculated?
Thanks a lot for your help! |
st175464 | It is recommended to use torch.nn.parallel.DistributedDataParallel, see the pointers here (Distributed Data Parallel — PyTorch master documentation 4), since DataParallel is not actively being worked on and will eventually be deprecated.
If you do want to use torch.nn.DataParallel (DataParallel — PyTorch master documentation 1), then you also need to specify the device IDs for each GPU that you want to use. For example, for two GPUs you would specify torch.nn.DataParallel(model, device_ids=[0, 1]) for cuda:0 and cuda:1; the model.to(device) is not necessary.
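A minimal sketch of the DataParallel variant applied to the loop above (assuming two visible GPUs; num_epochs, train_loader and optimizer are taken from your snippet, and the module is placed on cuda:0 once before wrapping, which the DataParallel documentation expects):
device = torch.device("cuda:0")
model = torch.nn.DataParallel(model.to(device), device_ids=[0, 1])
criterion = torch.nn.MSELoss()

for epoch in range(num_epochs):
    model.train()
    train_total_loss = 0.0
    for x_batch, y_batch in train_loader:
        x_batch, y_batch = x_batch.to(device), y_batch.to(device)
        optimizer.zero_grad()
        out = model(x_batch)  # input is scattered across the GPUs, output gathered on cuda:0
        loss = criterion(out, y_batch.view(out.shape))
        loss.backward()
        optimizer.step()
        train_total_loss += loss.item()
    train_total_loss /= len(train_loader)  # average loss of the epoch
Because the gathered output lands on cuda:0, the loss computation stays exactly as in a single-GPU run. |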
st175465 | I’m currently running an experiment with DistributedDataParallel, with batch normalization (not synchronized). I have two questions regarding some issues:
Since I am not synchronizing the batch norm, each replica keeps its own running means and running stats. However, when I evaluate the model on different GPUs, the results seem identical. Can somebody tell me how this could be happening?
Here is my code for evaluation:
def evaluate(test_loader, model, rank, epoch):
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            print(module.running_mean)
            break
    accuracy = 0.
    cnt = 0.
    with torch.no_grad():
        first = True
        for data in test_loader:
            inputs, labels = data[0].to(rank), data[1].to(rank)
            if epoch == 0 and first:
                print(f"Val Batch Size: {inputs.shape[0]}")
                first = False
            preds = model(inputs)
            accuracy += (torch.argmax(preds, 1) == labels).sum().item()
            cnt += len(labels)
    accuracy *= 100 / cnt
    return accuracy
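One way to double-check would be to gather the first BatchNorm’s running_mean from every rank and compare them directly (just a sketch; model and rank come from the surrounding DDP setup). My current guess is that DDP’s default broadcast_buffers=True re-broadcasts rank 0’s buffers at every forward pass, which would keep the running stats in sync even without SyncBatchNorm, but I’m not sure:
import torch
import torch.nn as nn
import torch.distributed as dist

bn = next(m for m in model.modules() if isinstance(m, nn.BatchNorm2d))
means = [None] * dist.get_world_size()
dist.all_gather_object(means, bn.running_mean.cpu())  # collect every rank's running_mean
if rank == 0:
    print([torch.allclose(means[0], m) for m in means])
If these come out identical on every rank, that would at least be consistent with the matching evaluation results.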
Since evaluation on each device yields the same result, I tried to run the evaluation on only a single replica; however, I got the following error:
Traceback (most recent call last):
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/lthilnklover/sam_torch/parallel_test.py", line 119, in <module>
mp.spawn(main, args=(world_size, args), nprocs=world_size, join=True)
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/lthilnklover/sam_torch/parallel_test.py", line 101, in main
accuracy = evaluate(test_loader, model, rank)
File "/home/lthilnklover/sam_torch/parallel_test.py", line 32, in evaluate
preds = model(inputs)
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 791, in forward
self._sync_params()
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1318, in _sync_params
self._distributed_broadcast_coalesced(
File "/home/lthilnklover/.conda/envs/torch_1.8/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1278, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(
RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:598] Connection closed by peer
But I have no clue why this is happening…
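The only workaround I can think of (just a sketch, not verified): the traceback shows DDP’s forward calling _sync_params, which is a collective broadcast, so calling the wrapped module directly on a single rank should avoid any collective at all:
import torch.distributed as dist

if rank == 0:
    # model.module is the plain nn.Module inside the DDP wrapper, so no sync is triggered
    accuracy = evaluate(test_loader, model.module, rank, epoch)
dist.barrier()  # keep the other ranks parked until rank 0 finishes evaluating
Does that sound like a reasonable way around it, or is there a better pattern? |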
st175466 | I am using the PyTorch DistributedDataParallel approach, spawning multiple processes from a parent process, each running on a separate GPU. I am using the PyTorch DistributedSampler along with a DataLoader to load batches of input data in each process.
My questions:
1) Under the hood, how do the DistributedSampler and DataLoader slice the input data? Just for simplicity, say we have 4 GPUs, 400 input samples, and a batch size of 50. Will the sampler (together with the DataLoader) send the first 50 samples to GPU 0, the next 50 to GPU 1, the next 50 to GPU 2, then GPU 3, and then the next 50 again to GPU 0, i.e. in order of GPU device number? Or is the GPU that receives the next batch chosen at random, based on which GPU finished its previous batch first? Or are the 400 samples first divided into 4 parts, so that GPU 0 gets the first 100 samples (50 at a time), GPU 1 the next 100 (50 at a time), and so on; in that case, even if GPU 3 starts its second batch before GPU 0, GPU 0 would still hold the first 100 input samples and GPU 3 the last 100?
2) My second question is how to retrieve the output data in the same order as the input data, so that the final consolidated output (combining the outputs from all processes into one data structure) is in the same order as the original inputs, with each output corresponding to the right input. |
st175467 | Thanks for posting @kaleemiqb. If you use the recommended DistributedDataParallel (DDP) mode, where there is a dedicated process for each GPU, DDP does not split the input data. Each process has its own data loader and its own DDP instance; DDP only helps to automatically compute the globally averaged gradient in the backward pass. So it really depends on how the data loader loads its next batch, which I think is random.
For the second question, you can record the input batch and the model output in a map within each process, and if you want to concatenate them together, do an all_gather manually; but the input batches across processes might not end up in rank order.
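To make that concrete, here is a sketch of how DistributedSampler partitions the indices and how the per-rank outputs could be stitched back into the original order (world_size, rank, dataset and model are assumed to come from the surrounding DDP setup, and the model output is assumed to be 2-D):
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=False)
loader = DataLoader(dataset, batch_size=50, sampler=sampler)
# With shuffle=False, rank r is handed indices r, r + world_size, r + 2*world_size, ...
# (a deterministic strided split, padded so every rank gets the same number of samples).

local_out = []
with torch.no_grad():
    for inputs, _ in loader:
        local_out.append(model(inputs.to(rank)).cpu())
local_out = torch.cat(local_out)  # this rank's outputs, in its own index order

gathered = [None] * world_size
dist.all_gather_object(gathered, local_out)  # one output tensor per rank, on every rank
# Undo the strided split to recover the original dataset order
# (valid because the padding gives every rank the same row count):
ordered = torch.stack(gathered, dim=1).reshape(-1, local_out.shape[-1])
One caveat is the padding: DistributedSampler repeats a few samples when the dataset size is not divisible by the world size, so the tail of ordered may contain duplicates that need trimming. |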