st176368 | Are all the GPUs on the same machine, and do they have the same type?
Do you mean the gradients are not synced even at the end of training?
Can you share some code to reproduce this? |
st176369 | Yes, the GPUs are on the same computing node and all GPUs are of the same type. What I mean by "not synced" is that if I call tensor.data.grad after the backward call in the main worker, the gradients differ on each GPU. That essentially means each GPU is training separately and never really syncing the gradients at any point.
I’m afraid I cannot post the entire code and a minimal example might not capture the problem. That’s why I’m only asking for possible reasons in general. |
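As a general check, here is a minimal sketch (assuming model is the DDP-wrapped module and torch.distributed is already initialized) that prints a per-rank checksum of every gradient right after backward(); if DDP is syncing correctly, the printed sums should match across ranks:
import torch.distributed as dist

def check_grad_sync(model):
    rank = dist.get_rank()
    for name, param in model.named_parameters():
        if param.grad is not None:
            # identical values on every rank indicate the allreduce is happening
            print(f"rank {rank} | {name} | grad sum {param.grad.sum().item():.6f}")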
st176370 | My model does not contain a forward method, can this confuse the DDP?
I am confused. Do you mean that you have only used built-in layers such as:
model = nn.Sequential(
    nn.Conv2d(i, j, k),
    nn.ReLU(),
    ...
)
If so, you still called forward of each module implicitly, and the allreduce during backward pass should be triggered.
Without investigating the source code, I cannot think of any other reason for unsynced gradients, assuming you didn't use the no_sync context manager. You should verify whether allreduce is ever invoked. A few ideas:
You can try torch.profiler and check if there is any allreduce operator in your GPU traces.
Alternatively, you can try registering a PowerSGD DDP comm hook, and check if there is any log about PowerSGD stats.
I'm not sure whether using the slower DataParallel instead of DistributedDataParallel would bypass this. |
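For the first idea, a rough sketch (not the poster's code; model, inputs, criterion, and targets are placeholders) of using torch.profiler to look for allreduce kernels during the backward pass:
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    out = model(inputs)               # model is assumed to be the DDP-wrapped module
    loss = criterion(out, targets)
    loss.backward()

# look for entries such as nccl:all_reduce in the printed table
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))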
st176371 | I used a function outside of the model class to perform the forward pass. It seems DDP requires a forward method nested in the nn.Module, which is not stated in the documentation.
After changing my code to include the forward method the error is the following:
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the 'forward' function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple 'checkpoint' functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet.
The code works fine on a single GPU. So I do not quite understand the error, since backpropagating twice through the same graph without retain_graph=True would not be possible anyway. |
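For reference, a hedged illustration of the change described above: putting the forward logic inside nn.Module.forward instead of a free function, so DDP's autograd hooks are set up around a single forward-backward pass (MyModel is a placeholder, not the poster's model):
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        # DDP wraps this call; parameters are marked ready exactly once per backward
        return self.net(x)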
st176372 | I have trained a model using DistributedDataParallel. After training, I serialized the model like so where the model is wrapped using DistributedDataParallel:
torch.save(model.state_dict(), 'model.pt')
Note that this serialization was performed in the launcher function which is typically passed to spawn() of torch.multiprocessing. My training setup consists of 4 GPUs.
Now when I am trying to load the checkpoint in my local inference setup (single GPU) the keys are not matching. The model, in this case, is not wrapped using DistributedDataParallel. Any pointers would be useful. |
st176373 | Solved by sio277 in post #2
Save your DDP model after unwrapping DataParallel, such as,
torch.save(model.module.state_dict(), 'model.pt')
Here, model.module is where your original model (before DDP wrapping) is placed. |
st176374 | Save your DDP model after unwrapping DataParallel, such as,
torch.save(model.module.state_dict(), 'model.pt')
Here, model.module is where your original model (before DDP wrapping) is placed. |
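On the loading side, a minimal sketch for a single-GPU machine (MyModel is a placeholder for the original, unwrapped model class); map_location keeps the tensors off devices that do not exist locally:
model = MyModel()                                   # plain model, no DDP/DataParallel wrapper
state = torch.load('model.pt', map_location='cuda:0')
model.load_state_dict(state)
model.cuda().eval()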
st176375 | Thanks @sio277.
Also, a slightly unrelated question on how to best log/print the loss and other metrics in these settings. Currently, I am only logging loss and other metrics from the master i.e. when rank == 0. |
st176376 | To get mean metrics across all the ranks, I use an all-reduce function, something like this:
import torch.distributed as dist

def global_meters_all_avg(args, *meters):
    """meters: scalar values of loss/accuracy calculated in each rank"""
    tensors = [torch.tensor(meter, device=args.gpu, dtype=torch.float32) for meter in meters]
    for tensor in tensors:
        # each item of `tensors` is all-reduced starting from index 0 (in-place)
        dist.all_reduce(tensor)
    return [(tensor / args.world_size).item() for tensor in tensors]
See Distributed communication package - torch.distributed — PyTorch 1.8.1 documentation 3 for more details about the collective communications. |
st176377 | I see. Could you also share a minimal example as to where global_meters_all_avg() should be placed inside the training loop? Let's say the following is my loop (part of the launcher train() function called by mp.spawn()):
for batch in pbar:
    # load image and mask into device memory
    image = batch['image'].cuda(rank, non_blocking=True)
    mask = batch['mask'].cuda(rank, non_blocking=True)

    # pass images into model
    pred = model(image)

    # get loss
    loss = criteria(pred, mask)

    # update the model
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step() |
st176378 | I usually use that function after each epoch (not during the batch iteration), to avoid worsening the training speed. During the batch iterations, I accumulate the loss values, and after 1 epoch, let global_meters_all_avg be called with an input of the accumulated loss value. |
st176379 | Note that for accumulating the loss in each rank, I use this class:
class AvgMeter:
    def __init__(self):
        self.reset()

    def reset(self):
        self.sum = 0
        self.count = 0
        self.avg = 0

    def update(self, val, n=1):
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count
In your code, you can use this class something like this:
losses = AvgMeter()
for batch in pbar:
    # load image and mask into device memory
    image = batch['image'].cuda(rank, non_blocking=True)
    mask = batch['mask'].cuda(rank, non_blocking=True)

    # pass images into model
    pred = model(image)

    # get loss
    loss = criteria(pred, mask)
    losses.update(loss.item(), image.size(0))  # accumulate the loss

    # update the model
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()

# after each epoch
loss = losses.avg
global_loss = global_meters_all_avg(args, loss)
The global_loss is the one all-reduced (averaged) across the ranks. |
st176380 | One more doubt: when logging or printing global_loss, I think it needs to be printed only from one rank to prevent duplicate entries. Something like if rank == 0: print(global_loss). |
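A small sketch tying the two points together (assuming args.rank holds this process's rank and epoch is the current epoch): every rank must still make the all-reduce call, and only rank 0 prints:
global_loss = global_meters_all_avg(args, losses.avg)[0]  # collective call on every rank
if args.rank == 0:
    print(f"epoch {epoch}: loss {global_loss:.4f}")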
st176381 | I'm trying to specify which single GPU to run code on within Python code, by setting the GPU index visible to PyTorch. Here's what I've tried:
for i in range(8):  # 8 gpus
    os.environ["CUDA_AVAILABLE_DEVICES"] = str(i)
    print(torch.cuda.device_count())
    # this line always outputs 8 (all 8 devices) instead of 1...
    ...
I’m using PyTorch 1.0.0. How do I specify which GPU machine (by index or otherwise) to run code on (without using .to(device)) within Python code? |
st176382 | Solved by mrshenli in post #8
The init_process_group API only sets up the process where this function is invoked. And, as the world_size is set to 1, It only expects one process in the distributed training gang. If you would like to use multiple processes, please see this example.
but this causes problems later in my code, wh… |
st176383 | Hey @CCL, you will need to set the CUDA_AVAILABLE_DEVICES env var before launching the process. Something like:
$ CUDA_AVAILABLE_DEVICES=0 python main.py
If you just want to set the default device, you can use set_device 209
Update
Please ignore the code above. I misread the variable name. It should be CUDA_VISIBLE_DEVICES. |
st176384 | Hi thanks for the reply, is it possible to set it with python instead of in shell? |
st176385 | Hi thanks for the reply, is it possible to set it with python instead of in shell?
Yes, but you need to make sure it is set before initializing the CUDA context. See the code below:
import torch
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
torch.cuda.device_count() # print 1
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
torch.cuda.device_count() # still print 1 |
st176386 | Do you mean that once it’s set, it cannot be changed? For my case, I’m hoping to make GPU 0 visible on the 1st iteration, GPU 1 visible on the 2nd, etc till GPU 7 and iter 8. Is there a way to do this from Python? Thanks a lot! |
st176387 | Do you mean that once it’s set, it cannot be changed?
I believe so.
For my case, I’m hoping to make GPU 0 visible on the 1st iteration, GPU 1 visible on the 2nd, etc till GPU 7 and iter 8. Is there a way to do this from Python?
Can this be done by explicitly passing torch.cuda.device(i) to tensors/modules, or by using torch.cuda.set_device(i)? Is there any reason that you would like to change the visible devices? |
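A hedged sketch of that suggestion: instead of changing CUDA_VISIBLE_DEVICES per iteration, select the GPU from Python with set_device or an explicit device ordinal:
import torch

for i in range(torch.cuda.device_count()):
    torch.cuda.set_device(i)                    # plain "cuda" tensors now default to GPU i
    x = torch.randn(4, 4, device=f"cuda:{i}")   # or address the GPU explicitly
    print(i, x.device)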
st176388 | I'm trying to do some parallelism but can't figure out how to initialise processes with different ranks, each process on 1 different GPU. I am modifying some distributed computing code, but instead of having numerous nodes, I only have one 8-GPU machine to work with.
(Pls bear with me, I'm a beginner in distributed computing!) So far I've worked out that the line dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=1, rank=args.rank) initialises the same process on all 8 GPUs, but this causes problems later in my code, where I need to get the specific GPU machine index. I tried to do this with torch.cuda.current_device() but it also returns 0 despite nvidia-smi showing that all 8 GPUs have been used. |
st176389 | CCL:
So far I've worked out that the line dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=1, rank=args.rank) initialises the same process on all 8 GPUs
The init_process_group API only sets up the process where this function is invoked. And, as the world_size is set to 1, It only expects one process in the distributed training gang. If you would like to use multiple processes, please see this example 20.
but this causes problems later in my code, where I need to get the specific GPU machine index.
To get machine index, will it work if you use args.rank?
I tried to do this with torch.cuda.current_device() but it also returns 0 despite nvidia-smi showing that all 8 GPUs have been used.
The torch.cuda.current_device() returns the current device. By default, it is the first GPU, which is indeed indexed by 0. |
st176390 | I just tried using args.rank, but it seems like they all return rank 0. I’m really quite lost on how to do parallelism. |
st176391 | How did you launch those 8 processes? Did you launch it using similar code in the example 18 or the launching script 4?
And how did you set args.rank? I presume it’s command line args + argparse? |
st176392 | It will be helpful to have a self-contained min repro code. So that we can help debug. |
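For reference, a self-contained sketch of launching one process per GPU on a single machine with distinct ranks (placeholder address and port; adjust as needed):
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)          # process with rank i uses GPU i
    print(f"rank {rank} -> cuda:{torch.cuda.current_device()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)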
st176393 | To use a different GPU in the system, isn't it enough to specify it when you declare the device?
mydevice = torch.device("cuda:2")
or
mydevice = torch.device("cuda", 2)
The point is that you have to pass the ordinal of the GPU you want to use.
See torch.device at Tensor Attributes — PyTorch 1.8.1 documentation 69 |
st176394 | I have coded a model that has different parts on two identical GPUs. While training, I got some warnings about NaN or Inf values. But with the same model, same data, same RNG, same HPC, and only 1 GPU, I got no warnings.
What is the cause of this? |
st176395 | How are you performing model parallelism? Are you using PyTorch RPC? A simple reproducible script would help a lot in understanding the root cause of the issue.
Since your model works with 1 GPU but not 2, I am just guessing it may have something to do with cuda synchronization and this is giving garbage values, but please post an example of the code, your PyTorch version, and any frameworks you are using. |
st176396 | H-Huang:
PyTorch RPC
I don't think I can make a simple script to reproduce the problem, but I will try; it will take a lot of time.
I’m using Horovod and NVIDIA apex 1. Here is my conda environment.
# packages in environment at /home/anhvd/miniconda3/envs/uniter:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
absl-py 0.12.0 pypi_0 pypi
anyio 2.2.0 pypi_0 pypi
apex 0.1 pypi_0 pypi
argon2-cffi 20.1.0 pypi_0 pypi
async-generator 1.10 pypi_0 pypi
attrs 20.3.0 pypi_0 pypi
autopep8 1.5.6 pypi_0 pypi
babel 2.9.1 pypi_0 pypi
backcall 0.2.0 pypi_0 pypi
blas 1.0 mkl
bleach 3.3.0 pypi_0 pypi
boto3 1.17.59 pypi_0 pypi
botocore 1.20.59 pypi_0 pypi
ca-certificates 2021.4.13 h06a4308_1
cachetools 4.2.2 pypi_0 pypi
certifi 2020.12.5 py36h06a4308_0
cffi 1.14.5 pypi_0 pypi
chardet 4.0.0 pypi_0 pypi
click 7.1.2 pypi_0 pypi
cloudpickle 1.6.0 pypi_0 pypi
cmake 3.18.4.post1 pypi_0 pypi
colorama 0.4.4 pypi_0 pypi
contextvars 2.4 pypi_0 pypi
cytoolz 0.11.0 pypi_0 pypi
dataclasses 0.8 pypi_0 pypi
decorator 5.0.7 pypi_0 pypi
defusedxml 0.7.1 pypi_0 pypi
deprecation 2.1.0 pypi_0 pypi
entrypoints 0.3 pypi_0 pypi
google-auth 1.30.0 pypi_0 pypi
google-auth-oauthlib 0.4.4 pypi_0 pypi
grpcio 1.37.0 pypi_0 pypi
horovod 0.21.3 pypi_0 pypi
idna 2.10 pypi_0 pypi
immutables 0.15 pypi_0 pypi
importlib-metadata 4.0.1 pypi_0 pypi
intel-openmp 2021.2.0 h06a4308_610
ipdb 0.12 pypi_0 pypi
ipykernel 5.5.3 pypi_0 pypi
ipython 7.16.1 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
jedi 0.18.0 pypi_0 pypi
jinja2 2.11.3 pypi_0 pypi
jmespath 0.10.0 pypi_0 pypi
joblib 1.0.1 pyhd3eb1b0_0
json5 0.9.5 pypi_0 pypi
jsonschema 3.2.0 pypi_0 pypi
jupyter-client 6.1.12 pypi_0 pypi
jupyter-core 4.7.1 pypi_0 pypi
jupyter-packaging 0.9.2 pypi_0 pypi
jupyter-server 1.6.4 pypi_0 pypi
jupyterlab 3.0.14 pypi_0 pypi
jupyterlab-pygments 0.1.2 pypi_0 pypi
jupyterlab-server 2.5.0 pypi_0 pypi
ld_impl_linux-64 2.33.1 h53a641e_7
libffi 3.3 he6710b0_2
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
libstdcxx-ng 9.1.0 hdf63c60_0
lmdb 0.97 pypi_0 pypi
lz4 2.1.9 pypi_0 pypi
markdown 3.3.4 pypi_0 pypi
markupsafe 1.1.1 pypi_0 pypi
mistune 0.8.4 pypi_0 pypi
mkl 2020.2 256
mkl-service 2.3.0 py36he8ac12f_0
mkl_fft 1.3.0 py36h54f3939_0
mkl_random 1.1.1 py36h0573a6f_0
msgpack 1.0.2 pypi_0 pypi
msgpack-numpy 0.4.7.1 pypi_0 pypi
nbclassic 0.2.7 pypi_0 pypi
nbclient 0.5.3 pypi_0 pypi
nbconvert 6.0.7 pypi_0 pypi
nbformat 5.1.3 pypi_0 pypi
ncurses 6.2 he6710b0_1
nest-asyncio 1.5.1 pypi_0 pypi
notebook 6.3.0 pypi_0 pypi
numpy 1.19.2 py36h54aff64_0
numpy-base 1.19.2 py36hfa32c7d_0
oauthlib 3.1.0 pypi_0 pypi
openssl 1.1.1k h27cfd23_0
packaging 20.9 pypi_0 pypi
pandas 1.1.5 pypi_0 pypi
pandocfilters 1.4.3 pypi_0 pypi
parso 0.8.2 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickleshare 0.7.5 pypi_0 pypi
pip 21.0.1 py36h06a4308_0
pretty-errors 1.2.20 pypi_0 pypi
prometheus-client 0.10.1 pypi_0 pypi
prompt-toolkit 3.0.18 pypi_0 pypi
protobuf 3.15.8 pypi_0 pypi
psutil 5.8.0 pypi_0 pypi
ptyprocess 0.7.0 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycodestyle 2.7.0 pypi_0 pypi
pycparser 2.20 pypi_0 pypi
pygments 2.8.1 pypi_0 pypi
pyparsing 2.4.7 pypi_0 pypi
pyrsistent 0.17.3 pypi_0 pypi
python 3.6.13 hdb3f193_0
python-dateutil 2.8.1 pypi_0 pypi
pytorch-pretrained-bert 0.6.2 pypi_0 pypi
pytz 2021.1 pypi_0 pypi
pyyaml 5.4.1 pypi_0 pypi
pyzmq 22.0.3 pypi_0 pypi
readline 8.1 h27cfd23_0
regex 2021.4.4 pypi_0 pypi
requests 2.25.1 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.7.2 pypi_0 pypi
s3transfer 0.4.2 pypi_0 pypi
scikit-learn 0.24.1 py36ha9443f7_0
scipy 1.5.2 py36h0b6359f_0
send2trash 1.5.0 pypi_0 pypi
setuptools 52.0.0 py36h06a4308_0
six 1.15.0 py36h06a4308_0
sklearn 0.0 pypi_0 pypi
sniffio 1.2.0 pypi_0 pypi
sqlite 3.35.4 hdfb4753_0
tensorboard 2.5.0 pypi_0 pypi
tensorboard-data-server 0.6.0 pypi_0 pypi
tensorboard-plugin-wit 1.8.0 pypi_0 pypi
tensorboardx 1.7 pypi_0 pypi
terminado 0.9.4 pypi_0 pypi
testpath 0.4.4 pypi_0 pypi
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 hbc83047_0
toml 0.10.2 pypi_0 pypi
tomlkit 0.7.0 pypi_0 pypi
toolz 0.11.1 pypi_0 pypi
torch 1.8.1 pypi_0 pypi
tornado 6.1 pypi_0 pypi
tqdm 4.60.0 pypi_0 pypi
traitlets 4.3.3 pypi_0 pypi
typing-extensions 3.7.4.3 pypi_0 pypi
urllib3 1.26.4 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
werkzeug 1.0.1 pypi_0 pypi
wheel 0.36.2 pyhd3eb1b0_0
xz 5.2.5 h7b6447c_0
zipp 3.4.1 pypi_0 pypi
zlib 1.2.11 h7b6447c_3 |
st176397 | Questions and Help
Hi, everyone. When I use DDP, I have encountered some issues.
I want to run on a single node with 4 GPUs on GCP.
If I set num_workers=0, it works, but training is slow. I want to reduce the training time.
But if I set num_workers>0, I always get the following error message.
Error message
RuntimeError: DataLoader worker exited unexpectedly with exit code 1.
Details are lost due to multiprocessing. Rerunning with num_workers=0 may give better error trace
code
import torch
from absl import app

def launch_training_job(local_rank,
                        processed_dataset):
    ### ddp ###
    torch.distributed.init_process_group(backend='nccl',
                                         world_size=4,
                                         rank=local_rank)
    torch.cuda.set_device(local_rank)
    print('[INFO] Starting nccl for ddp.')
    distributed_sampler = torch.utils.data.distributed.DistributedSampler(processed_dataset)
    processed_sms_dataloader = torch.utils.data.DataLoader(processed_dataset,
                                                           batch_size=32,
                                                           pin_memory=True,
                                                           num_workers=2,
                                                           sampler=distributed_sampler)

def main(argv):
    .......
    num_gpus = 4
    torch.multiprocessing.spawn(launch_training_job,
                                args=(processed_dataset),
                                nprocs=num_gpus)

if __name__ == "__main__":
    app.run(main)
Environment
GCP ml-engine complex_model_m_p100
CPUs: 16
RAM: 60 GB
GPU: NVIDIA Tesla P100 * 1
gcp image uri: gcr.io/cloud-ml-public/training/pytorch-gpu.1-7
Hope someone can help; I would appreciate it.
@ptrblck I found that you have answered a similar issue.
Could you give some advice and help? Thank you |
st176398 | Can you confirm that there are 4 GPUs available on your machine? You can find that through torch.cuda.device_count(). An alternative is to do this programmatically when starting DDP, you can try something like:
import torch
from absl import app

def launch_training_job(local_rank, processed_dataset, world_size):
    ### ddp ###
    torch.distributed.init_process_group(backend='nccl',
                                         world_size=world_size,
                                         rank=local_rank)
    torch.cuda.set_device(local_rank)
    print('[INFO] Starting nccl for ddp.')
    distributed_sampler = torch.utils.data.distributed.DistributedSampler(processed_dataset)
    processed_sms_dataloader = torch.utils.data.DataLoader(processed_dataset,
                                                           batch_size=32,
                                                           pin_memory=True,
                                                           num_workers=world_size,
                                                           sampler=distributed_sampler)

def main(argv):
    .......
    # world_size == number of GPUs
    world_size = torch.cuda.device_count()
    torch.multiprocessing.spawn(launch_training_job,
                                args=(processed_dataset, world_size),
                                nprocs=world_size)

if __name__ == "__main__":
    app.run(main)
If this does not help, perhaps you can try posting this in the data loader questions of the forums instead? |
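One additional detail worth checking in the original snippet (an observation, not part of the reply above): torch.multiprocessing.spawn expects args to be a tuple, and (processed_dataset) is not a tuple; the trailing comma matters. A minimal correction would be:
torch.multiprocessing.spawn(launch_training_job,
                            args=(processed_dataset,),   # note the trailing comma
                            nprocs=num_gpus)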
st176399 | I want to configure a multi-GPU environment using 'torch.multiprocessing' and 'torch.distributed'. However, I received the following error message.
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
I think I did ‘init_process_group’. My code is as follows.
#opt
...
def run(rank, size):
torch.manual_seed(1234)
device = torch.device("cuda:{}".format(rank))
netG_A2B = Generator(opt.input_nc + opt.mask_nc, opt.output_nc).to(device)
netG_B2A = Generator(opt.output_nc + opt.mask_nc, opt.input_nc).to(device)
netD_A = Discriminator(opt.input_nc).to(device)
netD_B = Discriminator(opt.output_nc).to(device)
netG_A2B.apply(weights_init_normal)
netG_B2A.apply(weights_init_normal)
netD_A.apply(weights_init_normal)
netD_B.apply(weights_init_normal)
# Lossess
...
# Optimizers & LR schedulers
optimizer_G = torch.optim.Adam(itertools.chain(netG_A2B.parameters(), netG_B2A.parameters()),
lr=opt.lr, betas=(0.5, 0.999))
optimizer_D_A = torch.optim.Adam(netD_A.parameters(), lr=opt.lr, betas=(0.5, 0.999))
optimizer_D_B = torch.optim.Adam(netD_B.parameters(), lr=opt.lr, betas=(0.5, 0.999))
lr_scheduler_G = torch.optim.lr_scheduler.LambdaLR(optimizer_G, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step)
lr_scheduler_D_A = torch.optim.lr_scheduler.LambdaLR(optimizer_D_A, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step)
lr_scheduler_D_B = torch.optim.lr_scheduler.LambdaLR(optimizer_D_B, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step)
Tensor = torch.cuda.FloatTensor if opt.cuda else torch.Tensor
input_A = Tensor(opt.batchSize, opt.input_nc, opt.size, opt.size)
input_B = Tensor(opt.batchSize, opt.output_nc, opt.size, opt.size)
input_M = Tensor(opt.batchSize, opt.mask_nc, opt.size, opt.size)
target_real = Variable(Tensor(opt.batchSize).fill_(1.0), requires_grad=False)
target_fake = Variable(Tensor(opt.batchSize).fill_(0.0), requires_grad=False)
fake_A_buffer = ReplayBuffer()
fake_B_buffer = ReplayBuffer()
plt.ioff()
curr_iter = 0
G_losses = []
D_A_losses = []
D_B_losses = []
to_pil = transforms.ToPILImage()
# Dataset loader
print('Preparing data...')
transforms_image = [#transforms.Resize((opt.size, opt.size), Image.BICUBIC),
transforms.Resize(int(opt.size * 1.12), Image.BICUBIC),
transforms.RandomCrop(opt.size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
transforms_mask = [#transforms.Resize((opt.size, opt.size), Image.BICUBIC),
transforms.Resize(int(opt.size * 1.12), Image.BICUBIC),
transforms.RandomCrop(opt.size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])]
size = dist.get_world_size()
bsz = 128 / float(size)
dataloader = DataLoader(ImageNMaskDataset(opt.dataroot, transforms_image=transforms_image, transforms_mask=transforms_mask),
batch_size=opt.batchSize, shuffle=True, num_workers=opt.n_cpu)
for epoch in range(opt.epoch, opt.n_epochs):
for i, batch in enumerate(dataloader):
# Set model input
real_A = Variable(input_A.copy_(batch['A']))
real_B = Variable(input_B.copy_(batch['B']))
real_M = Variable(input_M.copy_(batch['M']))
###### Generators A2B and B2A ######
optimizer_G.zero_grad()
# Identity loss
# G_A2B(B) should equal B if real B is fed
...
# G_B2A(A) should equal A if real A is fed
...
# GAN loss
...
# Cycle loss
...
# Total loss
...
optimizer_G.step()
###################################
###### Discriminator A ######
optimizer_D_A.zero_grad()
# Real loss
...
# Fake loss
...
# Total loss
...
optimizer_D_A.step()
###################################
###### Discriminator B ######
optimizer_D_B.zero_grad()
# Real loss
...
# Fake loss
...
# Total loss
...
optimizer_D_B.step()
###################################
curr_iter += 1
if i % 1 == 0:
log = '[iter %d], [loss_G %.5f], [loss_G_identity %.5f], [loss_G_GAN %.5f],' \
'[loss_G_cycle %.5f], [loss_D %.5f], [epoch %d]' % \
(curr_iter, loss_G, (loss_identity_A + loss_identity_B), (loss_GAN_A2B + loss_GAN_B2A),
(loss_cycle_ABA + loss_cycle_BAB), (loss_D_A + loss_D_B), epoch)
print(log)
img_fake_A = 0.5 * (fake_A.detach().data + 1.0)
img_fake_A = (to_pil(img_fake_A.data[0].squeeze(0).cpu()))
img_fake_A.save('output/fake_A.png')
img_fake_B = 0.5 * (fake_B.detach().data + 1.0)
img_fake_B = (to_pil(img_fake_B.data[0].squeeze(0).cpu()))
img_fake_B.save('output/fake_B.png')
# Progress report (http://137.189.90.150:8097)
# logger.log({'loss_G': loss_G, 'loss_G_identity': (loss_identity_A + loss_identity_B), 'loss_G_GAN': (loss_GAN_A2B + loss_GAN_B2A),
# 'loss_G_cycle': (loss_cycle_ABA + loss_cycle_BAB), 'loss_D': (loss_D_A + loss_D_B)},
# images={'real_A': real_A, 'real_B': real_B, 'fake_A': fake_A, 'fake_B': fake_B})
# Update learning rates
lr_scheduler_G.step()
lr_scheduler_D_A.step()
lr_scheduler_D_B.step()
# Save models checkpoints
...
if (epoch+1) % opt.snapshot_epochs == 0:
torch.save(netG_A2B.state_dict(), ('output/netG_A2B_%d.pth' % (epoch+1)))
torch.save(netG_B2A.state_dict(), ('output/netG_B2A_%d.pth' % (epoch+1)))
print('Epoch:{}'.format(epoch))
def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

###################################
if __name__ == "__main__":
    mp.set_start_method('spawn', force=True)
    size = 4
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join() |
st176400 | In your main method, you can do this:
world_size = torch.cuda.device_count()
backend = 'gloo'
mp.spawn(init_process, args=(world_size, backend), nprocs=world_size, join=True)
For more details, check out this example 1 in the tutorial. |
st176401 | I tried this, and I got this error message.
– Process 2 terminated with the following error:
Traceback (most recent call last):
File “/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py”, line 59, in _wrap
fn(i, *args)
File “/root/USRGAN_step2/train_custom.py”, line 390, in init_process
fn(rank, size)
TypeError: ‘str’ object is not callable |
st176402 | You need to change the order of fn and backend in the init_process method; right now, the backend value is being fed into the fn position. |
st176403 | Thank you. But I got the same error.
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group. |
st176404 | creatives07:
init_process
That’s weird. init_process_group should have been called by your init_process in your last line. Can you add a log and confirm that your init_process_group is really called? |
st176405 | Traceback (most recent call last):
File “”, line 1, in
File “/usr/lib/python3.8/multiprocessing/spawn.py”, line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File “/usr/lib/python3.8/multiprocessing/spawn.py”, line 126, in _main
self = reduction.pickle.load(from_parent)
_pickle.UnpicklingError: pickle data was truncated
Traceback (most recent call last):
File “train_custom.py”, line 398, in
mp.spawn(init_process, args=(world_size, backend, run), nprocs=world_size, join=True)
File “/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py”, line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method=‘spawn’)
File “/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py”, line 188, in start_processes
while not context.join():
File “/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py”, line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
– Process 0 terminated with the following error:
Traceback (most recent call last):
File “/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py”, line 59, in _wrap
fn(i, *args)
File “/root/USRGAN_step2/train_custom.py”, line 390, in init_process
fn(rank, size2)
File “/root/USRGAN_step2/train_custom.py”, line 226, in run
for i, batch in enumerate(dataloader):
File “/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py”, line 517, in next
data = self._next_data()
File “/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py”, line 1199, in _next_data
return self._process_data(data)
File “/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py”, line 1225, in _process_data
data.reraise()
File “/usr/local/lib/python3.8/dist-packages/torch/_utils.py”, line 429, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File “/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/worker.py”, line 202, in _worker_loop
data = fetcher.fetch(index)
File “/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py”, line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File “/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py”, line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File “/root/USRGAN_step2/datasets_S3_2.py”, line 57, in getitem
size = dist.get_world_size()
File “/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py”, line 711, in get_world_size
return _get_group_size(group)
File “/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py”, line 263, in _get_group_size
default_pg = _get_default_group()
File “/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py”, line 347, in _get_default_group
raise RuntimeError("Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
That is all of the traceback. I will review the code again according to your advice. Thank you so much. |
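For reference, the final traceback shows dist.get_world_size() being called inside the dataset's __getitem__, which runs in a DataLoader worker subprocess; with the spawn start method that subprocess has no process group initialized. A hedged workaround (placeholder names from the code above) is to query the world size once in the training process, after init_process_group, and hand it to the dataset instead of calling dist.get_world_size() there:
# inside run(rank, size), after the process group exists
world_size = dist.get_world_size()
dataset = ImageNMaskDataset(opt.dataroot,
                            transforms_image=transforms_image,
                            transforms_mask=transforms_mask)
dataset.world_size = world_size   # or pass it through the dataset's constructor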
st176406 | What does “Reducer buckets have been rebuilt in this iteration” mean?
I got this at the start of training using 4 GPUs. |
st176407 | Solved by osalpekar in post #2
This refers to some of the internals of PyTorch DDP - in each backward pass, DDP must allreduce gradients across all the nodes (4 GPUs in this case) so they are all in sync we reap the benefit of using multiple GPUs. We gather these gradients into buckets (which are 25mb by default), and we initiate… |
st176408 | This refers to some of the internals of PyTorch DDP - in each backward pass, DDP must allreduce gradients across all the nodes (4 GPUs in this case) so they are all in sync and we reap the benefit of using multiple GPUs. We gather these gradients into buckets (which are 25mb by default), and we initiate the allreduce once the bucket is full. Once during the course of training, these buckets are actually allocated according to when the tensors may receive grads in the backward pass. The log message you saw simply indicates this bucket allocation/rebuilding process has taken place, in this case, at the start of training. |
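The bucket size mentioned above is configurable when wrapping the model; a short sketch (25 MB is the default value):
model = torch.nn.parallel.DistributedDataParallel(model,
                                                  device_ids=[rank],
                                                  bucket_cap_mb=25)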
st176409 | Only one GPU is used in my code, but “Reducer buckets have been rebuilt in this iteration” is still printed at the start of training. Is it normal? |
st176410 | I hope to perform ensemble inference on the same validation data on multiple GPUs (i.e., 4 GPUs).
Originally, there was some data parallelism in this framework, and if I just used one single model for inference, it worked well with utilization above 85% on all 4 GPUs.
But if I tried to use 2 models to do the inference, it got much slower and both GPU and CPU utilization dropped to 25%. I think it must be caused by my not using the correct method to parallelize it (I am using a for loop here):
##### This is for the evaluation ######
pretrained_models = ['model1', 'model2']
pool = []
for i, cur_model in enumerate(pretrained_models):
    prediction = prediction_dict[cur_model]
    pool.append(prediction.unsqueeze(0))
    if i == len(pretrained_models) - 1:
        tmp = torch.cat(pool)
        ensemble_pred = tmp.mode(dim=0).values
        my_metric_save(ensemble_pred)
The basic idea is, assuming we already have the prediction vectors obtained from both pretrained models, I am using a for-loop to extract them one after another, and finally combined them together as a new prediction vector “ensemble_pred”. I don’t know how to profile the runtime, but this probably destroyed the original parallel flow, so it slowed down the validation dramatically.
Could someone provide some guidance on the efficient way to do ensemble inference (multiple pretrained models evaluating the same data)? |
st176411 | Depending on the relative computational cost of the models, it may be difficult to parallelize them across multiple GPUs "simultaneously" and synchronize the models across each batch. Since this is for pretrained models, can you do the predictions in a more "offline" way, where the first model processes all the data followed by the second model, with the predictions being aggregated after both models are done? |
st176412 | Yeah, the original model might occupy multiple GPUs. In this case, if you try to run two models in parallel for inference on the same set of GPUs, there may be more synchronization happening to ensure the two model computations do not interfere with each other, which might reduce GPU utilization in general. You can try what @eqy suggested. Also, if you would like to know how your two models use the GPUs separately, you can profile the two models separately with the PyTorch profiler and see whether there are already operations occupying multiple GPUs. |
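A rough sketch of the "offline" approach suggested above (placeholder names, not the poster's pipeline): run each pretrained model over the whole validation loader on its own, store the class predictions, then take a majority vote:
all_preds = []
with torch.no_grad():
    for model in [model1, model2]:                 # each model already placed on its GPU(s)
        preds = []
        for batch in val_loader:
            preds.append(model(batch.to(device)).argmax(dim=1).cpu())
        all_preds.append(torch.cat(preds))

stacked = torch.stack(all_preds)                   # [num_models, num_samples]
ensemble_pred = stacked.mode(dim=0).values         # majority vote across models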
st176413 | Problem
Hi, everyone. I have encountered some problems with PyTorch DDP on a single node with multiple GPUs.
My setup is as follows:
os.environ["MASTER_PORT"] = "9999"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
.....
distributed_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
torch_dataloader = torch.utils.data.DataLoader(dataset,
batch_size=64,
pin_memory=True,
num_workers=4,
sampler=distributed_sampler)
model.cuda()
model = torch.nn.parallel.DistributedDataParallel(model)
But this setup is slower than DataParallel, and I get the following message.
Error Message
UserWarning: Single-Process Multi-GPU is not the recommended mode for DDP.
In this mode, each DDP instance operates on multiple devices and creates multiple module replicas within one process.
The overhead of scatter/gather and GIL contention in every forward pass can slow down training.
Please consider using one DDP instance per device or per module replica by explicitly setting device_ids or CUDA_VISIBLE_DEVICES.
Environment
python: 3.7
pytorch: 1.7
GCP ml-engine image_uri: gcr.io/cloud-ml-public/training/pytorch-gpu.1-7
gpu_type: complex_model_m_p100 (p100x4 on single node)
Hope someone can answer my problem. I will appreciate. |
st176414 | As you can see from the warning message, it's better to use one process per GPU for multi-GPU training, even on a single node. You can use torch.multiprocessing.spawn(train_fn, args=(world_size,), nprocs=world_size) to launch the training in multiple processes. |
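A minimal sketch of that recommended setup, one process per GPU (placeholder names; assumes MASTER_ADDR/MASTER_PORT are set in the environment and dataset is already built):
def train_fn(rank, world_size):
    torch.distributed.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = MyModel().cuda(rank)                               # placeholder model class
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank])
    sampler = torch.utils.data.distributed.DistributedSampler(dataset)
    loader = torch.utils.data.DataLoader(dataset, batch_size=64,
                                         num_workers=4, pin_memory=True,
                                         sampler=sampler)
    # ... usual training loop over `loader` ...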
st176415 | Hi.
As I mentioned in the title, I trained my model in 2 different device environments to compare training speed.
I used the official torchvision resnet50 model and CIFAR10 as the dataset, which is small enough to run on a single GPU.
I found that DDP on 8 GPUs is about 2x slower than a single GPU.
Is this expected for small models? Or am I doing something wrong with DDP?
+)
I used the same parameters for both environments, including batch size. Should I increase the batch size 8x for 8-GPU DDP? Does this (hopefully always) guarantee similar accuracy to the single-GPU case? |
st176416 | I used the same parameters for both environments, including batch size. Should I increase the batch size 8x for 8-GPU DDP? Does this (hopefully always) guarantee similar accuracy to the single-GPU case?
If this is per-process batch size, DDP batch size should actually be 1/8 compared to local training, so that DDP and local both collectively process the same number of samples in each iteration.
If this is already global batch size, can you try to increase the batch size for both DDP and local trianing and see how that changes the perf numbers? DDP would run allreduce on model gradients. So if the batch size is too small, the communication overhead can overshadow the speedup from parallelizing computations. |
st176417 | Maybe you can find a solution together:
DistributedDataParallel training not efficient distributed
Very interesting project!
So basically training with 4 GPUS needs 4 epochs to get the same results like a single GPU achieves in only 1 epoch.
This is not true if you consider the sync among 4 GPUs per epoch. It should be equivalent to running 4 epochs on a single GPU.
Can you confirm if there is any communication between different processes (by printing the gradient values of different ranks after backward)? Gradients of different ranks should be the same after backward.
Additionally, you… |
st176418 | Hello,
While using the latest version of DDP during training with 4 GPUs, I am getting the log "Reducer buckets have been rebuilt in this iteration" 3 times at the beginning of training, while the docstring states that the rebuilding of the buckets should be done only once. My understanding is that I see 3 logs because there are 3 additional GPUs on which the buckets should be built, but that doesn't seem to be a trustworthy explanation. Thank you! |
st176419 | Hey @space1panda, does the same process print that log 3 times or does the log come from different processes? |
st176420 | Hi, Shen, thank you for replying. I’ve actually found out that the extra logs were caused by the arch of my model. I have 3 models and 3 optimizers in my framework, so it makes sense now, why ddp called allocation of buckets 3 times. |
st176421 | @space1panda Hi! I have 3 models and 3 optimizers in my framework as well. Actually, I wrapped the models in three torch.nn.parallel.DistributedDataParallel instances as follows, instead of ONE. The log 'Reducer buckets have been rebuilt in this iteration.' is also repeated three times. I only use one GPU to train. So is it OK to use multiple DistributedDataParallel models in PyTorch DDP?
model1 = ddp(model1, device_ids=[local_rank], output_device=local_rank)
model2 = ddp(model2, device_ids=[local_rank], output_device=local_rank)
model3 = ddp(model3, device_ids=[local_rank], output_device=local_rank) |
st176422 | Hi,
Let’s say I am using a DistributedSampler for multi-gpu training in the following fashion:
train_sampler = data.distributed.DistributedSampler(train_dataset,
                                                    num_replicas=hvd.size(),
                                                    rank=hvd.rank())
train_loader = data.DataLoader(dataset=train_dataset,
                               batch_size=config.dist_train.batch_size,
                               shuffle=False,
                               num_workers=args.data_workers,
                               pin_memory=True,
                               drop_last=True,
                               sampler=train_sampler)
During the training loop, I set the epoch for the sampler to ensure reproducibility:
for epoch in range(...):
    train_sampler.set_epoch(epoch - 1)
    for i, _data in enumerate(train_loader):
        ...
However, if in the middle of epoch 3 I checkpoint the model:
torch.save({
    "epoch": epoch,
    "step": i,
    "total_iter": total_iter,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "scheduler_state_dict": scheduler.state_dict()
}, CHECKPOINT_PATH)
Once I resume it, the sampler will iterate over the same data it has previously seen for epoch 3, instead of remembering what it has already iterated over and starting from the next unseen data. Is there any way to do this?
Thanks for your help! |
st176423 | I think that if you fix a seed and also restore the last epoch value (and call sampler.set_epoch), you should be good. |
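If the run must resume mid-epoch rather than at an epoch boundary, a hedged workaround (placeholder names for the values restored from the checkpoint) is to replay the same shuffle order and skip the batches that were already consumed:
train_sampler.set_epoch(resume_epoch)        # reproduces the same shuffle order as before
for i, _data in enumerate(train_loader):
    if i <= resume_step:                     # these indices were processed before the checkpoint
        continue
    train_step(_data)                        # placeholder for the usual training step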
st176424 | Hi there,
I am currently using Distributed Data Parallel to achieve multi GPU training. So far, I did not need to send data across GPUs, because I could make use of the fact that in the backward pass the gradients are gathered from all GPUs before updating the models on the different GPUs automatically by the Distributed Data Parallel class.
However, now, I would like to extend / change my loss function with a calculation that requires the data on all GPUs. So, I would like to manually / explicitly exchange tensors between GPUs before calling backward. Let’s say the loss is calculated from two terms:
One term, like a reconstruction loss term, that can be calculated on all GPUs separately and may be treated as normal, or additive in terms of gradients.
A second term, like a statistic on all samples in batches across different GPUs, for which I would like to have the GPUs sync / exchange data, before doing a loss calculation and backward pass.
So, I need to know A) how to achieve this data passing for the second term, but B) also how to compute the loss and perform a backward pass with these two types of losses together.
Any help is much appreciated!
Thanks,
Claartje |
st176425 | You could take a look at the Collective Communication docs 3 which explain how tensors can be scattered, gathered, and reduced in a distributed setup.
Once you’ve made sure the data is available on the desired devices, I assume you can calculate the loss and let DDP compute the gradients etc. |
st176426 | Agreed with @ptrblck. @ClaartjeBarkhof You can manually send the data across GPUs with the communication ops provided by the distributed package; in this case you might consider using allreduce 1, scatter, or reduce_scatter. If you also want to automatically support backward for those data communications, consider using torch.distributed.nn to do those communications. |
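A rough sketch of that second suggestion, using the autograd-aware collectives in torch.distributed.nn (the exact import path may vary by version; features, recon_loss, and stat_loss_fn are placeholders for the per-GPU activations and the two loss terms):
import torch
import torch.distributed.nn as dist_nn

gathered = dist_nn.all_gather(features)            # one tensor per rank, with grad support
all_features = torch.cat(list(gathered), dim=0)    # statistic over the global batch
loss = recon_loss + stat_loss_fn(all_features)     # both terms feed a single backward
loss.backward()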
st176427 | Hi guys!
I have one PC with two GPUs on board. Below is schematic code for distributed training:
dist.init_process_group(backend="nccl", .....)
.....
model = DistributedDataParallel(model, device_ids=[rank]) # rank is 0 or 1
.....
for i in range(len(batches)):
    .....
    outputs = model(inputs, .....)
    loss = criterion(outputs, ....)
    loss.backward()
    optimizer.step()
    print(rank, "-", i)
.....
I expected to see following print outputs (to save space, I quote them horizontally):
0-0 1-0, 0-1 1-1, 0-2 1-2, 0-3 1-3, ..., 0-n 1-n
However I got something unexpected:
0-0 1-0, 0-1 1-1, ..., 0-60 1-62, ..., 0-3125 1-3145, ...
Sometimes gpu_1 is ahead of gpu_0 by 20 iterations!
I see two possibilities here:
my code has error somewhere
it is normal behavior for synchronization
If 2. is the case could you please explain why is it so?
From what I’ve understood in documentation, synchronization between processes happens at each iteration in loss.backward() call. Suppose I have some model parameter w. At each iteration this parameter must be the same in gpu_0 and gpu_1 model replicas.
For example:
at first iteration w is:
w + (upd0_0 + upd1_0)/2
at second iteration:
w + (upd0_0 + upd1_0)/2 + (upd0_1 + upd1_1)/2
at i-th iteration:
w + (upd0_0 + upd1_0)/2 + (upd0_1 + upd1_1)/2 + ... + (upd0_i + upd1_i)/2
where upd0_i and upd1_i are the parameter updates calculated during backprop at the i-th iteration on gpu_0 and gpu_1, respectively.
Thanks! |
st176428 | Solved by wayi in post #4
Realize that the root cause is that, your print statement only requires the input from host, not from device. If you print results from CUDA tensors, you should see synced outputs.
This is because although DDP syncs across devices at each step, the allreduce communication from the host perspective … |
st176429 | yurii:
print(rank, "-", i)
Can you change the print statement as print(rank, “-”, i, flush=True)? I guess some output is buffered, so it gives you the impression that one ranks is behind the other. |
st176430 | Hi, Yi Wang
Thanks for the suggestion. I tried flush=True, but the situation remains the same. Looks like I have a bug in my code… |
st176431 | Realize that the root cause is that your print statement only requires input from the host, not from the device. If you print results from CUDA tensors, you should see synced outputs.
This is because although DDP syncs across devices at each step, the allreduce communication from the host perspective is just a non-blocking enqueue operation. There isn’t anything wrong with your DDP code. Just your print statement gives you an illusion caused by the enqueue operation from the host. |
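A tiny sketch of the point above: forcing a device synchronization before printing removes the illusion (at the cost of some speed), because the host then waits for the enqueued CUDA/NCCL work to finish:
loss.backward()
optimizer.step()
torch.cuda.synchronize()           # wait for the queued device work, including allreduce
print(rank, "-", i, flush=True)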
st176432 | I am training a GAN model right now on multiple GPUs using DataParallel, and am trying to follow the official guidance here 32 for saving torch.nn.DataParallel models, as I plan to do evaluation on a single GPU later, which means I need to load checkpoints trained on multiple GPUs onto a single GPU.
The official guidance indicates that, “to save a DataParallel model generically, save the model.module.state_dict() . This way, you have the flexibility to load the model any way you want to any device you want”:
#Save:
torch.save(model.module.state_dict(), PATH)
#Load:
# Load to whatever device you want
And this are my scripts for saving the generator and discriminator respectively:
torch.save(G.module.state_dict(),
'%s/%s_module.pth' % (root, join_strings('_', ['G', name_suffix])))
torch.save(D.module.state_dict(),
'%s/%s_module.pth' % (root, join_strings('_', ['D', name_suffix])))
However, when it comes to saving the checkpoint, I got error:
Traceback (most recent call last):
File "train.py", line 227, in <module>
main()
File "train.py", line 224, in main
run(config)
File "train.py", line 206, in run
state_dict, config, experiment_name)
File "/home/BIGGAN/train_fns.py", line 101, in save_and_sample
experiment_name, None, G_ema if config['ema'] else None)
File "/home/BIGGAN/utils.py", line 721, in save_weights
torch.save(G.module.state_dict(),
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'Generator' object has no attribute 'module'
But the checkpoints can be saved if I use:
torch.save(G.state_dict(),
'%s/%s.pth' % (root, join_strings('_', ['G', name_suffix])))
torch.save(D.state_dict(),
'%s/%s.pth' % (root, join_strings('_', ['D', name_suffix])))
I am using pytorch with version ‘1.5.0a0+8f84ded’.
I am not sure if the error has something to do with my pytorch version, or if I have missed something in my scripts.
Just in case, if there is another workaround that allows me to load checkpoints trained on multiple GPUs onto a single GPU, that would also be great.
Any guidance and assistance would be greatly appreciated! |
st176433 | Solved by seungjun in post #9
Hi @Janine,
This is not related to PyTorch version but the DataParallel (also DistributedDataParallel) class wrapper of PyTorch nn class models.
DataParallel encloses the original model as its member variable, self.module.
In case you need both single-GPU and multi-GPU model training, you can chan… |
st176434 | I think the tutorial you linked has a bug when it comes to the loading. You would want to load the state dict back to model.module, i.e.
# Load to whatever device you want 42
might well be amended as
model.module.load_state_dict(torch.load(PATH))
This way, the state dict matches the model without the DataParallel wrapper, and you can also load it to a unwrapped model on a single GPU (use map_location in torch.load if needed).
Best regards
Thomas |
st176435 | @tom Hi, Thomas, thanks a lot for your response.
Actually, I am having trouble with the saving, not the loading, namely:
torch.save(model.module.state_dict(), PATH)
Do you happen to know if this function only applies to the latest PyTorch version? Because I am using version ‘1.5.0a0+8f84ded’ and got the following error when it comes to saving the checkpoint:
Traceback (most recent call last):
File "train.py", line 227, in <module>
main()
File "train.py", line 224, in main
run(config)
File "train.py", line 206, in run
state_dict, config, experiment_name)
File "/home/BIGGAN/train_fns.py", line 101, in save_and_sample
experiment_name, None, G_ema if config['ema'] else None)
File "/home/BIGGAN/utils.py", line 721, in save_weights
torch.save(G.module.state_dict(),
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'Generator' object has no attribute 'module'
Or do I need to do some extra configuration when setting up the models if I wanna use torch.save(model.module.state_dict(), PATH) to save torch.nn.DataParallel Models? Thank you! |
st176436 | Ah, yeah, the generator apparently isn’t wrapped in a DataParallel instance. (The error says you’re trying to access an attribute module (which a DataParallel would have) from an object of type Generator.) |
st176437 | @tom Thank you for your kind explanation. Yet as I am confident that I have applied DataParallel to the generator, may I check if this is indeed a version issue? Namely, are torch.save(model.module.state_dict(), PATH) and model.module.load_state_dict(torch.load(PATH)) new functions that only apply to the latest PyTorch version?
For reference, just in case, the following is my code for the training setup and saving checkpoints:
use_gpu = torch.cuda.is_available()
device = torch.device("cuda" if use_gpu else "cpu")
D = model.DiscriminatorACGAN(x_dim=x_dim, c_dim=c_dim, norm=norm, weight_norm=weight_norm).to(device)
G = model.GeneratorACGAN(z_dim=z_dim, c_dim=c_dim).to(device)
ngpu = 2 # I am using a 2 GPU machine
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
G = nn.DataParallel(G, list(range(ngpu))).to(device)
if (device.type == 'cuda') and (ngpu > 1):
D = nn.DataParallel(D, list(range(ngpu))).to(device)
# gan loss function
d_loss_fn, g_loss_fn = model.get_losses_fn(loss_mode)
# optimizer
d_optimizer = torch.optim.Adam(D.parameters(), lr=d_learning_rate, betas=(0.5, 0.999))
g_optimizer = torch.optim.Adam(G.parameters(), lr=g_learning_rate, betas=(0.5, 0.999))
# run
z_sample = torch.randn(c_dim * 10, z_dim).to(device)
c_sample = torch.tensor(np.concatenate([np.eye(c_dim)] * 10), dtype=z_sample.dtype).to(device)
for ep in range(start_ep, epoch):
for i, (x, c_dense) in enumerate(train_loader):
step = ep * len(train_loader) + i + 1
D.train()
G.train()
x = x.to(device)
c_dense = c_dense.to(device)
z = torch.randn(batch_size, z_dim).to(device)
c = torch.tensor(np.eye(c_dim)[c_dense.cpu().numpy()], dtype=z.dtype).to(device)
x_f = G(z, c)
# train D
...
# train G
...
if ep % 10 == 0:
torch.save(G.module.state_dict(), os.path.join(ckpt_dir, 'netG_{}.pth'.format(ep)))
torch.save(D.module.state_dict(), os.path.join(ckpt_dir, 'netD_{}.pth'.format(ep))) |
st176438 | Janine:
orig_G
What’s orig_G/orig_D in torch.save? I don’t think I see these anywhere except in the save. |
st176439 | @tom Sorry, it’s a typo, I have corrected it. It should be
torch.save(G.module.state_dict(), os.path.join(ckpt_dir, 'netG_{}.pth'.format(ep)))
torch.save(D.module.state_dict(), os.path.join(ckpt_dir, 'netD_{}.pth'.format(ep)))
I was just modifying the code when copying it, and forgot to change it back , my fault, the error has nothing to do with the typo. |
st176440 | Maybe you can do print(type(G)) at various points in your code to see where it becomes or does not become a DataParallel. |
st176441 | Hi @Janine,
This is not related to PyTorch version but the DataParallel (also DistributedDataParallel) class wrapper of PyTorch nn class models.
DataParallel encloses the original model as its member variable, self.module.
In case you need both single-GPU and multi-GPU model training, you can change saving/loading behavior with if statements.
For example,
if isinstance(G, nn.DataParallel):
    torch.save(G.module.state_dict(), model_save_name)
else:
    torch.save(G.state_dict(), model_save_name)
If the current model class is DataParallel, you can save G.module.state_dict() otherwise save G.state_dict()
Also, at loading pretrained parameters, you could perform
if isinstance(G, nn.DataParallel):
    G.module.load_state_dict(state_dict)
else:
    G.load_state_dict(state_dict)
I would suggest creating a parent class Model that inherits from nn.Module and overrides the default state_dict function with the above method, so that G and D can inherit it and simplify the training part of your code.
You may want to take a look at my code 30.
state_dict, load_state_dict, save functions are related. |
st176442 | Thank you @seungjun, this is indeed a very neat way to avoid conflicts. I can fully understand the module mechanism and resolve the issue right now. |
st176443 | Thank you @tom for your kind help, I have found the pitfalls in my original code and the problem has been solved. |
st176444 | Is there a way to do the opposite: a model trained on a single GPU being loaded into a multi-GPU inference script? This is how I’ve been doing it, but it doesn’t work as expected:
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
    print("Using", torch.cuda.device_count(), "GPUs")
model.to(device)

checkpoint = torch.load(args.pretrained_path)
if torch.cuda.device_count() > 1:
    model.module.load_state_dict(checkpoint['model_state_dict'])
else:
    model.load_state_dict(checkpoint['model_state_dict'])
I’m using 2 GPUs, I’ve set the batch size to 50 (so 25 samples per GPU). The input right before being passed into the model has dimensions torch.Size([50, 3, 2048, 2048]). However, the output returned by the model only has 25 items. Not sure why this is the case. Appreciate any insights into this! |
st176445 | Not sure about the cause of the problem, but I would change the order to:
construct a single-GPU model
load the weights to the single-GPU model
parallelize the model to multi-GPU format (DP or DDP)
In your case, it could be:
model.to(device, ...)
model.load_state_dict(...)
model = nn.DataParallel(model, ...) |
st176446 | Using RPC only decreases “backward time”.
Hello, I am using distributed.rpc, modifying the example in examples/main.py at 01539f9eada34aef67ae7b3d674f30634c35f468 · pytorch/examples · GitHub 2
I have N agents and 1 simulation. I would like to select actions in parallel, if possible on different GPUs.
However, when I execute my code, the only part that seems to get faster is the update step. Below is the code for selecting the actions, which does not seem to speed up at all. Any ideas?
from torch.distributed.rpc import RRef, rpc_sync, rpc_async, remote
def _call_method(method, rref, *args, **kwargs):
r"""
a helper function to call a method on the given RRef
"""
return method(rref.local_value(), *args, **kwargs)
def _remote_method(method, rref, *args, **kwargs):
r"""
a helper function to run method on the owner of rref and fetch back the
result using RPC
"""
args = [method, rref] + list(args)
return rpc_sync(rref.owner(), _call_method, args=args, kwargs=kwargs)
def select_actions_all_agents(self, state, step):
self.current_actions = np.zeros(self.current_actions.shape, dtype=np.int32) * -1000
futs = []
start_time = time.time()
for ag_rreff in self.ag_rrefs:
# make async RPC to kick off an episode on all observers
futs.append(
rpc_async(
ag_rreff.owner(),
_call_method,
args=(Agent.select_action, ag_rreff, state, step)
)
)
# if step % 50 == 0:
# print("Done", ag_rreff, time.time()-start_time)
# wait until all obervers have finished this episode
for fut in futs:
fut.wait()
class Agent:
def __init__(self):
self.id = rpc.get_worker_info().id
device = (self.id - 1) % args.num_gpus
print("Initialasing Agent ID: {}, Device {}".format(self.id, device))
self.device = ("cuda:" + str(device) if torch.cuda.is_available() else "cpu")
self.rewards = []
self.saved_log_probs = []
# torch.manual_seed(args.seed+self.id)
self.policy = Policy()
self.policy.to(self.device)
self.optimizer = optim.Adam(self.policy.parameters(), lr=1e-2)
self.eps = np.finfo(np.float32).eps.item()
def select_action(self, state, step):
r"""
This function is mostly borrowed from the Reinforcement Learning example.
See https://github.com/pytorch/examples/tree/master/reinforcement_learning
The main difference is that instead of keeping all probs in one list,
the agent keeps probs in a dictionary, one key per observer.
NB: no need to enforce thread-safety here as GIL will serialize
executions.
"""
start_time = time.time()
state = torch.from_numpy(state).float().unsqueeze(0).to(self.device)
start_time_policy = time.time()
probs = self.policy(state)
m = Categorical(probs)
action = m.sample()
self.saved_log_probs.append(m.log_prob(action))
_remote_method(Master.report_action, master_rref, self.id, action.item()) |
st176447 | I am not very familiar with the example code here. Are self.ag_rrefs owned by different devices?
@mrshenli Any idea? |
st176448 | Hello,
The code is a modification of the example.
self.ag_rrefs creates instances of “Agent”. If I have world-size 3 I will create 1 master and 2 Agents. Each agent will be assigned to a certain GPU.
What I would like to do is to have an agent per GPU and then the environment in the CPU. |
st176449 | What I would like to do is to have an agent per GPU and then the environment in the CPU.
rpc_async(
ag_rreff.owner(),
_call_method,
args=(Agent.select_action, ag_rreff, state, step)
)
To have each GPU dedicated to an agent, I think you can somehow create a mapping and replace ag_rreff.owner() by a cuda device here, even if the environment is in the CPU. Did you assign different agents to the same CPU (i.e., did you have the same value of ag_rreff.owner() for different agents)?
Another side-suggestion: you can also try TensorPipe RPC backend to accelerate the RPC performance over GPUs, by modifying rpc.init_rpc lines. |
st176450 | “did you have the same value of ag_rreff.owner() for different agents)?” - Yes, the owner is the same, the environment with the CPU.
However, when I create the agents I make them have their own GPU. (Also when I select the action I send the state to their own GPU)
class Agent:
def __init__(self):
self.id = rpc.get_worker_info().id
device = (self.id - 1) % args.num_gpus
print("Initialasing Agent ID: {}, Device {}".format(self.id, device))
self.device = ("cuda:" + str(device) if torch.cuda.is_available() else "cpu")
self.policy = Policy()
self.policy.to(self.device)
self.optimizer = optim.Adam(self.policy.parameters(), lr=1e-2)
Does that help??
" I think you can somehow create a mapping and replace ag_rreff.owner() by a cuda device here, even if the environment is in the CPU" - How can I do that? I am a bit of newbie with RPC.
" Another side-suggestion: you can also try TensorPipe RPC backend to accelerate the RPC performance over GPUs, by modifying rpc.init_rpc lines." - I do not see any GPU options in TensorPipeRpcBackendOptions. Is there something I am missing here? |
st176451 | The first arg of rpc_async method specifies the device to run _call_method. Since you used CPU as the device (probably the same core), you actually didn’t parallelize the RPCs.
Can you show the code where you created ag_rrefs? I guess you probably just need to rewrite the loop over ag_rrefs in this way:
for i in range(len(self.ag_rrefs)):
ag_rref = self.ag_rrefs[i]
# the worker name probably is "worker0" if you only have one machine.
destination_worker = "{}/cuda:{}".format(worker_name, i)
futs.append(rpc_async(destination_worker, ...))
Additionally, unlike rref.local_value() used in _call_method, to move a rref value from a different device, you probably need to call rref.to_here() instead.
Regarding the usage of TensorPipe, please check out this example. To support GPU, you have to explicitly specify a device map.
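In case it helps, here is a minimal sketch of what the device-map setup could look like on the master side, assuming the worker names from your script ("Environment", "Agent1", "Agent2") and two GPUs; the helper name and the mapping values are only placeholders:
import torch.distributed.rpc as rpc

def init_master_rpc(rank, world_size):
    options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=16)
    # A device map tells TensorPipe how CUDA tensors travel between a pair of
    # workers: {caller_device: callee_device}. With this map, a tensor on the
    # master's cuda:0 would arrive on Agent1's cuda:0 and on Agent2's cuda:1
    # (CPU tensors are sent as usual and don't need an entry).
    options.set_device_map("Agent1", {0: 0})
    options.set_device_map("Agent2", {0: 1})
    rpc.init_rpc("Environment", rank=rank, world_size=world_size,
                 rpc_backend_options=options)
Each agent process would build its own options (with whatever reverse maps it needs) before its rpc.init_rpc call.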
st176452 | First, thanks for your answers you are helping me very much.
“Since you used CPU as the device (probably the same core), you actually didn’t parallelize the RPCs.” - Actually I just used a default gym environment which usually uses CPU, however If I would use an environment which uses a GPU, would that change things?
" Can you show the code how you created ag_rrefs ? I guess you probably just need to rewrite looping ag_rrefs in this way:" - Let me share all the code. ‘ag_rrefs’ is created in Master class init.
import argparse
import gym
import numpy as np
import os
from itertools import count
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributed.rpc import RRef, rpc_sync, rpc_async, remote
from torch.distributions import Categorical
import time
measure_inference = False
measure_select_action = True
TOTAL_EPISODE_STEP = 5000
MASTER_NAME = "Environment"
AGENT_NAME = "Agent{}"
parser = argparse.ArgumentParser(description='PyTorch RPC RL example')
parser.add_argument('--num-steps', type=int, default=200)
parser.add_argument('--num-gpus', type=int, default=torch.cuda.device_count())
parser.add_argument('--world-size', type=int, default=2, metavar='W',
help='world size for RPC, rank 0 is the agent, others are observers')
parser.add_argument('--gamma', type=float, default=0.99, metavar='G',
help='discount factor (default: 0.99)')
parser.add_argument('--seed', type=int, default=543, metavar='N',
help='random seed (default: 543)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='interval between training status logs (default: 10)')
args = parser.parse_args()
torch.manual_seed(args.seed)
def _call_method(method, rref, *args, **kwargs):
r"""
a helper function to call a method on the given RRef
"""
return method(rref.local_value(), *args, **kwargs)
def _remote_method(method, rref, *args, **kwargs):
r"""
a helper function to run method on the owner of rref and fetch back the
result using RPC
"""
args = [method, rref] + list(args)
return rpc_sync(rref.owner(), _call_method, args=args, kwargs=kwargs)
class Policy(nn.Module):
r"""
Borrowing the ``Policy`` class from the Reinforcement Learning example.
Copying the code to make these two examples independent.
See https://github.com/pytorch/examples/tree/master/reinforcement_learning
"""
def __init__(self):
super(Policy, self).__init__()
self.affine1 = nn.Linear(4, 1024)
self.affine6 = nn.Linear(1024, 2)
self.hiddens_affine = nn.ModuleList([nn.Linear(1024, 1024) for _ in range(50)])
self.saved_log_probs = []
self.rewards = []
def forward(self, x):
if measure_inference:
start_time_inferece = time.time()
x = self.affine1(x)
for i, l in enumerate(self.hiddens_affine):
x = self.hiddens_affine[i](x)
x = F.relu(x)
action_scores = self.affine6(x)
if measure_inference:
print("Inference time", time.time()-start_time_inferece)
return F.softmax(action_scores, dim=1)
class Master:
def __init__(self, world_size):
self.ag_rrefs = []
self.master_rref = RRef(self)
self.rewards = []
self.saved_log_probs = []
self.eps = np.finfo(np.float32).eps.item()
self.running_reward = 0
for ag_rank in range(1, world_size):
ag_info = rpc.get_worker_info(AGENT_NAME.format(ag_rank))
# print("Ag_info", ag_info)
self.ag_rrefs.append(remote(ag_info, Agent))
self.current_actions = np.full(world_size-1, -1000, dtype=np.int32)  # sentinel: -1000 marks "no action reported yet"
self.env = gym.make('CartPole-v1')
self.env.seed(args.seed)
self.reward_threshold = gym.make('CartPole-v1').spec.reward_threshold
self.time_update = 0.
self.num_time_update = 0.
self.time_select_action = 0.
self.num_time_select_action = 0.
def select_actions_all_agents(self, state, step):
self.current_actions = np.full(self.current_actions.shape, -1000, dtype=np.int32)  # reset to the sentinel before each step
futs = []
start_time = time.time()
for ag_rreff in self.ag_rrefs:
# print(ag_rreff.owner())
# make async RPC to kick off an episode on all observers
futs.append(
rpc_async(
ag_rreff.owner(),
_call_method,
args=(Agent.select_action, ag_rreff, self.master_rref, state, step)
)
)
# if step % 50 == 0:
# print("Done", ag_rreff, time.time()-start_time)
# wait until all observers have finished this episode
for fut in futs:
fut.wait()
if measure_select_action and step % 50 == 0:
print("Total time", time.time() - start_time)
self.time_select_action += (time.time() - start_time)
self.num_time_select_action += 1
# mean of the actions
# print(self.current_actions)
assert (self.current_actions>-2).all()
action = int(np.round(np.mean(self.current_actions)))
# print("Action", action, "All actions", self.current_actions)
return action
def report_action(self, agent_id, action):
self.current_actions[agent_id-1] = action
"""
def report_reward_all_agents(self, reward):
futs = []
for ag_rreff in self.ag_rrefs:
# report the reward to the agent for training purpose
futs.append(
rpc_async(
ob_rref.owner(),
_call_method,
args=(Agent.report_reward, ag_rreff, self.agent_rref, reward)
)
)
# wait until all obervers have finished this episode
for fut in futs:
fut.wait()
"""
def update_all_agents(self):
# calculate the running reward
self.sum_rewards = sum(self.rewards)
self.running_reward = 0.05 * self.sum_rewards + (1 - 0.05) * self.running_reward
start_time = time.time()
futs = []
for ag_rreff in self.ag_rrefs:
# print("ag_rreff", ag_rreff, "owner", ag_rreff.owner())
futs.append(
rpc_async(
ag_rreff.owner(),
_call_method,
args=(Agent.finish_episode, ag_rreff, self.rewards)
)
)
for fut in futs:
fut.wait()
self.time_update += (time.time() - start_time)
self.num_time_update += 1
self.rewards = []
def run_episode(self, n_steps):
r"""
Run one episode of n_steps.
Arguments:
n_steps (int): number of steps in this episode
"""
state, ep_reward = self.env.reset(), 0
for step in range(n_steps):
# send the state to the agents to get an action
action = self.select_actions_all_agents(state, step)
# apply the action to the environment, and get the reward
state, reward, done, _ = self.env.step(action)
# report the reward to the agent for training purpose
self.rewards.append(reward)
# if done:
# break
self.update_all_agents()
return self.sum_rewards, step
class Agent:
def __init__(self):
self.id = rpc.get_worker_info().id
device = (self.id - 1) % args.num_gpus
print("Initialasing Agent ID: {}, Device {}".format(self.id, device))
self.device = ("cuda:" + str(device) if torch.cuda.is_available() else "cpu")
self.rewards = []
self.saved_log_probs = []
# torch.manual_seed(args.seed+self.id)
self.policy = Policy()
self.policy.to(self.device)
self.optimizer = optim.Adam(self.policy.parameters(), lr=1e-2)
self.eps = np.finfo(np.float32).eps.item()
def select_action(self, master_rref, state, step):
r"""
This function is mostly borrowed from the Reinforcement Learning example.
See https://github.com/pytorch/examples/tree/master/reinforcement_learning
The main difference is that instead of keeping all probs in one list,
the agent keeps probs in a dictionary, one key per observer.
NB: no need to enforce thread-safety here as GIL will serialize
executions.
"""
start_time = time.time()
state = torch.from_numpy(state).float().unsqueeze(0).to(self.device)
start_time_policy = time.time()
probs = self.policy(state)
# if step % 50 == 0:
# print(state)
# if step % 50 == 0:
# print("Id{} Step {} Time for policy{}".format(self.id, step, time.time()-start_time_policy))
m = Categorical(probs)
action = m.sample()
self.saved_log_probs.append(m.log_prob(action))
# print(master_rref, self.id, action.item())
# if step % 50 == 0:
# print("Agent {} Time select action {}".format(self.id, time.time() - start_time))
_remote_method(Master.report_action, master_rref, self.id, action.item())
def finish_episode(self, saved_rewards):
r"""
This function is mostly borrowed from the Reinforcement Learning example.
See https://github.com/pytorch/examples/tree/master/reinforcement_learning
The main difference is that it joins all probs and rewards from
different observers into one list, and uses the minimum observer rewards
as the reward of the current episode.
"""
# joins probs and rewards from different observers into lists
# print("SAVED_REWARDS", saved_rewards)
R, probs, rewards = 0, self.saved_log_probs, saved_rewards
policy_loss, returns = [], []
for r in rewards[::-1]:
R = r + args.gamma * R
returns.insert(0, R)
# print("RETURNS BEFORE TORCH", returns)
returns = torch.tensor(returns).to(self.device)
returns = (returns - returns.mean()) / (returns.std() + self.eps)
for log_prob, R in zip(probs, returns):
policy_loss.append(-log_prob * R)
self.optimizer.zero_grad()
policy_loss = torch.cat(policy_loss).sum()
# print("POLICY LOSS", policy_loss)
# print("Saved log probs {}, saved rewards {}".format(len(self.saved_log_probs), len(saved_rewards)))
start_time = time.time()
policy_loss.backward()
self.optimizer.step()
# print("End time {} id {}".format(time.time()-start_time, self.id))
# clear saved probs
self.saved_log_probs = []
def report_statistics(master, i_episode, start_time, total_steps):
if i_episode % args.log_interval == 0:
print('Ep {}\tAvg r: {:.2f}\t Avg t(1000): {:.3f}'
'\tAvg t updt: {:.4f}\tAvg t select act: {:.4f}'.format(
i_episode,
master.running_reward,
1000 * (time.time() - start_time) / total_steps,
master.time_update / master.num_time_update,
master.time_select_action / master.num_time_select_action))
start_time = time.time()
total_steps = 0
master.time_update = 0.
master.num_time_update = 0.
master.time_select_action = 0.
master.num_time_select_action = 0.
return start_time, total_steps
def run_worker(rank, world_size):
r"""
This is the entry point for all processes. Rank 0 is the master (environment). All
other ranks are agents.
"""
# print("RANK", rank)
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
if rank == 0:
# rank 0 is the master (environment)
rpc.init_rpc(MASTER_NAME, rank=rank, world_size=world_size)
master = Master(world_size)
start_time = time.time()
total_steps = 0
for i_episode in count(1):
n_steps = args.num_steps # int(TOTAL_EPISODE_STEP / (args.world_size - 1))
last_reward, steps = master.run_episode(n_steps=n_steps)
total_steps += steps
start_time, total_steps = report_statistics(master, i_episode, start_time, total_steps)
if master.running_reward > master.reward_threshold or i_episode >= 10:
print("Solved! Running reward is now {}!".format(master.running_reward))
break
else:
# other ranks are the agents
rpc.init_rpc(AGENT_NAME.format(rank), rank=rank, world_size=world_size)
# agents passively wait for instructions from the master
rpc.shutdown()
def main():
print("Number of GPU available", torch.cuda.device_count())
mp.spawn(
run_worker,
args=(args.world_size, ),
nprocs=args.world_size,
join=True
)
if __name__ == '__main__':
main()
When I try to use the example you proposed I get the following error: (Note the workers are called Agent0, Agent1, etc)
Unknown destination worker Agent0/cuda:0
" Additionally, unlike rref.local_value() used in _call_method , to move a rref value from a different device, you probably need to call rref.to_here() instead."
Using to_here() in the initial code does not seem to improve performance.
st176453 | Unknown destination worker Agent0/cuda:0
After taking a look at the API, I think rpc_async can only accept a str or WorkerInfo as the destination worker, so AGENT_NAME.format(ag_rank), e.g. "Agent0", should meet the syntax requirement here.
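For reference, a small sketch of the two accepted destination forms, assuming rpc.init_rpc has already been called on every worker and that a worker named "Agent1" exists (the name is illustrative):
import torch
import torch.distributed.rpc as rpc

# 1) Destination given as the worker's name (a plain string).
fut1 = rpc.rpc_async("Agent1", torch.add, args=(torch.ones(2), 1))

# 2) Destination given as a WorkerInfo looked up from the name.
agent_info = rpc.get_worker_info("Agent1")
fut2 = rpc.rpc_async(agent_info, torch.add, args=(torch.ones(2), 1))

print(fut1.wait(), fut2.wait())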
st176454 | Hello!
For development I use a local machine with no GPU and have a remote machine with a GPU.
I like to debug my code via IDE tools but also want to have access to the GPU.
Using something à la VSCode over ssh is kinda slow, so I want to run my scripts locally but send some computations to the remote machine.
Ideal variant
# pytorch will connect to remote machine and start a process for GPU computation there
rpc_init("server_addr")
# all computations with model.parameters() will automagically execute on remote machine
model = Linear(3, 1).to("remote-gpu")
data = [
(Tensor([1, 2, 3]), 1), # may call .to("remote-gpu") as well
(Tensor([4, 5, 6]), 2), # not too bad
]
# data will be automagically sent to remote machine inside model.__call__()
# or it is already there if used Tensor.to("remote-gpu")
for (sample, label) in data:
result = model(sample)
loss = compute_loss(label, result)
# this is done on remote machine as well
optimizer.step()
So I will run python script.py on my local machine and use my local debugging tools; all the code runs locally, except that somewhere deep down the tensor operations make RPC calls to the remote GPU, and then execution continues on my machine again.
Is there an easy API in torch.distributed.rpc to achieve this? If not, how can I achieve this with the current API?
st176455 | Solved by wayi in post #7
Documentation on RemoteModule says RemoteModule is not currently supported when using CUDA tensors, but you said tensors will be automatically placed to the same cuda device. Am I missing something? If CUDA tensors are not supported now, where can I track progress on this?
Thanks for pointing th… |
st176456 | Hi Lain!
meandmymind:
I like to debug my code via IDE tools but also want to have access to gpu.
Using something a-la vscode over ssh is kinda slow
May I recommend that you run “emacs” on your remote machine? If you
don’t have an X Server on your local machine (or you feel that X Display
is too slow), you can run “emacs -nw” in a terminal window.
Best.
K. Frank |
st176457 | There are two options:
A higher-level API RemoteModule (recommended):
Distributed RPC Framework — PyTorch master documentation
You don’t need to explicitly write any RPC. Instead, you need to override the forward method. When you construct this nn.Module-like module, you need to explicitly specify the device placement (on a remote GPU in your case). The input tensors will be automatically placed to the same cuda device.
Another example can be found here: Combining Distributed DataParallel with Distributed RPC Framework — PyTorch Tutorials 1.8.1+cu102 documentation
A lower-level API RRef:
Getting Started with Distributed RPC Framework — PyTorch Tutorials 1.8.1+cu102 documentation
You need to write your own RPC, and call to_here() to run your remote module in the RPC. |
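For option 1, a rough sketch of what the local side could look like, assuming rpc.init_rpc has already been set up on both the CPU-only process and a GPU worker named "remote_worker" (the worker name and device index are placeholders):
import torch
import torch.nn as nn
from torch.distributed.nn.api.remote_module import RemoteModule

# The module is constructed and kept on the remote worker's GPU; only a
# handle backed by an RRef lives on the local process.
remote_linear = RemoteModule(
    "remote_worker/cuda:0",  # "<worker_name>/<device>" placement string
    nn.Linear,               # module class instantiated remotely
    args=(3, 1),             # constructor args for nn.Linear
)

# forward() runs on the remote GPU; the input is shipped over RPC and,
# per the note above, placed on the same cuda device as the module.
out = remote_linear.forward(torch.ones(2, 3))
Training on top of this would additionally go through the distributed autograd context and DistributedOptimizer, as in the DDP + RPC tutorial linked above.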
st176458 | @KFrank, thanks for answering! But… I don’t like this solution for several reasons:
I usually use neovim as an IDE, and I tried running it on the remote machine and connecting via ssh. It was unbearably slow. Maybe upgrading my local connection speed would resolve this problem; I should try.
This way I have to either sync my whole dev environment between the two machines or migrate to the remote machine fully. It's not a convenient solution because I don't want to store my personal project-related code on the remote machine.
st176459 | Also, I don't think a TUI IDE over ssh is much better than VSCode over ssh. Maybe something like mosh can help, as it is more terminal-application-friendly, but the last time I used it, it messed with syntax highlighting.
st176460 | @wayi Thanks for answering! I guess your first option is what I need but I have several questions about it.
Documentation on RemoteModule says RemoteModule is not currently supported when using CUDA tensors, but you said tensors will be automatically placed to the same cuda device. Am I missing something? If CUDA tensors are not supported now, where can I track progress on this?
I still need to spawn a remote pytorch process manually every time I start my local process, right? Is there a solution to create a long-living remote process that will consume messages from different local processes?
If not, I can automate my local build process to do something like
ssh remote-host 'cd proj-dir; python remote-worker.py';
python train.py
It’s not very elegant but should work. |
st176461 | Documentation on RemoteModule says RemoteModule is not currently supported when using CUDA tensors, but you said tensors will be automatically placed to the same cuda device. Am I missing something? If CUDA tensors are not supported now, where can I track progress on this?
Thanks for pointing this out! The doc is outdated. Actually CUDA tensors are now supported on TensorPipe backend, documented on the same page. I will update the doc soon.
I still need to spawn a remote pytorch process manually every time I start my local process, right? Is there a solution to create a long-living remote process that will consume messages from different local processes?
You have to initiate both local process(es) and remote workers together every time. This is because at the very beginning a static process group needs to be built, and the remote module(s) will be destroyed if the process group is gone.
What you are asking for is more like treating the remote module as a server, where a local process can connect to that server whenever it needs to offload some work. This can cause a problem – if multiple local processes offload work to the same remote worker, it will slow down the training.
The RPC framework usually works in the opposite way: a local process can be viewed as a master process, and you try to distribute different modules to different remote workers. Note that a remote module does not really have to be allocated to another machine – it can be on a different device of the same machine. The model-parallelism idea is distributing different subsets of a module to different devices, which can be on the same machine or different machines. As a user, you shouldn't feel any difference in the usage though.
st176462 | ssh remote-host ‘cd proj-dir; python remote-worker.py’;
python train.py
I am not sure this will work in your environment. You still need to make sure the different hosts are connected, so you probably need to use something like SLURM to deploy multi-node training.
st176463 | Thanks for your answer! Now I'm starting to see why it's not that convenient to fulfill my use case with the current framework.
For me I will have only one “main” and one “remote” worker so it’s not hard. But for general rpc it’s not good. |
st176464 | meandmymind:
For me I will have only one “main” and one “remote” worker so it’s not hard. But for general rpc it’s not good.
RPC framework is mainly used for model parallelism. It seems that your use case is very different from this purpose.
Update: We have an ongoing project called elastic RPC, which should be able to work for your use case. |
st176465 | Hi,
When I wrap my model in DataParallel my forward() method no longer gets input of the right size. For example
class Net(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, y):
return x, y
net = Net()
device = torch.device('cuda')
net = nn.DataParallel(net)
net = net.to(device)
x = torch.ones((1024, 3, 100, 100))
y = torch.ones((1, 3, 100, 100))
x_out, y_out = net(x, y)
print(x_out.shape)
#torch.Size([128, 3, 100, 100])
I have 8 GPUs so it seems like I only get back the result from one of them. If I change the first dimension of y to match x I get back the original result, as expected. |
st176466 | My guess is something is strange about trying to coalesce y across the “batch” dimension when it is getting passed back. Does working around this by removing the first dimension of y (e.g., y = y.squeeze(0)) before passing it to the net fix the issue?
st176467 | If you add some debug statements to the forward method you’ll see that due to the second input, the processing fails, since it cannot be chunked into 8 parts:
class Net(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, y):
print('x: {}, {}'.format(x.device, x.shape))
print('y: {}, {}'.format(y.device, y.shape))
return x, y
Your setup:
x = torch.ones((1024, 3, 100, 100))
y = torch.ones((1, 3, 100, 100))
x_out, y_out = net(x, y)
print(x_out.shape)
x: cuda:0, torch.Size([128, 3, 100, 100])
y: cuda:0, torch.Size([1, 3, 100, 100])
torch.Size([128, 3, 100, 100])
I’m currently unsure what the expected behavior is, as it currently seems to fall back to the smallest possible splitting (I would assume an error would be raised).
I’m currently unsure what the expected behavior is, as it seems to fall back to the smallest possible splitting (I would have assumed an error would be raised).
x = torch.ones((1024, 3, 100, 100))
y = torch.ones((1024, 3, 100, 100))
x_out, y_out = net(x, y)
print(x_out.shape)
x: cuda:0, torch.Size([128, 3, 100, 100])
y: cuda:0, torch.Size([128, 3, 100, 100])
x: cuda:1, torch.Size([128, 3, 100, 100])
y: cuda:1, torch.Size([128, 3, 100, 100])
x: cuda:2, torch.Size([128, 3, 100, 100])
x: cuda:3, torch.Size([128, 3, 100, 100])
y: cuda:2, torch.Size([128, 3, 100, 100])
x: cuda:4, torch.Size([128, 3, 100, 100])
x: cuda:5, torch.Size([128, 3, 100, 100])
y: cuda:4, torch.Size([128, 3, 100, 100])
y: cuda:5, torch.Size([128, 3, 100, 100])
x: cuda:6, torch.Size([128, 3, 100, 100])
x: cuda:7, torch.Size([128, 3, 100, 100])
y: cuda:6, torch.Size([128, 3, 100, 100])
y: cuda:7, torch.Size([128, 3, 100, 100])
y: cuda:3, torch.Size([128, 3, 100, 100])
torch.Size([1024, 3, 100, 100]) |
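One way to work around it, as a sketch of just one option, is to expand y to the same batch size before the call so that both arguments scatter into matching chunks:
import torch
import torch.nn as nn

class Net(nn.Module):
    def forward(self, x, y):
        return x, y

device = torch.device('cuda')
net = nn.DataParallel(Net()).to(device)

x = torch.ones((1024, 3, 100, 100))
y = torch.ones((1, 3, 100, 100))

# expand() creates a broadcast view along dim0, so every replica now gets
# a y chunk whose batch size matches its x chunk.
x_out, y_out = net(x, y.expand(x.size(0), -1, -1, -1))
print(x_out.shape, y_out.shape)  # both torch.Size([1024, 3, 100, 100])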