st177368 | Thanks. Problem solved, but the root cause is not fully confirmed. Here is the story for reference. The program first calls train() to do the model training, where the model is built, the optimizer is constructed, and the learning rate scheduler is initialized. The learning rate scheduler is based on a lambda expression. After train() returns, the learning rate scheduler, model, and optimizer should be collected by the garbage collector, but they are not. The reason seems to be the lambda-expression-based learning rate scheduler: because of it, the model and optimizer also stay in GPU memory without releasing their resources. Then, in test(), the model is re-built, which might have some issues, although the crash is actually in the data loader (with num_workers = 0 there are no issues). After refactoring the learning rate scheduler to remove the lambda expression, everything works fine.
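A hypothetical sketch of the kind of refactor described above (the scheduler type, decay value, and captured self are illustrative, not from the original code): a lambda defined inside train() can close over the trainer object and keep the model and optimizer reachable, while a plain function that only depends on its arguments does not.
from torch.optim.lr_scheduler import LambdaLR

# Before (inside train()): the lambda captures surrounding state such as `self`,
# so scheduler -> closure -> trainer -> model/optimizer stays reachable after train() returns.
# scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: self.decay ** epoch)

# After: a module-level function holds no reference back to the trainer.
def exp_decay(epoch, decay=0.95):
    return decay ** epoch

scheduler = LambdaLR(optimizer, lr_lambda=exp_decay)
Explicitly deleting the scheduler, optimizer, and model and calling gc.collect() plus torch.cuda.empty_cache() before test() is another way to make sure the GPU memory is actually released. |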
st177369 | Distributed training of the network shown below gives this error, while without distributed works.
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [12, 256, 1, 1]] is at version 2; expected version 1 instead.
[W python_anomaly_mode.cpp:60] Warning: Error detected in CudnnConvolutionBackward. Traceback of forward call that caused the error:
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/net.py", line 51, in forward
out1 = self.conv2(out0.clone())
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 419, in forward
return self._conv_forward(input, self.weight)
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 416, in _conv_forward
self.padding, self.dilation, self.groups)
(function print_stack)
Process Process-1:
Traceback (most recent call last):
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/e2r/Desktop/e2r/train.py", line 51, in init_process
fn(rank, opt)
File "/home/e2r/Desktop/e2r/train.py", line 194, in main_worker
trainer.train()
File "/home/e2r/Desktop/e2r//trainer.py", line 166, in train
self.run_epoch()
File "/home/e2r/Desktop/e2r/trainer.py", line 199, in run_epoch
losses["loss"].backward()
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [12, 256, 1, 1]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
Enabling anomaly detection traces it back to a 1 x 1 convolution layer. I have tried many things to resolve it, such as calling clone() before giving the layer its input and setting inplace=False in the ReLU layers, but I still cannot resolve it. Here's the network:
class Net(nn.Module):
def __init__(self, num_ch_enc, num_input_features, num_frames_to_predict_for=None, stride=1):
super(Net, self).__init__()
self.num_ch_enc = num_ch_enc
self.num_input_features = num_input_features
if num_frames_to_predict_for is None:
num_frames_to_predict_for = num_input_features - 1
self.num_frames_to_predict_for = num_frames_to_predict_for
num_frames_to_predict_for_6 = int(6*self.num_frames_to_predict_for)
self.squeeze = nn.Conv2d(self.num_ch_enc[-1], 256, 1)
self.conv0 = nn.Conv2d(num_input_features * 256, 256, 3, stride, 1)
self.conv1 = nn.Conv2d(256, 256, 3, stride, 1)
self.conv2 = nn.Conv2d(256, num_frames_to_predict_for_6, 1, 1, 0)
self.relu = nn.ReLU(inplace=False)
def forward(self, input_features):
last_features = [f[-1] for f in input_features]
cat_features = [self.relu(self.squeeze(f)) for f in last_features]
cat_features = torch.cat(cat_features, 1)
out = cat_features
out = self.conv0(out)
out = self.relu(out)
out = self.conv1(out)
out = self.relu(out)
out = self.conv1(out)
out0 = self.relu(out)
# gives inplace modification error here for multiprocessing for conv2 layer
out1 = self.conv2(out0.clone())
out2 = out1.mean(3).clone()
out3 = out2.mean(2).clone()
out4 = 0.01 * out3.clone().view(-1, self.num_frames_to_predict_for, 1, 6)
axisangle = out4[..., :3]
translation = out4[..., 3:]
return out4[..., :3], out4[..., 3:]
Edit:
I tried to simplify the forward function as follows, but I still get an error at the linear layer:
def forward(self, x):
x = x[-1][-1]
x = self.pose3(x)
x = x.view(-1, self.num_frames_to_predict_for * 6* 6* 20)
# Same error in the linear layer here
x1 = self.linear(x).clone()
x2 = 0.01 * x1.view(-1, self.num_frames_to_predict_for, 1, 6)
return x2[..., :3], x2[..., 3:] |
st177370 | Could the line cat_features = torch.cat(cat_features, 1) be the problem? If you’re just expanding a new column to the end of cat_features you could try .unsqueeze(1) (or whatever the final index is + 1) and see if that works instead?
So replacing,
cat_features = torch.cat(cat_features, 1)
out = cat_features
# gives inplace modification error here for multiprocessing for conv2 layer
out = self.conv0(out)
with
out = self.conv0(cat_features.unsqueeze(1))
Perhaps give that a go? |
st177371 | No, I'm not just expanding a new column; cat_features = torch.cat(cat_features, 1) concatenates the features from the list along the channel dimension. |
st177372 | We are trying to use distributed data parallel training with multiple computers, each of which has one GPU. They are all connected over a LAN.
We used the gloo backend with a shared file. This approach worked fine on a single machine, but when we try the same thing across multiple computers it gives an error like "Data is invalid". The file is shared among the computers using the HomeGroup feature of Windows 10.
We tried another approach with TCP, but it gives a "deprecated TCP backend" error.
Server Code
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=2, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=0, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
os.environ['MASTER_ADDR'] = '10.0.45.44'
os.environ['MASTER_PORT'] = '8888'
mp.spawn(train, nprocs=args.gpus, args=(args,))
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def train(gpu, args):
rank = args.nr * args.gpus + gpu
dist.init_process_group(backend='gloo', init_method='file:///C:\\Users\\VIT\\BackEnd\\gloofile', world_size=args.world_size, rank=rank)
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 100
# define loss function (criterion) and optimizer C:\Users\VIT\Desktop\BackEnd
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Wrap the model
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
# Data loading code
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,
num_replicas=args.world_size,
rank=rank)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
start = datetime.now()
total_step = len(train_loader)
for epoch in range(args.epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, args.epochs, i + 1, total_step,
loss.item()))
if gpu == 0:
print("Training complete in: " + str(datetime.now() - start))
if __name__ == '__main__':
main()
Client code
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=2, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=1, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
mp.spawn(train, nprocs=args.gpus, args=(args,))
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def train(gpu, args):
#file:///ftp:\\10.0.45.44\\gloo\\gloofile.py
rank = args.nr * args.gpus + gpu
os.environ['MASTER_ADDR'] = '10.0.45.44'
os.environ['MASTER_PORT'] = '8888'
dist.init_process_group(backend='gloo',world_size=args.world_size, rank=rank)
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 100
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Wrap the model
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
# Data loading code
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,
num_replicas=args.world_size,
rank=rank)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
start = datetime.now()
total_step = len(train_loader)
for epoch in range(args.epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, args.epochs, i + 1, total_step,
loss.item()))
if gpu == 0:
print("Training complete in: " + str(datetime.now() - start))
if __name__ == '__main__':
main()
Thanks!!! |
st177373 | Hi, TCP initialization should not be deprecated and you should be able to use TCP initialization as per the documentation here: Distributed communication package - torch.distributed — PyTorch 1.7.0 documentation.
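For reference, a minimal sketch of TCP initialization (the address and port are taken from your scripts; world_size and rank are placeholders) — note that the backend stays 'gloo', only the init_method uses the tcp:// scheme:
import torch.distributed as dist

# identical call on every node; only `rank` differs (0 on the master, 1 on the other machine)
dist.init_process_group(
    backend='gloo',
    init_method='tcp://10.0.45.44:8888',   # the master node's reachable IP and a free port
    world_size=2,
    rank=rank,
)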
In order to debug the file initialization issue, could you paste the specific exception stack that you're running into? That would be very helpful for understanding the exact error being raised.
However, looking at your current code I'm not sure the process group initialization is set up correctly. The server appears to use a file (which should be a shared, networked file handle, as you mention), but the client appears to use environment-variable initialization with a master address and port. All participating processes should use the same init method. |
st177374 | I was reading about data parallelism here. It's mentioned that we can use it by just calling model = nn.DataParallel(model). Then there is a link to another tutorial at the bottom of the page, where DataParallel is used like this: self.block2 = nn.DataParallel(self.block2).
My question is: what is the difference between them? Does the first method do what the second method does for every block automatically? If yes, which one is best practice if I want that for every block? Also, in the second method, what happens to the blocks which are not inside DataParallel? |
st177376 | DataParallel can be applied to any nn.Module. For the first example, you are correct in that every nn.Module (i.e. the blocks) will be data parallel as well, since the entire module is.
The second example mostly seeks to show that you can have a regular nn.Module with a data parallel module contained within it; everything should work as expected, and only the data parallel module will be parallelized across the batch dimension.
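A minimal sketch of both patterns (the model and block names are hypothetical):
import torch.nn as nn

# Pattern 1: wrap the whole module; every submodule is replicated and the input
# batch is scattered across all visible GPUs.
model = nn.DataParallel(MyModel()).cuda()

# Pattern 2: wrap only one submodule; block1 runs on a single device as usual,
# while block2 alone is replicated and parallelized across the batch dimension.
model = MyModel()
model.block2 = nn.DataParallel(model.block2)
model = model.cuda()
In the second pattern, the blocks that are not wrapped simply run on whatever single device the module was moved to. |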
st177377 | I want to generate a batch of Gaussian mixture distributions from an MLP's output, which represents the log-sigmas, means, and mixture logits. Are there any APIs in the distributions module that work for this, or should I just compute it from the formula?
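For reference, torch.distributions can express this directly; a minimal sketch, assuming the MLP head emits logits, means, and log-sigmas of shape [batch, K]:
import torch
from torch.distributions import Categorical, Normal, MixtureSameFamily

batch, K = 8, 5
logits = torch.randn(batch, K)        # would come from the MLP head
means = torch.randn(batch, K)
log_sigma = torch.randn(batch, K)

mix = Categorical(logits=logits)              # mixture weights
comp = Normal(means, log_sigma.exp())         # K univariate Gaussian components per row
gmm = MixtureSameFamily(mix, comp)            # batch of 1-D Gaussian mixtures

x = gmm.sample()                              # shape [batch]
nll = -gmm.log_prob(x)                        # log-density, differentiable w.r.t. the MLP outputs
Alternatively, the same density can be computed by hand with a logsumexp over the component log-probabilities. |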
st177378 | This looks like a distribution instead of distributed question. cc @ptrblck for help |
st177379 | I am having a problem running training on multiple GPUs when using DataParallel. The code works fine when only one GPU is used for training. I have pasted my code below.
batch_loader.py:
from torch.utils import data
import random
import os
import numpy as np
import torch
class TrainFolder(data.Dataset):
def __init__(self, file):
super(TrainFolder, self).__init__()
self.images = []
fid = file
for x in fid:
labelfile = x.replace("input", "target")
info = (x, labelfile)
self.images.append(info)
random.shuffle(self.images)
def __len__(self):
return len(self.images)
def __getitem__(self, index):
image_file, label_file = self.images[index]
img = np.load(image_file)
lab = np.load(label_file)
img = np.rollaxis(img, 2, 0)
lab = np.rollaxis(lab, 2, 0)
img = torch.from_numpy(img[:, :, :])
lab = torch.from_numpy(lab[:, :, :])
return img, lab
network.py:
import math
import torch
import torch.nn as nn
def gen_initialization(m):
if type(m) == nn.Conv2d:
sh = m.weight.shape
nn.init.normal_(m.weight, std=math.sqrt(2.0 / (sh[0]*sh[2]*sh[3])))
nn.init.constant_(m.bias, 0)
elif type(m) == nn.BatchNorm2d:
nn.init.normal_(m.weight)
nn.init.normal_(m.bias)
class TripleConv(nn.Module):
def __init__(self, in_ch, out_ch):
super(TripleConv, self).__init__()
mid_ch = (in_ch + out_ch) // 2
self.conv = nn.Sequential(
nn.Conv2d(in_ch, mid_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.ReLU(),
nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.ReLU(),
nn.Conv2d(mid_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.ReLU()
)
self.conv.apply(gen_initialization)
def forward(self, x):
return self.conv(x)
class Down(nn.Module):
def __init__(self, in_ch, out_ch):
super(Down, self).__init__()
self.triple_conv = TripleConv(in_ch, out_ch)
self.avg_pool_conv = nn.AvgPool2d(2, 2)
self.in_ch = in_ch
self.out_ch = out_ch
def forward(self, x):
self.cache = self.triple_conv(x)
pad = torch.zeros(x.shape[0], self.out_ch - self.in_ch, x.shape[2], x.shape[3], device=x.device)
x = torch.cat((x, pad), dim=1)
self.cache += x
return self.avg_pool_conv(self.cache)
class Center(nn.Module):
def __init__(self, in_ch, out_ch):
super(Center, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.ReLU()
)
self.conv.apply(gen_initialization)
def forward(self, x):
return self.conv(x)
class Up(nn.Module):
def __init__(self, in_ch, out_ch):
super(Up, self).__init__()
self.upsample = nn.Upsample(scale_factor=2, mode='bilinear',
align_corners=True)
self.triple_conv = TripleConv(in_ch, out_ch)
def forward(self, x, cache):
x = self.upsample(x)
x = torch.cat((x, cache), dim=1)
x = self.triple_conv(x)
return x
class UNet(nn.Module):
def __init__(self, in_ch, first_ch=None):
super(UNet, self).__init__()
if not first_ch:
first_ch = 32
self.down1 = Down(in_ch, first_ch)
self.down2 = Down(first_ch, first_ch*2)
self.down3 = Down(first_ch*2, first_ch*4)
self.down4 = Down(first_ch*4, first_ch*8)
self.center = Center(first_ch*8, first_ch*8)
self.up4 = Up(first_ch*8*2, first_ch*4)
self.up3 = Up(first_ch*4*2, first_ch*2)
self.up2 = Up(first_ch*2*2, first_ch)
self.up1 = Up(first_ch*2, first_ch)
self.output = nn.Conv2d(first_ch, in_ch, kernel_size=3, stride=1,
padding=1, bias=True)
self.output.apply(gen_initialization)
def forward(self, x):
x = self.down1(x)
x = self.down2(x)
x = self.down3(x)
x = self.down4(x)
x = self.center(x)
x = self.up4(x, self.down4.cache)
x = self.up3(x, self.down3.cache)
x = self.up2(x, self.down2.cache)
x = self.up1(x, self.down1.cache)
x = self.output(x)
return x
train.py:
from configobj import ConfigObj
from tqdm import tqdm
import os
import network
import glob
import random
import torch
from torch.utils.data import DataLoader
from torch.utils.data.dataset import random_split
from batch_loader import TrainFolder
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
def init_parameters():
tc, vc = ConfigObj(), ConfigObj()
tc.batch_size, vc.batch_size = 20, 4
tc.n_channels, vc.n_channels = 2, 2
tc.image_size, vc.image_size = 256, 256
tc.use_fp16, vc.use_fp16 = False, False # enable to use fp16 float precision instead of fp32
return tc, vc
if __name__ == '__main__':
num_workers = 10
torch.manual_seed(47)
torch.backends.cudnn.benchmark = True
train_samples = glob.glob('/home/data/nas/Processed_Data/training_data/spa_network/npyfiles/train/input/*.npy')
valid_samples = glob.glob('/home/data/nas/Processed_Data/training_data/spa_network/npyfiles/valid/input/*.npy')
random.shuffle(train_samples)
trainData = TrainFolder(train_samples)
validData = TrainFolder(valid_samples)
train_config, valid_config = init_parameters()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
input = torch.Tensor(train_config.batch_size, train_config.n_channels, train_config.image_size, train_config.image_size).to(device)
input.requires_grad = False
label = torch.Tensor(train_config.batch_size, train_config.n_channels, train_config.image_size, train_config.image_size).to(device)
label.requires_grad = False
valid_input = torch.Tensor(valid_config.batch_size, valid_config.n_channels, valid_config.image_size, valid_config.image_size).to(device)
valid_input.requires_grad = False
valid_label = torch.Tensor(valid_config.batch_size, valid_config.n_channels, valid_config.image_size, valid_config.image_size).to(device)
valid_label.requires_grad = False
train_data_loader = DataLoader(dataset=trainData, num_workers=num_workers, batch_size=train_config.batch_size, shuffle=True, drop_last=False, pin_memory=True)
valid_data_loader = DataLoader(dataset=validData, num_workers=num_workers, batch_size=valid_config.batch_size, shuffle=True, drop_last=False, pin_memory=True)
netG = network.UNet(2, first_ch=32)
if torch.cuda.device_count() > 1 :
print("Using ", torch.cuda.device_count(), "GPUs!")
netG = nn.DataParallel(netG)
netG.to(device)
optimizerG = optim.Adam(netG.parameters(), lr=1e-3, betas=(0.9, 0.999))
# Initialize BCELoss function
criterion = nn.MSELoss().to(device=device)
scalerG = torch.cuda.amp.GradScaler(enabled=train_config.use_fp16)
print('Start training')
niter = 10000
for epoch in range(niter):
netG.train()
train_g_mse_error = 0
for i, data in enumerate(tqdm(train_data_loader)):
input.copy_(data[0])
label.copy_(data[1])
# train the generator over here
netG.zero_grad()
optimizerG.zero_grad()
with torch.cuda.amp.autocast(enabled=train_config.use_fp16):
output = netG(input)
errG_mse = torch.mean(torch.abs(output - label))
scalerG.scale(errG_mse).backward()
train_g_mse_error += errG_mse.mean()
scalerG.step(optimizerG)
scalerG.update()
train_g_mse_error = train_g_mse_error / len(train_data_loader)
netG.eval()
with torch.no_grad():
valid_g_mse_error = 0
for i, batch in enumerate(tqdm(valid_data_loader)):
valid_input.copy_(batch[0])
valid_label.copy_(batch[1])
with torch.cuda.amp.autocast(enabled=valid_config.use_fp16):
G_output = netG(valid_input)
valid_errG_mse = torch.mean(torch.abs(G_output - valid_label))
valid_g_mse_error += valid_errG_mse.mean()
valid_g_mse_error = valid_g_mse_error / len(valid_data_loader)
if epoch % 5 == 0:
torch.save(netG.state_dict(), f'model/network_epoch{epoch}.pth')
Error:
Traceback (most recent call last):
File “train.py”, line 85, in
scalerG.scale(errG_mse).backward()
File “/usr/local/lib/python3.6/dist-packages/torch/tensor.py”, line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “/usr/local/lib/python3.6/dist-packages/torch/autograd/init.py”, line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
Exception raised from operator() at /pytorch/aten/src/ATen/native/cudnn/Conv.cpp:1141 (most recent call first):
Environment:
Ubuntu. 18.04
Cuda: 10.2
Pytorch- 1.6.0
Cudnn-7.5
Gpu0- Rtx 1080
Gpu1- Rtx 2080 |
st177380 | Could you update to the latest stable PyTorch release (1.7.1) and rerun the code? |
st177381 | I updated to torch 1.7.1 and got this error
Traceback (most recent call last):
File “train.py”, line 90, in
scalerG.scale(errG_mse).backward()
File “/usr/local/lib/python3.6/dist-packages/torch/tensor.py”, line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “/usr/local/lib/python3.6/dist-packages/torch/autograd/init.py”, line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
You can try to repro this exception using the following code snippet. If that doesn’t trigger the error, please include your original repro script when reporting this issue.
import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([10, 80, 128, 128], dtype=torch.float, device=‘cuda’, requires_grad=True)
net = torch.nn.Conv2d(80, 80, kernel_size=[3, 3], padding=[1, 1], stride=[1, 1], dilation=[1, 1], groups=1)
net = net.cuda().float()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()
ConvolutionParams
data_type = CUDNN_DATA_FLOAT
padding = [1, 1, 0]
stride = [1, 1, 0]
dilation = [1, 1, 0]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 0x7fd7080399b0
type = CUDNN_DATA_FLOAT
nbDims = 4
dimA = 10, 80, 128, 128,
strideA = 1310720, 16384, 128, 1,
output: TensorDescriptor 0x7fd70804b250
type = CUDNN_DATA_FLOAT
nbDims = 4
dimA = 10, 80, 128, 128,
strideA = 1310720, 16384, 128, 1,
weight: FilterDescriptor 0x7fd70808a2f0
type = CUDNN_DATA_FLOAT
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 4
dimA = 80, 80, 3, 3,
Pointer addresses:
input: 0x7fd600000000
output: 0x7fd3fd800000
weight: 0x7fd79b68b800
Additional pointer addresses:
grad_output: 0x7fd3fd800000
grad_weight: 0x7fd79b68b800
Backward filter algorithm: 1 |
st177382 | Thanks for the update. Could you post the input shapes you are using so that I could try to reproduce it locally? |
st177383 | Input Shape: [24,2, 256,256]
Here:
Batch Size: 24
Channels: 2
Image SIze (Height): 256
Image Size (Width): 256 |
st177384 | Thanks. I cannot reproduce the issue using this code:
# your model definitions
print(torch.cuda.get_device_name(0))
torch.backends.cudnn.benchmark = True
model = UNet(2).cuda()
x = torch.randn(24, 2, 256, 256, device='cuda')
target = torch.randn(24, 2, 256, 256, device='cuda')
out = model(x)
print(out.shape)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
criterion = nn.MSELoss()
scaler = torch.cuda.amp.GradScaler()
print('running amp')
for _ in range(10):
optimizer.zero_grad()
with torch.cuda.amp.autocast():
out = model(x)
loss = torch.mean(torch.abs(out - target))
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(out.shape)
using an RTX2080Ti with PyTorch 1.6.0+CUDA10.2, 1.7.1+CUDA10.2, 1.7.1+CUDA11.0.
Are you seeing the crash using a single GPU as well or only in this particular setup with two different GPUs? |
st177385 | There is no crash using single GPU. The crash happens only with this particular setup with two different GPUs |
st177386 | OK, unfortunately I won’t be able to easily reproduce this issue as I don’t have a machine ready with these particular GPUs. |
st177387 | Ok. I also got this warning before training starts. I don’t think that should be a problem, but would like to know your opinion on it.
Using 2 GPUs!
/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py:30: UserWarning:
There is an imbalance between your GPUs. You may want to exclude GPU 0 which
has less than 75% of the memory or cores of GPU 1. You can do so by setting
the device_ids argument to DataParallel, or by setting the CUDA_VISIBLE_DEVICES
environment variable.
warnings.warn(imbalance_warn.format(device_ids[min_pos], device_ids[max_pos]))
I tried to reduce the batch size, but still ended up getting the crash. |
st177388 | This warning is raised because the two GPUs you are using have different compute performance, so the slower one becomes the bottleneck of the application and the data parallel approach might not be beneficial; as always, it depends on the config and use case.
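A small sketch of the two options the warning mentions (the device ids are illustrative):
# Option 1: restrict DataParallel to an explicit set of devices, e.g. exclude the slower GPU 0.
netG = netG.to('cuda:1')                        # parameters must live on device_ids[0]
netG = nn.DataParallel(netG, device_ids=[1])

# Option 2: hide a GPU from the whole process instead (set before any CUDA work):
# CUDA_VISIBLE_DEVICES=1 python train.py
With only one visible device the wrapper effectively degenerates to single-GPU execution, which is also a quick way to check whether the crash is specific to the mixed-GPU setup. |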
st177389 | Can you make sure that the CUDA version of your PyTorch build and the CUDA version in your PATH (or /usr/local/cuda) are the same?
I faced a similar issue. I am not sure whether this was the root cause, but do make sure of this. |
st177390 | I am running this script in the docker container. The docker image was built from
nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
The cuda version for both the pytorch and docker environment is 10.2.
I still get the same cudnn error and crash. |
st177391 | Hi,
A friend and I are working on an RL LSTM for a video game. However, the rounds can take a while, so we were hoping to have one of our machines (mine) run 24/7 (inside a VM, so we can still use our computers, since the agent needs to simulate mouse movement and clicking) and to have his machine connect sporadically to run its own instance of the game while still contributing to the model learning process.
Is this possible? If not, what is the closest alternative approach?
I have looked through the documentation, and while I understand how the current PyTorch distributed packages would work with a given dataset, I am not sure how that would work for our use case, since the model would need to update its parameters after each timestep in the game (each timestep is long, 1 second at the fastest but often longer). Furthermore, since his computer would connect and disconnect, it would need some way of getting the latest model parameters…
Any advice? |
st177392 | thelastspark:
Furthermore, since his computer would connect and disconnect, it would need someway of getting the latest model parameters
Does this mean you would need elasticity (allow worker processes to join/leave dynamically) in torch.distributed.rpc? |
st177393 | mrshenli:
elasticity
Yes, exactly. However, I found only a single mention of the term elasticity on https://pytorch.org/docs/stable/rpc.html… perhaps I am looking in the wrong spot? |
st177394 | Hey @thelastspark, yep, that’s the right doc. Unfortunately, we don’t have elasticity support yet. @H-Huang and @Kiuk_Chung are actively working on that.
Besides allowing nodes to join/leave dynamically, do you have other requirements for RPC? |
st177395 | That was honestly the biggest one, because the RL model I am working on requires a single VM per instance (it interacts with the game by simulating mouse movements). As such, I was hoping to have a main trainer instance running on my PC, spin up as many VMs on my local machine as I could during the day, drop down to 1 overnight (just so the fans aren't making sleep impossible), and have my friend do the same thing.
Otherwise training would take ages, as the rounds can take anywhere from 20-45 minutes each, and we can only speed each game up so much before hitting a hard limit: the image recognition for certain events would miss them if the speed scale is too high. |
st177396 | When training a WGAN-GP with DistributedDataParallel, I ran into several errors.
fake_g_pred=self.model['D'](outputs)
gen_loss=self.criteron['adv'](fake_g_pred,True)
loss_g.backward()
self.optimizer['G'].step()
self.lr_scheduler['G'].step()
#hat grad penalty
epsilon=torch.rand(images.size(0),1,1,1).to(self.device).expand(images.size())
hat=(outputs.mul(1-epsilon)+images.mul(epsilon))
hat=torch.autograd.Variable(hat,requires_grad=True)
dis_hat_loss=self.model['D'](hat)
grad=torch.autograd.grad(
outputs=dis_hat_loss,inputs=hat,
grad_outputs=torch.ones_like(dis_hat_loss),
retain_graph=True,create_graph=True,only_inputs=True
)[0]
grad_penalty=((grad.norm(2,dim=1)-1)**2).mean()
grad_penalty.backward()
While the GAN is able to run one iteration (with only one batch in the dataloader), the second iteration reports an error.
RuntimeError: Expected to have finished reduction in the prior
iteration before starting a new one.This error indicates that your module
has parameters that were not used in producing loss. You can enable
unused parameter detection by (1) passing the keyword argument
`find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`;
(2) making sure all `forward` function outputs participate in calculating loss.
If you already have done the above two steps, then the distributed data parallel
module wasn't able to locate the output tensors in the return
value of your module's `forward` function.
Please include the loss function and the structure
of the return value of `forward` of your module
when reporting this issue (e.g. list, dict, iterable).
It throws an error at line
fake_g_pred=self.model['D'](outputs)
Seems like an intrinsic bug in the DDP model. Can anyone tell me how to debug it, or suggest alternatives for training a WGAN with gradient penalty in a DistributedDataParallel way? |
st177397 | Hi,
I am afraid this is expected as DDP only works with .backward() and does not support autograd.grad.
You will have to use .backward() for it to work properly. |
st177398 | @ptrblck
@smth
Hello everyone. We are trying to implement distributed computing using PyTorch DistributedDataParallel.
We have two computers connected via LAN and we are trying to distribute the computation.
Here is the script I am running on the server:
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=2, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=0, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
os.environ['MASTER_ADDR'] = <serverIP>
os.environ['MASTER_PORT'] = <Port>
mp.spawn(train, nprocs=args.gpus, args=(args,))
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def train(gpu, args):
rank = args.nr * args.gpus + gpu
dist.init_process_group(backend='gloo', init_method='file:\\C:\\Users\\VIT\\Desktop\\test\\glooBackened.py', world_size=args.world_size, rank=rank)
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 100
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Wrap the model
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
# Data loading code
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,
num_replicas=args.world_size,
rank=rank)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
start = datetime.now()
total_step = len(train_loader)
for epoch in range(args.epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, args.epochs, i + 1, total_step,
loss.item()))
if gpu == 0:
print("Training complete in: " + str(datetime.now() - start))
if __name__ == '__main__':
main()
If we set nodes to 1, it executes perfectly.
Here is the script I am running on the client:
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=2, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=0, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
os.environ['MASTER_ADDR'] = <serverIP>
os.environ['MASTER_PORT'] = <Port>
mp.spawn(train, nprocs=args.gpus, args=(args,))
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def train(gpu, args):
rank = args.nr * args.gpus + gpu
dist.init_process_group(backend='gloo', init_method='file:\\C:\\Users\\VIT\\Desktop\\test\\glooBackened.py', world_size=args.world_size, rank=rank)
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 100
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Wrap the model
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
# Data loading code
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,
num_replicas=args.world_size,
rank=rank)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
start = datetime.now()
total_step = len(train_loader)
for epoch in range(args.epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, args.epochs, i + 1, total_step,
loss.item()))
if gpu == 0:
print("Training complete in: " + str(datetime.now() - start))
if __name__ == '__main__':
main()
According to the article we read, both machines run the same script and the only change is the rank.
We set rank 0 for the server and rank 1 for the client. Both of them wait for each other and never actually run.
I tried ping from the Windows command prompt and it works fine.
I also tried to run both the server and client scripts on the same system, using localhost as the IP address; it still just waits.
Hoping that someone can solve our problem. Thank you |
st177399 | Please don’t tag specific users, as it might discourage others to post an answer and you might tag a non-expert on this topic. |
st177400 | The nodes become aware of each other through a process called rendezvous, which happens within dist.init_process_group which is a synchronization point for all nodes. Looking at the arguments you’ve passed into init_process_group, if your client is on a different machine and your filesystem is not somehow networked, different files will be used for initialization, so the processes will never come to know about each other.
If you are using Windows for DDP training, we only support file-backed initialization and single-machine use cases. We have landed TCP-based initialization support in PyTorch master (https://github.com/pytorch/pytorch/pull/47749), and you can find docs to use TCP-based init here: https://pytorch.org/docs/stable/distributed.html#tcp-initialization
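A minimal sketch of a file-backed rendezvous that both machines can actually reach (the drive letter and path are hypothetical): every rank must pass the exact same init_method string, and the file must live on storage mounted on all nodes, e.g. a mapped network drive.
import torch.distributed as dist

# Hypothetical shared location: Z: is a network share mapped identically on every machine.
dist.init_process_group(
    backend='gloo',                                # supported backend on Windows
    init_method='file:///Z:/ddp_init/sharedfile',  # identical on server (rank 0) and client (rank 1)
    world_size=2,
    rank=rank,
)
Once a TCP-capable build is available, the equivalent call would use init_method='tcp://10.0.45.44:8888' while keeping backend='gloo' (the thing that is deprecated is backend='tcp' itself, not TCP initialization). |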
st177401 | Sir, I tried using TCP but it gives me a deprecation error.
The code for the server is as follows:
import os
from datetime import datetime
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--nodes', default=2, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('-g', '--gpus', default=1, type=int,
help='number of gpus per node')
parser.add_argument('-nr', '--nr', default=0, type=int,
help='ranking within the nodes')
parser.add_argument('--epochs', default=2, type=int, metavar='N',
help='number of total epochs to run')
args = parser.parse_args()
args.world_size = args.gpus * args.nodes
os.environ['MASTER_ADDR'] = '10.0.45.47'
os.environ['MASTER_PORT'] = '8888'
torch.cuda.set_device(0)
mp.spawn(train, nprocs=args.gpus, args=(args,))
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
def train(gpu, args):
rank = args.nr * args.gpus + gpu
dist.init_process_group(backend='tcp', init_method='tcp://10.0.45.47:8888', world_size=args.world_size, rank=rank)
torch.manual_seed(0)
model = ConvNet()
torch.cuda.set_device(gpu)
model.cuda(gpu)
batch_size = 100
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(gpu)
optimizer = torch.optim.SGD(model.parameters(), 1e-4)
# Wrap the model
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
# Data loading code
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,
num_replicas=args.world_size,
rank=rank)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
start = datetime.now()
total_step = len(train_loader)
for epoch in range(args.epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.cuda(non_blocking=True)
labels = labels.cuda(non_blocking=True)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0 and gpu == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, args.epochs, i + 1, total_step,
loss.item()))
if gpu == 0:
print("Training complete in: " + str(datetime.now() - start))
if __name__ == '__main__':
main()
Error is as follows:
raise ValueError("TCP backend has been deprecated. Please use "
ValueError: TCP backend has been deprecated. Please use Gloo or MPI backend for collective operations on CPU tensors.
Thank you sir. |
st177402 | Hello, I am trying to train a network using DDP. The architecture of the network is such that it consists of two sub-networks (a and b), and depending on the input either only a, only b, or both a and b get executed. Things work fine on a single GPU, but when expanding to 2 or more GPUs the backward just hangs. Any help is appreciated. Thanks.
Below is the minimal reproducible example.
import numpy as np
import os
import torch
import torch.nn as nn
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.optim as optim
import torch.nn.functional as F
class NetA(nn.Module):
def __init__(self):
super().__init__()
self.a1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
self.a2 = nn.ReLU(inplace=True)
self.a3 = nn.Conv2d(64, 32, kernel_size=3, padding=1)
self.a4 = nn.MaxPool2d(kernel_size=8, stride=8)
self.a5 = nn.Linear(8192, 1)
def forward(self, data):
if data.shape[0] == 0:
return torch.zeros(1).cuda() #to(data['b'])
x = self.a4(self.a3(self.a2(self.a1(data))))
x = self.a5(torch.flatten(x, start_dim=1))
return x
class NetB(nn.Module):
def __init__(self):
super().__init__()
self.b1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
self.b2 = nn.ReLU(inplace=True)
self.b3 = nn.Conv2d(64, 32, kernel_size=3, padding=1)
self.b4 = nn.MaxPool2d(kernel_size=4, stride=4)
self.b5 = nn.Linear(2048, 1)
def forward(self, data):
if data.shape[0] == 0:
return torch.zeros(1).cuda() #to(data['b'])
x = self.b4(self.b3(self.b2(self.b1(data))))
x = self.b5(torch.flatten(x, start_dim=1))
return x
def main2():
mp.set_start_method('spawn')
rank = int(os.environ['RANK'])
num_gpus = torch.cuda.device_count()
torch.cuda.set_device(rank % num_gpus)
dist.init_process_group(backend='nccl')
neta = NetA().cuda()
netb = NetB().cuda()
ddp_neta = torch.nn.parallel.DistributedDataParallel(
module=neta,
device_ids=[torch.cuda.current_device()],
broadcast_buffers=False,
find_unused_parameters=True)
ddp_netb = torch.nn.parallel.DistributedDataParallel(
module=netb,
device_ids=[torch.cuda.current_device()],
broadcast_buffers=False,
find_unused_parameters=True)
opt_a = optim.SGD(ddp_neta.parameters(), lr=0.001)
opt_b = optim.SGD(ddp_netb.parameters(), lr=0.001)
print('Finetuneing the network... on gpu ', rank)
for i in range(0,20):
opt_a.zero_grad()
opt_b.zero_grad()
f = np.random.rand()
if f < 0.33:
out_a = ddp_neta(torch.randn(4,3,128,128).to(rank))
out_b = ddp_netb(torch.randn(2,3,32,32).to(rank))
loss_a = F.softplus(out_a).mean()
loss_b = F.softplus(out_b).mean()
#print(i, loss_a, loss_b)
elif f < 0.66 and f > 0.33:
out_b = ddp_netb(torch.randn(0,3, 32, 32).to(rank))
out_a = ddp_neta(torch.randn(6,3,128,128).to(rank))
loss_a = F.softplus(out_a).mean()
loss_b = F.softplus(out_b).mean()
#print(i, ' loss_a ', loss_a)
else:
out_a = ddp_neta(torch.randn(0,3,128,128).to(rank))
out_b = ddp_netb(torch.randn(3,3,32,32).to(rank))
loss_b = F.softplus(out_b).mean()
loss_a = F.softplus(out_a).mean()
#print(i, ' loss_b ', loss_b)
print(i, loss_a, loss_b)
loss_a.backward()
loss_b.backward()
opt_a.step()
opt_b.step()
dist.destroy_process_group()
if __name__ == '__main__':
main2()
Any suggestions on how to fix this? Do I need to use dist.all_gather and/or dist.all_reduce to run the above snippet on multiple GPUs? I found this issue https://github.com/pytorch/pytorch/issues/23425 and tried moving the if condition to the forward of a wrapper layer containing both NetA and NetB. However, that still seems to hang at the backward step.
Thanks for the help |
st177403 | Did you also use the find_unused_parameters=True argument as described in the issue and the docs?
find_unused_parameters (bool) – Traverse the autograd graph from all tensors contained in the return value of the wrapped module's forward function. Parameters that don't receive gradients as part of this graph are preemptively marked as being ready to be reduced. Note that all forward outputs that are derived from module parameters must participate in calculating loss and later the gradient computation. If they don't, this wrapper will hang waiting for autograd to produce gradients for those parameters. Any outputs derived from module parameters that are otherwise unused can be detached from the autograd graph using torch.Tensor.detach. (default: False) |
st177404 | Yes. I tried with find_unused_parameters=True and still see the same hanging behaviour |
st177405 | Hi,
What are you trying to achieve here, train 2 models (a and b) at the same time? You generate value f every time for node 1 and node 2 (f can differ between 1 and 2, right? so the code path chosen by node 1 and 2 would be different) and then you are trying to sync the gradients (via DDP). If so, node 1 can run
out_b = ddp_netb(torch.randn(0,3, 32, 32).to(rank))
out_a = ddp_neta(torch.randn(6,3,128,128).to(rank))
while node 2 could run
out_a = ddp_neta(torch.randn(0,3,128,128).to(rank))
out_b = ddp_netb(torch.randn(3,3,32,32).to(rank))
For the dimension with data.shape[0] == 0 you return 0, so none of the parameters are used on one node while they are used and synced on another. This can lead to issues with DDP, I believe.
Did you intend to run the same steps for each iteration of the training loop on node 0 and node 1, e.g. sync the value of f between them? |
st177406 | Hi @agolynski,
Thanks for the reply. The goal is not to sync value ‘f’ across gpus and is conditioned upon the input mini-batch.
More context on what we are trying to do: we want to train a network which takes either a single tensor ('A' of shape Nx3x128x128) or a pair of tensors ('A' and 'B' of shapes Nx3x128x128 and Mx3x32x32) as input, passed to NetA and NetB respectively. For example, tensor 'A' would be images and tensor 'B' could correspond to bounding-box crops generated from an off-the-shelf object detector (in our case we do not alter or tune the parameters of the object detector). If no objects are detected in tensor 'A', then 'B' is empty, and we tried setting it to size 0 along dimension 0. NetA and NetB do not share any parameters and are independent of each other.
One possibility we are aware of is to run NetB only when tensor B is non-empty on all GPUs. So, if we are running the model with DDP on 8 GPUs, we would only do the NetB forward/backward when tensor B is non-empty on all 8 GPUs, which happens less frequently and would lead to an unreasonable amount of training time. We would like to find a way to sync gradients only on the GPUs on which tensor B is not empty. Hope I am clear on what we are trying to do. Thanks.
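For reference, one hypothetical way to implement the "run NetB only when every rank has boxes" option mentioned above is to all-reduce a flag before deciding whether to run NetB at all (b, loss_a, and ddp_netb are placeholders following the toy example earlier in the thread):
import torch
import torch.distributed as dist
import torch.nn.functional as F

# b is this rank's tensor of crops, possibly with b.shape[0] == 0
has_b = torch.tensor([1.0 if b.shape[0] > 0 else 0.0], device=b.device)
dist.all_reduce(has_b, op=dist.ReduceOp.MIN)   # stays 1.0 only if every rank has at least one crop

loss = loss_a                                   # loss from NetA, computed as before
if has_b.item() > 0:
    # either every rank takes this branch or none do, so DDP can synchronize NetB's gradients
    loss = loss + F.softplus(ddp_netb(b)).mean()
loss.backward()
This keeps every rank on the same code path for NetB in each iteration, which is what DDP's gradient synchronization assumes; the drawback described above (fewer NetB updates) of course remains. |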
st177407 | I am using the newest version of YOLOv5 to train on COCO images. The program trains successfully when num_workers = 0, but if num_workers > 0 the program blocks and spends a lot of time acquiring data. Also, if I only train on one GPU, there is no problem even with num_workers > 0.
I train on one machine with four Titan Xp GPUs, with pytorch == 1.7.1, cuda == 10.1, python == 3.8.5, in an anaconda virtual environment.
I have the same problem not only in the YOLOv5 project; it also sometimes happens in other projects. The blocked position is shown in the screenshot below:
[screenshot: stack trace showing where the DataLoader blocks]
Does anyone have the same problem? Thanks a lot. |
st177408 | Description of the problem
Hi,
I tried to train the model with DDP and ran into a strange issue.
The training process worked really well if DataParallel was used instead of DistributedDataParallel.
When I wrapped the model with DDP, the exception below was raised:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_para
meters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If
you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return va
lue of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module whe
n reporting this issue (e.g. list, dict, iterable).
I have tried to set “find_unused_parameters=True”, but the training process would get stuck. Specifically, the loss failed to go backward.
I have also checked all the parameters in my model with the method proposed here: How to find the unused parameters in network
I am really sure all the parameters have changed after several iterations, but I would still meet the “RuntimeError” as above.
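For reference, a generic way to list parameters that did not receive a gradient after a backward pass (a hypothetical sketch, not necessarily the method from the linked topic):
loss.backward()
unused = [name for name, p in model.named_parameters()
          if p.requires_grad and p.grad is None]
print(unused)   # anything listed here did not participate in the loss for this batch
Any name printed there is a parameter that DDP would flag as unused in that iteration.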
PS: the output of the model is a list of tensors.
I am really confused and wondering whether there are any tips that could help solve this problem.
Thank you! |
st177409 | Hi,
Could you maybe share source code that reproduces the problem?
One more suggestion. I’ve recently looked at a problem
github.com/pytorch/pytorch — "Wrong gradients when using DistributedDataParallel and autograd.grad" (🐛 Bug, opened Nov 7, 2020, closed Nov 19, 2020, reported by TropComplique, label oncall: distributed): the gradient reduction across workers doesn't work when a gradient penalty is used as a loss and a bias is used in the last…
where the loss function didn't depend on all the parameters, which caused some issues for the gradient computation. What helped was to look at the autograd graph via https://github.com/szagoruyko/pytorchviz
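For instance, a minimal sketch of generating such a graph with torchviz (the model and input names are placeholders):
from torchviz import make_dot   # pip install torchviz

out = model(dummy_input)                     # one forward pass on a representative batch
if isinstance(out, (list, tuple)):           # e.g. when the model returns a list of tensors
    out = sum(o.sum() for o in out)          # reduce to a single scalar for visualization
make_dot(out, params=dict(model.named_parameters())).render('model_graph', format='pdf')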
Could you create such a graph from your model? |
st177410 | Hi,
Thanks for your help. When I tried to make a toy example for you to reproduce the problem, I found that the toy example really worked well, which was strange.
I finally found that the problem lies in the loss function. Specifically, the output of the model is not fully used when calculating the loss: in detection or instance segmentation tasks, only positive samples contribute to the loss. So if a certain branch does not have any corresponding ground truth in a given step, the unused-parameters error is thrown.
To handle this problem, simply setting find_unused_parameters=True does not seem to work, at least in my case. A simple solution I found is adding a safety loss, that is, fetching all the outputs of my model and multiplying them by 0. This gives all the parameters a zero gradient, although it might waste some CUDA memory.
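Concretely, the "safety loss" trick amounts to something like the following sketch (the function and variable names are hypothetical):
# real task loss, computed from the positive samples only
loss = compute_detection_loss(positive_outputs, targets)

# zero-weighted term over all raw outputs, so every branch stays in the autograd
# graph and every parameter receives a (zero) gradient that DDP can reduce
safety = sum(0.0 * out.sum() for out in all_model_outputs)

(loss + safety).backward()
The extra terms contribute nothing numerically but keep the autograd graph connected to every branch. |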
st177411 | oliver_ss:
To handle this problem, simply setting find_unused_parameters=True does not seem to work, at least in my case.
Yep, this is true. Because that mode in DDP would require full access to all outputs and then traverse the graph from those outputs to find unused parameters. That’s also the reason that the error message says: “(2) making sure all forward function outputs participate in calculating loss.”
A simple solution I could find is adding a safety loss, that is, fetching all the outputs of my model and multiplying them by 0. This gives all the parameters a zero gradient, although it might waste some CUDA memory.
Another option might be to return only the outputs that participate in computing the loss. Other outputs can be stored as model attributes and retrieved separately after the forward pass. |
st177412 | I’m trying to use the distributed data parallel thing with pytorch and I’m running into this error which I haven’t found a solution for elsewhere:
-- Process 2 terminated with the following error:
Traceback (most recent call last):
  File "/afs/csail.mit.edu/u/v/asdf/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "../multiprocess_model.py", line 5, in train_model
    model.train(args)
  File "../train.py", line 227, in train
    for i, data_dict in enumerate(train_dataloader, 1):
  File "/afs/csail.mit.edu/u/v/asdf/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/afs/csail.mit.edu/u/v/asdf/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 402, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/afs/csail.mit.edu/u/v/asdf/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 357, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/afs/csail.mit.edu/u/v/asdf/miniconda3/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 208, in __iter__
    for idx in self.sampler:
  File "/afs/csail.mit.edu/u/v/asdf/miniconda3/lib/python3.8/site-packages/torch/utils/data/distributed.py", line 84, in __iter__
    assert len(indices) == self.num_samples
AssertionError
How would I approach fixing this problem?
I can comment out the assertion errors in distributed.py, but I’m hoping there is a cleaner way to do this. |
st177413 | This error occurs in data loader. Could you please share a repro of the data loader?
cc @VitalyFedyunin for data loader questions |
st177414 | This also looks similar to https://github.com/pytorch/pytorch/issues/49172 21 (and proposal https://github.com/pytorch/pytorch/issues/49180 15). Do you use DistributedSampler? |
st177415 | Yeah! It ended up being a problem with my data loader.
I did something and now it works, although I’m not sure what I did.
Could a possible problem be that the data loader was returning None for the elements occasionally? |
st177416 | On another note, I’m now getting an Exception: process 0 terminated with signal SIGSEGV when I run my model. |
st177417 | I am a bit confused about averaging gradients in distributed data-parallel. It seems there are two examples from the PyTorch documentation that are different. In one example, you create the model, just move it to the available GPU, and then create a separate function to average gradients.
def average_gradients(model):
""" Gradient averaging. """
size = float(dist.get_world_size())
for param in model.parameters():
dist.all_reduce(param.grad.data, op=dist.reduce_op.SUM)
param.grad.data /= size
and its executed as follows.
outputs = model(inputs)
loss = loss_function(outputs, labels)
loss.backward()
average_gradients(model)
optimizer.step()
The other approach I have seen doesn’t create a separate function and just calls DPP.
model = ToyModel().cuda(device_ids[0])
ddp_model = DDP(model, device_ids)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
optimizer.zero_grad()
outputs = ddp_model(torch.randn(20, 10))
labels = torch.randn(20, 5).to(device_ids[0])
loss_fn(outputs, labels).backward()
optimizer.step()
I would like to know what the difference is between the two approaches and which one should be used for distributed training on an HPC cluster. I specifically want to use two nodes, each with 4 GPUs. |
st177418 | Solved by mrshenli in post #2 |
st177419 | Hey @ankahira, usually, there are 4 steps in distributed data parallel training:
local forward to compute loss
local backward to compute local gradients
allreduce (communication) to compute global gradients. This would be allreduce with SUM + divide by world size to calculate average
optimizer step to use global gradients to update parameters
Both examples you mentioned above conduct the same four steps and are mathematically equivalent. The difference is that DDP would allow step 2 (backward computation) and 3 (allreduce communication) to overlap and therefore DDP is expected to be faster than the average_gradients approach.
More specifically, in the first example with average_gradients, there is a hard barrier between backward and allreduce, i.e., no comm can start before computation finishes. In the second example, DDP organizes gradients into buckets, and will launch comm as soon as a bucket of gradients are ready, so that computation and communication can run in parallel. This 72 would help explain that.
I would recommend DDP. |
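For completeness, a hedged sketch of the per-process setup for the 2-node x 4-GPU case mentioned above (RANK and WORLD_SIZE are assumed to be provided by the launcher or job scheduler; everything else is illustrative, not a definitive implementation):
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_and_train(local_rank, build_model):
    world_size = int(os.environ["WORLD_SIZE"])   # 8 = 2 nodes x 4 GPUs
    rank = int(os.environ["RANK"])               # 0..7, unique across all processes
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    model = build_model().cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])
    # ...usual loop: forward, loss, backward, optimizer.step()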
st177420 | Because in many cases, the loss function computes a per-sample (mean-reduced) loss (e.g., the default MSELoss) instead of an aggregated loss (e.g., a sum reduction). So DDP calculates the average and tries to keep the gradient scale consistent with local training. |
st177421 | I’m using DDP (one process per GPU) to train a 3D UNet. I converted all BatchNorm layers inside the network to SyncBatchNorm with nn.SyncBatchNorm.convert_sync_batchnorm.
When doing validation at the end of every training epoch on rank 0, it always freezes at the same validation step. I think it is because of the SyncBatchNorm layers. What is the correct way to do validation when the DDP model has SyncBatchNorm layers? Should I do validation on all ranks?
Code
for epoch in range(epochs):
    model.train()
    train_loader.sampler.set_epoch(epoch)
    for step, (data, target) in enumerate(train_loader):
        # ...training codes
        train(model)
    if dist.get_rank() == 0:
        # ...validation codes
        model.eval()
        validate(model)
    dist.barrier()
Version / os
torch = 1.1.0
ubuntu 18.04
distributed backend: nccl
Similar question:
github.com/pytorch/pytorch
Freeze during validation with distributed training and model with batch normalization layers 29 (❓ Questions and Help; opened May 16, 2019, closed May 21, 2019, by SweetVlad; labels: oncall: distributed, triaged)
“Hi, I got unexpected behavior during training with torch.distributed.DistributedDataParallel model on multiple GPUs. I train my model with DistributedSampler and...” |
st177422 | Solved by ptrblck in post #2 |
st177423 | Could you update to the latest stable release or the nightly binary and check, if you are still facing the error? 1.1.0 is quite old by now and this issue might have been already fixed. |
st177424 | sunshichen:
What is the correct way to do validation when the DDP model has SyncBatchNorm layers? Should I do validation on all ranks?
Yes, you probably need to do validation on all ranks since SyncBatchNorm has collectives which are expected to run on all ranks. The validation is probably getting stuck since SyncBatchNorm on rank 0 is waiting for collectives from other ranks.
Another option is to convert the SyncBatchNorm layer to a regular BatchNorm layer and then do the validation on a single rank. |
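A rough sketch of that second option, assuming the setup from the question (the revert_sync_batchnorm helper below is made up for illustration; PyTorch only ships the forward conversion convert_sync_batchnorm, and the BatchNorm class must match the model's dimensionality, e.g. BatchNorm3d for the 3D UNet above):
import copy
import torch.nn as nn

def revert_sync_batchnorm(module, bn_cls=nn.BatchNorm3d):
    # recursively replace SyncBatchNorm layers with plain BatchNorm, copying weights and stats
    new = module
    if isinstance(module, nn.SyncBatchNorm):
        new = bn_cls(module.num_features, module.eps, module.momentum,
                     module.affine, module.track_running_stats)
        if module.affine:
            new.weight = module.weight
            new.bias = module.bias
        new.running_mean = module.running_mean
        new.running_var = module.running_var
        new.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        new.add_module(name, revert_sync_batchnorm(child, bn_cls))
    return new

if dist.get_rank() == 0:
    eval_model = revert_sync_batchnorm(copy.deepcopy(model.module)).eval()
    validate(eval_model)    # model / validate() refer to the code in the question
dist.barrier()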
st177425 | Thanks, I will try it.
Actually, I have another question about v1.1.0 DDP.
I tried to run inference with the model containing SyncBatchNorm layers (actually, they become BatchNorm layers after loading from the checkpoint). The results turned out to be different between:
Only turn on evaluate mode.
model.eval()
# inference...
Manually set track_running_stats of each BN layer to False after model.eval().
model.eval()
set_BN_track_running_stats(model, False)
# do inference..
It is strange that the second one is much better than the first one in early epochs. Is this also a version problem?
P.S. However, after training for more epochs, the results of the two inference methods are similar but still have small differences.
Below is sample code of set_BN_track_running_stats():
def set_BN_track_running_stats(module, ifTrack=True):
    if isinstance(module, torch.nn.modules.batchnorm._BatchNorm):
        module.track_running_stats = ifTrack
    for child in module.children():
        set_BN_track_running_stats(module=child, ifTrack=ifTrack) |
st177426 | Yes, validating on all GPUs works. But shouldn’t model.eval() automatically disable the sync process?
BTW, thanks for your reply. |
st177427 | sunshichen:
Actually I have another question about v1.1.0 DDP.
Are you referring to PyTorch v1.1.0 here? If so, I’d suggest upgrading to PyTorch 1.7 which is the latest version to see if this problem still persists. |
st177428 | sunshichen:
Yes, validating on all GPUs works. But shouldn’t model.eval() automatically disable the sync process?
Good point, I’ve opened an issue for this: https://github.com/pytorch/pytorch/issues/48988 26 |
st177429 | I upgraded torch to 1.7 today. The problem is gone. I think it was a 1.1.0 problem. Thanks again for your help. |
st177430 | Actually, I met another problem after upgrading to v1.7.0. The results came out much worse than on 1.1. Could you help me with that?
After Update pytorch from 1.1.0 to 1.7.0. DDP Model cannot converge distributed
I trained a 3D UNet model with PyTorch 1.1.0 Distributed Data Parallel. The test result was good. But after I updated PyTorch to v1.7.0, the same code gives different results. Can anyone help me with that? Is it a SyncBatchNorm problem or something else?
Even if I fine-tune the pretrained model (trained on 1.1.0), both the training and validation losses increase a lot using v1.7.0.
Below is the training epoch loss for the two versions. [loss] |
st177431 | Validation hangs up when using DDP and syncbatchnorm distributed
Actually, I met another problem after upgrading to v1.7.0. The results came out much worse than on 1.1. Could you help me with that? |
st177432 | Hi all, I found that logger.info prints additional (duplicate) lines when using distributed training (DDP).
When using PyTorch 1.6, calling logger.info gives:
Epoch 1, Node 0, GPU 3, Iter 300, Top1 Accuracy:0.083602, Loss:5.4438, 132 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 6, Iter 300, Top1 Accuracy:0.086197, Loss:5.4218, 129 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 1, Iter 300, Top1 Accuracy:0.084302, Loss:5.4295, 86 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 2, Iter 300, Top1 Accuracy:0.083498, Loss:5.4405, 130 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 0, Iter 300, Top1 Accuracy:0.087469, Loss:5.4242, 131 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 5, Iter 300, Top1 Accuracy:0.083731, Loss:5.4297, 127 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 7, Iter 300, Top1 Accuracy:0.0856, Loss:5.4234, 130 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 4, Iter 300, Top1 Accuracy:0.08355, Loss:5.4322, 86 samples/s. lr: 0.64553.
But after I update the PyTorch version to 1.7:
Epoch 1, Node 0, GPU 3, Iter 300, Top1 Accuracy:0.083602, Loss:5.4438, 132 samples/s. lr: 0.64553.
INFO:Distribute training logs.:Epoch 1, Node 0, GPU 3, Iter 300, Top1 Accuracy:0.083602, Loss:5.4438, 132 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 6, Iter 300, Top1 Accuracy:0.086197, Loss:5.4218, 129 samples/s. lr: 0.64553.
INFO:Distribute training logs.:Epoch 1, Node 0, GPU 6, Iter 300, Top1 Accuracy:0.086197, Loss:5.4218, 129 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 1, Iter 300, Top1 Accuracy:0.084302, Loss:5.4295, 86 samples/s. lr: 0.64553.
INFO:Distribute training logs.:Epoch 1, Node 0, GPU 1, Iter 300, Top1 Accuracy:0.084302, Loss:5.4295, 86 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 2, Iter 300, Top1 Accuracy:0.083498, Loss:5.4405, 130 samples/s. lr: 0.64553.
INFO:Distribute training logs.:Epoch 1, Node 0, GPU 2, Iter 300, Top1 Accuracy:0.083498, Loss:5.4405, 130 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 0, Iter 300, Top1 Accuracy:0.087469, Loss:5.4242, 131 samples/s. lr: 0.64553.
INFO:Distribute training logs.:Epoch 1, Node 0, GPU 0, Iter 300, Top1 Accuracy:0.087469, Loss:5.4242, 131 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 5, Iter 300, Top1 Accuracy:0.083731, Loss:5.4297, 127 samples/s. lr: 0.64553.
INFO:Distribute training logs.:Epoch 1, Node 0, GPU 5, Iter 300, Top1 Accuracy:0.083731, Loss:5.4297, 127 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 7, Iter 300, Top1 Accuracy:0.0856, Loss:5.4234, 130 samples/s. lr: 0.64553.
INFO:Distribute training logs.:Epoch 1, Node 0, GPU 7, Iter 300, Top1 Accuracy:0.0856, Loss:5.4234, 130 samples/s. lr: 0.64553.
Epoch 1, Node 0, GPU 4, Iter 300, Top1 Accuracy:0.08355, Loss:5.4322, 86 samples/s. lr: 0.64553.
INFO:Distribute training logs.:Epoch 1, Node 0, GPU 4, Iter 300, Top1 Accuracy:0.08355, Loss:5.4322, 86 samples/s. lr: 0.64553.
I’m sure I only updated the PyTorch version and did not change any other package.
What prints the additional info, and how can I stop it? |
st177433 | @PistonY Do you mind sharing the part of the training script that prints these logs? I don’t see any code in PyTorch itself that would generate these extra logs. |
st177434 | Hi @osalpekar, thanks for the reply.
I use this 1 script and this 1 line prints the info. |
st177435 | @PistonY Looking at the additional logs, isn’t it coming from here: https://github.com/PistonY/ModelZoo.pytorch/blob/fd85403bb1430ce591a97586a097c989630aa82b/scripts/utils.py#L41 1 instead of PyTorch? |
st177436 | Hi @pritamdamania87, thanks for the reply, but this line doesn’t print anything extra with PyTorch versions prior to 1.7.0. I didn’t modify any code, only updated the PyTorch version. Do you know how to stop it? |
st177437 | I think the upgrade to PyTorch might be unrelated here and something else is changing that is causing additional logging. These are log messages generated by your application and not PyTorch, probably double checking the logging configuration might reveal the reason for this. |
st177438 | I don’t think it’s my issue.
I configure the logs as below.
def get_logger(file_path):
    streamhandler = logging.StreamHandler()
    logger = logging.getLogger('Distribute training logs.')
    logger.setLevel(logging.INFO)
    logger.addHandler(streamhandler)
    return logger
I tried this and still get the additional logs.
Could you please give some suggestions, @osalpekar? |
st177439 | I removed the name from logging.getLogger and no additional logs are printed.
But it prints this:
Reducer buckets have been rebuilt in this iteration.
I don’t know what happened, but the logs are finally clean.
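For reference, a likely (unconfirmed) explanation: if something in the newer environment attaches a handler to the root logger, records from a named logger get emitted a second time via propagation, which matches the duplicated INFO:Distribute training logs.: prefix seen above. A minimal sketch that keeps the named logger but disables propagation:
import logging

def get_logger():
    logger = logging.getLogger('Distribute training logs.')
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler())
    logger.propagate = False   # don't also hand records to the root logger's handlers
    return logger
(The “Reducer buckets have been rebuilt in this iteration.” line is DDP’s own one-time informational message and is expected.)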
st177440 | In the documentation,
pytorch.org
Optional: Data Parallelism — PyTorch Tutorials 1.7.0 documentation 6
It is mentioned that tensors must be assigned to a new variable when using .to(device).
mytensor = my_tensor.to(device)
Should we not do the same for the model?
Like,
model = model.to(device)
@sungkim11 @fritzo @neerajprad @alicanb @vishwakftw |
st177441 | .to() modifies the model in-place.
pytorch.org
Module — PyTorch 1.7.0 documentation 30 |
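A quick illustration of the difference (the device string below is an assumption):
import torch
import torch.nn as nn

device = "cuda:0" if torch.cuda.is_available() else "cpu"

t = torch.randn(2, 3)
t2 = t.to(device)        # tensors: .to() returns a new tensor; t itself is unchanged

model = nn.Linear(3, 4)
model.to(device)         # modules: .to() moves parameters and buffers in-place
# model = model.to(device) also works, since Module.to() returns self.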
st177442 | I am trying to adapt some existing training code to run on multiple GPUs on the same node using DistributedDataParallel. I have a main_worker_function which sets up the dataset, model, etc then runs training and validation. I run either a single version of this to run on a single GPU, or call it using mp.spawn to run with DDP:
if is_distributed:
    print("Using multiprocessing...")
    devids = ["cuda:{0}".format(x) for x in config.gpu_list]
    print("Using DistributedDataParallel on these devices: {}".format(devids))
    mp.spawn(main_worker_function, nprocs=ngpus, args=(ngpus, is_distributed, config))
else:
    print("Only one gpu found, not using multiprocessing...")
    main_worker_function(0, ngpus, is_distributed, config)
Currently it seems like I obtain no increase in running speed when running on two GPUs with DDP, and upon closer inspection it seems to be because fetching a new validation iterator takes an enormous amount of time. To check this I replaced my training function with:
time0 = time()
print("Fetching iterator")
val_iter = iter(self.data_loaders["validation"])
time1 = time()
print("Done in time: ", time1 - time0)
val_iter = iter(self.data_loaders["validation"])
time2 = time()
print("Done in time: ", time2 - time1)
Running the above code on a single GPU and with num_workers = 4 yields outputs:
Fetching iterator
Done in time: 0.40811872482299805
Done in time: 0.7533595561981201
Running the same code on a single GPU and num_workers=4, but spawned as single subprocess with mp.spawn yields:
Fetching iterator
Done in time: 49.32914686203003
Done in time: 50.69797325134277
As far as I can tell it has something to do with the spawning of the worker functions, because spawning a subprocess with num_workers=2 yields:
Fetching iterator
Done in time: 25.56034803390503
Done in time: 28.712769746780396
But I’m not sure what behaviour is changing between single-process and multiprocessing mode to cause this increase in time, and I’m not quite sure how to check (I’ve been consulting https://pytorch.org/docs/stable/data.html#single-and-multi-process-data-loading 1 but I’m not sure if it specifies any difference between default and multiprocessing worker spawning). Note that the initialization of the dataset and sampler should currently be performed the same way (I was previously using a DistributedSampler in DDP mode, but this problem persists regardless of whether or not this sampler is used). |
st177443 | @Whisky-Jack Can you share the code you’re using to initialize the DataLoader on each process?
cc @VitalyFedyunin |
st177444 | Note that I’ve made some important progress on this since last week, and so I included that first, with the dataloader information below (you can jump to that part if that’s more important).
I have done more investigation on this since last week, and I have a bit of a better handle on where the problem is occurring. The problem seems to involve differences in spawning the worker functions with different multiprocessing start methods. This is described a little bit in the platform specific behaviours section found in here https://pytorch.org/docs/stable/data.html#single-and-multi-process-data-loading 1, although it doesn’t really discuss differences between running in a single process vs processes launched with spawn.
Calling print(mp.get_context()) in the main_worker_function in single GPU mode yields:
<multiprocessing.context.ForkContext object at 0x7fc14dd64da0>
While in the spawned process
<multiprocessing.context.SpawnContext object at 0x7f8e02fd0ef0>
Consequently, I think the workers processes are being spawned using fork() in the single process case and spawn() in the multiprocessing case.
The documentation states:
On Unix, fork() is the default multiprocessing start method. Using fork(), child workers typically can access the dataset and Python argument functions directly through the cloned address space.
On Windows, spawn() is the default multiprocessing start method. Using spawn(), another interpreter is launched which runs your main script, followed by the internal worker function that receives the dataset, collate_fn and other arguments through pickle serialization.
Since I’m running on linux, I’m guessing that with fork, the cloned address space is working correctly with low overhead, while in the case of the spawn something about pickling the dataset is causing a big overhead when trying to pass it to the workers.
To check that this difference is causing the issue, I tried running in single GPU mode (so NOT spawning any subprocesses using mp.spawn, or wrapping with DistributedDataParallel), but changing the default start method using mp.set_start_method('spawn') in my main function. This results in the same context:
<multiprocessing.context.SpawnContext object at 0x7f5e3b4a29e8>
And produces the same lag as in the multiprocessing case:
Fetching iterator
Done in time: 52.480613708496094
Fetching iterator
Done in time: 52.47240161895752
Fetching iterator
Done in time: 53.32112169265747
I then took a look at my dataset to see what might be causing problems. Our dataset is stored in h5 form, and is loaded into numpy arrays when the dataset is initialized.
The dataset consists of a number of types of data, some of which are too large to load into memory, and so are stored in memmaps (these memmaps are not initialized until the first call to get_item):
with h5py.File(self.h5_path, 'r') as h5_file:
    self.dataset_length = h5_file["labels"].shape[0]
    hdf5_hit_pmt = h5_file["hit_pmt"]
    hdf5_hit_time = h5_file["hit_time"]
    hdf5_hit_charge = h5_file["hit_charge"]
    # initialize memmap param dict
    self.pmt_dict = {'shape': hdf5_hit_pmt.shape, 'offset': hdf5_hit_pmt.id.get_offset(), 'dtype': hdf5_hit_pmt.dtype}
    self.time_dict = {'shape': hdf5_hit_time.shape, 'offset': hdf5_hit_time.id.get_offset(), 'dtype': hdf5_hit_time.dtype}
    self.charge_dict = {'shape': hdf5_hit_charge.shape, 'offset': hdf5_hit_charge.id.get_offset(), 'dtype': hdf5_hit_charge.dtype}
The other data is small enough to be loaded into memory, and is stored as attributes of the dataset:
self.labels = np.array(h5_file["labels"])
self.energies = np.array(h5_file["energies"])
self.positions = np.array(h5_file["positions"])
self.angles = np.array(h5_file["angles"])
This latter data seems to be what is causing the issue. If none of this data is stored as part of the dataset, very little lag is observed:
Fetching iterator
Done in time: 1.6751770973205566
Fetching iterator
Done in time: 2.464519500732422
Fetching iterator
Done in time: 2.2533977031707764
Whereas if only the labels is stored, the lag increases:
Fetching iterator
Done in time: 9.610902309417725
Fetching iterator
Done in time: 10.206432342529297
Fetching iterator
Done in time: 9.656260013580322
With more parameters producing more lag.
I then tried moving this initialization step to the same initialization function in get_item that creates the memmaps (so this information is initialized in the worker functions after they have spawned), and this seems to eliminate most of the lag:
Fetching iterator
Done in time: 1.7485809326171875
Fetching iterator
Done in time: 8.036261796951294
Fetching iterator
Done in time: 3.001110076904297
It does add overhead to the first call to get_item, but not the 50s lag observed previously. To see this I ran:
print("Fetching iterator...")
val_iter = iter(self.data_loaders["validation"])
time1 = time()
print("Done in time: ", time1 - time0)
print("Fetching data...")
data = next(val_iter)
time0 = time()
print("Done in time: ", time0 - time1)
print("Fetching data...")
data = next(val_iter)
time1 = time()
print("Done in time: ", time1 - time0)
Yielding:
Fetching iterator...
Done in time: 1.7992744445800781
Fetching data...
Done in time: 6.465835094451904
Fetching data...
Done in time: 0.10752415657043457
Which is not great, but probably tolerable. It would be nice to know what is going wrong with storing this data in the __init__/passing it to the workers if anyone has an idea why this lag is being produced.
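For reference, a condensed sketch of the lazy-initialization pattern described above (attribute and key names are illustrative, not the exact code): the heavy arrays are not loaded in __init__, so the dataset object stays cheap to pickle when workers are spawned, and each worker loads the arrays once on its first __getitem__ call.
import h5py
import numpy as np
from torch.utils.data import Dataset

class H5Dataset(Dataset):
    def __init__(self, h5_path):
        self.h5_path = h5_path
        with h5py.File(h5_path, 'r') as f:
            self.length = f["labels"].shape[0]
        self.labels = None                       # heavy arrays are NOT loaded here

    def _lazy_init(self):
        with h5py.File(self.h5_path, 'r') as f:
            self.labels = np.array(f["labels"])  # loaded once per worker process

    def __getitem__(self, idx):
        if self.labels is None:
            self._lazy_init()
        return self.labels[idx]

    def __len__(self):
        return self.length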
DataLoader
Regarding the instantiation of the dataloader, the process is a little convoluted because we’re running using hydra configs and it is made in a series of calls, but I have tried to pare it down into a reasonable example.
We have an engine class which handles training and so has the dataloaders as attributes. In our main_worker_function, we set this up using the config information using:
# Configure data loaders
for task, task_config in config.tasks.items():
    if 'data_loaders' in task_config:
        engine.configure_data_loaders(config.data, task_config.data_loaders, is_distributed, config.seed)
def configure_data_loaders(self, data_config, loaders_config, is_distributed, seed):
    """
    Set up data loaders from loaders config
    """
    for name, loader_config in loaders_config.items():
        self.data_loaders[name] = get_data_loader(**data_config, **loader_config, is_distributed=is_distributed, seed=seed)
We use the standard DataLoader class, passing a SubsetRandomSampler on a set of indices and our own dataset.
from hydra.utils import instantiate
from torch.utils.data import DataLoader
from torch.utils.data import SubsetRandomSampler

def get_data_loader(dataset, batch_size, sampler, num_workers, is_distributed, seed, split_path=None, split_key=None, transforms=None):
    split_indices = np.load(split_path, allow_pickle=True)[split_key]
    sampler = SubsetRandomSampler(split_indices)
    dataset = instantiate(dataset, transforms=transforms, is_distributed=is_distributed)
    return DataLoader(dataset, sampler=sampler, batch_size=batch_size, num_workers=num_workers)
Hydra instantiate simply returns the object specified in the dataset (which comes from the config). I’ve checked and the dataset at this point in the code is just our dataset object.
For details on the dataset, refer to the code in the section above this one. I can clarify details further if this is too complicated. |
st177445 | The multiprocessing and distributed packages confuse me a lot when I’m reading some code.
# the main function to enter
def main_worker(rank, cfg):
    trainer = Train(rank, cfg)

if __name__ == '__main__':
    torch.multiprocessing.spawn(main_worker, nprocs=cfg.gpus, args=(cfg,))

# here is a slice of the Train class
class Train():
    def __init__(self, rank, cfg):
        # nothing special
        if cfg.dist:
            dist.init_process_group(backend='gloo', rank=self.rank, world_size=self.config.gpus,
                                    init_method="file://./sharefile",
                                    timeout=datetime.timedelta(seconds=5))
            # init the model
            self.model = torch.nn.parallel.DistributedDataParallel(self.model, device_ids=[self.gpus])
Since I’m new to distributed coding, here are some things I can’t understand.
1
From my perspective, the spawn function starts a job by initializing a group of processes with args (rank, cfg), which means a group of main_worker functions. So every single main_worker, as a process, maintains a trainer object. But I see that inside the trainer object, the dist package also calls init_process_group to initialize a group of processes. What is the difference? Where does the true process lie? Are there two hierarchies of groups?
2
dist.init_process_group reports runtime errors. When I choose nccl as the backend, it runs and waits until reporting (no nccl module). How do I install an NCCL module? I then changed it to ‘gloo’; it runs and reports a runtime error with no message, so it is not able to run. How do I debug this?
3
init_process_group receives an argument init_method, which I can hardly understand. I left it None but it raised a message (No rendezvous handler for env://). I followed the tutorial to set the environment variables MASTER_ADDR and MASTER_PORT, but it seems to have no effect. So I had to change it to ‘file://./sharefile’; what is this? I am totally confused.
4
Now I’m running the code on my laptop, a single machine node with 1 GPU. But I need to train my model on a single machine node with multiple GPU cards. Is there any code to change?
I hope someone can solve my problem, or maybe point me to a tutorial or example, something comprehensive, not just a simple sample.
Thank you so much. |
st177446 | I also copied the code sample from the official PyTorch tutorials:
#!/usr/bin/env python
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process

def run(rank, size):
    """ Distributed function to be implemented later. """
    pass

def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    size = 2
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
The tutorial link is here: distributed_tutorial_link 2
But I still got an error.
This code could only partially run after setting init_method = ‘file://./sharedfile’.
I got the error:
Process Process-1:
Traceback (most recent call last):
File "C:\Anaconda3\envs\pt\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "C:\Anaconda3\envs\pt\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Kevin\Documents\Scripts\inpaint\test.py", line 13, in init_process
dist.init_process_group('gloo',rank=rank,world_size=size,init_method='file://./sharedfile')
File "C:\Anaconda3\envs\pt\lib\site-packages\torch\distributed\distributed_c10d.py", line 433, in init_process_group
timeout=timeout)
File "C:\Anaconda3\envs\pt\lib\site-packages\torch\distributed\distributed_c10d.py", line 508, in _new_process_group_helper
timeout=timeout)
RuntimeError
Process Process-2:
Traceback (most recent call last):
File "C:\Anaconda3\envs\pt\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "C:\Anaconda3\envs\pt\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Kevin\Documents\Scripts\inpaint\test.py", line 13, in init_process
dist.init_process_group('gloo',rank=rank,world_size=size,init_method='file://./sharedfile')
File "C:\Anaconda3\envs\pt\lib\site-packages\torch\distributed\distributed_c10d.py", line 433, in init_process_group
timeout=timeout)
File "C:\Anaconda3\envs\pt\lib\site-packages\torch\distributed\distributed_c10d.py", line 508, in _new_process_group_helper
timeout=timeout)
RuntimeError
Process finished with exit code 0
Is there anything wrong with this code sample or my laptop? |
st177447 | Kevinkevin189:
From my perspective, the spawn fcn start a job by initializing a group of procs with arg(rank,cfg), which means a groups of main_worker funtion. so every single main_worker,as a process, maintains a trainer object.But I see insde the trainer object, dist package also init process group to init a group of processes. What is the difference?where lies the true process?are there two hirearchy groups?
torch.mp.spawn spawns the actual processes, init_process_group doesn’t create any new processes but just initializes the distributed communication between spawned processes. For example if you spawn 4 processes using mp.spawn and call init_process_group on those 4 processes, init_process_group would ensure all 4 processes discover each other and now you can run collective calls like allreduce among those processes.
Kevinkevin189:
the dist.init_process_method reports runtime errors.When I choose nccl as backend,it runs and waits until reporting (no nccl module),How to install an nccl module?
You can find the NCCL installation guide here: https://docs.nvidia.com/deeplearning/nccl/install-guide/index.html
Kevinkevin189:
The init_process_group receives an argument init_method, which I can hardly understand,I leave it None but it bugged a message (No rendezvous handler for env://) ,I followed the tutorial to set environments variables MASTER_ADDR MASTER_PORT, it seems no effect.So I have to change it to ‘file://./sharefile’,what is this ?I totoally messed.
This section of our docs goes over details of init_method and how to use it: https://pytorch.org/docs/stable/distributed.html#tcp-initialization 3. In particular setting MASTER_ADDR and MASTER_PORT and leaving init_method=None should work.
Kevinkevin189:
Now I’m running the code on my laptop,a single machine node with 1 GPU.But I need to train my model on a single machine node with multiple GPU cards.Is there any code to change?
You would need some changes where you need to one process per GPU on your single machine node. You can use mp.spawn to create one process per GPU. |
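A minimal sketch of that one-process-per-GPU pattern on a single machine (the address and port values are arbitrary examples, not a definitive setup):
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def main_worker(rank, world_size):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)    # one GPU per process
    # ...build the model, wrap it in DistributedDataParallel(device_ids=[rank]), train...
    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    mp.spawn(main_worker, nprocs=world_size, args=(world_size,))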
st177448 | Kevinkevin189:
from torch.multiprocessing import Process

def run(rank, size):
    """ Distributed function to be implemented later. """
    pass

def init_process(rank, size, fn, backend='gloo'):
    """ Initialize the distributed environment. """
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    size = 2
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
For this code, what is the error that you encountered?
Kevinkevin189:
And this code could only partially run after setting init_method = ‘file://./sharedfile’.
Can you try something like file:///tmp/sharedfile instead? It looks like the file:// scheme is not handling ./ correctly. |
st177449 | Thanks for your replies. I can now understand processes and distributed a little better.
I found the real bug when running this tutorial code on a Linux machine. It seems Windows only supports the shared-file inter-process communication method, and because of a file permission issue it reported a RuntimeError with no error message. It also reported the same RuntimeError with a ‘Permission denied’ message.
I hope the official team will fix this ‘bug’ (an error with no explicit message is really annoying). |
st177450 | I got an error as follows:
RuntimeError: open(/sharedfile): Permission denied
Doesn’t this error message clearly indicate the problem? |
st177451 | Thanks!
I got a blank message on my laptop (Windows 10 Professional), and I only got the explicit message after transferring my code to a Linux machine. |
st177452 | I have a branching model in which several submodules may be executed independently in parallel. Below is the simplest example I could come up with:
class myModule(nn.Module):
    def __init__(self, size_x, size_y=5):
        super(myModule, self).__init__()
        self.f0 = nn.Linear(size_x, size_y)
        self.f1 = nn.Linear(size_x, size_y)
        self.g = nn.Linear(2*size_y, size_y)
        self.h0 = nn.Linear(2*size_y, size_x)
        self.h1 = nn.Linear(2*size_y, size_x)

    def forward(self, x):
        y0 = self.f0(x[:, 0, :])
        y1 = self.f1(x[:, 1, :])
        y = torch.cat((y0, y1), dim=1)
        yg = self.g(y)
        out = torch.zeros(x.shape)
        out[:, 0, :] = self.h0(torch.cat((y0, yg), dim=1))
        out[:, 1, :] = self.h1(torch.cat((y1, yg), dim=1))
        return out
The functions f0 and f1 act independently on separate channels of the input. Function g combines the outputs of f0 and f1 into a single vector fg which is then used by the h functions (along with the output of the corresponding f function) to attempt to recreate the input. I’ve just used linear layers as submodules in the example, but in my application, these are more complicated, are large in their own right, and there are more than 2 of them.
My goal is to execute f0 and f1 in parallel as well as h0 and h1 in parallel. I’m OK with blocking while g executes. I’m particularly interested in how to make sure that the backward pass executes in parallel. |
st177453 | You can use torch.jit.fork 1 for this, however note that your forward pass needs to run in TorchScript for this to actually work asynchronously. |
st177454 | jtencer:
I’m particularly interested in how to make sure that the backward pass executes in parallel.
If the two branches are on different GPU devices, then the backward pass would run them in parallel on the two different GPUs. If everything is on CPU, the backward pass currently only has single-threaded execution and everything would run on a single thread on the CPU. |
st177455 | Hey,
I have a big dataset and it is stored on different machines. I am trying to run DistributedDataParallel to train models on it, so each machine has its own data to train on. On each machine I have 3 GPUs, one process for each. I am trying to use DistributedSampler to distribute batches between the GPUs on each machine, so I just set world_size to the number of GPUs in each machine and also set the rank in each process.
For 2 machines, my real world size is 6 and ranks from 0-6. But for DistributedSampler, I set 3 and ranks from 0-3.
I am not sure if I am doing this the correct way.
Anyway, by doing this I got an error on this assertion:
assert len(indices) == self.num_samples
Thank you |
st177456 | Solved by pritamdamania87 in post #2 |
st177457 | sakh251:
But for DistributedSampler, I set 3 and rank from 0-3.
I think this might be the issue, for num_replicas=3, the ranks should be from 0-2 and not 0-3. I was able to reproduce the error you’re seeing if I do something like this (which is invalid):
sampler = DistributedSampler(dataset, num_replicas=3, rank=3) |
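For the per-machine sharding described in the question, a sketch of a valid call (assuming 3 GPUs per machine; the rank passed to the sampler must lie in [0, num_replicas)):
local_rank = global_rank % 3   # 0, 1 or 2 on each machine, assuming global ranks 0-5
sampler = DistributedSampler(dataset, num_replicas=3, rank=local_rank)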
st177458 | That was it: I was converting the global rank to the local rank incorrectly. I now just pass the number of replicas and the local rank, and it works.
Thank you |
st177459 | @ptrblck
@soumith chintala
Hello everyone. My friend and I are trying to implement multi-CPU distributed training in PyTorch using Python sockets. We looked at PyTorch’s built-in functions but didn’t find any starter code for detecting whether the CPUs (machines) are connected or not, so we decided to implement it with sockets.
Main problem:
We distributed the data perfectly, but I have a theoretical doubt. My plan is to compute the loss on each distributed CPU for its batches, send all those losses to the main computer, and compute the average.
I will then send that average loss back to all the distributed CPUs and update each model according to that loss so that the model weights stay synchronized. But the average loss I get back from the server computer has no autograd graph relating it to the model stored on each computer, so how can I tackle this?
Main doubt:
Can we update a model with respect to a loss that has no (autograd) relation to that model? |
st177460 | Solved by pritamdamania87 in post #2 |
st177461 | Typically you want to run the forward and backward pass on each process separately and then average the gradients across all processes and then run the optimizer independently on each process.
I’m wondering what is the reason you’re trying to build this yourself. PyTorch has a Distributed
Data Parallel module, which does all of this for you. |
st177462 | Hi, thank you for posting. We tried it yesterday and forgot to update here.
First problem
Coming to your question, we looked at various articles but we are confused about using this. For example, we first have to find out whether PyTorch identifies our connected computers. Could you please provide starter code to check that?
Second problem
If our main computer runs the training code, what should our client computers run in order to accept and process the data? We don’t have any idea. We connected our computers using a LAN cable. Thank you for answering…) |
st177463 | Sanjayvarma11:
Coming to your question, we looked at various articles but we are confused about using this. For example, we first have to find out whether PyTorch identifies our connected computers. Could you please provide starter code to check that?
You can find a tutorial for this here: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html. In particular the init_process_group part that is called on every node is how PyTorch identifies all the connected nodes.
Sanjayvarma11:
If our main computer runs the training code, what should our client computers run in order to accept and process the data? We don’t have any idea. We connected our computers using a LAN cable. Thank you for answering…)
To use DDP, all your nodes would actually run the same code and train the same model on all nodes. DDP would take care of gradient synchronization across all nodes to ensure all the models are in sync. |
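A hedged sketch of that setup for the two-machine CPU case (both computers run this same script; only the RANK environment variable differs, 0 on the main computer and 1 on the other; the MASTER_ADDR value below is illustrative and should be the LAN IP of the main computer):
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ.setdefault('MASTER_ADDR', '192.168.1.10')
os.environ.setdefault('MASTER_PORT', '29500')
rank = int(os.environ['RANK'])                  # 0 or 1
dist.init_process_group('gloo', rank=rank, world_size=2)

model = DDP(torch.nn.Linear(10, 2))             # gloo backend works on CPU
# run forward/backward as usual; DDP averages the gradients across the two machines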
st177464 | Is it always recommended to use SYNC BN every time you use DDP? Is there any exception? |
st177465 | Usually SYNC BN is used because for large training runs the batch size per GPU is pretty small and you can’t gather enough statistics independently for batch normalization. If you have a large enough batch size per GPU, you might be able to get away without SYNC BN.
Although, a lot of this depends on your model and you should try with and without SYNC BN to see if your model converges fine in both cases. If your model converges fine without SYNC BN, I’d recommend avoiding SYNC BN since there is a perf overhead while running with SYNC BN due to synchronization among processes. |
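If you do decide to use SyncBatchNorm, the typical conversion is just the following (a sketch; it assumes the process group is already initialized and local_rank is this process's GPU index):
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model = torch.nn.parallel.DistributedDataParallel(model.cuda(local_rank), device_ids=[local_rank])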
st177466 | Hi
I have a large dataset with an iterative dataloader, shown below (it samples over multiple tasks):
class TaskDataLoader:
    """Wrapper around dataloader to keep the task names."""
    def __init__(self, task, dataset, batch_size=8,
                 collate_fn=None, drop_last=False,
                 num_workers=0, sampler=None):
        self.dataset = dataset
        self.task = task
        self.batch_size = batch_size
        self.data_loader = DataLoader(self.dataset,
                                      batch_size=batch_size,
                                      sampler=sampler,
                                      collate_fn=collate_fn,
                                      drop_last=drop_last,
                                      num_workers=num_workers)

    def __len__(self):
        return len(self.data_loader)

    def __iter__(self):
        for batch in self.data_loader:
            batch["task"] = self.task
            yield batch

# Note not to use itertools.cycle since it is
# doing some caching under the hood, resulting
# in issues in the dataloading pipeline.
# see https://docs.python.org/3.7/library/itertools.html?highlight=cycle#itertools.cycle
def cycle(iterable):
    while True:
        for x in iterable:
            yield x

class MultiTaskDataLoader:
    """Given a dictionary of task: dataset, returns a multi-task dataloader
    which uses temperature sampling to sample different datasets."""
    def __init__(self, max_steps, tasks_to_datasets, batch_size=8, collate_fn=None,
                 drop_last=False, num_workers=0, temperature=100.0):
        # Computes a mapping from task to dataloaders.
        self.task_to_dataloaders = {}
        for task, dataset in tasks_to_datasets.items():
            dataloader = TaskDataLoader(task, dataset, batch_size,
                                        collate_fn=collate_fn, drop_last=drop_last, num_workers=num_workers)
            self.task_to_dataloaders.update({task: dataloader})
        self.tasknames = list(self.task_to_dataloaders.keys())
        # Computes the temperature sampling weights.
        self.sampling_weights = self.temperature_sampling(self.dataloader_sizes.values(), temperature)
        self.dataiters = {k: cycle(v) for k, v in self.task_to_dataloaders.items()}
        self.max_steps = max_steps

    def temperature_sampling(self, dataset_sizes, temp):
        total_size = sum(dataset_sizes)
        weights = np.array([(size / total_size) ** (1.0 / temp) for size in dataset_sizes])
        return weights / np.sum(weights)

    @property
    def dataloader_sizes(self):
        if not hasattr(self, '_dataloader_sizes'):
            self._dataloader_sizes = {k: len(v) for k, v in self.task_to_dataloaders.items()}
        return self._dataloader_sizes

    def __len__(self):
        return sum(v for k, v in self.dataloader_sizes.items())

    def num_examples(self):
        return sum(len(dataloader.dataset) for dataloader in self.task_to_dataloaders.values())

    def __iter__(self):
        for i in range(self.max_steps):
            taskname = np.random.choice(self.tasknames, p=self.sampling_weights)
            dataiter = self.dataiters[taskname]
            outputs = next(dataiter)
            yield outputs
I need to use it for distributed training over GPU/TPUs, for this I shard the data across the cores:
def get_sharded_data(self, num_replicas, rank):
    """Returns the sharded data belonging to the given rank."""
    sharded_dataset_names_to_datasets = {}
    for dataset_name, dataset in self.train_dataset.items():
        sharded_data = dataset.shard(num_replicas, rank)
        sharded_dataset_names_to_datasets.update({dataset_name: sharded_data})
    self.train_dataset = sharded_dataset_names_to_datasets
    return self.train_dataset

def get_train_dataset_shards(self):
    """In case of multiprocessing, returns the sharded data for the given rank."""
    if is_torch_tpu_available():
        if xm.xrt_world_size() > 1:
            return self.get_sharded_data(num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal())
        else:
            return self.train_dataset
    elif self.args.local_rank != -1:
        # note: this GPU branch also calls xm.* (the torch_xla API);
        # torch.distributed.get_world_size()/get_rank() would be the non-TPU equivalents
        return self.get_sharded_data(num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal())
    else:
        return self.train_dataset

def get_train_dataloader(self) -> DataLoader:
    """
    Returns the training :class:`~torch.utils.data.DataLoader`.
    Will use no sampler if :obj:`self.train_dataset` does not implement :obj:`__len__`, a random sampler (adapted
    to distributed training if necessary) otherwise.
    Subclass and override this method if you want to inject some custom behavior.
    """
    train_dataset = self.get_train_dataset_shards()
    return MultiTaskDataLoader(
        max_steps=self.args.max_steps,
        tasks_to_datasets=train_dataset,
        batch_size=self.args.train_batch_size,
        collate_fn=self.data_collator,
        drop_last=self.args.dataloader_drop_last,
        num_workers=self.args.dataloader_num_workers)
but as you can see, the single-task dataloader does not use any sampler. This does NOT work with distributed training and does not make the program run faster. Could you point me to the possible issues? Thanks |
st177467 | Can you use the DistributedSampler 16 instead to shard and train your data in a distributed fashion? |
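For example, a sketch of plugging it into the per-task loaders above (rank and world size are assumed to come from the already-initialized process group; the helper name is illustrative):
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def get_task_loader(task, dataset, batch_size, rank, world_size, collate_fn=None):
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=True)
    return TaskDataLoader(task, dataset, batch_size=batch_size,
                          collate_fn=collate_fn, sampler=sampler)

# Call sampler.set_epoch(epoch) at the start of every epoch so shuffling differs across epochs.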