st49968 | Hi folks!
I have been going through the PyTorch documentation in search of a way to do an efficient per-index replacement of values inside a tensor. The problem:
dest - 4D tensor (N, H, W, C1) that I want to update
idxs - 4D tensor of indexes I want to access (N, H, W, C2)
source - 4D tensor of values I want to put in dest at idxs (N, H, W, C2).
In practice dest a minibatch of per-pixel probabilities, where C1 is a number of distinct categories and C2 is top 5 updates from source. Note that C1 != C2.
I have been looking at torch.index_copy_ but I can't seem to find a way to broadcast it along the N, H, W dimensions.
My (working) solution with a for loop:
def col_wise_replace(dest, idxs, source):
    ''' replace probability stacks '''
    n, h, w = dest.shape[:3]
    for k in range(n):
        for i in range(h):
            for j in range(w):
                dest[k, i, j].index_copy_(0, idxs[k, i, j], source[k, i, j])
This solution is slow. Any numpy / torch-like way I can achieve the same? |
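A minimal vectorized sketch (not from the thread) using Tensor.scatter_ along the channel dimension; it assumes idxs holds class indices in [0, C1) and has the same (N, H, W, C2) shape as source:
import torch
N, H, W, C1, C2 = 2, 4, 4, 10, 5
dest = torch.zeros(N, H, W, C1)
idxs = torch.randint(0, C1, (N, H, W, C2))
source = torch.rand(N, H, W, C2)
# for every (n, h, w, c): dest[n, h, w, idxs[n, h, w, c]] = source[n, h, w, c]
dest.scatter_(-1, idxs, source)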
st49969 | @albanD
I am wondering how can I make use of torch.scatter_() or any other built-in masking function for this one-hot masking task-
I have two tensors-
X = [batch, 100] and label = [batch]
num_classes = 10
so each label corresponds to 10 of those 100 elements in 'X'.
For instance X of shape [1x100],
X = ([ 0.0468, -1.7434, -1.0217, -0.0724, -0.5169, -1.7318, -0.1207, -0.8377,
-0.8055, 0.7438, 0.1139, 1.2162, -1.7950, 1.7416, -1.2031, -1.4833,
-0.5454, 0.2466, -1.2303, -0.4257, 0.9873, -1.5905, -1.3950, 0.4013,
-1.0523, 1.4450, 0.6574, 1.5239, -0.3503, -0.1114, 1.8192, -1.7425,
0.4678, 0.4074, 1.7606, -1.0502, 0.0724, 0.1721, 0.1108, 0.4453,
0.2278, -1.5352, -0.1232, 1.1052, 0.2496, 1.2898, -0.4167, -0.8211,
0.2340, -0.3829, -0.1328, 0.1033, 2.8693, -0.8802, -0.0433, 0.5335,
0.0662, 0.4250, 0.2353, -0.1590, 0.0865, 0.6519, -0.2242, 1.5300,
1.7021, -0.9451, 0.5845, -0.7309, 0.7124, 0.6544, -1.4426, -0.1859,
-1.5313, -1.5391, -0.2138, -1.0203, 0.6678, 1.3445, -1.3453, 0.5222,
0.9510, 0.0969, -0.5437, -0.2727, -0.6090, -2.9624, 0.4578, 0.5257,
-0.2866, 0.0818, -1.2454, 1.6511, 0.1634, 1.3720, -0.4222, 0.5347,
0.3586, -0.3506, 2.6866, 0.5084])
label = [3]
I would like to do one-hot masking on the tensor 'X': set elements 30-40 to '1' and all the remaining elements to '0'.
so for,
label = 1 -> mask (0 to 10) as '1' and rest as '0'
label = 2 -> mask (10 to 20) as '1' and rest as '0'
... and so on. |
st49970 | Hi,
You most likely want to view X as a Tensor of size [batch, 10, 10] before doing the masking to make things clearer.
Then you can fill in the values with 0s and 1s as needed. |
st49971 | You can reshape it back when youโre done
Or even better donโt reshape it inplace and just do something like:
x = XXXX
x_view = x.view(batch, 10, 10)
# fill the values corresponding to label idx with 1s
x_view.select(1, label_idx) = 1
# Now you can use x that was changed by the inplace op. |
st49972 | @albanD
The second line gives an error. BTW, I added an example at the end of my post, because I am not able to see how this will give me the masking I am trying to get.
>>> w_view.select(1, label) = 1
File "<stdin>", line 1
SyntaxError: can't assign to function call |
st49973 | Ho right I forgot this was not valid, my bad. You can do w_view.select(1, label).fill_(1).
But for your case, I guess what you want is:
labels = # Tensor or list of batch integers
res = torch.zeros(batch, 10, 10)
for label in labels:
    res.select(1, label).fill_(1)
res = res.view(batch, 100) |
st49974 | @albanD
Thank you for the solution. However, it still gives me the masking of all the labels in all the vectors. Instead, each label should only be reflected in its own vector!
label = ([0, 7])
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) |
st49975 | I am not sure what you mean.
Can you share an example with input tensors and what you expect as output? |
st49976 | @albanD I found another similar way to do it;
Maybe this will help you understand my goal!
for i in range(batch_size):
    lab = torch.arange(elementPerClass*label[i], elementPerClass*(label[i]+1), 1, dtype=torch.long)
    if lab.nelement() == 0:
        print("Label tensor -- empty weight.")
    else:
        mask[i, lab] = ones |
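A minimal vectorized sketch (not from the thread) of the same block mask, assuming X is [batch, 100], label is a LongTensor of shape [batch], and elementPerClass = 10; it uses the [batch, 10, 10] view suggested above and advanced indexing instead of the Python loop:
import torch
batch, num_classes, elementPerClass = 4, 10, 10
label = torch.randint(0, num_classes, (batch,))
mask = torch.zeros(batch, num_classes, elementPerClass)
mask[torch.arange(batch), label] = 1.0  # one block of ten 1s per sample
mask = mask.view(batch, num_classes * elementPerClass)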
st49977 | Not really no
Can you share a full code sample of a few lines that I can run to be able to print the input/output? |
st49978 | hello guys,
i had been save a train model ResNet in collab but if i load this model with spyder (ubuntu) i have this error (RuntimeError: [enforce fail at inline_container.cc:144] . PytorchStreamReader failed reading zip archive: failed finding central directory)
this is the save and the load ! |
st49979 | Are you using torchscript model? if so, you should use torch.jit.save for saving a torchscript model. |
st49980 | Besides what @Keyv_Krmn mentioned, check if you are using the same PyTorch versions, since loading a state_dict should be backwards compatible but not necessarily forwards compatible.
I.e. storing a state_dict in an older PyTorch version and loading it with a newer one should work, while the opposite direction could fail. |
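A minimal sketch (not the poster's code, the model is just an example) of the usual state_dict save/load flow for a plain, non-TorchScript model; map_location='cpu' avoids device mismatches when loading on a different machine:
import torch
import torchvision
model = torchvision.models.resnet18()
torch.save(model.state_dict(), 'resnet.pth')  # e.g. in Colab
state_dict = torch.load('resnet.pth', map_location='cpu')  # e.g. in Spyder on Ubuntu
model.load_state_dict(state_dict)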
st49981 | What I am trying to do right now is to write a multi layer conv2d encoder and freeze the weights from updating for the earlier layers. This hopefully would give me back a similar effect like progressively growing the layers. This way I can initialize the complete network first without worrying about how to mix and match and add new layers to the network. So before I start to write a complex model I thought I would experiment with freezing a small network for testing purpose. The result is already different from what I was expecting. I assumed setting grad to zero would stop the updating of the weights so the end weight would stay the same but I was wrong. Below is my testing code.
encoder = nn.Sequential( nn.Conv2d(1, 4, 1), nn.ReLU() ).cuda()
criterion = torch.nn.BCELoss()
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.001)
for params in encoder.parameters():
    print('params:', params)
    params.require_grad = False # Freeze all weights
params: [ [[[ 0.1500]]], [[[0.9332]]], [[[-0.1422]]], [[[-0.7685]] ] ....
epochs = 2
target = torch.randn(32, 1, 4, 4).cuda()
for epoch in range(epochs):
    random_input = torch.randn(32, 1, 4, 4).cuda()
    Y_pred = encoder(random_input)
    loss = criterion(Y_pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 2 == 0:
        print('[Epoch:{} -- Loss:{:.4f}]'.format(epoch, loss.item()))
[Epoch:0 -- Loss: 0.8634] # loss getting updated
[Epoch:2 -- Loss: 0.8574]
for params in encoder.parameters(): # looping through the encoder to see if my weights are still the same
    print('params:', params)
params: [ [[[ 0.1433]]], [[[0.9233]]], [[[-0.1333]]], [[[-0.7586]] ] ... # weight value updates?
As you can see if I print out the new param after 2 epochs my weights still got updated. I would like to cancel those update and only use them as input conversion layers like toRGB or fromRGB layers in the progressive gan paper. |
st49982 | Solved by SimonW in post #6
requires_grad
you are missing an 's' |
st49983 | Hi,
This looks like L2 regularization or similar behaviour of the optimizer: all your weights are slightly closer to 0. |
st49984 | After reading your comment I used SGD instead of Adam and set weight decay to 0 which suppose to stop L2 regularization and my weight still updates? Am I misunderstanding the concept of freezing weights? Are the weights suppose to be updated but never used or their values are not suppose to change? |
st49985 | Weโve had this discussion a while ago and for investigation of this effect I created a gist 291 showing how optim.Adam updates the parameters even without a gradient once the running estimates were set.
Could you compare your code to the example and make sure no momentum etc. is set? |
st49986 | Hi ptrblck I thought ill make it easier by switching to SGD instead and I have set everything to default values and I made sure momentum is zero, I even restarted my computer just to make sure I have a clean slate. I still have this problem, I donโt think its just Adam because of SGD still updates my weights after the require_grad = False. Actually while I was typing this I added a print statement right after require_grad = False and it prints require_grad to True?? Did I write my code wrong?
My code:
import torch
import torch.nn as nn
encoder = nn.Sequential( nn.Conv2d(1, 4, 1), nn.Sigmoid())
for params in encoder.parameters():
    params.require_grad = False
    print(params.requires_grad)
The print statement comes after I change require_grad to False, but when I print out the setting I get 2 True outputs. |
st49987 | Oh my god, thank you no wonder this doesnโt work I had a typo all along. So what is require_grad then? It doesnโt throw an error. Thank you so much SimonW you must be a very handsome person in real life. |
st49988 | Itโs just assigning a new attribute. For python objects, if you do a.b = c, a.b doesnโt have to exist before this. |
st49989 | Hi @ptrblck ,
I was wondering if you finally found a workaround to this problem. I am doing some transfer learning where I initialize a model with the parameters of another previously trained model and freeze the last few fully connected layers. Here is how I typically do it:
for name, param in self.model.named_parameters():
    # tells whether we want to use gradients for a given parameter
    if freeze:
        param.requires_grad = False
        print("Freezing parameter "+name)
The transfer looks fine and the parameters are initially identical, but when I compare the respective min, max, mean and std values for each parameter of each layer in both models, some of the frozen instances start to vary after a few epochs. See below for a case where I froze group1-4:
The model I am transferring the weights from:
Name Min Max Mean Std
---------------------- ----------- ----------- ------------ ------------
module.group1.0.weight -0.13601 0.135853 0.000328239 0.078537
module.group1.0.bias -0.129506 0.13031 -0.000709442 0.0761818
module.group2.0.weight -0.0156249 0.015625 -6.8284e-06 0.00902701
module.group2.0.bias -0.0150966 0.0152359 0.000233364 0.00887235
module.group3.0.weight -0.0110485 0.0110485 8.25103e-06 0.00637962
module.group3.0.bias -0.0109931 0.0109642 -0.000212902 0.00620885
module.group4.0.weight -0.0078125 0.0078125 -1.07069e-06 0.00451099
module.group4.0.bias -0.0077329 0.00775102 -0.000157763 0.00451984
module.fc1.0.weight -0.00195312 0.00195312 -1.05901e-08 0.00112767
module.fc1.0.bias -0.00195279 0.0019521 5.93193e-05 0.00113513
module.fc2.0.weight -0.0312486 0.0312499 -2.94543e-05 0.0180225
module.fc2.0.bias -0.0312394 0.0289709 -0.00238465 0.0186226
module.fc3.0.weight -0.100976 0.0989116 -0.00164936 0.0606025
module.fc3.0.bias -0.059265 -0.059265 -0.059265 nan
The model that I initialized through TL:
Name Min Max Mean Std
---------------------- ----------- ----------- ------------ ------------
module.group1.0.weight -0.136078 0.136051 0.00138295 0.0788667
module.group1.0.bias -0.135537 0.135878 0.00912299 0.0691942
module.group2.0.weight -0.0156247 0.0156249 -2.81046e-05 0.00902321
module.group2.0.bias -0.0151269 0.0152803 0.000945539 0.0088397
module.group3.0.weight -0.0110485 0.0110485 -7.81598e-06 0.00637801
module.group3.0.bias -0.0110323 0.0109976 -0.000282283 0.00675859
module.group4.0.weight -0.0078125 0.0078125 -8.4189e-07 0.00451147
module.group4.0.bias -0.00777942 0.00779467 -2.26952e-05 0.00466924
module.fc1.0.weight -0.00195312 0.00195312 1.48078e-07 0.00112768
module.fc1.0.bias -0.00194499 0.00195289 5.32243e-05 0.00112104
module.fc2.0.weight -0.0312488 0.0312494 -5.54657e-06 0.0180232
module.fc2.0.bias -0.0304042 0.0306912 0.00134896 0.018436
module.fc3.0.weight -0.0996469 0.101409 -0.00436459 0.0568807
module.fc3.0.bias -0.0561954 -0.0561954 -0.0561954 nan
Any insight would be appreciated!
Thanks, |
st49990 | Are you freezing the parameters from the beginning and are you using e.g. weight decay?
If so, could you only pass the parameters which require grads to the optimizer and run the code again? |
st49991 | I am actually freezing them from the beginning and I do use weight decay.
I believe I am already passing only the parameters that require grads to the optimizer. See below:
self.optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, self.model.parameters()), lr=self.learning_rate, weight_decay=self.penalty) |
st49992 | Your approach looks alright.
Could you post a minimal code snippet to reproduce the issue? |
st49993 | Hi Julien,
Any luck or insights on this? I have similar issues: use transfer learning, freeze some layers, and the weights of those frozen layers still get updated. |
st49994 | Were these parameters trained before and are you using an optimizer with internal states, e.g. Adam?
If so, note that the running internal states might still update the frozen parameters, as seen in this code snippet:
# Setup
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.enc = nn.Linear(64, 10)
        self.dec = nn.Linear(10, 64)
    def forward(self, x):
        x = F.relu(self.enc(x))
        x = self.dec(x)
        return x
x = torch.randn(1, 64)
y = x.clone()
model = MyModel()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1.)
# dummy updates
for idx in range(10):
    optimizer.zero_grad()
    output = model(x)
    loss = criterion(output, y)
    loss.backward()
    optimizer.step()
    print('Iter{}, loss {}'.format(idx, loss.item()))
optimizer.zero_grad()
# Freeze encoder
for param in model.enc.parameters():
    param.requires_grad_(False)
# Store reference parameter
enc_weight0 = model.enc.weight.clone()
# Update for more iterations
for idx in range(10):
    optimizer.zero_grad()
    output = model(x)
    loss = criterion(output, y)
    loss.backward()
    optimizer.step()
    print('Iter{}, loss {}'.format(idx, loss.item()))
print('max abs diff in enc.weight {}'.format(
    (enc_weight0 - model.enc.weight).abs().max()))
print('sum abs grad in enc.weight {}'.format(
    model.enc.weight.grad.abs().sum())) |
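Two hedged workarounds (a sketch reusing the model/optimizer names from the snippet above, assuming a recent PyTorch): (a) zero the gradients with set_to_none=True, so frozen parameters keep grad=None and the optimizer skips them; (b) rebuild the optimizer with only the trainable parameters after freezing.
optimizer.zero_grad(set_to_none=True)  # (a) optimizers skip params whose .grad is None
optimizer = optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1.)  # (b)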
st49995 | Hi @ptrblck
I'm using pre-trained Places365-resnet50 as a base model and added a new fc layer. Only the newly added fc layer is trained to classify sun attributes. So in one pass I can predict both places 365 categories and sun attributes.
Here is my model:
# the architecture to use
arch = 'resnet50'
# load the pre-trained weights
model_file = '%s_places365.pth.tar' % arch
if not os.access(model_file, os.W_OK):
    weight_url = 'http://places2.csail.mit.edu/models_places365/' + '%s_places365.pth.tar' % arch
    os.system('wget ' + weight_url)
model = models.__dict__[arch](num_classes=365)
checkpoint = torch.load(model_file, map_location=lambda storage, loc: storage)
state_dict = {str.replace(k, 'module.', ''): v for k, v in checkpoint['state_dict'].items()}
model.load_state_dict(state_dict)
class CustomizedResNet(nn.Module):
    def __init__(self):
        super(CustomizedResNet, self).__init__()
        # Resnet 50 as base model
        self.base_model = model
        def hook_feature(module, input, output):
            self.feature = output
        self.base_model._modules.get('avgpool').register_forward_hook(hook_feature)
        self.scene_attr_fc = nn.Linear(2048, 102)
        # freeze weights
        for param in self.base_model.parameters():
            param.requires_grad = False
        for param in self.scene_attr_fc.parameters():
            param.requires_grad = True
    def forward(self, x):
        places365_output = self.base_model(x)
        # compute scene attributes
        # feed the outputs from avgpool to the new fc layer
        attributes_output = self.feature.view(self.feature.size(0), -1)
        attributes_output = self.scene_attr_fc(attributes_output)
        return places365_output, attributes_output
customized_model = CustomizedResNet()
And I only pass the parameters of the new fc layer to the optimizer.
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = optim.SGD(customized_model.scene_attr_fc.parameters(), lr=learning_rate)
My training parts look like this:
torch.save(customized_model.state_dict(), 'before_training.pth')
for epoch in range(num_epochs):
    customized_model.train()
    for i, (inputs, labels) in enumerate(dataloader):
        inputs = inputs.to(device)
        labels = labels.to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        with torch.set_grad_enabled(True):
            _, scene_attr_outputs = customized_model(inputs)
            loss = criterion(scene_attr_outputs, labels)
            loss.backward()
            optimizer.step()
    torch.save(customized_model.state_dict(), 'model_saved_at_epoch_%s.pth' % epoch)
And my testing parts look like this:
data_transforms = {
    'test': transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
}
customized_model.load_state_dict(torch.load('model_saved_at_epoch_9.pth'))
customized_model.eval()
images_test = ['stone.jpg']
for img_path in images_test:
    # load test image
    img = Image.open(img_path).convert('RGB')
    img = data_transforms['test'](img)
    img = img.to(device)
    # prediction
    places365_outputs, scene_attr_outputs = customized_model.forward(img.unsqueeze(0))
    # prediction for places365
    # print(places365_outputs.shape) -> torch.Size([1, 365])
    h_x = F.softmax(places365_outputs, 1).data.squeeze()
    probs, idx = h_x.sort(0, True)
    print('places 365 prediction on {}'.format(img_path))
    for i in range(0, 5):
        # classes stores all 365 labels
        print('{:.3f} -> {}'.format(probs[i], classes[idx[i]]))
The problem is, if I use the models saved at different epochs to predict the same image, the prediction of places 365 changes even though I already froze all weights for the places 365 branch.
For example,
If I use the model saved before any training happens, its prediction is
places 365 prediction on stone.jpg
0.298 -> coast
0.291 -> ocean
0.172 -> beach
0.067 -> ice_floe
0.051 -> sky
If I use model_saved_at_epoch_3.pth, it gives
places 365 prediction on stone.jpg
0.308 -> coast
0.259 -> ocean
0.132 -> beach
0.107 -> sky
0.041 -> cliff
model_saved_at_epoch_13.pth gives:
places 365 prediction on stone.jpg
0.295 -> coast
0.234 -> ocean
0.152 -> sky
0.111 -> beach
0.047 -> cliff
I even compared the weights of the base_model after every epoch to the original weights, and it looks like the weights didn't change:
# before training occurs
original_weights = []
for name, param in customized_model.base_model.named_parameters():
    original_weights.append(param.clone())
for epoch in range(num_epochs):
    # training .....
    max_abs_diff_sum = 0
    idx = 0
    for epoch_name, epoch_param in customized_model.base_model.named_parameters():
        max_abs_diff_sum += (original_weights[idx] - epoch_param).abs().max()
        idx += 1
    print(max_abs_diff_sum) # all print tensor(0., device='cuda:0'). So I think the weights of base_model didn't change
Do you have any idea on why the probability distribution is changing even if I froze all weights for the places 365 branch (and it also looks like the base_model weights are the same)?
Thank you. |
st49996 | If you are using batch norm layers in the base model, the running estimates will still be updated even if youโve frozen the affine parameters.
To fix the running stats, you would have to call .eval() on all batch norm layers.
Also, dropout layers might still be active, which could explain the different results.
You could also call .eval() on all dropout layers or alternatively on the self.base_model to disable these effects. |
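A minimal sketch of that suggestion (assuming the CustomizedResNet above): switch only the frozen backbone's batch norm and dropout modules to eval mode, and call this after every customized_model.train(), since .train() flips them back:
import torch.nn as nn
def freeze_backbone_stats(model):
    for m in model.base_model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d, nn.Dropout)):
            m.eval()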
st49997 | Hi, I found this both interesting and crucial. I want to share my experience here as well and hope it is useful.
Let us assume we have a variable B. I have seen a case where I explicitly set B.require_grad = False but B is still updated during fine-tuning. This could happen even when I put the operations involving B within torch.no_grad().
My solution is to modify the declaration of B in the code. Something like this:
self.register_buffer('B', torch.ones(<shape of B>, requires_grad=False))
With that, when I load a pretrained model I noticed B is loaded properly. More importantly, when I fine-tune the model I noticed B is indeed frozen. This is a bit manual but it works. |
st49998 | I have created a issue in pytorch repo also: https://github.com/pytorch/pytorch/issues/43467 4
I am using pytorch data loader and runs into some strange error as below, we generate training data in each hour. and I run this experiment to just read training data csv file and parse line by line, no real training stuff is going. we define the dataloader __get_item_ to get a single file out of bunch of files(>5000 files). and each worker dataloader is suppose to read that file and parse csv.
I checked it should not be __get_item index out of boundary, is it because some training file we loaded is too large? it is not always repeatable error, so I am not sure it may be related to some training data? is it possible one training file was too large and basically the multi worker data loader overflow?
Traceback (most recent call last):
File "/home/miniconda/lib/python3.6/multiprocessing/queues.py", line 240, in _feed
send_bytes(obj)
File "/home/miniconda/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/miniconda/lib/python3.6/multiprocessing/connection.py", line 393, in _send_bytes
header = struct.pack("!i", n)
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
It looks like multiprocessing cannot handle more than 2GB of data; is there any way we can enlarge the size multiprocessing can handle? |
st49999 | For running mean/variance like tensor, we can use self.register_buffer(name, tensor) to manger it.
However, self.register_buffer must use name to refer to the tensor. In some case, I have a list of buffer tensors and I dnt want to use name to manager these buffers, for instance:
class MyModule(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.params = [torch.zeros(3, 3) for i in range(n)]
        for i, p in enumerate(self.params):
            self.register_buffer('tensor' + str(i), p)
and then I want to access self.params. However, once we call MyModule(n).to('cuda'), the tensors in self.params still point to the initial CPU tensors, and we need to use self.tensor0/1/... to refer to the CUDA tensors. That's not what I wanted.
I wish to still refer to these tensors via self.params.
Could anyone help me? |
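A minimal workaround sketch (not from the thread, the names are illustrative): keep registering the buffers by name, but expose them through a property that collects them on demand, so the list always reflects the tensors after .to('cuda') has moved them:
import torch
import torch.nn as nn
class BufferListModule(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.n = n
        for i in range(n):
            self.register_buffer('tensor' + str(i), torch.zeros(3, 3))
    @property
    def params(self):
        # getattr returns the registered buffer, which nn.Module moves with the module
        return [getattr(self, 'tensor' + str(i)) for i in range(self.n)]
m = BufferListModule(3)
m.to('cuda' if torch.cuda.is_available() else 'cpu')
print(m.params[0].device)  # follows the module's device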
st175000 | Hi @cindybrain, happy to hear that!
In fact the underlying API will allow you to store arbitrary blobs in shared memory. We deliberately limited the scope in the RFC to have feedback on the proposal's core functionality. |
st175001 | Hello,
I am training the exact same network in 3 settings: 1 GPU (A100), 1 GPU using gradient accumulation (Titan), and 2 GPUs (V100) via DataParallel.
The problem is that I get 3 fairly different training outcomes with 3 fairly different validation results.
The effective batch size of all 3 experiments is the same. For the 1 GPU and 2 GPU setting without gradient accumulation, it is 6 and for the single GPU + gradient accumulation it is 3 with accumulation step of 2.
Only group norms are used in the network (no batch norms are used).
I use the same learning rate for all the experiments.
For the experiment using a single GPU with gradient accumulation I divide the loss by the accumulation steps as suggested here. (Is this really necessary?)
I would appreciate any hints on what I might be doing wrong. Shouldn't the training be fairly similar in these 3 cases? |
st175002 | For the experiment using a single GPU with gradient accumulation I divide the loss by the accumulation steps as suggested here 1. (Is this really necessary?)
The linked post focuses on DistributedDataParallel (DDP), which is different from DataParallel (DP). For DDP, every process/rank/device computes its own local loss, while for DP, the loss is global across all devices. I am not sure if this contributes to the discrepancy you saw.
The effective batch size of all 3 experiments is the same.
What do you mean by effective batch size? I assume the input batch size used in every iteration should be the same? |
st175003 | Thank you for your reply.
The linked post focuses on DistributedDataParallel (DDP), which is different from DataParallel (DP). For DDP, every process/rank/device compute its own local loss, while for DP, the loss is global across all devices. I am not sure if this contributes to the discrepancy you saw.
It is true that the loss is global across all the GPUs, but I think that should still hold for doing gradient accumulation. I do not divide the loss if I use multiple GPUs, but I should divide the loss if I have gradient accumulation step > 1. Right?
Because if I run the model for a batch and get some loss and some gradients, and again run the model for another batch and get another loss and other gradients, and accumulate the gradients for both batches (because I do not run optimizer.zero_grad() after the first backward()), it is as if those loss values are summed up, not averaged. Therefore it makes sense to divide the loss by the accumulation steps. Is that not true?
What do you mean by effective batch size? I assume the input batch size used in every iteration should be the same?
Yes. Each time before the optimizer.step() the same number of training samples are fed to the network.
For example: in case of one GPU with gradient accumulation, I set the batch size to 3 and the accumulation step to 2 (so overall I have 6). In the case of 1 GPU setting without gradient accumulation I set the batch size to 6 and for the case of 2 GPUs, I set the overall batch size to 6 (so 3 on each GPU). |
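A minimal gradient-accumulation sketch (not from the thread; model, criterion, optimizer and loader are assumed to already exist): dividing each micro-batch loss by accum_steps makes the accumulated gradient match the gradient of one large batch whose loss is averaged over all accum_steps * batch_size samples.
accum_steps = 2
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = criterion(model(x), y) / accum_steps
    loss.backward()  # gradients add up across the micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()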
st175004 | I have a single file that contains N samples of data that I want to split into train and val subsets while using DDP. However, I am not entirely sure I am going about this correctly because I am seeing replicated training samples on multiple processes. To reproduce the issue I created a simple dataset class. Hereโs my current way of doing this -
class TrivialDataset(Dataset):
    def __init__(self, N):
        self.length = N
    def __len__(self):
        return self.length
    def __getitem__(self, index):
        return index
The above class is for illustration purposes only - this lets me print the indices retrieved on each DDP process.
dataset = TrivialDataset(20)
train_subset, val_subset = torch.utils.data.random_split(dataset, [16, 4])
train_dataset = Subset(dataset, train_subset)
val_dataset = Subset(dataset, val_subset)
I can confirm that the train and val datasets above have independent indices. Next, I create a different sampler for the train and val subsets :
seed = 10
train_sampler = DistributedSampler(train_dataset, num_replicas=world_size, rank=global_rank, shuffle=False, seed=seed, drop_last=True)
val_sampler = DistributedSampler(val_dataset, num_replicas=world_size, rank=global_rank, shuffle=False, seed=seed, drop_last=True)
Finally, I create the data loaders for each subset
train_loader = DataLoader(train_dataset, batch_size=batch_size, sampler=train_sampler)
val_loader = DataLoader(val_dataset, batch_size=batch_size, sampler=val_sampler)
Now if I iterate over the train_loader on 2 pytorch DDP processes (and print the indices retrieved by each train_loader), I see duplicates on the two processes. I would expect that each process would have independent indices. The expectation is that the train_subset indices and val_subset indices would be independently split across 2 processes, but that's not what is happening. The ultimate goal is to scale up across N>>2 processes. I'm not sure where I am going wrong. If I don't split the indices into train and val subsets, the code works as expected. I've searched this forum but haven't come across a similar issue.
Thanks! |
st175005 | pysam:
DistributedSampler
cc @VitalyFedyunin for DistributedSampler questions |
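A hedged sketch of one common cause (not a confirmed diagnosis for this thread): random_split uses the global RNG, so each DDP process can produce a different train/val split unless a shared generator/seed is used; also note that random_split already returns Subset objects, so re-wrapping them in Subset is not needed. With an identical split on every rank, DistributedSampler then hands out disjoint index shards per rank.
import torch
from torch.utils.data import random_split
g = torch.Generator().manual_seed(42)  # same seed on every rank
train_dataset, val_dataset = random_split(dataset, [16, 4], generator=g)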
st175006 | I wanted to ask, does utilizing DistributedDataParallel impact validation/testing? Iโm wondering if it is still possible to โpauseโ after each epoch and test the model.
Does DistributedDataParallel prevent testing the model during the training loop (after an epoch)?
Since the data and model are trained on different devices, I was unsure if there was an issue with testing the model.
Also, if you can test, should it be done on only a single node, or how should the testing loop be incorporated? For all training agents?
For reference the docs are shown here. |
st175007 | github.com
pytorch/pytorch/blob/d9106116aa5e399f7d63feeb7fc77f92a076dd93/torch/nn/parallel/distributed.py#L939-L949
self.require_forward_param_sync = True
# We'll return the output object verbatim since it is a freeform
# object. We need to find any tensors in this object, though,
# because we need to figure out which parameters were used during
# this forward pass, to ensure we short circuit reduction for any
# unused parameters. Only if `find_unused_parameters` is set.
if self.find_unused_parameters and not self.static_graph:
    # Do not need to populate this for static graph.
    self.reducer.prepare_for_backward(list(_find_tensors(output)))
else:
    self.reducer.prepare_for_backward([])
The code above would make DistributedDataParallel expect a backward after the current forward. But you can run the validation/testing forward pass within a with torch.no_grad(): context to force it into a different branch. |
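A minimal sketch of validating inside a DDP training loop (not from the thread; ddp_model, val_loader and device are assumed to exist): running the forward under torch.no_grad() avoids preparing for a backward pass, and every rank evaluates its own shard.
ddp_model.eval()
correct, total = 0, 0
with torch.no_grad():
    for x, y in val_loader:
        x, y = x.to(device), y.to(device)
        pred = ddp_model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
print('val acc: {:.4f}'.format(correct / total))
ddp_model.train()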
st175008 | I try to broadcast a tensor(which is a scalar in practice) from rank0 to other ranks in a machine. But i found the tensor in rank1,2,3 not equals to rank0.
Code
if ddp.get_rank() == 0:
    model.eval()
    fit_accurary = utils.AverageMeter()
    for batch_idx, (inputs, targets) in enumerate(testLoader):
        inputs, targets = inputs.to(device), targets.to(device)
        with torch.no_grad():
            outputs = model(inputs)
        predicted = utils.accuracy(outputs, targets, topk=(1, 5))
        fit_accurary.update(predicted[1], inputs.size(0))
    if fit_accurary.avg == 0:
        fit_accurary.avg = 0.01
    if fit_accurary.avg > best_honey.fitness:
        best_honey_state = copy.deepcopy(model.module.state_dict())
        best_honey.code = copy.deepcopy(honey)
        best_honey.fitness = fit_accurary.avg
    avg_acc = float(fit_accurary.avg)
else:
    avg_acc = float(0)
avg_acc = (torch.tensor(avg_acc, dtype=torch.float)).to(device)
dist.broadcast(avg_acc, src=0)
#avg_acc = utils.send_and_wait(r=0, data=[avg_acc])
avg_acc = avg_acc.cpu().detach().numpy().item()
print(avg_acc)
return avg_acc
Results
0.009999999776482582 rank0
-0.0011623682221397758 rank1
-0.0011623682221397758 rank2
-0.0011623682221397758 rank3 |
st175009 | Bone:
0.009999999776482582 rank0
-0.0011623682221397758 rank1
-0.0011623682221397758 rank2
-0.0011623682221397758 rank3
Unless avg_acc was already synced on ranks 1-3, it looks like rank0 has successfully broadcast to the other three ranks (so that they share the same value) and then avg_acc was somehow modified afterwards, before the prints.
Can you try adding a stream sync after the broadcast and print immediately after that? Something like:
dist.broadcast(avg_acc, src=0)
torch.cuda.current_stream(device).synchronize()
print(avg_acc)
BTW, would I be correct that each process/rank exclusively occupies one device? |
st175010 | I have encounter RuntimeError when I use DistributedDataParallel:
error
[W python_anomaly_mode.cpp:104] Warning: Error detected in CudnnBatchNormBackward0. Traceback of forward call that caused the error:
File "main.py", line 489, in <module>
img_A_id = decoders[1](encoder(CT_patches))
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/THL5/home/hugpu1/MR2CT/resnet.py", line 212, in forward
x8 = self.layer4(x7)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/THL5/home/hugpu1/MR2CT/resnet.py", line 64, in forward
residual = self.downsample(x)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 168, in forward
return F.batch_norm(
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 2282, in batch_norm
return torch.batch_norm(
(function _print_stack)
0it [00:07, ?it/s]
Traceback (most recent call last):
File "main.py", line 503, in <module>
GAN_total_loss.backward()
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/THL5/home/hugpu1/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [512]] is at version 9; expected version 8 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
However, if I comment out the DistributedDataParallel lines, everything works fine.
My code is as follows:
code
def setmodels(device):
    opt = Option()
    encoder, _ = generate_model(opt)
    encoder = encoder.to(device)
    decoders = [Gen().to(device), Gen().to(device)]
    extractors = [Extractor().to(device), Extractor().to(device)]
    Discriminators = [Dis().to(device), Dis().to(device)]
    if torch.distributed.is_initialized():  # change this line to False, then it works fine.
        encoder = torch.nn.parallel.DistributedDataParallel(
            encoder,
            device_ids=[torch.distributed.get_rank() % torch.cuda.device_count()],
            output_device=torch.distributed.get_rank() % torch.cuda.device_count(),
            find_unused_parameters=True)
        decoders[0] = torch.nn.parallel.DistributedDataParallel(
            decoders[0],
            device_ids=[torch.distributed.get_rank() % torch.cuda.device_count()],
            output_device=torch.distributed.get_rank() % torch.cuda.device_count(),
            find_unused_parameters=True)
        decoders[1] = torch.nn.parallel.DistributedDataParallel(
            decoders[1],
            device_ids=[torch.distributed.get_rank() % torch.cuda.device_count()],
            output_device=torch.distributed.get_rank() % torch.cuda.device_count(),
            find_unused_parameters=True)
        extractors[0] = torch.nn.parallel.DistributedDataParallel(
            extractors[0],
            device_ids=[torch.distributed.get_rank() % torch.cuda.device_count()],
            output_device=torch.distributed.get_rank() % torch.cuda.device_count(),
            find_unused_parameters=True)
        extractors[1] = torch.nn.parallel.DistributedDataParallel(
            extractors[1],
            device_ids=[torch.distributed.get_rank() % torch.cuda.device_count()],
            output_device=torch.distributed.get_rank() % torch.cuda.device_count(),
            find_unused_parameters=True)
        Discriminators[0] = torch.nn.parallel.DistributedDataParallel(
            Discriminators[0],
            device_ids=[torch.distributed.get_rank() % torch.cuda.device_count()],
            output_device=torch.distributed.get_rank() % torch.cuda.device_count(),
            find_unused_parameters=True)
        Discriminators[1] = torch.nn.parallel.DistributedDataParallel(
            Discriminators[1],
            device_ids=[torch.distributed.get_rank() % torch.cuda.device_count()],
            output_device=torch.distributed.get_rank() % torch.cuda.device_count(),
            find_unused_parameters=True)
    return encoder, decoders, extractors, Discriminators
if __name__ == '__main__':
    setEnv()
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    encoder, decoders, extractors, Discriminators = setmodels(device)
    data_loader = setDataloader()
    lr = {"encoder": 0.001, "decoders0": 0.01, "decoders1": 0.01, "extractors0": 0.01, "extractors1": 0.01, "Discriminators0": 0.01, "Discriminators1": 0.01}
    encoder_optimizer, decoder_optimizers, extractor_optimizers, Dis_optimizers = setOptimizers(encoder, decoders, extractors, Discriminators, lr)
    lambda_cycle = 10
    lambda_id = 0.9 * lambda_cycle
    loss_hparam = {"lambda_cycle": lambda_cycle, "lambda_id": lambda_id}
    writer = setTensorboard("./tb_logs")
    hparam = {}
    hparam.update(lr)
    hparam.update(loss_hparam)
    if writer is not None:
        writer.add_hparams(hparam, {"accuracy": 1})
    lossfn = nn.MSELoss()
    loss_mae = nn.L1Loss()
    loss_binary = nn.BCEWithLogitsLoss()
    batch_size = 4
    epochs = 200
    valid = torch.ones((batch_size, 1)).to(device)
    fake = torch.zeros((batch_size, 1)).to(device)
    global_step = 0
    with torch.autograd.set_detect_anomaly(True):
        for epoch in range(epochs):
            tk = tqdm(enumerate(data_loader.load_batch(batch_size)), position=0, leave=True)
            for i, (CT_patches, MR_patches, CT_para, MR_para) in tk:
                CT_patches, MR_patches, CT_para, MR_para = CT_patches.to(device), MR_patches.to(device), CT_para.to(device), MR_para.to(device)
                set_require_grad(decoders[0], False)
                set_require_grad(decoders[1], False)
                d_A_loss_real = loss_binary(Discriminators[0](CT_patches), valid)
                Dis_optimizers[0].zero_grad()
                d_A_loss_real.backward()
                Dis_optimizers[0].step()
                # generated (fake) sample loss
                fake_CT = decoders[1](encoder(MR_patches))
                valid_A = Discriminators[0](fake_CT)
                d_A_loss_fake = loss_binary(valid_A, fake)
                Dis_optimizers[0].zero_grad()
                d_A_loss_fake.backward()
                Dis_optimizers[0].step()
                # train the mean/std predictors (extractors)
                false_CT_para = extractors[0](CT_patches)
                loss_CTpara = lossfn(false_CT_para, CT_para)
                extractor_optimizers[0].zero_grad()
                loss_CTpara.backward()
                extractor_optimizers[0].step()
                false_MR_para = extractors[1](MR_patches)
                loss_MRpara = lossfn(false_MR_para, MR_para)
                extractor_optimizers[1].zero_grad()
                loss_MRpara.backward()
                extractor_optimizers[1].step()
                # real sample loss
                d_B_loss_real = loss_binary(Discriminators[1](MR_patches), valid)
                Dis_optimizers[1].zero_grad()
                d_B_loss_real.backward()
                Dis_optimizers[1].step()
                # generated (fake) sample loss
                fake_MR = decoders[0](encoder(CT_patches))
                valid_B = Discriminators[1](fake_MR)
                d_B_loss_fake = loss_binary(valid_B, fake)
                Dis_optimizers[1].zero_grad()
                d_B_loss_fake.backward()
                Dis_optimizers[1].step()
                # train the generators
                set_require_grad(decoders[0], True)
                set_require_grad(decoders[1], True)
                del fake_CT, valid_A, fake_MR, valid_B
                fake_CT = decoders[1](encoder(MR_patches))
                valid_A = Discriminators[0](fake_CT)
                fake_MR = decoders[0](encoder(CT_patches))
                valid_B = Discriminators[1](fake_MR)
                reconstr_CT = decoders[1](encoder(fake_MR))
                reconstr_MR = decoders[0](encoder(fake_CT))
                img_A_id = decoders[1](encoder(CT_patches))
                img_B_id = decoders[0](encoder(MR_patches))
                loss1 = loss_binary(valid_A, valid)
                loss2 = loss_binary(valid_B, valid)
                loss3 = lambda_cycle * loss_mae(reconstr_CT, CT_patches)
                loss4 = lambda_cycle * loss_mae(reconstr_MR, MR_patches)
                loss5 = lambda_id * loss_mae(img_A_id, CT_patches)
                loss6 = lambda_id * loss_mae(img_B_id, MR_patches)
                GAN_total_loss = loss1 + loss2 + loss3 + loss4 + loss5 + loss6
                encoder_optimizer.zero_grad()
                decoder_optimizers[0].zero_grad()
                decoder_optimizers[1].zero_grad()
                Dis_optimizers[0].zero_grad()
                Dis_optimizers[1].zero_grad()
                GAN_total_loss.backward()
                decoder_optimizers[0].step()
                decoder_optimizers[1].step()
I also include some other, less important code:
unimportant code
def setDataloader():
    if torch.distributed.is_initialized():
        world_size = int(os.environ['SLURM_NTASKS'])
        data_loader = DataLoader(torch.distributed.get_rank(), world_size)
    else:
        data_loader = DataLoader()
    return data_loader
def setEnv():
    if torch.distributed.is_available() and torch.cuda.device_count() > 1:
        rank = int(os.environ['SLURM_PROCID'])
        local_rank = int(os.environ['SLURM_LOCALID'])
        world_size = int(os.environ['SLURM_NTASKS'])
        ip = get_ip()
        dist_init(ip, rank, local_rank, world_size)
def setOptimizers(encoder, decoders, extractors, Discriminators, lr):
    decoder_optimizers = [optim.Adam(decoders[i].parameters(), lr=lr["decoders" + str(i)]) for i in range(2)]
    extractor_optimizers = [optim.Adam(extractors[i].parameters(), lr=lr["extractors" + str(i)]) for i in range(2)]
    encoder_optimizer = optim.SGD(encoder.parameters(), lr=lr["encoder"], momentum=0.9, weight_decay=1e-3)
    Dis_optimizers = [optim.Adam(Discriminators[i].parameters(), lr["Discriminators" + str(i)]) for i in range(2)]
    return encoder_optimizer, decoder_optimizers, extractor_optimizers, Dis_optimizers
def setTensorboard(path):
    if torch.distributed.is_initialized():
        if torch.distributed.get_rank() == 0:
            writer = SummaryWriter(path)
        else:
            writer = None
    else:
        writer = SummaryWriter(path)
    return writer
def recordDict(writer, global_step, **kwargs):
    for key, value in kwargs.items():
        writer.add_scalar(key, value, global_step)
def set_require_grad(model, trainable):
    for param in model.parameters():
        param.requires_grad = trainable
def get_ip():
    slurm_nodelist = os.environ.get("SLURM_NODELIST")
    if slurm_nodelist:
        root_node = slurm_nodelist.split(" ")[0].split(",")[0]
    else:
        root_node = "127.0.0.1"
    if '[' in root_node:
        name, numbers = root_node.split('[', maxsplit=1)
        number = numbers.split(',', maxsplit=1)[0]
        if '-' in number:
            number = number.split('-')[0]
        number = re.sub('[^0-9]', '', number)
        root_node = name + number
    return root_node
def dist_init(host_addr, rank, local_rank, world_size, port=23456):
    host_addr_full = 'tcp://' + host_addr + ':' + str(port)
    torch.distributed.init_process_group("nccl", init_method=host_addr_full,
                                         rank=rank, world_size=world_size)
    num_gpus = torch.cuda.device_count()
    torch.cuda.set_device(local_rank)
    assert torch.distributed.is_initialized()
Could anyone help me? |
st175011 | Hey @yllgl, this might be caused by a coalesced broadcast on buffers. Does your model contain buffers? If so, do you need DDP to sync buffers for you? If not, you can try setting broadcast_buffers=False in DDP ctor. |
st175012 | Hi, everyone. The configuration I use is Tesla A100, cuda11.0, cudnn8.0.4,pytorch1.7.1. But when I run the following code๏ผ
import torch
a = torch.randn(2, 3, device='cuda:0')
print(a)
b = a.to('cuda:1')
print(b)
The outputs are:
tensor([[-2.0747, -0.5964, -0.3556],
[-0.5845, 0.3389, -0.3725]], device='cuda:0')
tensor([[0., 0., 0.],
[0., 0., 0.]], device='cuda:1')
I am now trying to confirm whether this problem is caused at the code level (pytorch, cuda, cudnn) or whether there is something wrong with the GPU installation. Thanks! |
st175013 | Could you update to the latest stable release (1.8.0) and rerun the code?
Such an error might be software or hardware related and it's hard to tell just from these symptoms. |
st175014 | @ptrblck, thank you for your answer. Now I update to the latest stable release 1.8.0, and I run the following code:
import torch
import torch.utils
import torch.utils.cpp_extension
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.cuda.device_count())
print(torch.cuda.get_device_name(0))
print(torch.utils.cpp_extension.CUDA_HOME)
a = torch.randn(2, 3, device='cuda:0')
print(a)
b = a.to('cuda:1')
print(b)
The outputs are:
1.8.0+cu111
True
8
A100-PCIE-40GB
/usr/local/cuda-11.1
tensor([[-0.9957, -0.9985, 1.1794],
[-1.4586, -0.0102, -0.0106]], device='cuda:0')
tensor([[0., 0., 0.],
[0., 0., 0.]], device='cuda:1')
Below is my driver configuration:
[screenshot of the driver configuration]
And now I use cuda11.1, cudnn8.1.0.
Do you have any more suggestions to help me troubleshoot the cause (software and hardware) of this problem? Thank you very much. |
st175015 | I tried it on a system with 8x A100s and get a valid output:
1.8.0
True
8
A100-SXM4-40GB
/usr/local/cuda
tensor([[ 0.0440, 1.6352, -0.7515],
[ 1.6543, -0.5374, -0.8127]], device='cuda:0')
tensor([[ 0.0440, 1.6352, -0.7515],
[ 1.6543, -0.5374, -0.8127]], device='cuda:1')
Used driver: 460.32.03.
Could you check dmesg for any Xids and post them here? |
st175016 | @ptrblck, I donโt know how to check dmesg for any Xids . When I run dmesg, it outputs a lot of information, and Xid is not included in the information. Should I post all of them here? And what command should I run? Thanks! |
st175017 | You can use dmesg -T | grep xid and check for the most recent one (if there are any).
Don't post the complete log here, as it should be quite long
st175018 | @ptrblck, after running dmesg -T | grep xid, it outputs nothing . Will the difference between A100-PCIE-40GB and A100-SXM4-40GB lead to this problem?
And I found some new and interesting phenomena. When I run the following code๏ผ
import torch
a = torch.randn(2, 3, device='cuda:0')
print(a)
b = torch.ones(2, 3, device='cuda:1')
print(b)
a1 = a.to('cuda:1')
print(a1)
b1 = b.to('cuda:0')
print(b1)
print(a1 is b)
print(b1 is a)
The output is:
tensor([[-1.7723, -0.0860, -1.6667],
[ 3.1190, -0.2531, -0.7271]], device='cuda:0')
tensor([[1., 1., 1.],
[1., 1., 1.]], device='cuda:1')
tensor([[1., 1., 1.],
[1., 1., 1.]], device='cuda:1')
tensor([[-1.7723, -0.0860, -1.6667],
[ 3.1190, -0.2531, -0.7271]], device='cuda:0')
False
False
The value of a1 is equal to b and the value of b1 is equal to a. I wonder if this phenomenon can provide new clues. |
st175019 | @ptrblck , I just run the cuda-sample p2pBandwidthLatencyTest 2 which demonstrates Peer-To-Peer (P2P) data transfers between pairs of GPUs and computes latency and bandwidth., and it outputs:
[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, A100-PCIE-40GB, pciBusID: 1, pciDeviceID: 0, pciDomainID:0
Device: 1, A100-PCIE-40GB, pciBusID: 24, pciDeviceID: 0, pciDomainID:0
Device: 2, A100-PCIE-40GB, pciBusID: 41, pciDeviceID: 0, pciDomainID:0
Device: 3, A100-PCIE-40GB, pciBusID: 61, pciDeviceID: 0, pciDomainID:0
Device: 4, A100-PCIE-40GB, pciBusID: 81, pciDeviceID: 0, pciDomainID:0
Device: 5, A100-PCIE-40GB, pciBusID: a1, pciDeviceID: 0, pciDomainID:0
Device: 6, A100-PCIE-40GB, pciBusID: c1, pciDeviceID: 0, pciDomainID:0
Device: 7, A100-PCIE-40GB, pciBusID: e1, pciDeviceID: 0, pciDomainID:0
Device=0 CAN Access Peer Device=1
Device=0 CAN Access Peer Device=2
Device=0 CAN Access Peer Device=3
Device=0 CAN Access Peer Device=4
Device=0 CAN Access Peer Device=5
Device=0 CAN Access Peer Device=6
Device=0 CAN Access Peer Device=7
Device=1 CAN Access Peer Device=0
Device=1 CAN Access Peer Device=2
Device=1 CAN Access Peer Device=3
Device=1 CAN Access Peer Device=4
Device=1 CAN Access Peer Device=5
Device=1 CAN Access Peer Device=6
Device=1 CAN Access Peer Device=7
Device=2 CAN Access Peer Device=0
Device=2 CAN Access Peer Device=1
Device=2 CAN Access Peer Device=3
Device=2 CAN Access Peer Device=4
Device=2 CAN Access Peer Device=5
Device=2 CAN Access Peer Device=6
Device=2 CAN Access Peer Device=7
Device=3 CAN Access Peer Device=0
Device=3 CAN Access Peer Device=1
Device=3 CAN Access Peer Device=2
Device=3 CAN Access Peer Device=4
Device=3 CAN Access Peer Device=5
Device=3 CAN Access Peer Device=6
Device=3 CAN Access Peer Device=7
Device=4 CAN Access Peer Device=0
Device=4 CAN Access Peer Device=1
Device=4 CAN Access Peer Device=2
Device=4 CAN Access Peer Device=3
Device=4 CAN Access Peer Device=5
Device=4 CAN Access Peer Device=6
Device=4 CAN Access Peer Device=7
Device=5 CAN Access Peer Device=0
Device=5 CAN Access Peer Device=1
Device=5 CAN Access Peer Device=2
Device=5 CAN Access Peer Device=3
Device=5 CAN Access Peer Device=4
Device=5 CAN Access Peer Device=6
Device=5 CAN Access Peer Device=7
Device=6 CAN Access Peer Device=0
Device=6 CAN Access Peer Device=1
Device=6 CAN Access Peer Device=2
Device=6 CAN Access Peer Device=3
Device=6 CAN Access Peer Device=4
Device=6 CAN Access Peer Device=5
Device=6 CAN Access Peer Device=7
Device=7 CAN Access Peer Device=0
Device=7 CAN Access Peer Device=1
Device=7 CAN Access Peer Device=2
Device=7 CAN Access Peer Device=3
Device=7 CAN Access Peer Device=4
Device=7 CAN Access Peer Device=5
Device=7 CAN Access Peer Device=6
***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.
P2P Connectivity Matrix
D\D 0 1 2 3 4 5 6 7
0 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 1154.84 14.60 14.67 14.33 15.10 15.62 15.94 17.48
1 21.10 1290.26 15.47 15.56 15.70 15.75 16.02 17.27
2 15.24 15.12 1177.47 14.75 15.01 15.67 15.89 17.55
3 15.13 14.92 15.05 1167.79 16.10 16.14 16.37 17.47
4 15.11 15.22 15.12 21.26 1292.39 17.51 18.07 15.18
5 15.15 15.27 15.11 18.93 21.74 1292.39 18.05 16.26
6 15.02 15.60 19.99 21.53 21.61 21.34 1291.32 15.86
7 14.74 17.48 21.25 21.47 21.58 21.34 21.59 1295.61
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 1156.55 2.11 2.78 2.78 2.78 2.78 2.78 2.78
1 1.24 1291.32 1.62 1.63 1.63 1.33 1.62 1.62
2 2.40 2.78 1293.46 2.78 2.78 2.78 2.78 2.78
3 2.41 2.17 2.78 1293.46 2.78 2.78 2.78 2.78
4 2.14 2.44 2.78 2.78 1293.46 2.78 2.78 2.78
5 2.33 2.17 2.78 2.78 2.78 1291.32 2.78 2.78
6 2.40 2.44 2.78 2.78 2.44 2.78 1293.46 2.78
7 2.14 2.78 2.78 2.78 2.78 2.78 2.78 1293.46
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 1180.58 20.51 19.38 19.44 20.11 20.03 20.32 23.85
1 23.52 526.72 19.13 19.57 19.82 19.82 20.03 29.41
2 19.88 18.65 1204.24 18.97 19.56 19.41 19.60 29.57
3 19.53 20.43 19.69 1291.32 20.39 20.29 20.53 23.41
4 19.79 20.58 21.06 22.56 1306.98 21.44 21.89 20.29
5 19.89 20.85 20.41 22.24 30.66 1308.08 21.96 21.02
6 19.60 21.33 19.79 22.55 30.23 30.73 1308.63 20.59
7 20.16 29.87 21.30 23.69 22.56 25.27 23.09 1306.98
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 1184.16 3.25 5.43 4.79 4.28 4.83 4.80 4.81
1 3.25 487.44 2.66 3.24 3.25 3.25 1.87 3.24
2 4.70 3.24 1309.17 4.82 4.28 4.32 4.80 4.83
3 3.87 3.23 5.57 1304.26 4.28 4.83 4.29 3.87
4 3.87 2.65 4.23 5.57 1308.63 4.28 5.45 4.81
5 4.73 2.65 4.77 4.28 3.90 1308.63 4.80 4.24
6 4.20 3.23 4.79 4.24 4.28 4.28 1309.72 4.79
7 4.28 2.65 4.29 5.43 4.83 4.79 4.29 1304.80
P2P=Disabled Latency Matrix (us)
GPU 0 1 2 3 4 5 6 7
0 4.34 23.57 23.59 23.67 22.00 21.84 22.72 21.99
1 23.53 2.31 21.76 21.65 15.04 18.91 19.76 18.93
2 23.57 14.54 2.37 20.29 21.53 15.93 18.08 15.33
3 23.58 21.56 21.56 2.33 16.20 20.58 21.47 20.71
4 22.91 15.80 15.57 19.92 2.38 20.55 20.56 18.27
5 22.50 17.00 19.01 17.91 20.55 2.38 20.58 20.54
6 22.93 14.91 20.48 12.59 20.56 20.58 2.30 19.28
7 22.30 21.55 13.68 21.55 21.55 21.55 21.55 2.50
CPU 0 1 2 3 4 5 6 7
0 3.79 11.84 12.26 11.93 10.96 10.78 10.57 11.15
1 12.22 3.54 11.48 11.29 10.21 10.22 9.91 10.45
2 12.05 11.31 3.59 11.17 10.09 10.12 9.83 10.39
3 11.93 11.12 11.06 3.53 10.00 10.05 9.78 10.21
4 11.18 10.38 10.46 10.22 3.24 9.45 9.19 9.56
5 11.12 10.40 10.50 10.28 9.38 3.15 9.21 9.59
6 10.95 10.18 10.29 10.10 9.18 9.25 3.08 9.46
7 11.62 10.76 10.86 10.51 9.54 9.53 9.33 3.77
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1 2 3 4 5 6 7
0 4.35 49204.95 49204.80 49204.80 49204.78 49204.77 49204.79 49204.79
1 49205.36 2.31 49204.98 49204.97 49205.17 49204.88 49205.20 49205.15
2 49204.99 49204.85 2.34 49204.81 49204.84 49204.82 49204.76 49204.83
3 49205.01 49204.90 49204.84 2.33 49204.84 49204.89 49204.86 49204.79
4 49204.87 49204.77 49204.73 49204.75 2.39 49204.68 49204.75 49204.73
5 49204.92 49204.78 49204.81 49204.81 49204.78 2.38 49204.80 49204.76
6 49205.02 49204.82 49204.87 49204.86 49204.85 49204.83 2.29 49204.85
7 49205.04 49204.88 49204.90 49204.86 49204.82 49204.83 49204.85 2.47
CPU 0 1 2 3 4 5 6 7
0 5.10 2.95 3.02 3.08 3.01 3.03 3.01 2.97
1 2.98 3.78 2.75 2.85 2.87 2.86 2.86 2.91
2 2.96 2.77 3.81 2.81 3.05 2.83 2.98 2.72
3 3.02 2.84 2.77 3.72 2.83 2.86 2.83 2.80
4 2.82 2.60 2.53 2.62 3.38 2.59 2.63 2.53
5 2.53 2.56 2.53 2.57 2.43 3.46 2.49 2.38
6 2.66 2.66 2.55 2.51 2.74 2.67 6.27 4.33
7 2.70 2.62 2.56 2.58 2.67 2.66 2.57 4.59
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
I wonder if this results can provide new clues. |
st175020 | Thanks for the input.
Iโll follow up with internal teams on what the next debug steps are.
I'll follow up with internal teams on what the next debug steps are.
Could you post more information about your system? I.e. is it a custom workstation or a specific pre-configured server?
EDIT:
Are you seeing the same issue using different devices?
E.g. cuda:0 to cuda:2 etc.?
Also, for the failing case, could you add torch.cuda.synchronize() before printing b? |
st175021 | I also cannot reproduce this issue on a node with 8x A100 PCIE.
Could you check if ACS is turned off via:
lspci -vvvv | grep ACSCtl:
It should return a lot of ACSCtl: SrcValid-, where SrcValid- shows that ACS is indeed turned off. |
st175022 | @ptrblck, Thank you very much for your kind help .
1.For the following question:
Could you post more information about your system? I.e. is it a custom workstation or a specific pre-configured server?
It 's a custom workstation, which has 2 AMD cpu (EPYC 7742) supporting PCIE4 and 1T memory. lnk and sta info are below:
11031ร348 8.4 KB
2. For the following question:
Are you seeing the same issue using different devices? E.g. cuda:0 to cuda:2 etc.?
When I run the following code:
import torch
a = torch.randn(2, 3, device='cuda:0')
print(a)
b = a.to('cuda:2')
print(b)
The outputs are:
tensor([[-0.2846, 0.5795, 1.2842],
[ 0.3382, -0.4902, -0.8187]], device='cuda:0')
tensor([[0., 0., 0.],
[0., 0., 0.]], device='cuda:2')
3. For the following question:
Also, for the failing case, could you add torch.cuda.synchronize() before printing b?
When I run the following code:
import torch
a = torch.randn(2, 3, device='cuda:0')
print(a)
b = a.to('cuda:2')
torch.cuda.synchronize()
print(b)
The outputs are:
tensor([[ 1.3674, -1.1252, -0.1123],
[ 0.4165, 0.7612, 0.4003]], device='cuda:0')
tensor([[0., 0., 0.],
[0., 0., 0.]], device='cuda:2')
4. When I run lspci -vvvv | grep ACSCtl:, the outputs are:
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
pcilib: sysfs_read_vpd: read failed: Input/output error
pcilib: sysfs_read_vpd: read failed: Input/output error
pcilib: sysfs_read_vpd: read failed: Input/output error
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
pcilib: sysfs_read_vpd: read failed: Input/output error
ACSCtl: SrcValid+ TransBlk- ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans-
ACSCtl: SrcValid+ TransBlk- ReqRedir+ CmpltRedir+ UpstreamFwd+ EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
There are some pcilib: sysfs_read_vpd: read failed: Input/output error messages. |
st175023 | Sorry for the late reply, but I just revisited this topic by chance and took another look at the last outputs.
Could you check the NCCL troubleshooting 3 and disable ACS? |
st175024 | When I tried DDP to train a model, I found it's not difficult. For testing, I found different code examples. The simplest one only tests the model on GPU 0 and stops all the other processes. But I got an error like this:
[E ProcessGroupNCCL.cpp:566] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1806103 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
And I think only testing on the main GPU is a waste of computing resources, so I changed my code like this:
for epoch_id in range(max_epoch):
train_loader.sampler.set_epoch(epoch_id)
for it, (input, category) in enumerate(train_loader):
model.train()
optim.zero_grad()
logits = model(input)
loss = criterion(logits, category)
loss.backward()
optim.step()
iter += 1
if iter % 1000 == 0:
dist.barrier()
acc = test(test_loader, model) # dist.reduce here
if dist.get_rank() == 0:
if acc > best_acc:
save(model)
I don't want to test the model at the end of each epoch because the dataset is really large. I use a counter to test the model every 1000 iterations, so I set dist.barrier() to keep the processes in sync for testing, and I use dist.reduce() to collect the results.
My question is: is this the right way to test a model in DDP? When my code runs into the test() function, are the weights of the models in different processes the same or not? Am I using dist.barrier() in the right way? |
st175025 | I think your example should work as expected whether or not there is a dist.barrier(): DDP is synced at loss.backward().
P.S.
Testing/inference with DDP is somewhat trickier than training. If you use DistributedSampler to scatter your data, you should make sure the number of test batches is divisible by the number of your GPUs, otherwise the results might be incorrect.
Say there are 100 batches in your test set and 8 GPUs (100 % 8 = 4). The DistributedSampler will repeat part of the data and expand it to 104 (104 % 8 = 0) so that the data can be loaded evenly onto each GPU.
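The padding is easy to see without starting any process group, since num_replicas and rank can be passed explicitly (a standalone sketch with 100 samples across 8 replicas, same idea as the batches above):
import torch
from torch.utils.data import TensorDataset, DistributedSampler

dataset = TensorDataset(torch.arange(100))        # 100 samples, 8 "GPUs"
samplers = [DistributedSampler(dataset, num_replicas=8, rank=r, shuffle=False)
            for r in range(8)]
print([len(s) for s in samplers])                 # 8 ranks x 13 samples = 104 indices
The 4 extra indices are repeats of existing samples, which is what can make the test metrics slightly off. |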
st175026 | Got it. I noticed this special difference between training and testing and made sure the data is divisible by the number of GPUs and the batch size. It has worked well so far. Thank you! |
st175027 | Hello, I am wondering how exactly these lines of code work:
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset, num_replicas=args.world_size, rank=rank)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=False, num_workers=0, pin_memory=True, sampler=train_sampler)
If world_size = 8 with 8 GPUs, does this dataloader take 8 batches at a time, giving each GPU one distinct batch? For example, with 16 batches [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...] and no shuffling, GPU 1 takes batch 0, GPU 2 takes batch 1, GPU 3 takes batch 2, and so on. If not, can someone tell me how exactly it works? |
st175028 | Hi, I am using DistributedDataParallel as shown in the tutorial 10. I have 2 GPUs in a single machine. I want to train the model on all ranks but evaluate it only on rank 0. I set up a barrier during the evaluation, but rank 1 cannot get out of the barrier. My code flow is like this:
def train( self, resume=False ):
for i in range( self._epoch+1, self._niter+1 ):
self._train_sampler.set_epoch(i)
self._train()
if self._testset is not None and i%1 == 0 :
if not self.distributed or self.rank==0:
print('rank {} go to validation'.format(self.rank))
self._validate()
if self.distributed:
print('rank {} go to barrier'.format(self.rank))
dist.barrier()
print('rank {} go out of barrier'.format(self.rank))
self._epoch = i
self.save_training(self._cfg.path.CHECKPOINT.format(self._epoch))
if hasattr(self, '_scheduler'):
self._scheduler.step()
However, rank 1 freezes after validation. The output is like this:
training…
start to validate rank 1 go to barrier
validating…
rank 0 go to barrier
rank 0 go out of barrier
checkpoint is saved in xxx…
Then it just freezes at the next epoch of training.
The validation code is like:
def _validate( self ):
if isinstance(self._model, DDP):
if self.rank != 0:
return
print('start to validate')
self._model.eval()
results = []
with torch.no_grad() :
for idx,(inputs, targets) in enumerate(tqdm.tqdm(self._testset, 'evaluating')):
inputs = self._set_device( inputs )
output = self._model( inputs)
batch_size = len(output['boxes'])
for i in range(batch_size):
if len(output['boxes'][i]) == 0:
continue
# convert to xywh
output['boxes'][i][:,2] -= output['boxes'][i][:,0]
output['boxes'][i][:,3] -= output['boxes'][i][:,1]
for j in range(len(output['boxes'][i])):
results.append({'image_id':int(targets[i]['image_id']),
'category_id':output['labels'][i][j].cpu().numpy().tolist(),
'bbox':output['boxes'][i][j].cpu().numpy().tolist(),
'score':output['scores'][i][j].cpu().numpy().tolist()})
with open('temp_result.json','w') as f:
json.dump(results,f)
self.eval_result(dataset=self._dataset_name) # use coco tool to evaluate the output file
If I remove the evaluation code, the barrier works as expected and the rank 1 can go out from the barrier.
Does anyone know how to solve the problem? |
st175029 | Solved by pritamdamania87 in post #11
@Euruson I think I've figured out the problem here. You are still using DDP for the validation phase even though it runs only on one rank. Even though you might not run the backward pass for DDP during eval phase, the forward pass for DDP might still invoke some collective operations (ex: syncing bu… |
st175030 | Does your model forward pass contain any communication? (e.g., SyncBatchNorm layers)
BTW, how long does the validation take? Would I be correct if I assume the log shows that rank 0 finishes the barrier successfully but rank 1 doesn't? |
st175031 | I used the resnet_fpn backbone from torchvision and I am trying to implement Faster R-CNN FPN myself. I didn't see any SyncBatchNorm layers in the torchvision ResNet, so I guess there is no communication.
I just validated 15 samples to debug; it took less than a minute.
Yes, the log shows that rank 0 finishes the barrier successfully but rank 1 doesn't. |
st175032 | I changed the environment from PyTorch 1.6 to the latest nightly PyTorch, and the old problem turned into a new one. I used the same code to run the experiments, but the barrier doesn't work at all. The output is like this:
Training: 0%| | 0/5647 [00:18<?, ?it/s]Average loss : 2.0772855
rank 0 go to validation
start to validate
Training: 0%| | 0/5647 [00:20<?, ?it/s]Average loss : 2.2598593
rank 1 go to barrier
rank 1 go out of barrier
Training: 0%| | 0/5647 [00:00<?, ?it/s]
rank 0 go to barrier
rank 0 go out of barrier
The checkpoint has been saved to /home/dsv/qida0163/Vision/data/embedding/frcnn_coco_person/checkpoints/checkpoints_20201011_dis_1.pkl
Epoch 2/13
Training: 0%|
Then it freezes.
Which means the barrier did not work, since rank 1 went out of the barrier before the validation and started training the next batch before rank 0 finished the validation.
I also tried removing the barrier before, but it just ends up freezing.
Just want to provide more information. |
st175033 | So did you solve this problem?
I met the same problem while validating the model being trained in the interval between epochs. It seems that dist.barrier() didn't work.
Just as @mrshenli said:
rank 0 finishes barrier successfully but rank 1 doesn't
Rank 0 finished validation and crossed the barrier while rank 1 didn't. And everything works fine after removing the validation code.
BTW, the validation was only on rank 0. |
st175034 | @Euruson Do you have a minimal repro we can try out on our end to reproduce this problem? |
st175035 | @pritamdamania87 Yep, I just modified this official tutorial 7 with DDP.
The code is here:
gist.github.com
https://gist.github.com/Euruson/d9bae6490e8b101e0ff9b37c4af1bc76 48
ddp_val.py
# https://discuss.pytorch.org/t/distributeddataparallel-barrier-doesnt-work-as-expected-during-evaluation/99867/3
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
The result is like this:
Rank:0 - Epoch 0/24
Rank:1 - Epoch 0/24
----------
Rank: 0 - train Loss: 0.5293 Acc: 0.7459
Rank: 1 - train Loss: 0.4891 Acc: 0.7623
Rank:1 - Epoch 1/24
Rank: 0 - val Loss: 0.2841 Acc: 0.8889
Rank:0 - Epoch 1/24
----------
It seems that dist.barrier() doesn't work, as rank 1 just goes to the next epoch without waiting for rank 0's validation. And then the program just freezes. |
st175036 | Note that output to the terminal is not always guaranteed to be in the order of the actual operations. Due to things like buffering it is possible that output to stdout is in a different order from the order that the actual operations were executed in (especially in a multiprocess/multithreaded environment).
To verify this, can you add timestamps to each output line and also print something after the barrier call is done?
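For example, a tiny helper along these lines makes the ordering unambiguous (just a sketch; flush=True avoids stdout buffering reordering the lines):
import datetime

def tprint(msg):
    # prepend a wall-clock timestamp so interleaved multi-process logs can be ordered
    now = datetime.datetime.now().strftime("%H:%M:%S %f")
    print(f"{now} | {msg}", flush=True)

tprint("Rank:0 waiting before the barrier")
Something along these lines produces the timestamped log shown in the next post. |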
st175037 | Sure. And the minimal repro 6 has been updated at the same time.
The result, which freezes after validation:
16:18:13 723869 | Rank:1 - Epoch 0/24
16:18:13 723866 | Rank:0 - Epoch 0/24
16:18:13 723900 | ----------
16:18:16 888776 | Rank: 0 - train Loss: 0.5663 Acc: 0.6885
16:18:16 896916 | Rank: 1 - train Loss: 0.5002 Acc: 0.7705
16:18:16 896992 | Rank:1 waiting before the barrier
16:18:17 383175 | Rank:1 left the barrier
16:18:17 383215 | Rank:1 - Epoch 1/24
16:18:17 829886 | Rank: 0 - val Loss: 0.2327 Acc: 0.9150
16:18:17 829934 | Rank:0 waiting before the barrier
16:18:17 830029 | Rank:0 left the barrier
16:18:17 830044 | Rank:0 - Epoch 1/24
16:18:17 830051 | ----------
I changed the validation function to time.sleep and the barrier works fine:
16:40:57 446421 | Rank:1 - Epoch 0/24
16:40:57 446420 | Rank:0 - Epoch 0/24
16:40:57 446456 | ----------
16:41:00 635462 | Rank:1 - train Loss: 0.5516 Acc: 0.6885
16:41:00 635536 | Rank:1 waiting before the barrier
16:41:00 635599 | Rank:0 - train Loss: 0.4810 Acc: 0.7705
16:41:00 635663 | Rank:0 sleeping
16:41:05 640713 | Rank:0 awaken
16:41:05 640734 | Rank:0 waiting before the barrier
16:41:05 640875 | Rank:0 left the barrier
16:41:05 640890 | Rank:0 - Epoch 1/24
16:41:05 640882 | Rank:1 left the barrier
16:41:05 640912 | ----------
16:41:05 640935 | Rank:1 - Epoch 1/24
16:41:08 641714 | Rank:1 - train Loss: 0.3519 Acc: 0.8279
16:41:08 641790 | Rank:1 waiting before the barrier
16:41:08 651248 | Rank:0 - train Loss: 0.4229 Acc: 0.8156
16:41:08 651340 | Rank:0 sleeping
16:41:13 656394 | Rank:0 awaken |
st175038 | @Euruson I think I've figured out the problem here. You are still using DDP for the validation phase even though it runs only on one rank. Even though you might not run the backward pass for DDP during eval phase, the forward pass for DDP might still invoke some collective operations (ex: syncing buffers or syncing indices when it rebuilds buckets the first time). As a result, what is happening is that your collective ops are mismatched and some of the collective ops for DDP's forward pass on rank 0 match up with the barrier() call on rank 1, leading it to leave the barrier.
If you make the following code change, your script seems to be working as expected:
if phase == "val":
outputs = model.module(inputs)
else:
outputs = model(inputs)
model.module retrieves the underlying non-replicated model which you can use for validation. The output on my local machine is as follows with this change:
19:39:05 071604 | Rank:0 - Epoch 0/24
19:39:05 071607 | Rank:1 - Epoch 0/24
19:39:05 071672 | ----------
19:39:08 620338 | Rank: 1 - train Loss: 0.4468 Acc: 0.7787
19:39:08 620479 | Rank:1 waiting before the barrier
19:39:08 651507 | Rank: 0 - train Loss: 0.5222 Acc: 0.7623
19:39:10 524626 | Rank: 0 - val Loss: 0.2312 Acc: 0.9281
19:39:10 524726 | Rank:0 waiting before the barrier
19:39:10 524973 | Rank:0 left the barrier
19:39:10 524994 | Rank:1 left the barrier
19:39:10 525106 | Rank:1 - Epoch 1/24
19:39:10 525123 | Rank:0 - Epoch 1/24
19:39:10 525156 | ----------
19:39:13 735254 | Rank: 1 - train Loss: 0.3994 Acc: 0.8197
19:39:13 735366 | Rank:1 waiting before the barrier
19:39:13 739752 | Rank: 0 - train Loss: 0.4128 Acc: 0.8197
19:39:15 298398 | Rank: 0 - val Loss: 0.2100 Acc: 0.9216
19:39:15 298483 | Rank:0 waiting before the barrier
19:39:15 298672 | Rank:0 left the barrier
19:39:15 298702 | Rank:0 - Epoch 2/24
19:39:15 298716 | ----------
19:39:15 298728 | Rank:1 left the barrier
19:39:15 298811 | Rank:1 - Epoch 2/24
19:39:18 586375 | Rank: 0 - train Loss: 0.4336 Acc: 0.8156
19:39:18 605651 | Rank: 1 - train Loss: 0.3094 Acc: 0.8893
19:39:18 605791 | Rank:1 waiting before the barrier
19:39:20 199963 | Rank: 0 - val Loss: 0.2205 Acc: 0.9216
19:39:20 200061 | Rank:0 waiting before the barrier
19:39:20 200296 | Rank:0 left the barrier
19:39:20 200329 | Rank:0 - Epoch 3/24 |
st175039 | @pritamdamania87 It works! Thanks a lot!
Any references for more details? The tutorial on the official website just mentions that the backward would trigger the barrier. |
st175040 | Euruson:
Any references for more details? The tutorial on the official website just mentions that the backward would trigger the barrier.
I don't think this is documented, but under certain conditions there might be a sync during the forward pass: https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/distributed.py#L675 16. In addition to this, we rebuild buckets once: https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/distributed.py#L671 5, which triggers a sync across ranks: https://github.com/pytorch/pytorch/blob/master/torch/lib/c10d/reducer.cpp#L1377 4 |
st175041 | Is there any documentation discussing the above fix? dist.barrier()'s behaviour is a bit mysterious but this seems to fix most issues related to silent hanging. Thanks again, this fix worked for me. |
st175042 | Hi, I am new to the concept of DDP. I am currently training my model on two GPUs.
If I train on a single GPU with a batch size of b, do I need to divide this batch size by the number of GPUs available for training in DDP?
How can I calculate the F1 score, precision, and recall for a model being trained with DDP?
If I store the local losses of the two GPUs in two arrays, is it okay to add them and divide by the number of GPUs to get an average? |
st175043 | Solved by mrshenli in post #10
I see. In that case, DDP alone won't be sufficient, as DDP's output and loss are local to each process. If you only need to calculate the global loss, one option is to gather the outputs instead of the loss, and then calculate the loss on the gathered outputs. If you also need back propagation from the g… |
st175044 | If I train on a single GPU with a batch size of b, do I need to divide this batch size by the number of GPUs available for training in DDP?
Yep, you can start with dividing the batch size. But depending on the loss function and whether each process is consuming the same number of samples per iteration, DDP may or may not give you exactly the same result as local training. See this discussion: Should we split batch_size according to ngpu_per_node when DistributedDataparallel 4
How can I calculate the F1 score, precision, and recall for a model being trained with DDP?
DDP does not change the behavior of the forward pass, so these metrics can be calculated just as in local training. But since the outputs and loss now live on multiple GPUs, you might need to gather 6/allgather 8 them first if you need global numbers.
If I store the local losses of the two GPUs in two arrays, is it okay to add them and divide by the number of GPUs to get an average?
Similar to the first bullet, this depends on your loss function. If it's something like MSE, then yes, the average of the two local losses is the same as the global one. But other loss functions might not have this property.
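For instance, with a mean-reduced MSE loss and equally sized local batches, averaging the two local losses reproduces the global one (a standalone sketch just to illustrate the arithmetic, no DDP involved):
import torch
import torch.nn.functional as F

pred, target = torch.randn(32, 10), torch.randn(32, 10)
local0 = F.mse_loss(pred[:16], target[:16])   # "GPU 0" half of the batch
local1 = F.mse_loss(pred[16:], target[16:])   # "GPU 1" half of the batch
print(torch.allclose((local0 + local1) / 2, F.mse_loss(pred, target)))  # True
With unequal local batch sizes, or with a loss that is not a per-sample mean, the simple average no longer matches the global value. |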
st175045 | Since I am using multiple GPUs with the NCCL backend, gather does not work, so I tried using all_gather. Below is part of my code.
code:
loss = criterion(output, targets)
tmp = [torch.empty_like(loss).cuda(rank) for _ in range(2)]
dist.all_gather(tmp, loss)
print(tmp[0].data.item())
loss.backward()
optimizer.step()
print(f'Model on GPU {rank}, Epoch: {epoch}--{i}/{len(dataloader)}], Loss: {loss.data.item()}')
output:
0.9214748740196228
0.9214748740196228
Model on GPU 0, Epoch: 1--0/28125], Loss: 0.9214748740196228, Batch_time_Average - 4.840
Model on GPU 1, Epoch: 1--0/28125], Loss: 0.9064291715621948, Batch_time_Average - 4.781
0.7848501801490784
0.7848501801490784
Model on GPU 0, Epoch: 1--1/28125], Loss: 0.7848501801490784, Batch_time_Average - 6.893
Model on GPU 1, Epoch: 1--1/28125], Loss: 0.6567432880401611, Batch_time_Average - 6.798
0.7931838035583496
0.7931838035583496
Model on GPU 0, Epoch: 1--2/28125], Loss: 0.7931838035583496, Batch_time_Average - 8.924
Model on GPU 1, Epoch: 1--2/28125], Loss: 0.825346052646637, Batch_time_Average - 8.835
1.0175780057907104
1.0175780057907104
Model on GPU 0, Epoch: 1--3/28125], Loss: 1.0175780057907104, Batch_time_Average - 10.966
Model on GPU 1, Epoch: 1--3/28125], Loss: 0.5258045196533203, Batch_time_Average - 10.868
tmp just prints the loss value from GPU 0. I am not sure if this is the required output. |
st175046 | Hey @Saurav_Gupta1, the usage of torch.distributed.all_gather looks correct to me. One thing I want to mention is that torch.distributed.all_gather is not an autograd function, so running backward on gathered tensors (i.e., tmp in your code) won't reach the autograd graph prior to the all_gather operation. If you really need autograd to extend beyond comm ops, you can try this experimental autograd-powered all_gather 1. |
st175047 | My intention here is only to gather information about the loss and accuracy and to calculate the F1 score and precision. I don't want to run backward on the gathered data. Am I gathering the loss correctly? I am asking because in the above example tmp contains the loss from GPU 0, but in some other run tmp contains the loss from GPU 1. |
st175048 | Saurav_Gupta1:
Am I gathering loss correctly?
It looks correct to me.
I am asking because in the above example tmp contains loss from GPU:0 but in some other run tmp contains loss of GPU:1
Could you please share a repro where tmp[0] contains the loss from rank 1? The above output seems to only show the loss from rank 0? |
st175049 | Code:
loss = criterion(output, targets)
tmp = [torch.empty_like(loss).cuda(rank) for _ in range(2)]
dist.all_gather(tmp, loss)
print(tmp)
loss.backward()
optimizer.step()
print(f'Model on GPU {rank}, Epoch: {epoch}--{i}/{len(dataloader)}], Loss: {loss.data.item()}')
Output:
[tensor(0.6518, device='cuda:0'), tensor(0.7940, device='cuda:0')]
[tensor(0.6518, device='cuda:1'), tensor(0.7940, device='cuda:1')]
Model on GPU 0, Epoch: 1--0/28125], Loss: 0.6518099904060364
Model on GPU 1, Epoch: 1--0/28125], Loss: 0.7939583659172058
[tensor(0.7865, device='cuda:1'), tensor(0.7719, device='cuda:1')]
[tensor(0.7865, device='cuda:0'), tensor(0.7719, device='cuda:0')]
Model on GPU 0, Epoch: 1--1/28125], Loss: 0.7865331172943115
Model on GPU 1, Epoch: 1--1/28125], Loss: 0.7718786001205444
[tensor(0.7348, device='cuda:1'), tensor(0.8238, device='cuda:1')]
[tensor(0.7348, device='cuda:0'), tensor(0.8238, device='cuda:0')]
tmp right now contains two tensors (the loss from each GPU).
On a single GPU I train with a batch size of 32. Since I have a very deep model and a huge dataset, I thought of using multiple GPUs to increase training speed. After reading this 3 I learned that I need to divide my batch size and train the model with a batch size of 16 per GPU. The gradient is computed on a batch of 16 on each GPU and the average of the gradients is applied to the models, which has the same effect as processing a batch of 32 in one iteration. I also want to gather the loss corresponding to this batch of 32. |
st175050 | The output looks as expected to me: tmp[0] contains the rank 0 loss and tmp[1] contains the rank 1 loss. Did I miss anything? |
st175051 | Okay, let me try to rephrase my question: gather has given me the loss calculated by each GPU. My batch size for the model being trained on 2 GPUs is 16, which means 32 images are processed at a time. The gradient step that was taken was with respect to 32 images. torch.distributed.gather returns the loss with respect to 16 images on each GPU in a list. I want to get the loss with respect to the 32 images that were processed in one iteration in total. |
st175052 | I see. In that case, DDP alone won't be sufficient, as DDP's output and loss are local to each process. If you only need to calculate the global loss, one option is to gather the outputs instead of the loss and then calculate the loss on the gathered outputs. If you also need back propagation from the global loss, there are at least two options:
combine DDP with RPC: Use DDP to compute the local output and use RPC to collect the outputs into one process and compute the global loss, and then use distributed autograd to start the backward pass from the global loss. This tutorial 10 shows how to combine DDP with RPC.
Use the autograd-enabled collective communications in torch.distributed.nn.functional 1. Tests in this PR 4 can serve as examples. (This feature is not officially released yet.)
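A minimal sketch of the logging-only option mentioned above, gathering outputs and targets and computing the loss on the gathered copies (it assumes equal-sized batches per rank and an already initialized process group; no gradient flows through all_gather, and the cross-entropy criterion is just a placeholder):
import torch
import torch.distributed as dist
import torch.nn.functional as F

def global_loss_for_logging(output, targets, world_size):
    # gather the local outputs/targets from every rank, then compute one "global" loss
    out_list = [torch.empty_like(output) for _ in range(world_size)]
    tgt_list = [torch.empty_like(targets) for _ in range(world_size)]
    dist.all_gather(out_list, output)
    dist.all_gather(tgt_list, targets)
    return F.cross_entropy(torch.cat(out_list), torch.cat(tgt_list))
Since every rank calls all_gather, every rank ends up with the same value, so it can be printed or logged from rank 0 only. |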
st175053 | I have another question: what would be the best way to do all_gather when the batch size isn't fixed? E.g. when drop_last=False in the DataLoader, so the last batch size will be different? |
st175054 | Hi,
I'm having issues with SummaryWriter.add_audio() when using DDP. Only when I use a single GPU do all the audio clips get logged; otherwise they are processed but don't appear in TensorBoard's events file output.
I think the problem is that the operation isn't thread-safe or atomic. I've tried forcing a flush(), but it didn't work.
Here is the relevant pice of code
for logger in self.trainer.logger:
if isinstance(logger, TensorBoardLogger):
speaker_ids = (
speaker_ids if type(speaker_ids) is list else [speaker_ids]
)
utt_ids = utt_ids if type(utt_ids) is list else [utt_ids]
for i, (speaker_id, utt_id) in enumerate(zip(speaker_ids, utt_ids)):
if run_type == "Test" or (
run_type == "Validation" and utt_id in self.trainer.examples
):
print(f"Device: {self.device} Spk: {speaker_id} Utt: {utt_id}")
if self.global_step == 0:
logger.experiment.add_audio(
f"Speaker_{speaker_id}/{utt_id}_original",
y_audio[i].squeeze().detach(),
self.global_step,
self.trainer.mel_spec.sample_rate,
)
logger.experiment.add_audio(
f"Speaker_{speaker_id}/{utt_id}_generated",
y_hat_audio[i].squeeze().detach(),
self.global_step,
self.trainer.mel_spec.sample_rate,
)
logger.experiment.flush()
This is the output:
INFO:trainer.dataset:Validation set has 600 examples
Validation sanity check:  23%| 14/60 [00:04<00:08, 5.17it/s]
Device: cuda:4 Spk: 0 Utt: LJ012-0149
Validation sanity check:  33%| 20/60 [00:06<00:09, 4.44it/s]
Device: cuda:0 Spk: 8 Utt: dartagnan01_24_dumas_0149
Validation sanity check:  35%| 21/60 [00:06<00:08, 4.39it/s]
Device: cuda:2 Spk: 8 Utt: dartagnan01_24_dumas_0149
Validation sanity check:  38%| 23/60 [00:06<00:07, 5.22it/s]
Device: cuda:3 Spk: 1 Utt: littleminister_20_barrie_0001
Validation sanity check:  65%| 39/60 [00:09<00:03, 6.47it/s]
Device: cuda:0 Spk: 4 Utt: bambatse_21_haggard_0013
Validation sanity check:  78%| 47/60 [00:10<00:02, 6.03it/s]
Device: cuda:6 Spk: 2 Utt: widowbarnaby_25_trollope_0117
Validation sanity check:  83%| 50/60 [00:11<00:01, 6.23it/s]
Device: cuda:3 Spk: 4 Utt: bambatse_21_haggard_0013
Validation sanity check:  88%| 53/60 [00:11<00:01, 6.50it/s]
Device: cuda:1 Spk: 4 Utt: bambatse_21_haggard_0013
Validation sanity check:  92%| 55/60 [00:12<00:00, 5.06it/s]
Device: cuda:3 Spk: 3 Utt: bigbluesoldier_04_hill_0187
Validation sanity check: 100%| 60/60 [00:14<00:00, 2.76it/s]
Device: cuda:8 Spk: 7 Utt: annualreportseducation_11_mann_0203
Device: cuda:7 Spk: 5 Utt: internationalshortstories1_08_patten_0564
Device: cuda:3 Spk: 7 Utt: annualreportseducation_11_mann_0203
Device: cuda:2 Spk: 7 Utt: annualreportseducation_11_mann_0203
st175055 | Not sure how you set up your training script, but the common idiom is to use rank 0 for TensorBoard logging, usually after syncing the workers either implicitly (e.g. at the start/end of an epoch) or explicitly (a barrier() call). |
st175056 | I'm using PyTorch Lightning, which takes care of most of the configuration code.
I know it is usual to just log rank 0's outputs, but I want to log one audio clip per speaker, or specific clips in the dev set. Hence, I need to save audio processed on different GPUs.
I thought that worker syncing was, for example, for combining all the models' losses (one per GPU) and running backpropagation on the total.
I will check how to synchronize the logger.
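One possible way to do that (a sketch, assuming the process group is already initialized, the clips are short CPU tensors, and only rank 0 owns the SummaryWriter) is to gather everything to rank 0 and let that single rank write:
import torch.distributed as dist

def log_audio_rank0(writer, local_clips, sample_rate, global_step):
    # local_clips: list of (tag, 1-D CPU waveform tensor) pairs collected on this rank
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, local_clips)   # plain Python objects, so keep the tensors on CPU
    if dist.get_rank() == 0:
        for clips in gathered:
            for tag, wav in clips:
                writer.add_audio(tag, wav, global_step, sample_rate)
        writer.flush()
    dist.barrier()   # keep the ranks roughly in step before training resumes
The helper name and clip format are assumptions; the point is just that a single process owns the event file. |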
st175057 | Problem description
Hi, when I am testing a simple example with DistributedDataParallel on a single node with 4 GPUs, I found that using the GRU or LSTM module spawns additional processes and uses extra memory on GPU 0, while using Linear does not have these problems. The test code snippet is as follows:
def run_demo(demo_fn, world_size):
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
dist.init_process_group("nccl", rank=rank, world_size=world_size)
# The GRU or LSTM gets additional processes on GPU 0.
ToyModel = nn.GRU(10, 10, 1)
# The Linear does not get these problems.
# ToyModel = nn.Linear(10,1)
model = ToyModel.to(rank)
ddp_model = DDP(model, device_ids=[rank])
pbar_len = int(1e10 / 2)
for _ in range(pbar_len):
input_seq = torch.randn(4, 20,10)
input_seq = input_seq.float().to(rank)
ddp_model(input_seq)
dist.destroy_process_group()
if __name__ == "__main__":
world_size = torch.cuda.device_count()
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
run_demo(demo_basic, world_size)
I called the script with python XX.py and got the two results as follows:
(screenshot: GPU_results omitted)
(screenshot: Linear_results omitted)
Versions
Collecting environment information…
PyTorch version: 1.10.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 7.7.1908 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: 3.4.2 (tags/RELEASE_34/dot2-final)
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
GPU 2: Tesla P100-PCIE-16GB
GPU 3: Tesla P100-PCIE-16GB
Nvidia driver version: 460.27.04
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torch==1.10.0
[pip3] torch-tb-profiler==0.1.0
[pip3] torchaudio==0.10.0
[pip3] torchinfo==1.5.4
[pip3] torchvision==0.11.1
[conda] blas 1.0 mkl defaults
[conda] cudatoolkit 10.2.89 hfd86e86_1 defaults
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640 defaults
[conda] mkl-service 2.4.0 py39h7f8727e_0 defaults
[conda] mkl_fft 1.3.1 py39hd3c417c_0 defaults
[conda] mkl_random 1.2.2 py39h51133e4_0 defaults
[conda] mypy_extensions 0.4.3 py39h06a4308_0 defaults
[conda] numpy 1.21.2 py39h20f2e39_0 defaults
[conda] numpy-base 1.21.2 py39h79a1101_0 defaults
[conda] pytorch 1.10.0 py3.9_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-tb-profiler 0.1.0 pypi_0 pypi
[conda] torchaudio 0.10.0 py39_cu102 pytorch
[conda] torchinfo 1.5.4 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.11.1 py39_cu102 pytorch |
st175058 | Solved by Chenguang_Wan in post #3
I found the solution with the help of @ngimel. This is the solution I learned from her.
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
dist.init_process_group("nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
ToyModel = nn… |
st175059 | I met the same problem when training my ResNet101 on CIFAR100. Did you figure it out? |
st175060 | I found the solution with the help of @ngimel 1. This is the solution I learned from her.
def demo_basic(rank, world_size):
print(f"Running basic DDP example on rank {rank}.")
dist.init_process_group("nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
ToyModel = nn.GRU(10, 10, 1)
model = ToyModel.cuda()
ddp_model = DDP(model, device_ids=[rank])
pbar_len = int(1e10 / 2)
for _ in range(pbar_len):
input_seq = torch.randn(4, 20,10)
input_seq = input_seq.float().cuda()
ddp_model(input_seq)
dist.destroy_process_group() |
st175061 | Hi,
I read that PyTorch does not support so-called sync BatchNorm, which is needed to
train on multi-GPU machines. My question is: are there any plans to implement sync BatchNorm
in PyTorch, and when will it be released?
Another question: what is the best workaround when you want to train with images and need
large batch sizes?
Thanks
Philip |
st175062 | Hi @ptrblck ,
thanks for the answer.
The documentation says:
Currently SyncBatchNorm only supports DistributedDataParallel with single GPU per process.
This "single GPU" part kind of irritates me. What does it mean?
I am also asking because detectron2 still uses "FrozenBatchNorm2d": https://github.com/facebookresearch/detectron2/blob/master/detectron2/modeling/backbone/resnet.py#L50 10 |
st175063 | DistributedDataParallel can be used in two different setups as given in the docs 34.
Single-Process Multi-GPU and
Multi-Process Single-GPU, which is the fastest and recommended way.
SyncBatchNorm will only work in the second approach.
I'm not sure if you would need SyncBatchNorm, since FrozenBatchNorm seems to fix all buffers:
BatchNorm2d where the batch statistics and the affine parameters are fixed.
It contains non-trainable buffers called
"weight" and "bias", "running_mean", "running_var",
initialized to perform identity transformation. |
st175064 | Hi @ptrblck,
How do I create my DDP model if I'm working on a cluster with multiple nodes, where each node may have multiple GPUs? |
st175065 | I think this tutorial 134 might be a good introduction to the different backends etc. |
st175066 | SyncBatchNorm synchronizes the statistics during training in a DistributedDataParallel setup as given in the docs 8 and can optionally be used.
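As a rough sketch of typical usage (one process per GPU, with the process group already initialized; the helper name here is made up):
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_with_sync_bn(model: nn.Module, rank: int) -> nn.Module:
    # swap every BatchNorm* layer for SyncBatchNorm, then wrap the model in DDP
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    return DDP(model.cuda(rank), device_ids=[rank])
convert_sync_batchnorm only touches BatchNorm layers, so a model without them is unaffected. |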
st175067 | Hi everyone, just wondering why we need to expand the tensor to get access to grad_fn, as described here 1? Can we replace the expand_as with a view operation instead? Thanks in advance! |