st48368
Thanks so much. I literally just modified a very small part of the original example code main.py:

--- a/word_language_model/main.py
+++ b/word_language_model/main.py
@@ -57,8 +57,9 @@ def batchify(data, bsz):
     nbatch = data.size(0) // bsz
     data = data.narrow(0, 0, nbatch * bsz)
     data = data.view(bsz, -1).t().contiguous()
-    if args.cuda:
-        data = data.cuda()
+
+    # if args.cuda:
+    #     data = data.cuda()
     return data

 eval_batch_size = 10
@@ -103,6 +104,9 @@ def get_batch(source, i, evaluation=False):
     seq_len = min(args.bptt, len(source) - 1 - i)
     data = Variable(source[i:i+seq_len], volatile=evaluation)
     target = Variable(source[i+1:i+1+seq_len].view(-1))
+    if args.cuda:
+        data = data.cuda()
+        target = target.cuda()
     return data, target
st48369
I’ve started a script running @Morpheus_Hsieh’s script, I’ll try the demo later. Thanks!
st48370
Regarding the code snippet that you provided:

import os
import torch

class MyDataset(torch.utils.data.Dataset):
    def __init__(self):
        self.data_files = os.listdir('data_dir')
        self.data_files.sort()

    def __getitem__(self, idx):
        return load_file(self.data_files[idx])

    def __len__(self):
        return len(self.data_files)

Are the file paths stored in self.data_files supposed to represent each batch of data (or the data per loop) returned by iterating the loader?
st48371
I have a directory with huge parquet files and have been using fastparquet to read in the files, which works fine. I want to extend the Dataset class to read them lazily and hope to get better GPU utilisation. Here are a few questions regarding the Dataset class:

The __len__ method: should it return the number of training instances or the number of parquet files in the directory?
The __getitem__ method: should it return just a single training row (ndarray of shape (1, num_features)) or can it also return a matrix of rows (ndarray of shape (num_rows_in_a_parquetfile, num_features))? If it can return a matrix, how do I feed it into the DataLoader class?

Almost all I have understood from reading the forums is that people use it to load images (one instance per file). I am not sure how this works out for reading huge files with N rows. Thanks
st48372
The answer to both questions is: up to you. Read through the data loading tutorial: http://pytorch.org/tutorials/beginner/data_loading_tutorial.html#sphx-glr-beginner-data-loading-tutorial-py
Also read up on how to pass a custom collate function to the DataLoader (some threads on the forums cover it).
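As a minimal sketch of what such a collate function could look like (assuming, hypothetically, that each __getitem__ returns a whole matrix of rows of shape (num_rows_in_file, num_features)):

import torch
from torch.utils.data import DataLoader

def collate_rows(batch):
    # batch is a list of per-file matrices; stack them along the row dimension
    # so one DataLoader "batch" contains the rows of several files
    return torch.cat([torch.as_tensor(item) for item in batch], dim=0)

# loader = DataLoader(my_dataset, batch_size=2, collate_fn=collate_rows)

Here my_dataset and the per-item shapes are assumptions for illustration; the point is only that collate_fn decides how individual items are merged into a batch.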
st48373
Regarding the parquet files, in my case the major problem is that pyarrow.Table objects are not serializable between the worker processes of the data loader if you want to use num_workers >= 2. So it is probably something to consider for anyone who is trying to wrap parquet files with the dataset interface.
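One rough sketch of a workaround (assuming pyarrow is available; the directory path and the float32 conversion are illustrative): open the parquet file inside __getitem__ and return plain numpy arrays, so no pyarrow.Table ever has to cross the worker boundary.

import os
import numpy as np
import pyarrow.parquet as pq
import torch

class ParquetDataset(torch.utils.data.Dataset):
    def __init__(self, data_dir):
        # only store picklable file paths; nothing pyarrow-specific lives on the dataset
        self.files = sorted(os.path.join(data_dir, f) for f in os.listdir(data_dir))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # read lazily inside the worker process and return numpy, which pickles fine
        table = pq.read_table(self.files[idx])
        return table.to_pandas().to_numpy(dtype=np.float32)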
st48374
adityagourav: "can it also return a matrix of rows"
Did you find a workaround for this bit?
st48375
Hi, we are using a multi-worker dataloader to read parquet files. Right now our design is that each __getitem__ gets all the data from one file, but if the file size is large (around 2 GB) it fails with an error like:

File "/home/zhrui/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 761, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
File "/home/miniconda/lib/python3.6/multiprocessing/queues.py", line 104, in get
    if not self._poll(timeout):
File "/home/miniconda/lib/python3.6/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
File "/home/miniconda/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
    r = wait([self], timeout)
File "/home/miniconda/lib/python3.6/multiprocessing/connection.py", line 911, in wait
    ready = selector.select(timeout)
File "/home/miniconda/lib/python3.6/selectors.py", line 376, in select
    fd_event_list = self._poll.poll(timeout)
File "/home/zhrui/.local/lib/python3.6/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 90300) is killed by signal: Killed.

Is it because each dataloader worker has a memory limit or a computation time limit? @smth
st48376
We tried to directly return the dataframe, but that is not allowed by the pytorch data loader; if we try to convert it to a numpy array or dict we get the problem above.
st48377
I have checked the source code for the dataloader; basically we could set a timeout:

def _try_get_data(self, timeout=_utils.MP_STATUS_CHECK_INTERVAL):
    # Tries to fetch data from `self._data_queue` once for a given timeout.
    # This can also be used as inner loop of fetching without timeout, with
    # the sender status as the loop condition.
    #
    # This raises a `RuntimeError` if any worker died unexpectedly. This error
    # can come from either the SIGCHLD handler in `_utils/signal_handling.py`
    # (only for non-Windows platforms), or the manual check below on errors
    # and timeouts.
    #
    # Returns a 2-tuple:
    #     (bool: whether successfully get data, any: data if successful else None)
    try:
        data = self._data_queue.get(timeout=timeout)

The default is 5 s:

MP_STATUS_CHECK_INTERVAL = 5.0
r"""Interval (in seconds) to check status of processes to avoid hanging in
multiprocessing data loading. This is mainly used in getting data from
another process, in which case we need to periodically check whether the
sender is alive to prevent hanging."""

What does this interval mean? I tried increasing the timeout in the multi-worker definition but I get the same error; the only difference is that the final error message becomes:

RuntimeError: DataLoader worker (pid 320) is killed by signal: Segmentation fault.
st48378
Hello, I am working with a network made of two models:
Model1: a data-parallel model, parallelized with DDP
Model2: a model-parallel model (huge weight matrix), parallelized manually with a sub-part on each DDP process/GPU
Model1 can easily be saved from any process as it is identical on each GPU. But Model2 is distributed/split across GPUs and must be synchronized somehow.
Question: what would be an elegant way to save Model2?
Remarks:
- gathering of Model2 must be done on CPU due to its size
- distributed.gather() is not available with the NCCL backend anyway
- I could save each part on disk in each process, wait in the rank 0 process (distributed.barrier()), reload everything on CPU, merge and save in a dict, but…
Thanks!
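For reference, a minimal sketch of the save-per-rank approach described above. Here model2_part is a hypothetical name for the local shard module on each rank, and the merge assumes the shards carry non-overlapping keys; if a single weight matrix is split across ranks, the merge step would need to torch.cat the pieces instead.

import torch
import torch.distributed as dist

# Each rank saves its shard of Model2 to disk, moved to CPU first
shard = {k: v.cpu() for k, v in model2_part.state_dict().items()}
torch.save(shard, f"model2_shard_{dist.get_rank()}.pt")
dist.barrier()  # wait until every rank has written its shard

if dist.get_rank() == 0:
    # Reload all shards on CPU and merge them into one dict
    merged = {}
    for r in range(dist.get_world_size()):
        merged.update(torch.load(f"model2_shard_{r}.pt", map_location="cpu"))
    torch.save(merged, "model2_full.pt")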
st48379
Hi, any suggestion on how to display an image with dimensions (128, 3, 32, 32) in PyTorch? It generates the following error. I think the batch size (128) should be removed, but how?
Invalid shape (128, 3, 32, 32) for image data
st48380
I am using:

b = images[0]
axes.append(fig.add_subplot(rows, cols, idx+1))
plt.imshow(b)

Now the problem is that the shape of images[0] is (128, 3, 32, 32), so how do I discard the 128 (bsz)?
st48381
What is the shape of image? Usually it is batch = 128, rgb = 3, width = 32, height = 32. Not sure what your 5th dimension is (the shape of image). Could you post the error log and the shape of image (image.shape in the case of numpy)? Also try image[0][0].
st48382
The following code snippet could help to understand:

def train(train_loader, model, criterion, optimizer, epoch):
    model.train()
    losses = AverageMeter()
    rows = 2
    cols = 2
    fig = plt.figure()
    axes = []
    for idx, (images, labels) in enumerate(train_loader):
        data_time.update(time.time() - end)
        print('.........Size of image before unsqueeze : ', images[0].shape)
        b = images[0].reshape(3, 32, 32)
        npimg = b.numpy()
        axes.append(fig.add_subplot(rows, cols, idx+1))
        plt.imshow(npimg)
        plt.show()
        exit()

The error is:
st48383
image[0] should be a single item, not a batch of items. If its shape is [128, 3, 32, 32] then that is the shape of your data. If you print out image.shape you will get the shape of the data in the batch. If you would like to change the shape of your data, you should probably look at how __getitem__ is constructed in your dataset and see what the shape of the data being put in there is.
st48384
If image[0] has shape (128, 3, 32, 32) then you have another dimension, and each batch has the shape (batch_dim, 128, 3, 32, 32), i.e. every batch has n 128x3x32x32-dimensional Tensors. Are you sure this is what you want? If so, you can plot image[0, i, ...] for i = 0, 1, ..., 127. And remember that if you want to plot a 3-channel image with matplotlib you need it in shape (x, y, channels), e.g.

# First image of the first batch
img_plot = image[0, 0, ...].permute(1, 2, 0)
plt.imshow(img_plot)
st48385
Is there a time limit when we should consider a request to join the slack channel as denied? I imagine the channel receives a lot of requests and it’s hard to manage…so is there a better more automated way to join? While I don’t use this forum for general help (there are plenty of other resources available), I’ve submitted two PR’s to the torchvision repo and would like to discuss future enhancements.
st48386
I'm trying to develop a text detection application with PyTorch and OpenCV in Python. I can use a PyTorch tensor with OpenCV like below:

val = y[0,:,:,0].data.cpu().numpy()
cv2.threshold(val, 0.4, 1, 0)

But it takes a lot of time. I need to do this operation using the tensor object directly. How can I do that?
st48387
Hi,
Given that you use .cpu(), I guess you have a CUDA Tensor? Unfortunately, I don't think OpenCV supports the GPU, so you will have to move the Tensor back to the CPU to use it with OpenCV. Note that the conversion to numpy itself is almost free, as we share memory with the numpy array. If the operations you need are available in PyTorch, I would advise using PyTorch's GPU versions of these ops to keep the best performance!
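For a simple binary threshold like the cv2.threshold call above, a hedged sketch of an equivalent that stays on the GPU (the 0.4 threshold and the tensor y are taken from the snippet above):

import torch

# 1.0 where val > 0.4, else 0.0, mirroring cv2.threshold(val, 0.4, 1, 0)
val = y[0, :, :, 0]          # still a CUDA tensor, no .cpu()/.numpy() round trip
mask = (val > 0.4).float()   # comparison + cast runs entirely on the GPU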
st48388
I already used .cpu() but I got an Expected Ptr<cv::UMat> for argument '%s' error. I know I should use the GPU version, but OpenCV does not support the GPU, and converting from a torch tensor to a numpy array slows down the process. So I want to give the image to OpenCV as a (CPU) torch.tensor(), but I get this error: Expected Ptr<cv::UMat> for argument '%s'
st48389
Hi,
The conversion from the torch Tensor to the numpy array cannot be the slow part, as it does almost nothing. It might be the conversion done by OpenCV from a numpy array to an OpenCV matrix?
st48390
I have the same question. In my case, I have a tensor which is the output of my model with requires_grad = True, and I want to do some processing on it using OpenCV in my custom loss. I'm trying to do it on the GPU while keeping requires_grad = True. Do you have any suggestions? I found kornia and it seems they are adding some OpenCV functions, but not the ones I need.
st48391
Hi,
If you want to use OpenCV, you won't be able to use autograd (just like if you want to use numpy). You will need to write a custom autograd Function to tell the autograd what the backward should be: https://pytorch.org/docs/stable/notes/extending.html
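A minimal skeleton of such a custom Function, just as a sketch; the forward and backward bodies here are placeholders you would fill in with the OpenCV call and its hand-derived gradient:

import torch

class MyOpenCVOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        # here you could move to CPU/numpy, call OpenCV, and wrap the result back in a tensor
        output = input.clone()  # placeholder
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        # you have to implement the gradient of the OpenCV operation yourself
        return grad_output  # placeholder: identity gradient

# usage: out = MyOpenCVOp.apply(x)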
st48392
Thanks. The link was very helpful. I can avoid using OpenCV for the first step, but there is another problem. Let me explain my problem a little bit more. The outputs of my network are n (x, y) positions for a car, and they are in the world coordinate frame. I do some transformations on the output to bring them into the image coordinate frame using PyTorch and kornia and keep the gradient. I wanted to draw some rectangles or ellipses for these points on a mask, but for now I just want to consider them as single points in the mask tensor. Now I have a target mask (an RGB image with zero and one elements) and want to calculate a metric like IoU between this target mask and my n points which have been transformed into the image coordinate frame. I wanted to create a mask for these points and easily calculate the IoU with the target mask, but I don't know how to do this in a way that lets me use backprop and keep the gradient. Can I attach the created mask tensor from my points to the graph and do backpropagation?
st48393
Hi,
A few things that jump to my mind:
- IoU is not differentiable AFAIK
- If you create a mask containing binary values, you won't be able to backprop, as gradients only exist for continuous values.
st48394
You are right. I think I need to have a head to output a mask directly. Thanks for the help.
st48395
Can we save intermediate results in a checkpoint? Are intermediate results important to save in a checkpoint?
st48396
Which intermediate results do you mean? Also, do you mean torch.utils.checkpoint or creating a checkpoint by serializing the model?
st48397
gnadaf: "...results are important to save in checkpoint"
torch.save(): does it save only learnable parameters, or intermediate results as well?
st48398
For example, in the transformer model https://pytorch.org/tutorials/beginner/transformer_tutorial.html the outputs are calculated in the forward pass, and the layer weights and biases are intermediate results. Correct me if I am wrong.
st48399
gnadaf: "outputs are calculated in forward pass, and layer weights and bias are intermediate results."
The weight and bias would be parameters, and I wouldn't call them intermediate results. I think the intermediate activations could be called intermediate results, but I'm still unsure if that's really what you are looking for, as I haven't heard this naming so far. By intermediate activations I mean e.g.:

def forward(self, x):
    intermediate_act1 = self.layer1(x)
    intermediate_act2 = self.layer2(intermediate_act1)
    ...
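To make the distinction concrete, a small self-contained check (the toy model here is only for illustration) showing that torch.save(model.state_dict(), ...) stores parameters and buffers, not the intermediate activations created during the forward pass:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
out = model(torch.randn(1, 4))          # intermediate activations are created here...
print(list(model.state_dict().keys()))  # ...but the state_dict holds only weights/biases
# ['0.weight', '0.bias', '2.weight', '2.bias']
torch.save(model.state_dict(), "checkpoint.pt")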
st48400
I have a neural network divided into two parts. One part computes the first n layers and the other part computes the rest of the layers. I can obtain the gradients from a single model this way: grads = torch.autograd.grad(loss, model.parameters()) The thing is I need the gradients of the two parts separately. The main issue I am facing seems to be continuing the first part’s gradients backwards from the second part. I would appreciate any help as to how I can achieve this. Thanks
st48401
Hi, I am not sure to understand the issue. Can you extract the corresponding gradients from grads? Or provide only the subset of the parameters you are interested in to autograd.grad()?
st48402
Hi, thanks for the quick reply. Say I have a single model and want to obtain gradient values from it. I can do it as I showed in the main post. Now I have the model as two nn.Module instances, one computing the first part and one computing the second part. I want to obtain the gradient values from both parts and combine them. It is a bit of a weird problem, but it should be possible in some way.
st48403
If your full model has model.part1 and model.part2, then you can do:

grad_part1 = torch.autograd.grad(loss, model.part1.parameters(), retain_graph=True)
grad_part2 = torch.autograd.grad(loss, model.part2.parameters())

Does that match what you want?
st48404
In my forward function:

def __call__(self, train=True):
    if train:
        predicted = self.forward(...)
        loss = ....
        return loss
        # returning a single value is fine
        # loss.size() = the number of my GPUs
    else:
        predicted = self.forward(...)
        return predicted
        # error: expected sequence object with len >= 0 or a single integer
        # In the validation step I want to return all the predicted labels for other purposes
        # predicted has shape [16, 1] on each device and I have 4 GPUs

My code worked before model = nn.DataParallel(model).
st48405
Solved by albanD in post #23 From the stack trace it looks like the problem is with the outputs no? Maybe your forward returns Tensors that are not on the right device?
st48406
Hi,
This is hard to say without more context. Can you share the stack trace for your function as well as where this call function is defined?
st48407
The code is here, but it is not very well organized. It has other problems that prevent me from using nn.DataParallel; this is the only one I cannot solve.
github.com/lifanchen-simm/transformerCPI/blob/f9301880740975ddc1d56ce19f9eb52a6ad75933/Kinase/model.py#L315

    # protein = torch.unsqueeze(protein, dim=0)
    # protein = [batch size=1, protein len, protein_dim]
    enc_src = self.encoder(protein)
    # enc_src = [batch size, protein len, hid dim]
    out = self.decoder(compound, enc_src, compound_mask, protein_mask)
    # out = [batch size, 2]
    # out = torch.squeeze(out, dim=0)
    return out

def __call__(self, data, train=True):
    compound, adj, protein, correct_interaction, atom_num, protein_num = data
    # compound = compound.to(self.device)
    # adj = adj.to(self.device)
    # protein = protein.to(self.device)
    # correct_interaction = correct_interaction.to(self.device)
    # scale = torch.tensor([1.0, 4.0], device=self.device)
    Loss = nn.CrossEntropyLoss()
    if train:
st48408
The first issue is that you should never redefine the __call__ method on a Module, just the forward. This is going to prevent it from working nicely with other parts of pytorch.
More generally, the error most likely refers to the creation of the DataParallel where the device argument does not have the right type.
st48409
I'm trying to remove the __call__(), but I don't understand which part of the device handling is not right. To be honest, I don't know which device input.to(device) and model.to(device) should use when using nn.DataParallel; I just use device_ids[0]. You mean the bug is here? Thank you very much!
github.com/lifanchen-simm/transformerCPI/blob/f9301880740975ddc1d56ce19f9eb52a6ad75933/Kinase/main.py#L68

batch = 64
lr = 1e-4
weight_decay = 1e-4
iteration = 300
kernel_size = 9
encoder = Encoder(protein_dim, hid_dim, n_layers, kernel_size, dropout, device)
decoder = Decoder(atom_dim, hid_dim, n_layers, n_heads, pf_dim, DecoderLayer, SelfAttention, PositionwiseFeedforward, dropout, device)
model = Predictor(encoder, decoder, device)
# model.load_state_dict(torch.load("output/model/lr=0.001,dropout=0.1,lr_decay=0.5"))
model.to(device)
trainer = Trainer(model, lr, weight_decay, batch)
tester = Tester(model)
"""Output files."""
file_AUCs = 'output/result/AUCs--lr=1e-4,dropout=0.1,weight_decay=1e-4,kernel=9,n_layer=3,batch=64,balance,lookaheadradam' + '.txt'
file_model = 'output/model/' + 'lr=1e-4,dropout=0.1,weight_decay=1e-4,kernel=9,n_layer=3,batch=64,balance,lookaheadradam'
AUC = ('Epoch\tTime(sec)\tLoss_train\tAUC_dev\tPRC_dev')
with open(file_AUCs, 'w') as f:
    f.write(AUC + '\n')
st48410
As mentioned in the DataParallel doc: "The parallelized module must have its parameters and buffers on device_ids[0] before running this DataParallel module."
I can't find any reference to DataParallel in the repo, so I am not sure where you do that. But I was talking about the place where you wrap your module in DataParallel.
st48411
Sorry, it is just the line before the one referenced:

model = Predictor(encoder, decoder, device)
# model.load_state_dict(torch.load("output/model/lr=0.001,dropout=0.1,lr_decay=0.5"))
model = nn.DataParallel(model, device_ids=[0,1,2,3])  # I add the code here
model.to(device)

github.com/lifanchen-simm/transformerCPI/blob/f9301880740975ddc1d56ce19f9eb52a6ad75933/Kinase/main.py#L68
st48412
Another question (sorry, I'm new to pytorch).
[image: diagram of the DataParallel forward/backward steps from a blog post, 1080x353]
According to this image from a blog post about nn.DataParallel, the first step in the backward pass (compute the loss gradient on GPU-1) results in imbalanced GPU usage. Does that mean that in DataParallel loss.backward() only happens on GPU-1 and not on the other GPUs, while optimizer.step() and optimizer.zero_grad() are parallel (steps 2, 3, 4 of the backward)? Thank you very much.
st48413
What DataParallel does is more like version 3 of this image: split the input across the GPUs and run on each of them independently, then accumulate. Note that the backward will run on the same device as the forward, whatever the device of the Tensor on which you call .backward().
st48414
Then where does the imbalanced GPU usage come from? You mean loss.backward() is also parallel, right? I'm a little confused.
st48415
It depends if the loss is inside the DataParallel or not. If it is, then there won’t be any imbalance. If it is outside and just computed on one GPU, then this GPU will do a bit more work indeed.
st48416
"It depends if the loss is inside the DataParallel or not."
By inside the DataParallel, do you mean inside the forward function? But most of the time the forward function won't contain the loss computation, right? I'm also confused about whether the imbalance comes from loss.backward() or from loss = criterion(true, pred). Thank you for your patience!
st48417
The DataParallel takes a Module as input, so it can contain anything you want. And yes, what is executed is what is in the forward function of your Module.
The imbalance won't come from the loss.backward(), because it runs in the same place as the forward. So if the forward is balanced, the backward will be as well.
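A hedged sketch of what "loss inside the DataParallel" means in practice: wrap the model in a small module that computes the loss in its forward, so each replica returns its own per-replica loss instead of the full output. The wrapper name and the choice of criterion here are only illustrative.

import torch
import torch.nn as nn

class ModelWithLoss(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, x, target):
        out = self.model(x)
        # each replica computes its own loss, so the loss work is spread across GPUs
        return self.criterion(out, target).unsqueeze(0)

# wrapped = nn.DataParallel(ModelWithLoss(model).to(device))
# loss = wrapped(x, target).mean()
# loss.backward()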
st48418
Another weird problem with nn.DataParallel: in my main.py I put the model on the device

encoder = Encoder(protein_dim, hid_dim, n_layers, kernel_size, dropout)
decoder = Decoder(atom_dim, hid_dim, n_layers, n_heads, pf_dim, DecoderLayer, SelfAttention, PositionwiseFeedforward, dropout)
model = Predictor(encoder, decoder)
# model.load_state_dict(torch.load("output/model/lr=0.001,dropout=0.1,lr_decay=0.5"))
model = nn.DataParallel(model)
model.to(device)
trainer = Trainer(model, lr, weight_decay, scaler)
tester = Tester(model)
loss_train = trainer.train(train_dl, device=device)  # This line throws the error

But I got the following error:

assert all(map(lambda i: i.is_cuda, inputs))
AssertionError

I have tested all model.parameters() and the inputs in train():

def train(self, dataloader, device):
    self.model.train()
    if self.scaler is None:
        for i, data_pack in enumerate(dataloader):
            data_pack = to_cuda(data_pack, device=device)
            assert (all(map(lambda i: i.is_cuda, self.model.parameters())))
            assert (all(map(lambda i: i.is_cuda, data_pack)))
            loss, _, _ = self.model(data_pack)  # This line throws the error
            self.optimizer.zero_grad()
            loss.sum().backward()
            self.optimizer.step()

The results are all True, but I still get this error on the line loss, _, _ = self.model(data_pack). What happened? This is my forward function:

def forward(self, data):
    compound, adj, protein, correct_interaction, atom_num, protein_num = data
    # compound = [batch, atom_num, atom_dim]
    # adj = [batch, atom_num, atom_num]
    # protein = [batch, protein len, 100]
    compound_max_len = compound.shape[1]
    protein_max_len = protein.shape[1]
    compound_mask, protein_mask = self.make_masks(atom_num, protein_num, compound_max_len, protein_max_len)
    compound = self.gcn(compound, adj)
    # compound = torch.unsqueeze(compound, dim=0)
    # compound = [batch size=1, atom_num, atom_dim]
    # protein = torch.unsqueeze(protein, dim=0)
    # protein = [batch size=1, protein len, protein_dim]
    enc_src = self.encoder(protein)
    # enc_src = [batch size, protein len, hid dim]
    predicted_interaction = self.decoder(compound, enc_src, compound_mask, protein_mask)
    # out = [batch size, 2]
    # out = torch.squeeze(out, dim=0)
    loss = self.Loss(predicted_interaction, correct_interaction.view(-1, 1))
    return torch.unsqueeze(loss, 0), predicted_interaction.cpu().detach().view(-1, 1), correct_interaction.cpu().detach().view(-1, 1)

Thank you very much!!!
st48419
From the DataParallel doc, you should send your model to the device before wrapping it in DataParallel!
st48420
You mean

model = nn.DataParallel(model)  # Is this order wrong?
model.to(device)

model.to(device)  # Is this right?
model = nn.DataParallel(model)

But in the doc:
[image: screenshot of the nn.DataParallel documentation, 891x342]
BTW, where is the complete doc of nn.DataParallel??
st48421
Here: https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html#torch.nn.DataParallel
st48422
No. I just use the assert to verify, but all the inputs and parameters are on cuda. So I'm confused.
[two screenshots of the assert checks and the error output]
st48423
As far as I understand, nn.ReLU() is a layer that has weights and bias, whereas F.relu is just an activation function. Doesn't that make nn.ReLU() a bit more computationally heavy than F.relu, because the optimizer has to update the redundant weights and bias for that layer too?
st48424
Solved by albanD in post #2
Hi, nn.ReLU() is a layer, but it has no weights or bias. The two are exactly the same. The version as an nn.Module is convenient to be able to add it directly into an nn.Sequential() construct, for example. The functional version is useful when you write a custom forward and you just want to apply…
st48425
Hi,
nn.ReLU() is a layer, but it has no weights or bias. The two are exactly the same.
The version as an nn.Module is convenient to be able to add it directly into an nn.Sequential() construct, for example. The functional version is useful when you write a custom forward and you just want to apply a relu.
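A tiny sketch of the two equivalent styles, just for illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Module style: handy inside nn.Sequential
net = nn.Sequential(nn.Linear(10, 10), nn.ReLU())

# Functional style: handy inside a custom forward
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 10)

    def forward(self, x):
        return F.relu(self.fc(x))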
st48426
Be warned: for my search "nn.ReLU or F.relu?", Google shows the snippet "relu() is a layer that has weights and bias whereas F.relu is just an activation function" instead of @Huy_Ngo's answer, so someone who does not click through to the page will see a bad answer.
st48427
Dear all,
I have read many tutorials on how to use PyTorch to do regression over a data set, using for instance a model composed of several linear layers and the MSE loss.
Now imagine that a known function F, which depends on a variable x and some unknown parameters (p_j: j=0,…,P-1) with P relatively small, is a composition of special functions that cannot be modeled by the usual layers. So my problem is a classical minimization problem given the data {x_i, y_i}_{i<=N}:

C(p_0, p_1, ..., p_P) = Sum_i (F(x_i; {p_j}) - y_i)^2

The "loss" C({p_j}) that I want to minimize is the only function that I can call with a set of parameters. Is there a way to use the PyTorch minimizers to get the minimum of C({p_j})?
st48428
Hi, If F is implemented with pytorch Tensors and is differentiable. Yes you will be able to do it.
st48429
So, if I only have access to C({p_i}) through a python function, I cannot use the optimizers?
st48430
You don't have access to {p_i}? You can do:

params = [p_0, p_1, ...]
opt = optim.SGD(params, lr=0.1)
for e in range(10):
    loss = C(params)
    opt.zero_grad()
    loss.backward()
    opt.step()
st48431
Yes, I can feed numerical values to C (i.e. the parameters to determine). So, in your code snippet, params is a list of initial values, e.g. params = [0.1, 0.0001, -2., 1e3, ...], ok?
st48432
No, as I mentioned above, the function must work with pytorch Tensors. So params = torch.tensor([0.1, 0.0001, -2., 1e3, ...], requires_grad=True) (or a list of Tensors as in my example). Also, C must work with Tensors; if it converts them to python numbers or numpy arrays, gradients cannot be computed.
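Putting the pieces together, a minimal self-contained sketch; the F and the data here are made up for illustration, the real F and {x_i, y_i} are whatever your problem defines:

import torch

x = torch.linspace(0, 1, 100)
y = 2.0 * torch.exp(-3.0 * x)            # fake data generated from p0=2.0, p1=-3.0

def F(x, params):
    # hypothetical model: p0 * exp(p1 * x), built only from torch ops
    return params[0] * torch.exp(params[1] * x)

params = torch.tensor([1.0, -1.0], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.1)
for step in range(200):
    loss = ((F(x, params) - y) ** 2).sum()   # this is C({p_j})
    opt.zero_grad()
    loss.backward()
    opt.step()
print(params)   # should move toward [2.0, -3.0]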
st48433
Ah yes, I understand: in F or C, the parameters must appear as torch tensors. Ok.
st48434
Hi @albanD and @Jean-Eric_Campagne, I am facing the same problem here, but the parameters are not being updated although I used requires_grad=True.

def minimize():
    xi = torch.tensor([1e-3, 1e-3, 1e-3, 1e-3, 1e-3, 1e-3], requires_grad=True)
    optimizer = torch.optim.Adam([xi], lr=0.1)
    for i in range(400):
        loss = self.f(xi)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return xi

self.f(xi) is implemented with pytorch Tensors. Do you have any suggestions?
st48435
Hello, I guess the experts will ask you to provide a complete working snippet of your example. Try to keep it as simple as possible, i.e. no class, just functions (I see self.f, so I imagine that you have extracted the code from a class). Then expose your problem with some outputs, for instance. Then I can run your example and try to reproduce your problem.
st48436
I am using the Transformer module provided by PyTorch for training a model for text generation. I am using NLLLoss() for measuring the quality of reconstruction. After a certain number of iterations, the loss explodes and changes all weights to nan. This is a log generated by the training program:

root - WARNING - Loss: 203.81146240234375
root - WARNING - Loss: 124.32596588134766
root - WARNING - Loss: 62.59440612792969
root - WARNING - Loss: 59.84109115600586
root - WARNING - Loss: 59.247005462646484
root - WARNING - Loss: 48.832725524902344
root - WARNING - Loss: 57.592288970947266
root - WARNING - Loss: 50.18443298339844
root - WARNING - Loss: 46.474849700927734
root - WARNING - Loss: 52.12908172607422
root - WARNING - Loss: 50.090736389160156
root - WARNING - Loss: 66.04253387451172
root - WARNING - Loss: 49.094024658203125
root - WARNING - Loss: 36.69044494628906
root - WARNING - Loss: 48.54591369628906
root - WARNING - Loss: 60.71137237548828
root - WARNING - Loss: 40.35478591918945
root - WARNING - Loss: 49.070556640625
root - WARNING - Loss: 54.33742141723633
root - WARNING - Loss: 47.14014434814453
root - WARNING - Loss: 55.043060302734375
root - WARNING - Loss: 47.63726043701172
root - WARNING - Loss: 46.314571380615234
root - WARNING - Loss: 41.330291748046875
root - WARNING - Loss: 48.85242462158203
root - WARNING - Loss: 50.59345245361328
root - WARNING - Loss: 48.508975982666016
root - WARNING - Loss: 43.35681915283203
root - WARNING - Loss: 45.875431060791016
root - WARNING - Loss: 51.701438903808594
root - WARNING - Loss: 39.1783561706543
root - WARNING - Loss: 30.14274024963379
root - WARNING - Loss: 44.33928680419922
root - WARNING - Loss: 40.88005447387695
root - WARNING - Loss: 62.682804107666016
root - WARNING - Loss: 45.18329620361328
root - WARNING - Loss: 39.7137451171875
root - WARNING - Loss: 47.31813049316406
root - WARNING - Loss: 50.755348205566406
root - WARNING - Loss: 40.52918243408203
root - WARNING - Loss: 49.48160934448242
root - WARNING - Loss: 58.29778289794922
root - WARNING - Loss: 45.660675048828125
root - WARNING - Loss: 55.13115692138672
root - WARNING - Loss: 50.72150421142578
root - WARNING - Loss: 33.377098083496094
root - WARNING - Loss: 48.404151916503906
root - WARNING - Loss: 60.24494934082031
root - WARNING - Loss: 46.290470123291016
root - WARNING - Loss: 9.493173539216099e+24

As you can see, the loss goes down for some time as it should and then spikes up. I have tried using gradient clipping to mitigate the issue, but it did not solve the problem.

criterion_1 = nn.NLLLoss()
y_hat = model(X_train)
y_hat = y_hat.transpose(0, 1)
mask = (tgt != pad_idx).bool()
y_hat = nn.functional.log_softmax(y_hat, dim=-1)
cel = criterion_1(y_hat.reshape(-1, vocab_size), tgt.reshape(-1))
loss = cel.masked_select(mask.reshape(-1)).sum()
loss.backward()
torch.nn.utils.clip_grad_value_(model.parameters(), 100)
optimizer.step()

The above is the code I am using for calculating the loss.
st48437
Perhaps perfect predictors exist and training reaches a (1, 0, 0, …) state. y_hat = y_hat.clamp(-b, b) should solve that (with b around 10…20, applied before the softmax).
st48438
For some reason, clamping the predictions is causing the loss to increase after a certain point. This continues until some of the model weights become nan.
st48439
Actually, I suggested early clamping, and that's tricky with log_softmax. Post-log_softmax clamping to (-20., -1e-6) or an additional loss mask may work instead. Or it is something else; I'd place a breakpoint and inspect the problematic network output.
st48440
A few things before trying gradient clipping:
1. What does your input data look like? Make sure it's in the form you would expect. Sometimes unnormalized input can cause huge loss values.
2. What optimizer and lr are you using?
3. I am not sure if it's a good idea to sum the loss values before loss.backward():
Hari_Krishnan: loss = cel.masked_select(mask.reshape(-1)).sum()
st48441
The input data is in the shape (batch_size, max_len) and the output is in the shape (batch_size, max_len, vocab_size). I am using the Adam optimizer with an lr of 0.001. I tried training the model taking the mean loss instead of the sum, and I am still getting a spike in the loss.
st48442
I tried clamping the output post log_softmax; this is the log generated:

root - WARNING - Loss: 7753.49169921875
root - WARNING - Loss: 6186.9287109375
root - WARNING - Loss: 5434.07861328125
root - WARNING - Loss: 6422.82568359375
root - WARNING - Loss: 6344.4873046875
root - WARNING - Loss: 5779.78515625
root - WARNING - Loss: 5681.9140625
root - WARNING - Loss: 5288.10498046875
root - WARNING - Loss: 5314.443359375
root - WARNING - Loss: 4506.3115234375
root - WARNING - Loss: 5896.3134765625
root - WARNING - Loss: 6842.0830078125
root - WARNING - Loss: 9111.4599609375
root - WARNING - Loss: 7685.61328125
root - WARNING - Loss: 8802.61328125
root - WARNING - Loss: 11280.5126953125
root - WARNING - Loss: 14238.529296875
root - WARNING - Loss: 13673.314453125
root - WARNING - Loss: 13150.68359375
root - WARNING - Loss: 13360.0
root - WARNING - Loss: 13180.0
root - WARNING - Loss: nan

The NLLLoss becomes nan after a few batches.
st48443
A few things you can check:
1. Ensure that this output is like what you would expect (i.e. the scale is the same as tgt):
Hari_Krishnan: y_hat = nn.functional.log_softmax(y_hat, dim = -1)
2. I'm not sure what this part below is doing:
Hari_Krishnan: loss = cel.masked_select(mask.reshape(-1)).sum()
Instead, can this variable loss be removed and simply cel.backward() be used?
I usually go for criterion = nn.CrossEntropyLoss() to avoid confusion.
st48444
masked_select is for removing the loss corresponding to the <pad> token. I'm building an architecture similar to a variational autoencoder, which uses the log-likelihood for the loss, which is why I used NLLLoss over CrossEntropyLoss.
st48445
Then it is probably something else. If you use sampling from trainable distributions, the issue can be there. Generally, autograd.set_detect_anomaly(True) should show the problematic (NaN-inducing) part. As it slows down training, it is better to enable it late.
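For reference, a minimal sketch of turning it on (the tiny model here is only to make the snippet runnable):

import torch
import torch.nn as nn

torch.autograd.set_detect_anomaly(True)   # enable late, since it slows training down

model = nn.Linear(4, 2)
x = torch.randn(8, 4)
loss = model(x).sum()
loss.backward()   # a NaN-producing op in the backward would now raise with a traceback pointing at it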
st48446
Hi,
Let us say I have a simple model as follows:

auto sequential = nn::Sequential(
    nn::Linear(4, 4),
    nn::ReLU(),
    nn::GroupNorm(1, 4),
    nn::Linear(4, 4)
);
sequential.to(torch::kFloat16);
torch::load(sequential, "./sequential-fp16.pt");

Instead of defining the model (with dtype torch::kFloat32 by default) and converting it to torch::kFloat16, how do I go about defining it as a torch::kFloat16 model in the first place? I have a large model and would like to avoid a large (fp32) memory allocation before whittling it down to fp16 and loading. Thanks!
st48447
I'm adapting a 10-class classification model to an 11-class classification model where I add one extra class. As well as keeping the prior layer weights, I want to make use of as much of the final classifier weights as possible, so I do something like this:

def make_backbone(self, load=''):
    self.backbone = SFNetV1()
    if len(load):
        self.backbone.load(load)

    # replace fully connected layer
    if len(load):
        prior_weight = self.backbone.fc1.weight
        prior_bias = self.backbone.fc1.bias
        prior_in_features = self.backbone.fc1.in_features
        prior_out_features = self.backbone.fc1.out_features

    # add "empty" token
    self.backbone.fc1 = nn.Linear(prior_in_features, prior_out_features+1)
    print(self.backbone.fc1.weight.is_leaf)
    print(prior_weight.is_leaf)

    # reuse whatever weights and biases we still can
    if len(load):
        self.backbone.fc1.weight[:prior_out_features, :] = prior_weight
        self.backbone.fc1.bias[:prior_out_features] = prior_bias
    print(self.backbone.fc1.weight.is_leaf)

This outputs

True
True
False

So my problem is that last False. How do I "reset" self.backbone.fc1.weight to be a leaf node (and also the bias)?
Bonus side question: is there a better way to do what I'm trying to do?
st48448
You can wrap the assignment operation in a with torch.no_grad() block to make sure Autograd doesn't record this copy as a differentiable operation:

fc1 = nn.Linear(10, 10)
prior_weight = fc1.weight
prior_bias = fc1.bias
prior_in_features = fc1.in_features
prior_out_features = fc1.out_features

# add "empty" token
fc2 = nn.Linear(prior_in_features, prior_out_features+1)
print(fc1.weight.is_leaf)
print(prior_weight.is_leaf)

# reuse whatever weights and biases we still can
with torch.no_grad():
    fc2.weight[:prior_out_features, :] = prior_weight
    fc2.bias[:prior_out_features] = prior_bias

print(fc2.weight.is_leaf)

out = fc2(torch.randn(1, 10))
print(out.shape)
out.mean().backward()
print(fc2.weight.grad)
st48449
Hello everyone! I'm writing this post because I am having trouble implementing a neural network in PyTorch, being used to Keras. Considering that the input has dimension (6, 3, 1) (I am trying to work with time series forecasting), I would like to implement the following network:

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=2, activation='relu', input_shape=(3, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(50, activation='relu'))
model.add(Dense(1))

I tried with:

MyNet = nn.Sequential(
    nn.Conv1d(3, 1, 2),
    nn.MaxPool1d(2, stride=None),
    nn.Flatten(start_dim=1),
    nn.Linear(64, 50),
    nn.Linear(50, 1)
)

but it doesn't work. I believe the problem is related to Conv1d. Can you please explain my mistakes to me? Thank you!
st48450
Hi,
Something that I am sure is wrong is that in your Keras code you have 64 as the number of filters in the Conv layer, but I cannot see this number in the PyTorch equivalent. I think you need to replace it with nn.Conv1d(in_channels=6, out_channels=64, kernel_size=2). In PyTorch, we just need to specify the in/out channels (the number of input and output filters); the shapes will be captured by PyTorch itself.
Another issue is that you need to add nn.ReLU() between the two linear layers. Activation functions in PyTorch can be used like a separate layer.
Furthermore, I am not sure about your data, but you might need to check which dimension corresponds to the feature or temporal/spatial dims. For this, you can follow this thread: Understanding Convolution 1D output and Input - PyTorch Forums. The last 4 replies literally discuss a numerical example.
Bests
st48451
Hi, according to Understanding Convolution 1D output and Input, my input shape [6, 3, 1] corresponds to [batch_size, in_channel, len]. In addition, out_channel defines the number of kernels. As a consequence, MyNet should become:

MyNet = nn.Sequential(
    nn.Conv1d(in_channels=3, out_channels=64, kernel_size=2),
    nn.MaxPool1d(2),
    nn.Flatten(start_dim=1),
    nn.Linear(64, 50),
    nn.ReLU(),
    nn.Linear(50, 1)
)

Is it correct? It doesn't work; I get the following error:

Calculated padded input size per channel: (1). Kernel size: (2). Kernel size can't be greater than actual input size

Please, let me know what I am doing wrong!
st48452
You only have 6 samples? What I mean is that you need to define which dimension corresponds to the temporal dim. Based on your explanation, it seems you have only 1 timestamp with 3 features for each of the 6 samples. In this case, your input data has a length of 1, but you have defined kernel size = 2, so it's not valid. Can you elaborate on your data? Because if in_channel=3 and len=1, then you can only define kernel size = 1. In the reference I provided, notice the permute of channels where the temporal dimension is moved to the last position.
st48453
Besides what @Nikronic already explained, I think TF uses the channels_last format by default, so I assume the input shape corresponds to [batch_size=6, seq_len=3, channels=1] and has to be permuted to fit the expected input of nn.Conv1d, which is [batch_size, channels, seq_len]. Also, you are still missing another nn.ReLU after the conv layer.
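Concretely, a small sketch of that permute for the shapes above (the random data stands in for the real input):

import torch

x = torch.randn(6, 3, 1)    # [batch_size, seq_len=3, channels=1], as TF lays it out
x = x.permute(0, 2, 1)      # -> [6, 1, 3] = [batch_size, channels, seq_len] for nn.Conv1d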
st48454
@ptrblck, you are right about the input shape corresponding to [batch_size=6, seq_len=3, channels=1]! @Nikronic, yes, I only have 6 samples. I permuted the input so as to get one of dimensions (6, 1, 3) = [batch_size, channels, seq_len]. Yes, I was missing another nn.ReLU() after the Conv1d layer. I modified MyNet as follows:

f = nn.Sequential(
    nn.Conv1d(1, 64, 2),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Flatten(start_dim=1),
    nn.Linear(64, 50),
    nn.ReLU(),
    nn.Linear(50, 1)
)

Again, it does not work. I get the following error:

Given groups=1, weight of size [64, 1, 2], expected input[6, 6, 3] to have 1 channels, but got 6 channels instead

Please, can you give me any clue about what is wrong?
st48455
Mirage: "input[6, 6, 3]"
It says your input data is not [6, 1, 3] as you have mentioned. Your network architecture is correct for this input data. Literally, f is getting an input with shape [6, 6, 3]. Can you show how you use your inputs?
st48456
I start with X being the input and y being the ground truth:

print(X.shape)
for i in range(len(X)):
    print(X[i], y[i])

(6, 3, 1)
[[10] [20] [30]] 40
[[20] [30] [40]] 50
[[30] [40] [50]] 60
[[40] [50] [60]] 70
[[50] [60] [70]] 80
[[60] [70] [80]] 90

where [6, 3, 1] = [batch_size, seq_len, channels]. Then I permute the dimensions of X:

x_train = torch.FloatTensor(X).permute(0, 2, 1)
print(x_train.shape)
print(x_train)

torch.Size([6, 1, 3])
tensor([[[10., 20., 30.]],
        [[20., 30., 40.]],
        [[30., 40., 50.]],
        [[40., 50., 60.]],
        [[50., 60., 70.]],
        [[60., 70., 80.]]])

Then:

train = data.TensorDataset(x_train, y_train)
trainloader = data.DataLoader(train, batch_size=len(x_train), shuffle=False)

Is there something I'm missing?
st48457
Actually, no! Everything looks fine to me. I even ran your code and it works fine:

x = torch.tensor([[[10., 20., 30.]],
                  [[20., 30., 40.]],
                  [[30., 40., 50.]],
                  [[40., 50., 60.]],
                  [[50., 60., 70.]],
                  [[60., 70., 80.]]])
y = torch.ones(6, 1)
train = data.TensorDataset(x, y)
trainloader = data.DataLoader(train, batch_size=len(x), shuffle=False)

model = nn.Sequential(
    nn.Conv1d(1, 64, 2),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Flatten(start_dim=1),
    nn.Linear(64, 50),
    nn.ReLU(),
    nn.Linear(50, 1)
)

for batch in trainloader:
    x_batch, y_batch = batch
    print(x_batch.shape)
    print(model(x_batch).shape)

You may have missed something, like the way you use x after batching to feed it to the model? Can you show your train loop?
PS: when you set batch_size=len(your whole data), you will get only 1 batch which contains a tensor [6, 1, 3]. You need to set batch_size=1 if you want 6 tensors of shape [1, 1, 3].
st48458
When it comes to the train loop, first I define the following:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

class Learner(pl.LightningModule):
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.model(x)
        loss = nn.MSELoss()(y_hat, y)
        logs = {'train_loss': loss}
        return {'loss': loss, 'log': logs}

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters(), lr=0.005)

    def train_dataloader(self):
        return trainloader

Then I define the model in the following way:

f = nn.Sequential(
    nn.Conv1d(1, 64, 2),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Flatten(start_dim=1),
    nn.Linear(64, 50),
    nn.ReLU(),
    nn.Linear(50, 1)
)
model = NeuralDE(f, sensitivity='adjoint', solver='dopri5').to(device)

and finally:

learn = Learner(model)
trainer = pl.Trainer(min_epochs=200, max_epochs=300)
trainer.fit(learn)
st48459
Ok, you are using PyTorch Lightning, not PyTorch itself. PyTorch Lightning uses a different approach to achieve the same goal. Also, you are feeding your f, which so far I had assumed was the entire model, to another nn module called NeuralDE. This NeuralDE apparently uses a different structure for the model, and I don't know how it works. For instance, if you just define model = f, then your code should work, I think.
st48460
I believe the problem is related to NeuralDE, because I defined model = f as you suggested and it worked! Also, I succeeded in implementing a neural network with 2 inputs. It turns out I have to look more deeply into the NeuralDE nn module! Thank you for your time and patience!
st48461
I’m getting this error: The size of tensor a (200) must match the size of tensor b (3) at non-singleton dimension 1 What does it mean?
st48462
Please print the full stack trace for the error, and the line that causes it. But I think it's related to a mathematical operation where a dimension mismatch is happening, e.g. matrix multiplication of two matrices whose dimensions do not match.
st48463
The thing is that I am trying to implement a Neural ODE (NODE) in PyTorch. I'm following this example: https://github.com/DiffEqML/torchdyn/blob/master/tutorials/02_classification.ipynb
If you know a simple implementation of NODE, please share it with me! Regarding the error:

f = nn.Sequential(
    nn.Conv1d(3, 64, kernel_size=2),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Flatten(start_dim=1),
    nn.Linear(576, 50),
    nn.ReLU(),
    nn.Linear(50, ph)
)
model = NeuralDE(f, solver='rk4', sensitivity='autograd', s_span=torch.linspace(0, 1, 10))

learningRate = 0.01
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learningRate)

for epoch in range(180):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 200 == 199:
        #if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  #(epoch + 1, i + 1, running_loss / 2000))
                  (epoch + 1, i + 1, running_loss / 200))
            running_loss = 0.0

print('Finished Training')

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-27-15bbd244efe3> in <module>()
     10
     11     # forward + backward + optimize
---> 12     outputs = model(inputs)
     13     loss = criterion(outputs, labels)
     14     loss.backward()

6 frames
/usr/local/lib/python3.6/dist-packages/torchdiffeq/_impl/rk_common.py in rk4_alt_step_func(func, t, dt, y, k1)
     98     if k1 is None:
     99         k1 = func(t, y)
--> 100     k2 = func(t + dt * _one_third, y + dt * k1 * _one_third)
    101     k3 = func(t + dt * _two_thirds, y + dt * (k2 - k1 * _one_third))
    102     k4 = func(t + dt, y + dt * (k1 - k2 + k3))

RuntimeError: The size of tensor a (3) must match the size of tensor b (200) at non-singleton dimension 1

If you know any simple implementation of NODE, please share it. Thank you!
st48464
Hello, I submitted a 4-node task with 1 GPU on each node, but it exited with an exception. Some of the log information is as follows:

NCCL WARN Connect to 10.38.10.112<21724> failed : Connection refused

The strange thing is that none of the 4 nodes' IPs is 10.38.10.112<21724>. I don't know why it would try to connect to that IP and port. Besides, I have set NCCL_SOCKET_IFNAME to "^lo,docker".

self.dist_backend: nccl
self.dist_init_method: file:///home/storage15/huangying/tools/espnet/egs2/voxforge/asr1/vox.init
self.dist_world_size: 4
self.dist_rank: 1
auto allocate gpu device: 0
devices ids is 0

----------------------------------------------------details-------------------------------------------------------------

tj1-asr-train-v100-13:941227:943077 [0] NCCL INFO Call to connect returned Connection refused, retrying
tj1-asr-train-v100-13:941227:943077 [0] NCCL INFO Call to connect returned Connection refused, retrying
tj1-asr-train-v100-13:941227:943077 [0] NCCL INFO Call to connect returned Connection refused, retrying
tj1-asr-train-v100-13:941227:943077 [0] include/socket.h:390 NCCL WARN Connect to 10.38.10.112<21724> failed : Connection refused
tj1-asr-train-v100-13:941227:943077 [0] NCCL INFO bootstrap.cc:100 -> 2
tj1-asr-train-v100-13:941227:943077 [0] NCCL INFO bootstrap.cc:326 -> 2
tj1-asr-train-v100-13:941227:943077 [0] NCCL INFO init.cc:695 -> 2
tj1-asr-train-v100-13:941227:943077 [0] NCCL INFO init.cc:951 -> 2
tj1-asr-train-v100-13:941227:943077 [0] NCCL INFO misc/group.cc:69 -> 2 [Async thread]
/home/storage15/huangying/tools/anaconda3/envs/py36/lib/python3.6/site-packages/librosa/util/decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location. Import of 'jit' requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.
  from numba.decorators import jit as optional_jit
/home/storage15/huangying/tools/anaconda3/envs/py36/bin/python3 /home/storage15/huangying/tools/espnet/espnet2/bin/asr_train.py --use_preprocessor true --bpemodel none --token_type char --token_list data/token_list/char/tokens.txt --non_linguistic_symbols none --train_data_path_and_name_and_type dump/fbank_pitch/tr_en/feats.scp,speech,kaldi_ark --train_data_path_and_name_and_type dump/fbank_pitch/tr_en/text,text,text --valid_data_path_and_name_and_type dump/fbank_pitch/dt_en/feats.scp,speech,kaldi_ark --valid_data_path_and_name_and_type dump/fbank_pitch/dt_en/text,text,text --train_shape_file exp/asr_stats/train/speech_shape --train_shape_file exp/asr_stats/train/text_shape.char --valid_shape_file exp/asr_stats/valid/speech_shape --valid_shape_file exp/asr_stats/valid/text_shape.char --resume true --fold_length 800 --fold_length 150 --output_dir exp/asr_train_asr_transformer_fbank_pitch_char_normalize_confnorm_varsFalse --ngpu 1 --dist_init_method file:///home/storage15/huangying/tools/espnet/egs2/voxforge/asr1/vox.init --multiprocessing_distributed false --dist_launcher queue.pl --dist_world_size 4 --config conf/train_asr_transformer.yaml --input_size=83 --normalize=global_mvn --normalize_conf stats_file=exp/asr_stats/train/feats_stats.npz --normalize_conf norm_vars=False
Traceback (most recent call last):
  File "/home/storage15/huangying/tools/anaconda3/envs/py36/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/storage15/huangying/tools/anaconda3/envs/py36/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/storage15/huangying/tools/espnet/espnet2/bin/asr_train.py", line 23, in <module>
    main()
  File "/home/storage15/huangying/tools/espnet/espnet2/bin/asr_train.py", line 19, in main
    ASRTask.main(cmd=cmd)
  File "/home/storage15/huangying/tools/espnet/espnet2/tasks/abs_task.py", line 842, in main
    cls.main_worker(args)
  File "/home/storage15/huangying/tools/espnet/espnet2/tasks/abs_task.py", line 1174, in main_worker
    distributed_option=distributed_option,
  File "/home/storage15/huangying/tools/espnet/espnet2/train/trainer.py", line 163, in run
    else None
  File "/home/storage15/huangying/tools/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 303, in __init__
    self.broadcast_bucket_size)
  File "/home/storage15/huangying/tools/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 485, in _distributed_broadcast_coalesced
    dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:410, unhandled system error, NCCL version 2.4.8

Setting NCCL_SOCKET_IFNAME
Finished NCCL_SOCKET_IFNAME
^lo,docker
self.dist_backend: nccl
self.dist_init_method: file:///home/storage15/huangying/tools/espnet/egs2/voxforge/asr1/vox.init
self.dist_world_size: 4
self.dist_rank: 1
auto allocate gpu device: 0
devices ids is 0
# Accounting: time=107 threads=1
# Finished at Mon May 18 14:45:28 CST 2020 with status 1
st48465
Besides, if I use only 2 nodes, each with 1 GPU, it works well with the following log. But with 3 or 4 nodes, the above error occurs.

451 tj1-asr-train-v100-11:41449:41449 [6] NCCL INFO Bootstrap : Using [0]eth0:10.38.10.4<0>
452 tj1-asr-train-v100-11:41449:41449 [6] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
453 tj1-asr-train-v100-11:41449:41449 [6] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
454 tj1-asr-train-v100-11:41449:41449 [6] NCCL INFO NET/Socket : Using [0]eth0:10.38.10.4<0>
455 NCCL version 2.4.8+cuda10.0
456 tj1-asr-train-v100-11:41449:43711 [6] NCCL INFO Setting affinity for GPU 6 to 03ff,f0003fff
457 tj1-asr-train-v100-11:41449:43711 [6] NCCL INFO CUDA Dev 6[6], Socket NIC distance : PHB
458 tj1-asr-train-v100-11:41449:43711 [6] NCCL INFO Channel 00 : 0 1
459 tj1-asr-train-v100-11:41449:43711 [6] NCCL INFO Ring 00 : 1 -> 0 [receive] via NET/Socket/0
460 tj1-asr-train-v100-11:41449:43711 [6] NCCL INFO NET/Socket: Using 1 threads and 1 sockets per thread
461 tj1-asr-train-v100-11:41449:43711 [6] NCCL INFO Ring 00 : 0 -> 1 [send] via NET/Socket/0
462 tj1-asr-train-v100-11:41449:43711 [6] NCCL INFO Using 256 threads, Min Comp Cap 7, Trees disabled
463 tj1-asr-train-v100-11:41449:43711 [6] NCCL INFO comm 0x2b5210001cf0 rank 0 nranks 2 cudaDev 6 nvmlDev 6 - Init COMPLETE
464 tj1-asr-train-v100-11:41449:41449 [6] NCCL INFO Launch mode Parallel
465 [tj1-asr-train-v100-11:0/2] 2020-05-18 15:18:44,949 (trainer:201) INFO: 1/200epoch started
466 Setting NCCL_SOCKET_IFNAME
467 Finished NCCL_SOCKET_IFNAME
468 ^lo,docker
469 self.dist_backend: nccl
470 self.dist_init_method: file:///home/storage15/huangying/tools/espnet/egs2/voxforge/asr1/vox.init
471 self.dist_world_size: 2
472 self.dist_rank: 0
473 auto allocate gpu device: 6
474 devices ids is 6
st48466
This reminds me of a previous discussion here. Can you check in the program, immediately before init_process_group, whether the NCCL_SOCKET_IFNAME env var contains the correct value?
st48467
Below is copied from https://github.com/pytorch/pytorch/issues/38702, as we closed that issue and moved the discussion here.
"The strange thing is that none of the 4 nodes' IPs is 10.38.10.112<21724>. I don't know why it would try to connect to that IP and port."
Could you please check immediately before init_process_group in the code to confirm that MASTER_ADDR, MASTER_PORT, and NCCL_SOCKET_IFNAME are configured properly? Sometimes these can be different from what you set on the command line, especially when you are using a notebook. You can do so by calling os.getenv(). Sharing the code will also be helpful. If the code is confidential, we can start by sharing how you invoke init_process_group.
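A small sketch of that check; the expected values are whatever your launcher is supposed to set for the job:

import os

# Print these immediately before calling torch.distributed.init_process_group(...)
print("MASTER_ADDR:", os.getenv("MASTER_ADDR"))
print("MASTER_PORT:", os.getenv("MASTER_PORT"))
print("NCCL_SOCKET_IFNAME:", os.getenv("NCCL_SOCKET_IFNAME"))
# then call init_process_group exactly as your training script already does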