st49468
Hi, sorry for my delayed answer, but if it can help you, there is a new paper that may help with your work. The models are even available in the Transformers framework, although that work relates to QA, not text generation. Quick googling also turned up a pre-trained model on torch hub. In the end I kept LSTMs for my project, as my attempt with a transformer failed due to my incorrect usage of the transformer architecture.
st49469
I’m getting this warning, but only after the first epoch:

/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimizer when saving or loading the scheduler.
  warnings.warn(SAVE_STATE_WARNING, UserWarning)

I’m unable to understand this. What does this mean?
st49470
@fadetoblack Yeah, sure. Here are my model and dataset files: https://gist.github.com/saahiluppal/e1ffeac4c6c6c3da045cf07d8f8df37e

And here is how I run them:

model = ModelOne(config)
data = DataTypeOne(config)
trainer = pl.Trainer(gpus=1, max_epochs=config.epoch)
trainer.fit(model, data)
st49471
I had a similar discussion on huggingface: github issue. This happens when the optimizer state dict is accessed. See here: pytorch repo.
st49472
I tried to find a solution to this in other threads but cannot find a problem like mine. I am training a feed-forward NN and, once trained, save it using:

torch.save(model.state_dict(), model_name)

Then I get some more data points and want to retrain the model on the new set, so I load the model using:

model.load_state_dict(torch.load('file_with_model'))

When I start training the model again, the error increases a lot. To check whether the problem was the new points or the way I am loading the model, I saved a trained model and loaded it again to retrain over the same set of points. When doing this, the error on the very first epoch increases a lot with respect to the error of the trained model. Is this normal? Should I do anything more when loading a model for retraining? Thank you very much.
st49473
If you trained your model using Adam, you need to save the optimizer state dict as well and reload that. Also, if you used any learning rate decay, you need to reload the state of the scheduler because it gets reset if you don’t, and you may end up with a higher learning rate that will make the solution state oscillate. Finally, if you have any dropout or batch norm in your model architecture, and you saved your model after a test loop (in which case model.eval() was called), make sure to call model.train() before the training loop.
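For example, here is a minimal sketch of checkpointing all three states together (assuming an Adam optimizer and a StepLR scheduler already exist; the file name is arbitrary):

import torch

# assume these already exist, e.g.:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

# save all three state dicts together
torch.save({
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),
}, 'checkpoint.pth')

# later, restore all three before resuming training
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])
scheduler.load_state_dict(checkpoint['scheduler'])
model.train()  # make sure dropout/batch norm are back in training mode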
st49474
What @kevinzakka said. After saving using something like

state = {'epoch': epoch + 1,
         'state_dict': model.state_dict(),
         'optimizer': optimizer.state_dict(),
         'losslogger': losslogger}
torch.save(state, filename)

(losslogger is just something I use to keep track of the loss history; you can replace it with a tensorboard session or remove it)

…you can then re-load the model weights, the state of your optimizer and other things by calling something like

def load_checkpoint(model, optimizer, losslogger, filename='checkpoint.pth.tar'):
    # Note: the input model & optimizer should be pre-defined; this routine only updates their states.
    start_epoch = 0
    if os.path.isfile(filename):
        print("=> loading checkpoint '{}'".format(filename))
        checkpoint = torch.load(filename)
        start_epoch = checkpoint['epoch']
        model.load_state_dict(checkpoint['state_dict'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        losslogger = checkpoint['losslogger']
        print("=> loaded checkpoint '{}' (epoch {})".format(filename, checkpoint['epoch']))
    else:
        print("=> no checkpoint found at '{}'".format(filename))
    return model, optimizer, start_epoch, losslogger
st49475
Wait, uh oh. What I said is no longer true. This worked for me with earlier versions of PyTorch, but now in PyTorch 0.4 it has stopped working. It appears to work, but later when you’re training you get an error from the optimizer’s (Adam in my case) optimizer.step() method:

exp_avg.mul_(beta1).add_(1 - beta1, grad)
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 'other'

Can anyone describe how to load properly with 0.4, so this doesn’t happen? Does one need to do optimizer.to(device) now, or something like that? My code is the same, and everything is .cuda()'d before the model is saved, so I don’t see why it’s expecting a non-cuda tensor.

UPDATE: Found an answer in this issue. After you load from the checkpoint, when you move your model to cuda, you need to move the optimizer values as well, like so:

model, optimizer, start_epoch, losslogger = load_checkpoint(model, optimizer, losslogger)
model = model.to(device)
# now individually transfer the optimizer parts...
for state in optimizer.state.values():
    for k, v in state.items():
        if isinstance(v, torch.Tensor):
            state[k] = v.to(device)

This works. Is there a more elegant solution, @apaszke?
st49476
Ok, I see. I think my problem is that I change optimizers when a certain training error is reached (from Rprop to LBFGS, by the way), so when I retrain I start again with Rprop. I checked starting with LBFGS for retraining and the error seems to behave well. Thank you very much!
st49477
I see you also recover information from the optimizer. Is that because you use Adam? Does that apply to other optimizers in general?
st49478
It depends. Vanilla SGD doesn’t use previous states, so there’d be no point recovering optimizer info for it. I’d say, if restarting the optimizer isn’t having an adverse effect (e.g. you’re not noticing a giant jump when you restart), then you can get by without worrying about it.
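To illustrate (a small sketch; the Linear model is just a stand-in):

import torch
import torch.nn as nn

model = nn.Linear(2, 2)
opt_sgd = torch.optim.SGD(model.parameters(), lr=0.1)
opt_adam = torch.optim.Adam(model.parameters(), lr=0.1)

model(torch.rand(4, 2)).sum().backward()
opt_sgd.step()
opt_adam.step()

print(opt_sgd.state_dict()['state'])   # {}: vanilla SGD keeps no running state
print(opt_adam.state_dict()['state'])  # per-parameter step/exp_avg/exp_avg_sq buffers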
st49479
I don’t think there is an issue with using Scott’s old code. I don’t get any errors and the loss seems to pick up from where it left off.
st49480
What is losslogger? I notice that in my code there are

criterion = nn.CrossEntropyLoss()

and

loss = criterion(output, target)

I am wondering if the losslogger is criterion in my case. Thank you.
st49481
Hello everyone! I tried to retrain a model I’ve already trained myself for 7 epochs, using your method @drscotthawley. But with the code below, I get a decrease in accuracy at the 8th epoch. I don’t think this is normal and I don’t know where my error is.

PS: I used exactly the same lr_scheduler parameters for training my first 7 epochs and the following ones. Thanks!

model_ft = get_instance_segmentation_model(num_classes)

# construct an optimizer
params = [p for p in model_ft.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9, weight_decay=0.0005)

data_loader = torch.utils.data.DataLoader(
    dataset_train, batch_size=4, shuffle=True, num_workers=8,
    collate_fn=lambda x: tuple(zip(*x)))

data_loader_test = torch.utils.data.DataLoader(
    dataset_test, batch_size=2, shuffle=False, num_workers=8,
    collate_fn=lambda x: tuple(zip(*x)))

def load_checkpoint(model, optimizer, filename):
    # Note: the input model & optimizer should be pre-defined; this routine only updates their states.
    start_epoch = 0
    if os.path.isfile(filename):
        print("=> loading checkpoint '{}'".format(filename))
        checkpoint = torch.load(filename)
        start_epoch = checkpoint['epoch']
        model.load_state_dict(checkpoint['state_dict'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        print("=> loaded checkpoint '{}' (epoch {})".format(filename, checkpoint['epoch']))
    else:
        print("=> no checkpoint found at '{}'".format(filename))
    return model, optimizer, start_epoch

model_ft, optimizer, epoch = load_checkpoint(model_ft, optimizer, "path/my_model")
model_ft = model_ft.to(device)

lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

# now individually transfer the optimizer parts...
for state in optimizer.state.values():
    for k, v in state.items():
        if isinstance(v, torch.Tensor):
            state[k] = v.to(device)
st49482
Try if this works:

state_dict = model.state_dict()
checkpoint = torch.load(filename)
avoid = ['fc.weight', 'fc.bias']
for key in checkpoint.keys():
    if key in avoid or key not in state_dict.keys():
        continue
    if checkpoint[key].size() != state_dict[key].size():
        continue
    state_dict[key] = checkpoint[key]
model.load_state_dict(state_dict)
st49483
Hi, I am saving a SummaryWriter() object to save the loss history. It is showing this error: TypeError: cannot serialize '_io.BufferedWriter' object. Doesn’t torch.save() allow saving arbitrary Python objects? How can I resolve this error? Thanks.
st49484
I have an ANN model (for a classification task) below:

import torch
import torch.nn as nn

# Artificial neural net model which separates out categorical
# from continuous features, so that embeddings can be applied to
# the categorical features
class TabularModel(nn.Module):

    # Initialize parameters embeds, emb_drop, bn_cont and layers
    def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
        super().__init__()
        self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni, nf in emb_szs])
        self.emb_drop = nn.Dropout(p)
        self.bn_cont = nn.BatchNorm1d(n_cont)

        # Create empty list for each layer in the neural net
        layerlist = []
        # Number of all embedded columns for categorical features
        n_emb = sum((nf for ni, nf in emb_szs))
        # Number of inputs for the first layer
        n_in = n_emb + n_cont

        for i in layers:
            # Set the linear function for the weights and biases, wX + b
            layerlist.append(nn.Linear(n_in, i))
            # ReLU activation function
            layerlist.append(nn.ReLU(inplace=True))
            # Normalize the activation outputs
            layerlist.append(nn.BatchNorm1d(i))
            # Set some of the normalized activation outputs to zero
            layerlist.append(nn.Dropout(p))
            # Reassign number of inputs for the next layer
            n_in = i

        # Append last layer
        layerlist.append(nn.Linear(layers[-1], out_sz))
        # Create sequential layers
        self.layers = nn.Sequential(*layerlist)

    # Feedforward function
    def forward(self, x_cat_cont):
        x_cat = x_cat_cont[:, 0:cat_train.shape[1]].type(torch.int64)
        x_cont = x_cat_cont[:, cat_train.shape[1]:].type(torch.float32)

        # Create empty list for embedded categorical features
        embeddings = []
        # Embed categorical features
        for i, e in enumerate(self.embeds):
            embeddings.append(e(x_cat[:, i]))
        # Concatenate embedded categorical features
        x = torch.cat(embeddings, 1)
        # Apply dropout to embedded categorical features
        x = self.emb_drop(x)

        # Batch-normalize continuous features
        x_cont = self.bn_cont(x_cont)
        # Concatenate categorical and continuous features
        x = torch.cat([x, x_cont], 1)
        # Feed categorical and continuous features into the neural net layers
        x = self.layers(x)
        return x

I am trying to use this model with skorch’s GridSearchCV, as below:

from skorch import NeuralNetBinaryClassifier

# Random seed chosen to ensure results are reproducible by using the same
# initial random weights and biases, and applying dropout to the same
# random embedded categorical features and hidden-layer neurons
torch.manual_seed(0)

net = NeuralNetBinaryClassifier(module=TabularModel,
                                module__emb_szs=emb_szs,
                                module__n_cont=con_train.shape[1],
                                module__out_sz=2,
                                module__layers=[30],
                                module__p=0.0,
                                criterion=nn.CrossEntropyLoss,
                                criterion__weight=cls_wgt,
                                optimizer=torch.optim.Adam,
                                optimizer__lr=0.001,
                                max_epochs=150,
                                device='cuda')

from sklearn.model_selection import GridSearchCV

param_grid = {'module__layers': [[30], [50, 20]],
              'module__p': [0.0, 0.2, 0.4],
              'max_epochs': [150, 175, 200, 225]}

models = GridSearchCV(net, param_grid, scoring='roc_auc').fit(cat_con_train.cpu(), y_train.cpu())
models.best_params_

But I am getting the error message below (the FitFailedWarning is repeated for every train-test partition):

/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_validation.py:536: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details:
ValueError: Expected module output to have shape (n,) or (n, 1), got (128, 2) instead

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-86-c408d65e2435> in <module>()
     98
---> 99 models = GridSearchCV(net, param_grid, scoring='roc_auc').fit(cat_con_train.cpu(), y_train.cpu())
    100
    101 models.best_params_

11 frames
/usr/local/lib/python3.6/dist-packages/skorch/classifier.py in infer(self, x, **fit_params)
    303             raise ValueError(
    304                 "Expected module output to have shape (n,) or "
--> 305                 "(n, 1), got {} instead".format(tuple(y_infer.shape)))
    306
    307         y_infer = y_infer.reshape(-1)

ValueError: Expected module output to have shape (n,) or (n, 1), got (128, 2) instead

I am not sure what is wrong or how to fix this. Any help on this would really be appreciated. Many thanks in advance!
st49486
I guess the NeuralNetBinaryClassifier expects the output to have one logit, since it’s used for a binary use case. If you want to use two output units for a binary classification (which would be a multi-class classification with 2 classes), you would have to use another wrapper. I’m not deeply familiar with skorch, but I think NeuralNetClassifier might work. CC @ottonemo in case I’m wrong.
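For illustration, a sketch of the suggested swap, reusing the (hypothetical) names from the question above; only the wrapper class changes:

from skorch import NeuralNetClassifier

# The multi-class wrapper accepts a (n, 2) module output.
net = NeuralNetClassifier(module=TabularModel,
                          module__emb_szs=emb_szs,
                          module__n_cont=con_train.shape[1],
                          module__out_sz=2,
                          module__layers=[30],
                          module__p=0.0,
                          criterion=nn.CrossEntropyLoss,
                          criterion__weight=cls_wgt,
                          optimizer=torch.optim.Adam,
                          optimizer__lr=0.001,
                          max_epochs=150,
                          device='cuda')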
st49487
Hi @ptrblck, good pick up that my model has 2 output units! You are right and NeuralNetClassifier worked! As always many thanks for your help @ptrblck!
st49488
I am implementing my own version of DP-FCN (https://arxiv.org/abs/1707.06175), a deformable-parts version of R-FCN (https://arxiv.org/abs/1605.06409).

As part of the computation of ROI-sensitive pooling, I take in a feature map of size (38, 38, 7, 7, 21), where 7 is the grid size and 21 is the number of classes. I then need to project a region proposal onto the spatial component of the feature map, followed by position-sensitive pooling, where for each grid entry and each class I compute an average pooling of the corresponding grid location (with some displacements, as in deformable-parts architectures). ROIs are in the form (x_min, y_min, x_max, y_max).

def computeROISensitivePooling(feature_map, rois, stride=38):
    rois = torch.floor_divide(rois, stride)  # project the roi onto the feature map
    rois = rois.type(torch.int16)  # convert float bounding box coords to int
    grid_scores = self.getGridScores(rois, feature_map)
    return grid_scores

# Get the area to pool over for an roi in grid element i, j
def getBinRange(roi, i, j):
    roi_width = roi[2] - roi[0] + 1
    roi_height = roi[3] - roi[1] + 1
    x_min = torch.floor(i * torch.true_divide(roi_width, self.k)).type(torch.int16)
    x_max = torch.ceil((i + 1) * torch.true_divide(roi_width, self.k) - 1).type(torch.int16)
    y_min = torch.floor(j * torch.true_divide(roi_height, self.k)).type(torch.int16)
    y_max = torch.ceil((j + 1) * torch.true_divide(roi_height, self.k) - 1).type(torch.int16)
    return x_min, x_max, y_min, y_max  # these values are inclusive

def computeROIHeatMapScore(score_map, i, j, roi, class_num, bin_range, dx=0, dy=0):
    x_min = roi[0] + bin_range[0]
    y_min = roi[1] + bin_range[2]
    x_max = roi[0] + bin_range[1]
    y_max = roi[1] + bin_range[3]
    bin_width = x_max - x_min + 1
    bin_height = y_max - y_min + 1
    ROI_width = roi[2] - roi[0] + 1
    ROI_height = roi[3] - roi[1] + 1
    heat_map_score = torch.sum(score_map[x_min+dx:x_max+dx+1, y_min+dy:y_max+dy+1, i, j, class_num])
    heat_map_score /= float(bin_width * bin_height)
    heat_map_score -= computeDeformationCost(dx, dy, ROI_width, ROI_height)
    return (dx, dy), heat_map_score

def computeDeformationCost(self, dx, dy, ROI_width, ROI_height):
    return self.regularization_parameter * ((dx**2 / ROI_width) + (dy**2 / ROI_height))

def computeOptimalDisplacementAndHeatMapScore(score_map, i, j, roi, class_num,
                                              feature_map_width=38, feature_map_height=38):
    bin_x_min, bin_x_max, bin_y_min, bin_y_max = getBinRange(roi, i, j)
    bin_range = [bin_x_min, bin_x_max, bin_y_min, bin_y_max]
    heat_map_scores = torch.empty(5 * 5)  # 5 x displacements, 5 y displacements
    displacements = []
    counter = 0
    for dx in [0, 1, 2, 3, 4]:
        for dy in [0, 1, 2, 3, 4]:
            if bin_x_max + dx < feature_map_width - 1 and bin_y_max + dy < feature_map_height - 1:
                displacement, heat_map_score = computeROIHeatMapScore(score_map, i, j, roi,
                                                                      class_num, bin_range, dx, dy)
                displacements.append(displacement)
                heat_map_scores[counter] = heat_map_score
                counter += 1
    max_index = torch.argmax(heat_map_scores)
    return displacements[max_index], heat_map_scores[max_index]

The issue I am having is that my code takes up to 8 seconds for a single region proposal, and since the original author’s batch size is 1 image with 64 region proposals, this is clearly way too slow. I am wondering if there are any obvious optimizations that might speed up execution of my code. I would be happy to provide additional info or clarifications, as this is very important for me.
st49489
This question was updated; I elaborate in a following comment. I want to make an RNN where input(t) = output(t-1). In PyTorch’s RNNs, the time-series inputs are generally known from the beginning, but that isn’t the case here because each input depends on the previous output. So I think the only way to do this is with a for loop, but I also know that for loops in Python are slow, which is the reason I’m considering PyTorch. Are there any ideas? Thank you for reading.
st49490
Hi, it is not very clear from your question what kind of objects input and output are. But adding an extra entry at the beginning will do this: [None,] + output for a list, for example, or torch.cat([torch.tensor(whatever_is_the_right_size), output], 0) for tensors.
st49491
This is the algorithm I want to create: a one-layer RNN, called reservoir computing, as in “Recent advances in physical reservoir computing: A review” (linkinghub.elsevier.com). In my case, for example, I want the network to work like this. This is the input:

torch.tensor[x1(t), x2(t), ......, x50(t)]

And the output will be

torch.tensor[x1(t+1), x2(t+1), ......, x50(t+1)]

Then the next input should be

torch.tensor[x1(t+1), x2(t+1), ......, x50(t+1)]

(equal to the previous output). Once it is running there is no input from outside, so the following external inputs are given separately, as triggers to drive the network:

torch.tensor[x51(t), x52(t), x53(t)]

Thank you for your attention.
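In code, the closed-loop rollout I have in mind looks roughly like this (a sketch with placeholder names; nn.RNNCell stands in for the reservoir):

import torch
import torch.nn as nn

cell = nn.RNNCell(input_size=50, hidden_size=50)  # stand-in for the reservoir
h = torch.zeros(1, 50)
x = torch.rand(1, 50)  # x(t): the initial external input

outputs = []
for t in range(100):   # closed-loop rollout
    h = cell(x, h)
    x = h              # input(t+1) = output(t)
    outputs.append(h)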
st49492
Hi all, I’m new to PyTorch and I’m using a CNN for classification. Each input sample is 192×288 with 12 channels, so each sample is around 2 MB. I noticed there are some discussions about loading data lazily, so I tried the following dataset:

class FileDataset(Dataset):
    def __init__(self):
        super(FileDataset, self).__init__()
        self.Path = 'test_dataset/whole/'
        self.pos_files = os.listdir(self.Path + 'positive')
        self.p_files = [os.path.join(self.Path + 'positive', i) for i in self.pos_files]
        self.neg_files = os.listdir(self.Path + 'negative')
        self.n_files = [os.path.join(self.Path + 'negative', i) for i in self.neg_files]
        self.files = self.n_files + self.p_files

    def __len__(self):
        return len(self.files)

    def __getitem__(self, item):
        path = self.files[item]
        x = np.load(path)
        x_t = torch.from_numpy(x)
        return x_t

The data loading speed in DataLoader is very slow; GPU and CPU usage are both low. I have also tried memmap in NumPy and HDF5. Still, the speed is not acceptable. The code is something like the following:

class MmapDataset(Dataset):
    def __init__(self, ens, train=True):
        super(MmapDataset, self).__init__()
        if train:
            self.x = np.memmap('large_test' + ens, mode='r', shape=(9760, 12, 192, 288), dtype='float32')
            self.y = np.load('data/classification/Q850_train_y' + ens + '.npy')
        else:
            self.x = np.memmap('large_test_val' + ens, mode='r', shape=(610, 12, 192, 288), dtype='float32')
            self.y = np.load('data/classification/Q850_test_y' + ens + '.npy')

    def __getitem__(self, item):
        x = self.x[item]
        x = torch.from_numpy(x)
        y = self.y[item]
        return x, y

    def __len__(self):
        return self.x.shape[0]

I tried creating one huge memmap/HDF5 file, and also several smaller files combined with ConcatDataset; the results are similar. Does anyone have any ideas for potential improvements? Thanks in advance!
st49493
Hi, you might want to use torch.save/torch.load directly to reduce intermediary steps in the loading. Also, adding more workers to the DataLoader will help load things faster. Finally, make sure to use an SSD if possible, as it makes a huge difference compared to a spinning disk.
st49494
Hi, thanks for your reply. Do you mean I should save the data with torch.save instead of as NumPy files? I tried different numbers of workers, but it is still slow.
st49495
Yes, you can save it in torch format to make sure you don’t need the extra hop from numpy to torch when loading. Reading many small files from disk is slow (especially for spinning disks); there is no way around that, I’m afraid. You can increase the number of workers until it starts slowing down. But beyond that there isn’t much you can do if your dataset doesn’t fit in memory.
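For example, a rough sketch of both suggestions (file names taken from the question above, otherwise hypothetical): a one-off conversion from .npy to .pt, then a loader with several workers:

import glob
import numpy as np
import torch
from torch.utils.data import DataLoader

# One-off conversion: store each sample in torch format, so that
# __getitem__ can torch.load() it directly without the numpy -> torch hop.
for path in glob.glob('test_dataset/whole/positive/*.npy'):
    x = torch.from_numpy(np.load(path))
    torch.save(x, path.replace('.npy', '.pt'))

# In the Dataset, __getitem__ then becomes: return torch.load(self.files[item]).
# With several workers, samples are loaded in parallel processes:
# loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=8, pin_memory=True)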
st49496
I have a tensor t of shape (38, 38, 7, 7, 21) -> (x, y, i, j, c). Suppose I have x_min, x_max, y_min, y_max, two values i, j, and a class number c. I want to sum all elements in the range (x_min:x_max, y_min:y_max, i, j, c). I am thinking of something like

torch.sum(t[x_min:x_max, y_min:y_max, i:i+1, j:j+1, c:c+1])

but this would not work if i = 6, for instance, so I would need if statements. Is there a cleaner way of achieving this?
st49497
JamesDickens: if i = 6 for instance, so I would need if statements.

Why wouldn’t it work? I think selecting the subset you care about and calling sum is the right thing to do here. Note that you don’t need to give ranges for dimensions where you want a single value:

torch.sum(t[x_min:x_max, y_min:y_max, i, j, c])

but that will give the same result.
st49498
Hi, I am training a seq2seq RNN model, and I keep getting CUDNN_STATUS_EXECUTION_FAILED errors. I’ve looked through numerous threads and many solutions, and none of them apply here. I’ve checked that the GPU memory and the RAM are not running out, I’ve triple checked my CUDA and CUDNN versions are good, and I still get this error. The strange thing is that when I use torch.autograd.detect_anomaly(), the problem goes away. It trains much slower because of all the other debugging things that anomaly detection does, but strangely enough it also takes care of my error. What does anomaly detection do that could remedy this?
st49499
Could you rerun your code via: CUDA_LAUNCH_BLOCKING=1 python script.py args and post the stack trace here, please?
st49500
Thanks for the response. I’ve used that and it now doesn’t give an error. I’ve been running it for two days now, and it still doesn’t throw an error anymore. It does however take much longer to run, though I assume that’s the point.
st49501
@ptrblck Do you have any ideas what could be causing this issue? Could it possibly be a memory issue? If so, I’m not sure why I can fit a much larger batch size when I use CUDA_LAUNCH_BLOCKING.
st49502
I don’t know what could cause this issue. Could you post an executable code snippet to reproduce this issue, so that we could debug it?
st49503
@ptrblck Unfortunately my project is large and deeply ingrained in a library I wrote, so it will take time to write a minimal reproduction. Here is the stack trace I get without CUDA_LAUNCH_BLOCKING:

Traceback (most recent call last):
  File "train.py", line 142, in <module>
    if __name__ == "__main__": main()
  File "train.py", line 74, in main
    train_epoch(model, train_dataset, test_dataset, optimizer, epoch, eval_every=eval_every)
  File "train.py", line 113, in train_epoch
    loss.backward()
  File "C:\Users\Joe Fioti\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\Joe Fioti\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\autograd\__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
Exception raised from _cudnn_rnn_backward_input at ..\aten\src\ATen\native\cudnn\RNN.cpp:923 (most recent call first):
00007FF997E675A200007FF997E67540 c10.dll!c10::Error::Error [<unknown file> @ <unknown line number>]
00007FF94242529600007FF9424251E0 torch_cuda.dll!at::native::Descriptor<cudnnRNNStruct,&cudnnCreateRNNDescriptor,&cudnnDestroyRNNDescriptor>::Descriptor<cudnnRNNStruct,&cudnnCreateRNNDescriptor,&cudnnDestroyRNNDescriptor> [<unknown file> @ <unknown line number>]
00007FF94243C11B00007FF942439AD0 torch_cuda.dll!at::native::_cudnn_rnn_backward [<unknown file> @ <unknown line number>]
00007FF94243A03000007FF942439AD0 torch_cuda.dll!at::native::_cudnn_rnn_backward [<unknown file> @ <unknown line number>]
00007FF942492BA800007FF94244E400 torch_cuda.dll!at::native::set_storage_cuda_ [<unknown file> @ <unknown line number>]
00007FF9424A13DD00007FF94244E400 torch_cuda.dll!at::native::set_storage_cuda_ [<unknown file> @ <unknown line number>]
00007FF93A51BBF100007FF93A48D9D0 torch_cpu.dll!at::native::mkldnn_sigmoid_ [<unknown file> @ <unknown line number>]
00007FF93A56B9DA00007FF93A568FA0 torch_cpu.dll!at::bucketize_out [<unknown file> @ <unknown line number>]
00007FF93A552ECA00007FF93A552D40 torch_cpu.dll!at::_cudnn_rnn_backward [<unknown file> @ <unknown line number>]
00007FF93B85088900007FF93B80E010 torch_cpu.dll!torch::autograd::GraphRoot::apply [<unknown file> @ <unknown line number>]
00007FF93B85D12D00007FF93B80E010 torch_cpu.dll!torch::autograd::GraphRoot::apply [<unknown file> @ <unknown line number>]
00007FF93A51BBF100007FF93A48D9D0 torch_cpu.dll!at::native::mkldnn_sigmoid_ [<unknown file> @ <unknown line number>]
00007FF93A56B9DA00007FF93A568FA0 torch_cpu.dll!at::bucketize_out [<unknown file> @ <unknown line number>]
00007FF93A552ECA00007FF93A552D40 torch_cpu.dll!at::_cudnn_rnn_backward [<unknown file> @ <unknown line number>]
00007FF93B75C12D00007FF93B75BAF0 torch_cpu.dll!torch::autograd::generated::CudnnRnnBackward::apply [<unknown file> @ <unknown line number>]
00007FF93B747E9100007FF93B747B50 torch_cpu.dll!torch::autograd::Node::operator() [<unknown file> @ <unknown line number>]
00007FF93BCAF9BA00007FF93BCAF300 torch_cpu.dll!torch::autograd::Engine::add_thread_pool_task [<unknown file> @ <unknown line number>]
00007FF93BCB03AD00007FF93BCAFFD0 torch_cpu.dll!torch::autograd::Engine::evaluate_function [<unknown file> @ <unknown line number>]
00007FF93BCB4FE200007FF93BCB4CA0 torch_cpu.dll!torch::autograd::Engine::thread_main [<unknown file> @ <unknown line number>]
00007FF93BCB4C4100007FF93BCB4BC0 torch_cpu.dll!torch::autograd::Engine::thread_init [<unknown file> @ <unknown line number>]
00007FF97E3D08B700007FF97E3A9F90 torch_python.dll!THPShortStorage_New [<unknown file> @ <unknown line number>]
00007FF93BCABF1400007FF93BCAB780 torch_cpu.dll!torch::autograd::Engine::get_base_engine [<unknown file> @ <unknown line number>]
00007FF9C86E0E8200007FF9C86E0D40 ucrtbase.dll!beginthreadex [<unknown file> @ <unknown line number>]
00007FF9CA477BD400007FF9CA477BC0 KERNEL32.DLL!BaseThreadInitThunk [<unknown file> @ <unknown line number>]
00007FF9CA82CE5100007FF9CA82CE30 ntdll.dll!RtlUserThreadStart [<unknown file> @ <unknown line number>]

This refers to a line in the RNN.cpp file with this:

auto datatype = getCudnnDataType(input);

Not sure if this helps or not.
st49504
I can’t remember seeing a similar issue pointing to this line of code, so I would need to reproduce it in order to debug it properly. Are you trying to use the cudnn RNN in eval() mode during training? Also, are you using the latest PyTorch version?
st49505
@ptrblck I am using nn.GRU modules in multiple places throughout the model, in both training and eval modes. I am packing the input before I pass it through, and unpacking it after. I am using PyTorch 1.6.0 with CUDA 10.1 on Windows.
st49506
Hello all, I have a model and a dataset class. On a fresh boot, the system uses 1.8 GB of RAM with no process running. The dataset class, upon initialization, takes up an additional ~2.5 GB as it stores some variables for further reference. I instantiate the model class and pass it to the training function, which looks as follows:

def trainer(model, train_dataloader, val_dataloader, num_epochs):
    torch.backends.cudnn.benchmark = True
    model.train()
    model.cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.00009)
    criterion = nn.CrossEntropyLoss().cuda()

    epoch_loss_train = 0
    epoch_acc_train = 0
    for _, (image, label) in enumerate(train_dataloader):
        optimizer.zero_grad()
        image = image.cuda()
        label = label.cuda()
        output = model(image)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()
        del image, label

As seen above, the model is moved to the GPU, and the dataloader returns the images and labels, which are also moved to the GPU. The dataloader runs on a single thread and the system monitor reflects that. Once the training loop starts, around 2.8 GB of GPU memory is utilized. However, RAM usage climbs to 7 GB. I was wondering where this additional ~2.5 GB is coming from?
st49508
Hi, can you try checking the RAM usage if you just do a simple CUDA op like torch.rand(10, device="cuda")? The CPU-side memory usage of the CUDA driver is known to be very large.
st49509
Damn, you are right: my RAM usage shot up from 3.1 GB to 5.2 GB. So this is where the extra 2.1 GB comes from.
st49510
Hi, I’m using PyTorch 1.5.1. I’m having a problem using some functions that exist in the documentation, but I get the error 'torch' has no attribute. For example, the code provided in the PyTorch docs doesn’t work:

>>> import torch
>>> x = torch.randn(4, 2)
>>> torch.view_as_complex(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'torch' has no attribute 'view_as_complex'

How can I solve this problem? Thank you in advance!
st49512
It appears that this function is only available in torch 1.6. Is there a way to use it in an earlier version of PyTorch?
st49513
Hi, I am afraid this is not something that can easily be backported to older versions of pytorch as they don’t have complex support. You will have to use 1.6+ I’m afraid
st49514
I have written the following code to mimic tf.scan:

def scan(foo, x):
    res = []
    res.append(x[0].unsqueeze(0))
    a_ = x[0].clone()
    for i in range(1, len(x)):
        a_ = foo(a_, x[i])
        res.append(a_.unsqueeze(0))
    return torch.cat(res)

It generates the desired output for a number of examples. My only question is whether the append and torch.cat parts of this break the backpropagation computations.
st49515
Hi Blade!

blade: My only question is if the append and torch.cat part in this work break the backpropagation computations.

No, backpropagation will not be broken by append(). Even though you are appending to a python list, you’re still (presumably) appending a valid pytorch tensor, and cat() is a valid pytorch tensor operation. (Of course, something in foo() might break backpropagation.)

Here is a simple script that illustrates backpropagating through append() and cat():

import torch
torch.__version__

t1 = torch.autograd.Variable(torch.FloatTensor([1.0]), requires_grad=True)
t2 = torch.autograd.Variable(torch.FloatTensor([2.0]), requires_grad=True)
l = []
l.append(t1)
l.append(t2)
t = torch.cat(l)
t.prod().backward()
t1.grad
t2.grad

And here is its output:

>>> import torch
>>> torch.__version__
'0.3.0b0+591e73e'
>>>
>>> t1 = torch.autograd.Variable(torch.FloatTensor([1.0]), requires_grad=True)
>>> t2 = torch.autograd.Variable(torch.FloatTensor([2.0]), requires_grad=True)
>>> l = []
>>> l.append(t1)
>>> l.append(t2)
>>> t = torch.cat(l)
>>> t.prod().backward()
>>> t1.grad
Variable containing:
 2
[torch.FloatTensor of size 1]
>>> t2.grad
Variable containing:
 1
[torch.FloatTensor of size 1]

Best.
K. Frank
st49516
I would like to connect each nn.Module in a model to its named parent.

class AddBlock(nn.Module):
    def forward(self, x, y):
        return x + y

class multi_inp(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 32, kernel_size=3)
        self.add = AddBlock()

    def forward(self, x, y):
        return self.conv(self.add(x, y))

a = torch.rand(3, 128, 128)
b = torch.rand(3, 128, 128)
model = multi_inp()

Using the following:

for n, m in model.named_modules():
    if n:
        print(n, m)

prints:

conv Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
add AddBlock()

I would also like to get another column stating a list of named parent modules for each node:

module-name | parents | module
=========== | ======= | ======
conv        | None    | Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
add         | [conv]  | AddBlock()
st49517
“…I would also like to get another column…” - just for clarity, could you type the full output (for the code above) you are expecting?
st49518
Please see if this helps:

# Variables
level = 0
strParent = ''
lstModules = list(model.named_modules())[1:]

# Initial print statements
print('{0:<20}'.format('Module Name'), '|', '{0:<10}'.format('Parent'), '|', '{0:<10}'.format('Module'))
print('{0:<20}'.format('==========='), '|', '{0:<10}'.format('======'), '|', '{0:<10}'.format('======'))

# Loop through the modules
for i in range(len(lstModules)):
    if level == 0:
        strParent = 'None'
    module_name = lstModules[i][0]
    parent_name = strParent
    module = str(lstModules[i][1])
    print('{0:<20}'.format(module_name), '|', '{0:<10}'.format(parent_name), '|', '{0:<100}'.format(module))
    strParent = lstModules[i][0]
    level = 1
st49519
Thanks @KarthikR. Your snippet works fine for hierarchical architectures where all children are structured properly below their parents. I am looking for a way to list all parents that would also work for complex scenarios such as ResNets, where the connection scheme has multiple parents in non-sequential order. Try:

import torchvision.models as models
model = models.resnet18()

I think this would require inspecting the actual flow of tensors through the net.
st49520
Problem: Recently I ran into an error when I tried to convert an array into a float tensor using torch.tensor(arr, dtype=torch.FloatTensor):

TypeError: tensor(): argument 'dtype' must be torch.dtype, not torch.tensortype

I checked the documentation and it turns out I did not have a correct understanding of the data types. I previously thought torch.float was equivalent to torch.FloatTensor. However, I still do not get the difference between the two, nor the rationale behind this design, which appears redundant at first glance. Maybe torch.FloatTensor depends on a particular storage type whereas torch.float does not?
st49521
Hi, dtype is a data type, like torch.float or torch.double. tensortype is a type of tensor, like torch.FloatTensor or torch.DoubleTensor.
st49522
The two lines below do the same thing; you can either explicitly provide the data type or use that type of tensor:

float_tensor = torch.tensor([4, 5, 6], dtype=torch.float)
float_tensor = torch.FloatTensor([4, 5, 6])
st49523
Assuming I do not need to do backprop, is it possible to do a forward pass faster? I understand that it defeats the purpose of creating a computational graph and having an autograd engine. I’m not a computer scientist, so I just want to know whether it’s possible, for example while running in model.eval mode. Is it viable to have a different algorithm for the forward pass when backprop is not required?
st49524
You can wrap the forward pass in with torch.no_grad() to avoid storing the intermediate activations and creating the computation graph. However, this shouldn’t change the operations used. If you are using a GPU, you could set torch.backends.cudnn.benchmark = True to let cudnn benchmark all workloads and select the fastest algorithm. Note that the first iteration will incur an overhead due to the profiling.
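A minimal sketch combining both suggestions (the tiny Sequential model is just a placeholder):

import torch
import torch.nn as nn

torch.backends.cudnn.benchmark = True  # GPU only: let cudnn profile and pick the fastest kernels

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(0.5))
model.eval()              # inference behavior for dropout/batch norm
with torch.no_grad():     # no graph construction, no stored activations
    out = model(torch.rand(4, 10))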
st49525
I have a tensor of shape z = (38, 38, 7, 7, 21) = (x_pos, y_pos, grid_i, grid_j, class_num), and I wish to normalize it by the L2 norm over the class dimension:

score(x, y, i, j, c) <- score(x, y, i, j, c) / sqrt(sum over c' of score(x, y, i, j, c')^2)

I have produced a working example of what I mean below. The problem is that it is extremely slow: approximately 2-3 seconds for each grid entry (of which there are 49, so 49*3 = 147 seconds), which is way too long considering I need to do this with thousands of image feature maps. Any optimizations or obvious problems would be very much appreciated. This is part of a PyTorch convolutional neural network architecture, so I am using torch tensors and tensor ops.

import torch

def normalizeScoreMap(score_map):
    for grid_i in range(7):
        for grid_j in range(7):
            for x in range(38):
                for y in range(38):
                    grid_sum = torch.tensor(0.0).cuda()
                    for class_num in range(21):
                        grid_sum += torch.pow(score_map[x][y][grid_i][grid_j][class_num], 2)
                    grid_normalizer = torch.sqrt(grid_sum)
                    for class_num in range(21):
                        score_map[x][y][grid_i][grid_j][class_num] /= grid_normalizer
    return score_map

random_score_map = torch.rand(38, 38, 7, 7, 21).cuda()
score_map = normalizeScoreMap(random_score_map)

Edit: for reference, I have an i9-9900K CPU and an Nvidia 2080 GPU, so my hardware is quite good. I would be willing to try multi-threading, but I am looking for more obvious problems/optimizations first.
st49527
This should work:

x = random_score_map.clone()
s = (x**2).sum(4, keepdim=True)
n = torch.sqrt(s)
x /= n

print(torch.allclose(x, score_map))
> True

Note that you should avoid for loops where possible and try to use vectorized code instead.
st49528
1. Could you tell me how I can have multiple convolution layers share the same weights? For example, if I want conv2, conv3 and conv4 to use the weights of conv1:

self.conv1 = nn.Conv2d(3, 3, 3)
self.conv2 = nn.Conv2d(3, 3, 3)
self.conv3 = nn.Conv2d(3, 3, 3)
self.conv4 = nn.Conv2d(3, 3, 3)

2. May I ask whether dilated convolution is implemented by generating a convolution kernel with dilation rate r, as in the original paper, or by periodically sampling the feature map at equal intervals, convolving with an ordinary kernel, and then resampling back to the original feature map size (as mentioned in “Smoothed Dilated Convolutions for Improved Dense Prediction”)?

Thank you in advance!
st49529
I am trying to run a simple benchmark script, but it fails due to a CUDA error, which leads to another error:

Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

Traceback (most recent call last):
  File "/home/cbarkhof/code-thesis/Experimentation/Benchmarking/benchmarking-models-claartje.py", line 23, in <module>
    benchmark.run()
  File "/home/cbarkhof/.local/lib/python3.6/site-packages/transformers/benchmark/benchmark_utils.py", line 674, in run
    memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
ValueError: too many values to unpack (expected 2)

My script is simply:

from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

benchmark_args = PyTorchBenchmarkArguments(models=["bert-base-uncased"],
                                           batch_sizes=[8],
                                           sequence_lengths=[8, 32, 128, 512],
                                           save_to_csv=True,
                                           log_filename='log',
                                           env_info_csv_file='env_info')
benchmark = PyTorchBenchmark(benchmark_args)
benchmark.run()

I am not aware of doing any multi-processing, so why is this happening? If anyone can point me to why this might be happening, please let me know :). Cheers!
st49530
I am trying to understand what happens when I have a 2d convolutional layer with the following parameters:

conv = nn.Conv2d(3, 3, kernel_size=1, stride=1, padding=0, bias=False, dilation=2)

Is this just the same as a regular 1x1 2d convolution? I’m not sure what effect the dilation would have. I am asking because, in the paper describing the à-trous modification of a ResNet-101 backbone in R-FCN, the authors write:

All layers before and on the conv4 stage [9] (stride=16) are unchanged; the stride=2 operations in the first conv5 block is modified to have stride=1, and all convolutional filters on the conv5 stage are modified by the “hole algorithm” [15, 2] (“Algorithme à trous”) to compensate for the reduced stride.
st49532
When kernel_size=1, dilation has no effect. See this page for a visualization of dilation: with only 1 pixel as input, it should not change a thing.
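A quick sketch to verify this numerically (arbitrary names; both layers share the same 1x1 weights):

import torch
import torch.nn as nn

conv_d1 = nn.Conv2d(3, 3, kernel_size=1, dilation=1, bias=False)
conv_d2 = nn.Conv2d(3, 3, kernel_size=1, dilation=2, bias=False)
conv_d2.weight = conv_d1.weight  # share the same 1x1 weights

x = torch.rand(1, 3, 8, 8)
print(torch.allclose(conv_d1(x), conv_d2(x)))  # True: dilation is irrelevant for 1x1 kernels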
st49533
(Screenshot, 1174×598: the output directory contains an events... file and a hyperparameter file.)

Aren’t the model and its weights automatically saved in PyTorch Lightning?
st49535
The events... file should be created by tensorboard, while the second file seems to contain some hyperparameters, not the state_dict. Since I’m not deeply familiar with Lightning, CC @williamFalcon.
st49536
Hi, I am new to PyTorch. Is model.train() the same as model for training? They seem the same to me; am I correct? Thanks.

RNN model:

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        ...
        ... = dropout(...)

    def forward(self, x):
        ...

rnn = RNN(input_size, hidden_size, num_layers, num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adagrad(...)

# train
...
# test
model.eval()
st49538
Yes, they are the same. By default all modules are initialized to train mode (self.training = True). Also be aware that some layers have different behavior during training and evaluation (like BatchNorm and Dropout), so setting it matters. As a rule of thumb for programming in general, try to explicitly state your intent and set model.train() and model.eval() when necessary.
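For instance, a small sketch showing the flag and the behavioral difference for dropout:

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
print(drop.training)   # True: modules start in train mode

x = torch.ones(1, 10)
print(drop(x))         # train mode: ~half the entries zeroed, the rest scaled to 2.0

drop.eval()
print(drop(x))         # eval mode: identity, all ones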
st49539
Why do model.train() and model.eval() return a reference to the model? What is the intended usage of the return value? I am using it as follows:

model.train()

But this means that in a Jupyter notebook it outputs the model object repr, which is unwanted:

LeNet(
  (m): Sequential(
    (0): Sequential(
      (0): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
      (1): BatchNorm2d(20, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace)
      (3): MaxPool2d(kernel_size=3, stride=3, padding=0, dilation=1, ceil_mode=False)
      (4): Dropout(p=0.25)
    )
    (1): Sequential(
      (0): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
      (1): BatchNorm2d(50, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace)
      (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (4): Dropout(p=0.25)
    )
    (2): View()
    (3): Linear(in_features=200, out_features=500, bias=True)
    (4): BatchNorm1d(500, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): ReLU(inplace)
    (6): Dropout(p=0.25)
    (7): Linear(in_features=500, out_features=10, bias=True)
  )
)
st49540
To avoid this you could simply do

model = model.train()

and the same should work for eval(). Since the reference is assigned to a variable, it will not be printed any more.
st49541
Thanks, I can see how that solves the problem. But I don’t understand why it was set up this way. Usually if you are trying to change an attribute or a setting, the method would be called something like set_attribute(value). In this case, set_mode("train") maybe or set_trainmode(True). A method of this type would not be expected to return a value. model.train() sounds like it is going to actually train the network. Not very Pythonic!
st49542
Maybe it was implemented like this to be consistent with methods like .float() or .cuda() but this is only a guess.
st49543
It returns self so that you can chain different functions, such as model.train().cuda(). It’s called a fluent interface.
st49544
Nice! Thanks for explaining that and elucidating the intended usage. B.t.w. presumably something like this is possible then: model.eval().do_something().train() (I haven’t tried it).
st49545
model.eval().do_something().train() will only work if do_something() return a reference to the model object. And even if it works, I personally wouldn’t recommend it! I find it much more readible and clear to do it this way: model.eval() model.do_something() model.train()
st49546
You don’t need to do model = model.train(); the call modifies the model in place, so plain model.train() is enough.
st49547
I think the intention of model = model.train() is to avoid seeing the lengthy output when run in a Jupyter notebook. But for that, I’d suggest a small trick I recently learned: adding a semicolon (yes, in Python) at the end of the line. So model.train(); will not produce any output.
st49548
I have 2 equivalent implementations of a conv network. In one of them, network.eval() works fine and network(input) gives good predictions; in the other it doesn’t. What could be the reason? Thanks.
st49549
Interesting… I wonder why they didn’t mention this point or use train() and eval() in the beginner tutorials, seeing as it is so important. I’m just starting out with PyTorch by adapting the code from these tutorials and could have easily missed this point…
st49550
Given two raw logit vectors p and q, I would like to calculate -(P-Q)*log((1-P)/(1-Q)), which equals KL(1-P||1-Q) + KL(1-Q||1-P), where P/Q is the probability distribution obtained from p/q after softmax(). Here is my code:

-(F.softmax(p, dim=1) - F.softmax(q, dim=1)) * torch.log((1.0001 - F.softmax(p, dim=1)) / (1.0001 - F.softmax(q, dim=1)))

but it turns out to be numerically unstable. How can I fix it?
st49551
Hi, why does torch.dot(a, b) == torch.sum(a * b) return False? E.g.:

a = torch.rand(1000)
b = torch.rand(1000)
print(torch.dot(a, b) == torch.sum(a * b))

Thanks!
st49552
Welcome to the community! The values of those two expressions are not equal after a certain decimal place, because the floating-point operations are not performed in exactly the same order. Please run the following to see it:

a = torch.rand(1000)
b = torch.rand(1000)
x1 = torch.dot(a, b)
x2 = torch.sum(a * b)
torch.set_printoptions(precision=10)
print(x1.item())
print(x2.item())
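As a follow-up: when comparing floating-point results like these, an approximate comparison is usually what you want, e.g.:

import torch

a = torch.rand(1000)
b = torch.rand(1000)
# exact float equality is fragile; compare within a tolerance instead
print(torch.allclose(torch.dot(a, b), torch.sum(a * b)))  # True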
st49553
Are there any easy/less painful ways of installing PyTorch on a Raspberry Pi than building from source with python setup.py build?
st49554
I’m not too sure; I have tried using pip3 and keep getting the ‘not supported on this platform’ error. I have 64-bit arch software, and the RPi 3 has 64-bit arch hardware, so I am as baffled as you. Did you manage to get it working?
st49555
You could try to build from source, as I’m not sure if anyone has published ARM binaries of PyTorch. Even though the RPi uses a 64-bit CPU, the architecture differs from the official binaries (x86).
st49556
I’ve been trying to build from source for the past month; it’s a very painful experience, mainly because Raspberry Pi hasn’t released 64-bit software. I have been using Ubuntu Server 64-bit for ARM, but after a month of constant trial and error it’s becoming a project I’m tiring of, and I was hoping someone might have had more success. Still, I have reduced the number of errors by almost half, so I guess I’m doing something right.
st49557
Good question. I am currently re-installing my backup image of the last stable state of the SD card, and then I shall compile again and record the feedback for you.

Okay, so far I have this, right at the beginning:

[ 0%] Building C object confu-deps/pthreadpool/CMakeFiles/pthreadpool.dir/src/threadpool-pthreads.c.o
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c: In function 'clog_vlog_fatal':
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c:120:4: warning: ignoring return value of 'write', declared with attribute warn_unused_result [-Wunused-result]
   write(STDERR_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);

(the same warning repeats for clog_vlog_error, clog_vlog_warning, clog_vlog_info and clog_vlog_debug)

This has confused me a little, as I set the environment as follows:

export NO_CUDA=1
export NO_DISTRIBUTED=0
export NO_MKLDNN=1
export NO_NNPACK=1
export NO_QNNPACK=1
st49558
Okay, it’s 48 hours into compiling, at 92%, but I’m getting many warnings about deprecation and about the dangerous use of tmpnam (saying I should be using mkstemp). These are things I have no control over, correct?
st49559
This is the error I received during the night:

[ 92%] Building CXX object test_api/CMakeFiles/test_api.dir/dataloader.cpp.o
caffe2/CMakeFiles/op_registration_test.dir/build.make:62: recipe for target 'caffe2/CMakeFiles/op_registration_test.dir/__/aten/src/ATen/core/op_registration/op_registration_test.cpp.o' failed
CMakeFiles/Makefile2:3922: recipe for target 'caffe2/CMakeFiles/op_registration_test.dir/all' failed
[ 93%] Built target test_api
Makefile:140: recipe for target 'all' failed
Building wheel torch-1.3.0a0+c2549cb
-- Building version 1.3.0a0+c2549cb
cmake -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/ubuntu/pytorch/torch -DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages -DNUMPY_INCLUDE_DIR=/home/ubuntu/.local/lib/python3.6/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/libpython3.6m.so.1.0 -DTORCH_BUILD_VERSION=1.3.0a0+c2549cb -DUSE_CUDA=False -DUSE_DISTRIBUTED=True -DUSE_NUMPY=True /home/ubuntu/pytorch
cmake --build . --target install --config Release -- -j 4
st49560
No, unfortunately not; the deprecation warnings became too much. I will try it again later.
st49561
I published ARM64 binaries of PyTorch compiled on the Raspberry Pi (I actually compiled 1.4 too, just haven’t uploaded it yet). You would need a 64-bit distribution (e.g. Debian for the Raspberry Pi 3) or a 64-bit kernel from Raspberry Pi and an arm64 chroot. There are also ARM32 binaries from @LeviViana. At least in September, some things like JIT tracing didn’t work when I built on ARM32; Levi would know if he fixed it or whether it’s still open.

Best regards
Thomas
st49562
I have been using Ubuntu Server 18.10 ARM64 and was constantly warned about deprecations within the build. I find it a testament to the lack of foresight at Raspberry Pi that they refuse to make ARM64 software because of earlier 32-bit Pi models still in use. But in saying that, it does provide a challenge, which is half the fun and double the frustration.
st49563
Indeed, I did some quick fixes last year to get torch 1.3 compiled on the RPi. However, not all the unit tests passed (this is why I didn’t even try a PR or anything), but the commonly used torch functions work just as they should. I could dig up what I did last year, but I have no time for this right now; moreover, I think it doesn’t matter that much. I’ll probably do the same for torch 1.4 in the coming weeks or months.
st49564
I just compiled torch 1.4; you’ll find the wheel here: https://wintics-opensource.s3.eu-west-3.amazonaws.com/torch-1.4.0a0%2B7963631-cp37-cp37m-linux_armv7l.whl

Have fun!
st49565
Could you provide build instructions? I get some nasty compilation errors (https://github.com/pytorch/pytorch/issues/35049).
st49566
Just in case it’s helpful, I’ve also compiled a version of torch 1.4 and torchvision 0.5 for the Raspberry Pi (tested with an RPi 3B). These wheels have the NEON optimisations enabled and also support the new torch JIT functionality. You can find the wheels here: https://github.com/choonkiatlee/pi-torch, and the docker images I used to compile them here: https://github.com/choonkiatlee/qemu-raspbian.
st49567
How should I apply transform_train to my train_loader or train_dataset here?

rgb_mean = (97.13, 97.15, 97.15)
rgb_std = (28.74, 28.79, 28.8)
transform_train = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(rgb_mean, rgb_std),
])

class MothLandmarksDataset(Dataset):
    """Moth Landmarks dataset."""

    def __init__(self, csv_file, root_dir, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied on a sample.
        """
        self.landmarks_frame = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.landmarks_frame)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()

        img_name = os.path.join(self.root_dir, self.landmarks_frame.iloc[idx, 0])
        image = io.imread(img_name)
        landmarks = self.landmarks_frame.iloc[idx, 1:]
        landmarks = np.array([landmarks])
        landmarks = landmarks.astype('float').reshape(-1, 2)
        sample = {'image': image, 'landmarks': landmarks}

        if self.transform:
            sample = self.transform(sample)

        return sample

dataset = MothLandmarksDataset('moth_gt.csv', '.', transform=transform_train)

# Device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

seed = 42
np.random.seed(seed)
torch.manual_seed(seed)

# split the dataset into train and validation sets
len_valid_set = int(0.1 * len(dataset))
len_train_set = len(dataset) - len_valid_set
train_dataset, valid_dataset = torch.utils.data.random_split(dataset, [len_train_set, len_valid_set])

# shuffle and batch the datasets
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, shuffle=True, num_workers=4)
test_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=8, shuffle=True, num_workers=4)

The main problem with the code above is that I cannot separate transform_train from transform_test. How can I differentiate them in MothLandmarksDataset? I think the problem mainly arises because MothLandmarksDataset is instantiated on the whole dataset, not on train_dataset or test_dataset separately. Any help is really appreciated.