Integrating ROS with PyCharm
I wanted to run ROS in PyCharm, but could not find the .desktop file mentioned here in which the changes should be made. Moreover, I want to use the same environment that was created for PyTorch, and do not want to change the interpreter. Can someone help me out with this? Regards.
You could add a virtual environment with the following instructions, and then add the ROS dist-packages (roslib) to it. Go to File > Settings (or Ctrl+Alt+S as a shortcut) > Project: > Project Interpreter. In the Project Interpreter drop-down list, you can specify the ROS Python interpreter by selecting the appropriate entry from the list. The ROS dist-packages path that you need is: /opt/ros/kinetic/lib/python2.7/dist-packages
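If you prefer to keep the existing PyTorch interpreter completely untouched, the same effect can be had in code. A minimal sketch, assuming a ROS Kinetic install at the path given above:

import sys

# Make ROS packages importable from the existing PyTorch environment
# without switching the project interpreter.
ros_path = '/opt/ros/kinetic/lib/python2.7/dist-packages'
if ros_path not in sys.path:
    sys.path.append(ros_path)

import roslib  # should now resolve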
https://stackoverflow.com/questions/55246952/
What is the best way to use multiprocessing CPU inference for PyTorch models?
I have to productionize a PyTorch BERT Question Answer model. The CPU inference is very slow for me as for every query the model needs to evaluate 30 samples. Out of the result of these 30 samples, I pick the answer with the maximum score. GPU would be too costly for me to use for inference. Can I leverage multiprocessing / parallel CPU inference for this? If Yes, what is the best practice to do so? If No, is there a cloud option that bills me only for the GPU queries I make and not for continuously running the GPU instance?
Another possible way to get better performance is to shrink the model as much as possible. Among the most promising techniques are quantized and binarized neural networks. Here are some references: https://arxiv.org/abs/1603.05279 https://arxiv.org/abs/1602.02505
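As a concrete starting point, here is a minimal sketch of post-training dynamic quantization in PyTorch. Note that torch.quantization.quantize_dynamic only exists in PyTorch 1.3 and later, and that model / example_inputs stand in for your loaded BERT model and a tokenized batch:

import torch

# Swap every nn.Linear for a dynamically quantized int8 version:
# weights are stored as int8 and activations are quantized on the fly.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
quantized_model.eval()

# Intra-op CPU parallelism also helps when scoring the 30 samples per query.
torch.set_num_threads(4)

with torch.no_grad():
    outputs = quantized_model(example_inputs)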
https://stackoverflow.com/questions/55253708/
How to integrate LIME with PyTorch?
Using this MNIST image classification model:

%reset -f
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torch.utils.data as data_utils
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from matplotlib import pyplot
from pandas import DataFrame
import torchvision.datasets as dset
import os
import torch.nn.functional as F
import time
import random
import pickle
from sklearn.metrics import confusion_matrix
import pandas as pd
import sklearn

trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
root = './data'
if not os.path.exists(root):
    os.mkdir(root)
train_set = dset.MNIST(root=root, train=True, transform=trans, download=True)
test_set = dset.MNIST(root=root, train=False, transform=trans, download=True)

batch_size = 64

train_loader = torch.utils.data.DataLoader(
    dataset=train_set,
    batch_size=batch_size,
    shuffle=True)
test_loader = torch.utils.data.DataLoader(
    dataset=test_set,
    batch_size=batch_size,
    shuffle=True)

class NeuralNet(nn.Module):
    def __init__(self):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(28*28, 500)
        self.fc2 = nn.Linear(500, 256)
        self.fc3 = nn.Linear(256, 2)
    def forward(self, x):
        x = x.view(-1, 28*28)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

num_epochs = 2
random_sample_size = 200

values_0_or_1 = [t for t in train_set if (int(t[1]) == 0 or int(t[1]) == 1)]
values_0_or_1_testset = [t for t in test_set if (int(t[1]) == 0 or int(t[1]) == 1)]
print(len(values_0_or_1))
print(len(values_0_or_1_testset))

train_loader_subset = torch.utils.data.DataLoader(
    dataset=values_0_or_1,
    batch_size=batch_size,
    shuffle=True)
test_loader_subset = torch.utils.data.DataLoader(
    dataset=values_0_or_1_testset,
    batch_size=batch_size,
    shuffle=False)

train_loader = train_loader_subset

# Hyper-parameters
input_size = 100
hidden_size = 100
num_classes = 2
# learning_rate = 0.00001
learning_rate = .0001

# Device configuration
device = 'cpu'
print_progress_every_n_epochs = 1

model = NeuralNet().to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

N = len(train_loader)
# Train the model
total_step = len(train_loader)

most_recent_prediction = []
test_actual_predicted_dict = {}

rm = random.sample(list(values_0_or_1), random_sample_size)
train_loader_subset = data_utils.DataLoader(rm, batch_size=4)

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader_subset):
        # Move tensors to the configured device
        images = images.reshape(-1, 2).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if (epoch) % print_progress_every_n_epochs == 0:
        print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, total_step, loss.item()))

    predicted_test = []
    model.eval()  # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
    probs_l = []
    predicted_values = []
    actual_values = []
    labels_l = []

    with torch.no_grad():
        for images, labels in test_loader_subset:
            images = images.to(device)
            labels = labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            predicted_test.append(predicted.cpu().numpy())
            sm = torch.nn.Softmax()
            probabilities = sm(outputs)
            probs_l.append(probabilities)
            labels_l.append(labels.cpu().numpy())

    predicted_values.append(np.concatenate(predicted_test).ravel())
    actual_values.append(np.concatenate(labels_l).ravel())

    if (epoch) % 1 == 0:
        print('test accuracy : ', 100 * len((np.where(np.array(predicted_values[0])==(np.array(actual_values[0])))[0])) / len(actual_values[0]))

I'm attempting to integrate 'Local Interpretable Model-Agnostic Explanations for machine learning classifiers': https://marcotcr.github.io/lime/ It appears PyTorch support is not enabled, as it is not mentioned in the doc or in the following tutorial: https://marcotcr.github.io/lime/tutorials/Tutorial%20-%20images.html With my updated code for PyTorch:

from lime import lime_image
import time

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(images[0].reshape(28,28), model(images[0]), top_labels=5, hide_color=0, num_samples=1000)

this causes the error:

/opt/conda/lib/python3.6/site-packages/skimage/color/colorconv.py in gray2rgb(image, alpha)
    830     is_rgb = False
    831     is_alpha = False
--> 832     dims = np.squeeze(image).ndim
    833
    834     if dims == 3:

AttributeError: 'Tensor' object has no attribute 'ndim'

So it appears a TensorFlow-style object is expected here? How do I integrate LIME with PyTorch image classification?
Here's my solution: LIME expects an image input of type numpy. This is why you get the attribute error, and a solution would be to convert the image (from Tensor) to numpy before passing it to the explainer object. Another solution would be to select a specific image with the test_loader_subset and convert it with img = img.numpy(). Secondly, in order to make LIME work with PyTorch (or any other framework), you'll need to specify a batch prediction function which outputs the prediction scores of each class for each image. The name of this function (here I've called it batch_predict) is then passed to explainer.explain_instance(img, batch_predict, ...). batch_predict needs to loop through all images passed to it, convert them to Tensors, make a prediction, and finally return the list of prediction scores (as numpy values). This is how I got it working. Note also that the images need to have shape (..., ..., 3) or (..., ..., 1) in order to be properly segmented by the default segmentation algorithm. This means that you might have to use np.transpose(img, (...)). You may specify the segmentation algorithm as well if the results are poor. Finally, you'll need to display the LIME image mask on top of the original image. This snippet shows how that may be done:

from skimage.segmentation import mark_boundaries

temp, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                            positive_only=False,
                                            num_features=5, hide_rest=False)
img_boundry = mark_boundaries(temp, mask)
plt.imshow(img_boundry)
plt.show()

This notebook is a good reference: https://github.com/marcotcr/lime/blob/master/doc/notebooks/Tutorial%20-%20images%20-%20Pytorch.ipynb
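A hedged sketch of what that batch prediction function could look like for the MLP in the question (the name batch_predict and the grayscale-to-RGB handling are illustrative choices, not part of LIME's API):

import numpy as np
import torch
import torch.nn.functional as F

def batch_predict(images):
    # LIME hands over a numpy batch of shape (N, H, W, 3); collapse the
    # replicated channels back to one grayscale plane for this MLP.
    model.eval()
    batch = torch.from_numpy(np.stack(images, axis=0)).float().mean(dim=-1)
    with torch.no_grad():
        logits = model(batch)  # the model flattens to (N, 784) internally
        probs = F.softmax(logits, dim=1)
    return probs.numpy()

img = images[0].cpu().numpy().squeeze()  # one test image, (28, 28)
img_rgb = np.stack((img,) * 3, axis=-1)  # (28, 28, 3) so the default segmenter works
explanation = explainer.explain_instance(img_rgb, batch_predict,
                                         top_labels=2, hide_color=0,
                                         num_samples=1000)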
https://stackoverflow.com/questions/55257039/
PyTorch specify model parameters
I am trying to create a convolutional model in PyTorch where one layer is fixed (initialized to prescribed values) and another layer is learned (but the initial guess is taken from prescribed values). Here is sample code for the model definition:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, weights_fixed, weights_guess):
        super(Net, self).__init__()
        self.convL1 = nn.Conv1d(1, 3, 3, bias=False)
        self.convL1.weight = weights_fixed  # I want to keep these weights fixed
        self.convL2 = nn.Conv1d(3, 1, 1, bias=False)
        self.convL2.weight = weights_guess  # I want to learn these weights

    def forward(self, inp_batch):
        out1 = self.convL1(inp_batch)
        out2 = self.convL2(out1)
        return out2

and the sample use:

weights_fixed = ...
weights_guess = ...
model = Net(weights_fixed, weights_guess)

loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

train_dataset = ...  # define training set here
for (X, y) in train_dataset:
    optim.zero_grad()
    out = model(X)
    loss = loss_fn(out, y)
    loss.backward()
    optim.step()

How can I make weights_fixed fixed and weights_guess learnable? My guess would be:

weights_fixed = nn.Parameter(W1, requires_grad=False)
weights_guess = nn.Parameter(W2, requires_grad=True)

where, for the sake of completeness:

import numpy as np
import torch

krnl = np.zeros((5, order+1))
krnl[:, 0] = [ 0. ,  1., 0. ]
krnl[:, 1] = [-0.5,  0., 0.5]
krnl[:, 2] = [ 1. , -2., 1. ]
W1 = torch.tensor(krnl)

a = np.array((1., 2., 3.))
W2 = torch.tensor(a)

But I am utterly confused. Any suggestions or references would be greatly appreciated. Of course I went over the PyTorch docs, but they did not add clarity to my understanding.
Just wrap the learnable parameter in nn.Parameter (requires_grad=True is the default, so there is no need to specify it), and keep the fixed weight as a Tensor without the nn.Parameter wrapper. All nn.Parameter weights are automatically added to net.parameters(), so when you train with optimizer = optim.SGD(net.parameters(), lr=0.01), the fixed weight will not be changed. So basically this:

weights_fixed = W1
weights_guess = nn.Parameter(W2)
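Putting it together, a minimal sketch of the module, under the assumption that wrapping both weights in nn.Parameter is acceptable; a Parameter created with requires_grad=False never receives a gradient, so the optimizer skips it, which is equivalent in effect to the plain-Tensor approach:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, weights_fixed, weights_guess):
        super(Net, self).__init__()
        self.convL1 = nn.Conv1d(1, 3, 3, bias=False)
        # Frozen: no gradient is ever computed for this parameter.
        self.convL1.weight = nn.Parameter(weights_fixed, requires_grad=False)
        self.convL2 = nn.Conv1d(3, 1, 1, bias=False)
        # Learnable: requires_grad=True is the default for nn.Parameter.
        self.convL2.weight = nn.Parameter(weights_guess)

    def forward(self, inp_batch):
        return self.convL2(self.convL1(inp_batch))

# Shapes must match the conv layers: (out_channels, in_channels, kernel_size).
W1 = torch.randn(3, 1, 3)  # hypothetical prescribed fixed kernel
W2 = torch.randn(1, 3, 1)  # hypothetical initial guess
model = Net(W1, W2)
optim = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Only convL2.weight changes during optim.step().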
https://stackoverflow.com/questions/55267538/
How to return intermediate gradients (for non-leaf nodes) in pytorch?
My question concerns the syntax of PyTorch's register_hook:

x = torch.tensor([1.], requires_grad=True)
y = x**2
z = 2*y

x.register_hook(print)
y.register_hook(print)

z.backward()

which outputs:

tensor([2.])
tensor([4.])

This snippet simply prints the gradient of z w.r.t. x and y, respectively. Now my (most likely trivial) question is how to return the intermediate gradients (rather than only printing them)? UPDATE: It appears that calling retain_grad() solves the issue for leaf nodes, e.g. y.retain_grad(). However, retain_grad does not seem to solve it for non-leaf nodes. Any suggestions?
I think you can use those hooks to store the gradients in a global variable:

grads = []
x = torch.tensor([1.], requires_grad=True)
y = x**2 + 1
z = 2*y
x.register_hook(lambda d: grads.append(d))
y.register_hook(lambda d: grads.append(d))
z.backward()

But you most likely also need to remember the corresponding tensor these gradients were computed for. In that case, we slightly extend the above using a dict instead of a list:

grads = {}
x = torch.tensor([1., 2.], requires_grad=True)
y = x**2 + 1
z = 2*y

def store(grad, parent):
    print(grad, parent)
    grads[parent] = grad.clone()

x.register_hook(lambda grad: store(grad, x))
y.register_hook(lambda grad: store(grad, y))
z.sum().backward()

Now you can, for example, access tensor y's grad simply using grads[y].
https://stackoverflow.com/questions/55305262/
TypeError: forward() missing 1 required positional argument: 'hidden'
I'm trying to visualize my GRU model using PyTorchViz, but every time I run this code it gives me an error:

import torch
from torch import nn
from torchviz import make_dot, make_dot_from_trace

model = IC_V6(f.tokens)
x = torch.randn(1, 8)
make_dot(model(x), params=dict(model.named_parameters()))

Here is my class for holding the data:

class Flickr8KImageCaptionDataset:
    def __init__(self):
        all_data = json.load(open('caption_datasets/dataset_flickr8k.json', 'r'))
        all_data = all_data['images']
        self.training_data = []
        self.test_data = []
        self.w2i = {ENDWORD: 0, STARTWORD: 1}
        self.word_frequency = {ENDWORD: 0, STARTWORD: 0}
        self.i2w = {0: ENDWORD, 1: STARTWORD}
        self.tokens = 2  # END is default
        self.batch_index = 0
        for data in all_data:
            if(data['split']=='train'):
                self.training_data.append(data)
            else:
                self.test_data.append(data)
            for sentence in data['sentences']:
                for token in sentence['tokens']:
                    if(token not in self.w2i.keys()):
                        self.w2i[token] = self.tokens
                        self.i2w[self.tokens] = token
                        self.tokens += 1
                        self.word_frequency[token] = 1
                    else:
                        self.word_frequency[token] += 1

    def image_to_tensor(self, filename):
        image = Image.open(filename)
        image = TF.resize(img=image, size=(HEIGHT, WIDTH))
        image = TF.to_tensor(pic=image)
        image = TF.normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        return torch.unsqueeze(image, 0)

    def return_train_batch(self):  # size of 1 always
        # np.random.shuffle(self.training_data)
        for index in range(len(self.training_data)):
            # index = np.random.randint(len(self.training_data))
            sentence_index = np.random.randint(len(self.training_data[index]['sentences']))
            output_sentence_tokens = deepcopy(self.training_data[index]['sentences'][sentence_index]['tokens'])
            output_sentence_tokens.append(ENDWORD)  # corresponds to end word
            image = self.image_to_tensor('/home/vincent/Documents/Final Code/Flicker8k_Dataset/'+self.training_data[index]['filename'])
            yield image, list(map(lambda x: self.w2i[x], output_sentence_tokens)), output_sentence_tokens, index

    def convert_tensor_to_word(self, output_tensor):
        output = F.log_softmax(output_tensor.detach().squeeze(), dim=0).numpy()
        return self.i2w[np.argmax(output)]

    def convert_sentence_to_tokens(self, sentence):
        tokens = sentence.split(" ")
        converted_tokens = list(map(lambda x: self.w2i[x], tokens))
        converted_tokens.append(self.w2i[ENDWORD])
        return converted_tokens

    def caption_image_greedy(self, net, image_filename, max_words=15):
        # non beam search, no temperature implemented
        net.eval()
        inception.eval()
        image_tensor = self.image_to_tensor(image_filename)
        hidden = None
        embedding = None
        words = []
        input_token = STARTWORD
        input_tensor = torch.tensor(self.w2i[input_token]).type(torch.LongTensor)
        for i in range(max_words):
            if(i==0):
                out, hidden = net(input_tensor, hidden=image_tensor, process_image=True)
            else:
                out, hidden = net(input_tensor, hidden)
            word = self.convert_tensor_to_word(out)
            input_token = self.w2i[word]
            input_tensor = torch.tensor(input_token).type(torch.LongTensor)
            if(word==ENDWORD):
                break
            else:
                words.append(word)
        return ' '.join(words)

    def forward_beam(self, net, hidden, process_image, partial_sentences, sentences, topn_words=5, max_sentences=10):
        max_words = 50
        hidden_index = {}
        while(sentences<max_sentences):
            # print("Sentences: ", sentences)
            new_partial_sentences = []
            new_partial_sentences_logp = []
            new_partial_avg_logp = []
            if(len(partial_sentences[-1][0])>max_words):
                break
            for partial_sentence in partial_sentences:
                input_token = partial_sentence[0][-1]
                input_tensor = torch.tensor(self.w2i[input_token]).type(torch.FloatTensor)
                if(partial_sentence[0][-1]==STARTWORD):
                    out, hidden = net(input_tensor, hidden, process_image=True)
                else:
                    out, hidden = net(input_tensor, torch.tensor(hidden_index[input_token]))
                # take first topn words and add as children to root
                out = F.log_softmax(out.detach().squeeze(), dim=0).numpy()
                out_indexes = np.argsort(out)[::-1][:topn_words]
                for out_index in out_indexes:
                    if(self.i2w[out_index]==ENDWORD):
                        sentences = sentences+1
                    else:
                        total_logp = float(out[out_index]) + partial_sentence[1]
                        new_partial_sentences_logp.append(total_logp)
                        new_partial_sentences.append([np.concatenate((partial_sentence[0], [self.i2w[out_index]])), total_logp])
                        len_words = len(new_partial_sentences[-1][0])
                        new_partial_avg_logp.append(total_logp/len_words)
                        # print(self.i2w[out_index])
                        hidden_index[self.i2w[out_index]] = deepcopy(hidden.detach().numpy())
            # select topn partial sentences
            top_indexes = np.argsort(new_partial_sentences_logp)[::-1][:topn_words]
            new_partial_sentences = np.array(new_partial_sentences)[top_indexes]
            # print("New partial sentences (topn):", new_partial_sentences)
            partial_sentences = new_partial_sentences
        return partial_sentences

    def caption_image_beam_search(self, net, image_filename, topn_words=10, max_sentences=10):
        net.eval()
        inception.eval()
        image_tensor = self.image_to_tensor(image_filename)
        hidden = None
        embedding = None
        words = []
        sentences = 0
        partial_sentences = [[[STARTWORD], 0.0]]
        # root_id = hash(input_token)  # for start word
        # nodes = {}
        # nodes[root_id] = Node(root_id, [STARTWORD, 0], None)
        partial_sentences = self.forward_beam(net, image_tensor, True, partial_sentences, sentences, topn_words, max_sentences)
        logp = []
        joined_sentences = []
        for partial_sentence in partial_sentences:
            joined_sentences.append([' '.join(partial_sentence[0][1:]), partial_sentence[1]])
        return joined_sentences

    def print_beam_caption(self, net, train_filename, num_captions=0):
        beam_sentences = f.caption_image_beam_search(net, train_filename)
        if(num_captions==0):
            num_captions = len(beam_sentences)
        for sentence in beam_sentences[:num_captions]:
            print(sentence[0]+" [", sentence[1], "]")

and here is my GRU model:

class IC_V6(nn.Module):
    # V2: Fed image vector directly as hidden and fed words generated as inputs back to LSTM
    # V3: Added an embedding layer between words input and GRU/LSTM
    def __init__(self, token_dict_size):
        super(IC_V6, self).__init__()
        # Input is an image of height 500, and width 500
        self.embedding_size = INPUT_EMBEDDING
        self.hidden_state_size = HIDDEN_SIZE
        self.token_dict_size = token_dict_size
        self.output_size = OUTPUT_EMBEDDING
        self.batchnorm = nn.BatchNorm1d(self.embedding_size)
        self.input_embedding = nn.Embedding(self.token_dict_size, self.embedding_size)
        self.embedding_dropout = nn.Dropout(p=0.22)
        self.gru_layers = 3
        self.gru = nn.GRU(input_size=self.embedding_size, hidden_size=self.hidden_state_size, num_layers=self.gru_layers, dropout=0.22)
        self.linear = nn.Linear(self.hidden_state_size, self.output_size)
        self.out = nn.Linear(self.output_size, token_dict_size)

    def forward(self, input_tokens, hidden, process_image=False, use_inception=True):
        if(USE_GPU):
            device = torch.device('cuda')
        else:
            device = torch.device('cpu')
        if(process_image):
            if(use_inception):
                inp = self.embedding_dropout(inception(hidden))
            else:
                inp = hidden
            # inp = self.batchnorm(inp)
            hidden = torch.zeros((self.gru_layers, 1, self.hidden_state_size))
        else:
            inp = self.embedding_dropout(self.input_embedding(input_tokens.view(1).type(torch.LongTensor).to(device)))
            # inp = self.batchnorm(inp)
            hidden = hidden.view(self.gru_layers, 1, -1)
        inp = inp.view(1, 1, -1)
        out, hidden = self.gru(inp, hidden)
        out = self.out(self.linear(out))
        return out, hidden

This is how I call them:

f = Flickr8KImageCaptionDataset()
net = IC_V6(f.tokens)

The error is:

TypeError Traceback (most recent call last)
<ipython-input-42-7993fc1a032f> in <module>
      6 x = torch.randn(1,8)
      7
----> 8 make_dot(model(x), params=dict(model.named_parameters()))

~/anaconda3/envs/Thesis/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

TypeError: forward() missing 1 required positional argument: 'hidden'

What should I do to solve this problem? Any help will be much appreciated.
I think the error message is pretty straightforward. Your forward() has two positional arguments, input_tokens and hidden. Python complains that one of them (hidden) is missing when you call forward(). Looking at your code, you call your model like this: model(x) So x is mapped to input_tokens, but you also need to hand over the second argument, hidden. So you need to call it like this, providing your hidden state: model(x, hidden)
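For this particular model, a sketch of such a call might look as follows, assuming the HIDDEN_SIZE constant the model was built with is in scope; the zero initial state mirrors what IC_V6.forward itself creates in its process_image branch:

import torch
from torchviz import make_dot

x = torch.randint(0, f.tokens, (1,))                    # a single token id
hidden = torch.zeros(model.gru_layers, 1, HIDDEN_SIZE)  # initial GRU state
out, hidden = model(x, hidden)                          # forward now gets both args
make_dot(out, params=dict(model.named_parameters()))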
https://stackoverflow.com/questions/55341072/
Error when trying to send neural net generated image in flask
I'm making a basic api to request for generated images from a generator model from pytorch. I've done this using flask and I'm running it locally on MacOS. Everything works and the image returns but then python quits unexpectedly. Here is the code and the error: Error: 2019-03-25 16:21:23.514 Python[78776:1407049] WARNING: NSWindow drag regions should only be invalidated on the Main Thread! This will throw an exception in the future. Called from ( 0 AppKit 0x00007fff4f96fccc -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 386 1 AppKit 0x00007fff4f96d07c -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1488 2 AppKit 0x00007fff4f96caa6 -[NSWindow initWithContentRect:styleMask:backing:defer:] + 45 3 _macosx.cpython-37m-darwin.so 0x000000010fe634c0 -[Window initWithContentRect:styleMask:backing:defer:withManager:] + 80 4 _macosx.cpython-37m-darwin.so 0x000000010fe66a17 FigureManager_init + 327 5 Python 0x00000001023780bc wrap_init + 12 6 Python 0x000000010232fe09 wrapperdescr_call + 121 7 Python 0x0000000102328ae1 _PyObject_FastCallKeywords + 433 8 Python 0x00000001023e76d4 call_function + 420 9 Python 0x00000001023e47d6 _PyEval_EvalFrameDefault + 25190 10 Python 0x0000000102329100 function_code_fastcall + 128 11 Python 0x00000001023286f4 _PyFunction_FastCallDict + 148 12 Python 0x0000000102329b3f _PyObject_Call_Prepend + 143 13 Python 0x0000000102378001 slot_tp_init + 145 14 Python 0x0000000102373959 type_call + 297 15 Python 0x0000000102328ae1 _PyObject_FastCallKeywords + 433 16 Python 0x00000001023e76d4 call_function + 420 17 Python 0x00000001023e47d6 _PyEval_EvalFrameDefault + 25190 18 Python 0x0000000102329100 function_code_fastcall + 128 19 Python 0x00000001023e7812 call_function + 738 20 Python 0x00000001023e47d6 _PyEval_EvalFrameDefault + 25190 21 Python 0x00000001023e8336 _PyEval_EvalCodeWithName + 2422 22 Python 0x000000010232886b _PyFunction_FastCallDict + 523 23 Python 0x0000000102329b3f _PyObject_Call_Prepend + 143 24 Python 0x0000000102328df7 PyObject_Call + 135 25 Python 0x00000001023e4ae7 _PyEval_EvalFrameDefault + 25975 26 Python 0x00000001023e8336 _PyEval_EvalCodeWithName + 2422 27 Python 0x0000000102328c91 _PyFunction_FastCallKeywords + 257 28 Python 0x00000001023e7812 call_function + 738 29 Python 0x00000001023e4877 _PyEval_EvalFrameDefault + 25351 30 Python 0x0000000102329100 function_code_fastcall + 128 31 Python 0x00000001023e7812 call_function + 738 32 Python 0x00000001023e4877 _PyEval_EvalFrameDefault + 25351 33 Python 0x00000001023e8336 _PyEval_EvalCodeWithName + 2422 34 Python 0x0000000102328c91 _PyFunction_FastCallKeywords + 257 35 Python 0x00000001023e7812 call_function + 738 36 Python 0x00000001023e4877 _PyEval_EvalFrameDefault + 25351 37 Python 0x00000001023e8336 _PyEval_EvalCodeWithName + 2422 38 Python 0x0000000102328c91 _PyFunction_FastCallKeywords + 257 39 Python 0x00000001023e7812 call_function + 738 40 Python 0x00000001023e47d6 _PyEval_EvalFrameDefault + 25190 41 Python 0x0000000102329100 function_code_fastcall + 128 42 Python 0x00000001023e7812 call_function + 738 43 Python 0x00000001023e47d6 _PyEval_EvalFrameDefault + 25190 44 Python 0x00000001023e8336 _PyEval_EvalCodeWithName + 2422 45 Python 0x000000010232886b _PyFunction_FastCallDict + 523 46 Python 0x00000001023e4ae7 _PyEval_EvalFrameDefault + 25975 47 Python 0x0000000102329100 function_code_fastcall + 128 48 Python 0x00000001023e7812 call_function + 738 49 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 50 Python 0x0000000102329100 
function_code_fastcall + 128 51 Python 0x00000001023e7812 call_function + 738 52 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 53 Python 0x0000000102329100 function_code_fastcall + 128 54 Python 0x00000001023e7812 call_function + 738 55 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 56 Python 0x0000000102329100 function_code_fastcall + 128 57 Python 0x00000001023286f4 _PyFunction_FastCallDict + 148 58 Python 0x0000000102329b3f _PyObject_Call_Prepend + 143 59 Python 0x0000000102376bc6 slot_tp_call + 150 60 Python 0x0000000102328ae1 _PyObject_FastCallKeywords + 433 61 Python 0x00000001023e76d4 call_function + 420 62 Python 0x00000001023e47d6 _PyEval_EvalFrameDefault + 25190 63 Python 0x00000001023370de gen_send_ex + 206 64 Python 0x00000001023e3fb8 _PyEval_EvalFrameDefault + 23112 65 Python 0x00000001023e8336 _PyEval_EvalCodeWithName + 2422 66 Python 0x0000000102328c91 _PyFunction_FastCallKeywords + 257 67 Python 0x00000001023e7812 call_function + 738 68 Python 0x00000001023e4877 _PyEval_EvalFrameDefault + 25351 69 Python 0x00000001023e8336 _PyEval_EvalCodeWithName + 2422 70 Python 0x0000000102328c91 _PyFunction_FastCallKeywords + 257 71 Python 0x00000001023e7812 call_function + 738 72 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 73 Python 0x0000000102329100 function_code_fastcall + 128 74 Python 0x00000001023e7812 call_function + 738 75 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 76 Python 0x0000000102329100 function_code_fastcall + 128 77 Python 0x00000001023e7812 call_function + 738 78 Python 0x00000001023e47d6 _PyEval_EvalFrameDefault + 25190 79 Python 0x0000000102329100 function_code_fastcall + 128 80 Python 0x00000001023e7812 call_function + 738 81 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 82 Python 0x0000000102329100 function_code_fastcall + 128 83 Python 0x00000001023286f4 _PyFunction_FastCallDict + 148 84 Python 0x0000000102329b3f _PyObject_Call_Prepend + 143 85 Python 0x0000000102378001 slot_tp_init + 145 86 Python 0x0000000102373959 type_call + 297 87 Python 0x0000000102328ae1 _PyObject_FastCallKeywords + 433 88 Python 0x00000001023e76d4 call_function + 420 89 Python 0x00000001023e47d6 _PyEval_EvalFrameDefault + 25190 90 Python 0x0000000102329100 function_code_fastcall + 128 91 Python 0x00000001023e7812 call_function + 738 92 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 93 Python 0x0000000102329100 function_code_fastcall + 128 94 Python 0x00000001023286f4 _PyFunction_FastCallDict + 148 95 Python 0x0000000102329b3f _PyObject_Call_Prepend + 143 96 Python 0x0000000102328df7 PyObject_Call + 135 97 Python 0x00000001023e4ae7 _PyEval_EvalFrameDefault + 25975 98 Python 0x0000000102329100 function_code_fastcall + 128 99 Python 0x00000001023e7812 call_function + 738 100 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 101 Python 0x0000000102329100 function_code_fastcall + 128 102 Python 0x00000001023e7812 call_function + 738 103 Python 0x00000001023e47bc _PyEval_EvalFrameDefault + 25164 104 Python 0x0000000102329100 function_code_fastcall + 128 105 Python 0x00000001023286f4 _PyFunction_FastCallDict + 148 106 Python 0x0000000102329b3f _PyObject_Call_Prepend + 143 107 Python 0x0000000102328df7 PyObject_Call + 135 108 Python 0x000000010246fbd7 t_bootstrap + 71 109 Python 0x0000000102426819 pythread_wrapper + 25 110 libsystem_pthread.dylib 0x00007fff7f7ce305 _pthread_body + 126 111 libsystem_pthread.dylib 0x00007fff7f7d126f _pthread_start + 70 112 libsystem_pthread.dylib 0x00007fff7f7cd415 
thread_start + 13 )

127.0.0.1 - - [25/Mar/2019 16:21:23] "GET /sdfa HTTP/1.1" 200 -
Assertion failed: (NSViewIsCurrentlyBuildingLayerTreeForDisplay() != currentlyBuildingLayerTree), function NSViewSetCurrentlyBuildingLayerTreeForDisplay, file /BuildRoot/Library/Caches/com.apple.xbs/Sources/AppKit/AppKit-1671.20.108/AppKit.subproj/NSView.m, line 14143

Main:

from flask import Flask, request, send_file
import loadModel

app = Flask(__name__)

@app.route('/')
def index():
    return 'this is the homepage'

@app.route('/<ganType>')
def generate(ganType):
    loadModel.loadModel()
    return send_file('fakes.jpg')

if __name__ == "__main__":
    app.run(debug=True)

Load Model:

import torch
import torch.nn as nn
import torchvision.utils as vutils
import matplotlib.pyplot as plt
import numpy as np

batch_size = 100
image_size = 64
nc = 3
nz = 100
ngf = 64
ndf = 64
num_epochs = 1
lr = 0.0002
beta1 = 0.5

class Generator(nn.Module):  # nn.Module is the base class for all neural net modules.
    def __init__(self):
        super(Generator, self).__init__()  # Calls the parent's initialization method
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 64 x 64
        )

    def forward(self, input):
        return self.main(input)

def loadModel():
    # device = torch.device("cpu")
    model = Generator()
    model.load_state_dict(torch.load('generator.pt', map_location='cpu'))
    model.eval()
    first_list = []
    img_list = []
    # Generate some images to test the model.
    noise = torch.randn(64, nz, 1, 1)
    fake = model(noise).detach().cpu()
    # inception_img_list.append(np.transpose(fake[0],(1,2,0)))
    img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
    # img_list.append(first_list)
    # plt.subplot(1,2,2)
    plt.axis("off")
    plt.title("Fake Images")
    plt.imsave('fakes.jpg', np.transpose(img_list[-1], (1, 2, 0)))
    return 'hello'

if __name__ == "__main__":
    loadModel()

As I said, it actually works and returns the generated image. It just also crashes Python. I'm not sure if I am doing something incorrectly or if it's a macOS problem. Any help would be appreciated.
I worked this out from people having the same problem: https://github.com/matplotlib/matplotlib/issues/11094 Closing the plot (plt.close()) before returning fixes the error.
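A sketch of that fix, together with the usual companion for web servers: selecting the non-interactive Agg backend before pyplot is imported, so that no GUI window is ever created off the main thread (save_image_grid is a hypothetical helper standing in for the tail of loadModel):

import matplotlib
matplotlib.use('Agg')  # must happen before `import matplotlib.pyplot`
import matplotlib.pyplot as plt
import numpy as np

def save_image_grid(grid):
    # grid: the (C, H, W) tensor returned by vutils.make_grid
    plt.imsave('fakes.jpg', np.transpose(grid, (1, 2, 0)))
    plt.close('all')  # drop all figure state before returning to Flask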
https://stackoverflow.com/questions/55342589/
In language modeling, why do I have to init_hidden weights before every new epoch of training? (pytorch)
I have a question about the following code in PyTorch language modeling:

print("Training and generating...")
for epoch in range(1, config.num_epochs + 1):
    total_loss = 0.0
    model.train()
    hidden = model.init_hidden(config.batch_size)
    for ibatch, i in enumerate(range(0, train_len - 1, seq_len)):
        data, targets = get_batch(train_data, i, seq_len)
        hidden = repackage_hidden(hidden)
        model.zero_grad()
        output, hidden = model(data, hidden)
        loss = criterion(output.view(-1, config.vocab_size), targets)
        loss.backward()

Please check line 5. And the init_hidden function is as follows:

def init_hidden(self, bsz):
    weight = next(self.parameters()).data
    if self.rnn_type == 'LSTM':  # lstm: (h0, c0)
        return (Variable(weight.new(self.n_layers, bsz, self.hi_dim).zero_()),
                Variable(weight.new(self.n_layers, bsz, self.hi_dim).zero_()))
    else:  # gru & rnn: h0
        return Variable(weight.new(self.n_layers, bsz, self.hi_dim).zero_())

My question is: why do we need to init_hidden every epoch? Shouldn't the model inherit the hidden parameters from the last epoch and continue training on them?
The hidden state stores the internal state of the RNN from predictions made on previous tokens in the current sequence, which is what allows RNNs to understand context. The hidden state is determined by the output of the previous token. When you predict the first token of any sequence, if you were to retain the hidden state from the previous sequence, your model would behave as if the new sequence were a continuation of the old one, which would give worse results. Instead, for the first token you initialise an empty hidden state, which will then be filled with the model state and used for the second token. Think about it this way: if someone asked you to classify a sentence and first handed you the US constitution (irrelevant information), versus if someone gave you some background context about the sentence and then asked you to classify it.
https://stackoverflow.com/questions/55350811/
How to deal with mini-batch loss in Pytorch?
I feed mini-batch data to the model, and I just want to know how to deal with the loss. Could I accumulate the loss, then call backward, like:

...
def neg_log_likelihood(self, sentences, tags, length):
    self.batch_size = sentences.size(0)

    logits = self.__get_lstm_features(sentences, length)
    real_path_score = torch.zeros(1)
    total_score = torch.zeros(1)
    if USE_GPU:
        real_path_score = real_path_score.cuda()
        total_score = total_score.cuda()
    for logit, tag, leng in zip(logits, tags, length):
        logit = logit[:leng]
        tag = tag[:leng]
        real_path_score += self.real_path_score(logit, tag)
        total_score += self.total_score(logit, tag)
    return total_score - real_path_score
...

loss = model.neg_log_likelihood(sentences, tags, length)
loss.backward()
optimizer.step()

I wonder if the accumulation could lead to gradient explosion. So, should I call backward in the loop:

for sentence, tag, leng in zip(sentences, tags, length):
    loss = model.neg_log_likelihood(sentence, tag, leng)
    loss.backward()
    optimizer.step()

Or use the mean loss, just like reduce_mean in TensorFlow:

loss = reduce_mean(losses)
loss.backward()
The loss has to be reduced by its mean over the mini-batch size. If you look at the native PyTorch loss functions such as CrossEntropyLoss, there is a separate parameter, reduction, just for this, and the default behaviour is to take the mean over the mini-batch.
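For illustration, a small sketch of the three reduction behaviours on a native loss (in later PyTorch versions the reduction argument supersedes size_average):

import torch
import torch.nn as nn

logits = torch.randn(8, 5)             # mini-batch of 8, 5 classes
targets = torch.randint(0, 5, (8,))

mean_loss = nn.CrossEntropyLoss(reduction='mean')(logits, targets)  # default
sum_loss = nn.CrossEntropyLoss(reduction='sum')(logits, targets)
per_item = nn.CrossEntropyLoss(reduction='none')(logits, targets)   # shape (8,)

assert torch.isclose(mean_loss, sum_loss / 8)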
https://stackoverflow.com/questions/55368741/
Torch allocates zero GPU memory on PyTorch
I am trying to use the GPU to train my model, but it seems that torch fails to allocate GPU memory. My model is an RNN built in PyTorch:

device = torch.device('cuda:0' if torch.cuda.is_available() else "cpu")

rnn = RNN(n_letters, n_hidden, n_categories_train)
rnn.to(device)
criterion = nn.NLLLoss()
criterion.to(device)
optimizer = torch.optim.SGD(rnn.parameters(), lr=learning_rate, weight_decay=.9)

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(input_size + hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        input = input.cuda()
        hidden = hidden.cuda()
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(combined)
        output = self.softmax(output)
        output = output.cuda()
        hidden = hidden.cuda()
        return output, hidden

    def init_hidden(self):
        return Variable(torch.zeros(1, self.hidden_size).cuda())

Training function:

def train(category_tensor, line_tensor, rnn, optimizer, criterion):
    rnn.zero_grad()
    hidden = rnn.init_hidden()
    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)
    loss = criterion(output, category_tensor)
    loss.backward()
    optimizer.step()
    return output, loss.item()

The function to get category_tensor and line_tensor:

def random_training_pair(category_lines, n_letters, all_letters):
    category = random.choice(all_categories_train)
    line = random.choice(category_lines[category])
    category_tensor = Variable(torch.LongTensor([all_categories_train.index(category)]).cuda())
    line_tensor = Variable(process_data.line_to_tensor(line, n_letters, all_letters)).cuda()
    return category, line, category_tensor, line_tensor

I ran the following code:

print(torch.cuda.get_device_name(0))
print('Memory Usage:')
print('Allocated:', round(torch.cuda.memory_allocated(0) / 1024 ** 3, 1), 'GB')
print('Cached:   ', round(torch.cuda.memory_cached(0) / 1024 ** 3, 1), 'GB')

and I got:

GeForce GTX 1060
Memory Usage:
Allocated: 0.0 GB
Cached:    0.0 GB

I did not get any errors, but GPU usage is just 1% while CPU usage is around 31%. I am using Windows 10 and Anaconda, where my PyTorch is installed. CUDA and cuDNN are installed from an .exe file downloaded from the Nvidia website.
Your problem is that to() is not an in-place operation on tensors: tensor.to(device) returns a new tensor located on the desired device and leaves the old one where it was, so the result has to be assigned back. So change:

rnn = RNN(n_letters, n_hidden, n_categories_train)
rnn.to(device)

to:

rnn = RNN(n_letters, n_hidden, n_categories_train).to(device)

and do the same everywhere else you used to() this way. That should do the trick for you! Note: all tensors and parameters you perform operations with have to be on the same device. If your model is on the GPU but your input tensor is on the CPU, you will get an error message.
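Following that note, a purely illustrative sketch of the training function with explicit device moves instead of the scattered .cuda() calls (device is the torch.device defined in the question):

def train(category_tensor, line_tensor, rnn, optimizer, criterion):
    rnn.zero_grad()
    hidden = rnn.init_hidden()
    # Inputs must live on the same device as the model's parameters.
    category_tensor = category_tensor.to(device)
    line_tensor = line_tensor.to(device)
    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)
    loss = criterion(output, category_tensor)
    loss.backward()
    optimizer.step()
    return output, loss.item()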
https://stackoverflow.com/questions/55368861/
In Colaboratory, CUDA cannot be used for the torch
The error message is as follows:

RuntimeError Traceback (most recent call last)
<ipython-input-24-06e96beb03a5> in <module>()
     11
     12 x_test = np.array(test_features)
---> 13 x_test_cuda = torch.tensor(x_test, dtype=torch.float).cuda()
     14 test = torch.utils.data.TensorDataset(x_test_cuda)
     15 test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)

/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
    160 class CudaError(RuntimeError):
    161     def __init__(self, code):
--> 162         msg = cudart().cudaGetErrorString(code).decode('utf-8')
    163         super(CudaError, self).__init__('{0} ({1})'.format(msg, code))
    164

RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:51
Click on Runtime and select Change runtime type. Under Hardware accelerator, select GPU and hit Save.
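After switching the runtime, you can verify from a cell that the GPU is now visible to torch:

import torch
print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # the GPU Colab assigned to you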
https://stackoverflow.com/questions/55368921/
How to train a neural network model with bert embeddings instead of static embeddings like glove/fasttext?
I am looking for some pointers on how to train a conventional neural network model with BERT embeddings that are generated dynamically (contextualized BERT embeddings produce different vectors for the same word depending on its context). In a normal neural network model, we would initialize the model with GloVe or fastText embeddings, like:

import torch.nn as nn

embed = nn.Embedding(vocab_size, vector_size)
embed.weight.data.copy_(some_variable_containing_vectors)

Instead of copying static vectors like this and using them for training, I want to pass every input to a BERT model, generate embeddings for the words on the fly, and feed them to the model for training. So should I change the forward function in the model to incorporate those embeddings? Any help would be appreciated!
If you are using PyTorch, you can use https://github.com/huggingface/pytorch-pretrained-BERT, which is the most popular BERT implementation for PyTorch (it is also a pip package!). Here I'm just going to outline how to use it properly. For this particular problem there are 2 approaches, since you obviously cannot use the Embedding layer:

You can incorporate generating BERT embeddings into your data preprocessing pipeline. You will need to use BERT's own tokenizer and word-to-id dictionary. The repo's README has examples on preprocessing. You can write a loop for generating BERT tokens for strings like the one below, processed in batches because BERT consumes a lot of GPU memory. (Note: to be more proper you should also add attention masks, which are LongTensors of 1s and 0s masking the sentence lengths.)

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

batch_size = 32
X_train, y_train = samples_from_file('train.csv')  # Put your own data loading function here

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Appending [CLS] and [SEP] tokens - this probably can be done in a cleaner way
X_train = [tokenizer.tokenize('[CLS] ' + sent + ' [SEP]') for sent in X_train]

bert_model = BertModel.from_pretrained('bert-base-uncased')
bert_model = bert_model.cuda()

X_train_tokens = [tokenizer.convert_tokens_to_ids(sent) for sent in X_train]
results = torch.zeros((len(X_train_tokens), bert_model.config.hidden_size)).float()

with torch.no_grad():
    for stidx in range(0, len(X_train_tokens), batch_size):
        X = X_train_tokens[stidx:stidx + batch_size]
        X = torch.LongTensor(X).cuda()
        _, pooled_output = bert_model(X)
        results[stidx:stidx + batch_size, :] = pooled_output.cpu()

After this you obtain the results tensor which contains the calculated embeddings, and you can use it as an input to your model. The full (and more proper) code for this is provided here. This method has the advantage of not having to re-calculate these embeddings every epoch. With this method, e.g. for classification, your model should only consist of a Linear(bert_model.config.hidden_size, num_labels) layer, and the inputs to the model should be the results tensor from the above code.

Second, and arguably cleaner, method: if you check out the repo, you can find there are wrappers for various tasks (e.g. BertForSequenceClassification). It should also be easy to implement your own custom classes that inherit from BertPretrainedModel and utilize the various Bert classes from the repo. For example, you can use:

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=num_labels)
# Where num_labels is the number of labels you need to classify.

After which you can continue with the preprocessing, up until generating token ids. Then you can train the entire model (but with a low learning rate, e.g. Adam 3e-5 for batch_size = 32).

With this you can fine-tune BERT's embeddings themselves, or use techniques like freezing BERT for a few epochs to train the classifier only, then unfreezing to fine-tune, etc. But it is also more computationally expensive. An example for this is also provided in the repo.
https://stackoverflow.com/questions/55369821/
Trying to understand Pytorch's implementation of LSTM
I have a dataset containing 1000 examples where each example has 5 features (a, b, c, d, e). I want to feed 7 examples to an LSTM so it predicts the feature (a) of the 8th day. Reading PyTorch's documentation of nn.LSTM(), I came up with the following:

input_size = 5
hidden_size = 10
num_layers = 1
output_size = 1

lstm = nn.LSTM(input_size, hidden_size, num_layers)
fc = nn.Linear(hidden_size, output_size)

out, hidden = lstm(X)  # Where X's shape is ([7,1,5])
output = fc(out[-1])
output  # output's shape is ([7,1])

According to the docs: the input of nn.LSTM is "input of shape (seq_len, batch, input_size)" with "input_size – The number of expected features in the input x", and the output is: "output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the LSTM, for each t." In this case, I thought seq_len would be the sequence of 7 examples, batch is 1 and input_size is 5. So the LSTM would consume each example containing 5 features, re-feeding the hidden layer every iteration. What am I missing?
When I extend your code to a full example -- I also added some comments that may help -- I get the following:

import torch
import torch.nn as nn

input_size = 5
hidden_size = 10
num_layers = 1
output_size = 1

lstm = nn.LSTM(input_size, hidden_size, num_layers)
fc = nn.Linear(hidden_size, output_size)

X = [
    [[1, 2, 3, 4, 5]],
    [[1, 2, 3, 4, 5]],
    [[1, 2, 3, 4, 5]],
    [[1, 2, 3, 4, 5]],
    [[1, 2, 3, 4, 5]],
    [[1, 2, 3, 4, 5]],
    [[1, 2, 3, 4, 5]],
]

X = torch.tensor(X, dtype=torch.float32)
print(X.shape)         # (seq_len, batch_size, input_size) = (7, 1, 5)

out, hidden = lstm(X)  # Where X's shape is ([7,1,5])
print(out.shape)       # (seq_len, batch_size, hidden_size) = (7, 1, 10)

out = out[-1]          # Get output of last step
print(out.shape)       # (batch, hidden_size) = (1, 10)

out = fc(out)          # Push through linear layer
print(out.shape)       # (batch_size, output_size) = (1, 1)

This makes sense to me, given your batch_size = 1 and output_size = 1 (I assume you're doing regression). I don't know where your output.shape = (7, 1) comes from. Are you sure that your X has the correct dimensions? Did you perhaps create nn.LSTM with batch_first=True? There are lots of little things that can sneak in.
https://stackoverflow.com/questions/55408365/
PyTorch datasets: ImageFolder and subfolder filtering
I would like to use ImageFolder to create an Image Dataset. My current image directory structure looks like this:

/root
-- train/
---- 001.jpg
---- 002.jpg
---- ....
-- test/
---- 001.jpg
---- 002.jpg
---- ....

I would like to have a dataset dedicated to training data, and a dataset dedicated to test data. As I understand it, doing:

dataset = ImageFolder(root='root/train')

does not find any images. Doing

dataset = ImageFolder(root='root')

finds images, but train and test images are just scrambled together. ImageFolder has an argument loader, but I did not manage to find any use-case for it. How can I discriminate images in the root folder according to the subfolder they belong to?
ImageFolder expects the data folder (the one that you pass as root) to contain subfolders representing the classes to which its images belong. Something like this:

data/
├── train/
|   ├── class_0/
|   |   ├── 001.jpg
|   |   ├── 002.jpg
|   |   └── 003.jpg
|   └── class_1/
|       ├── 004.jpg
|       └── 005.jpg
└── test/
    ├── class_0/
    |   ├── 006.jpg
    |   └── 007.jpg
    └── class_1/
        ├── 008.jpg
        └── 009.jpg

Having the above folder structure, you can do the following:

train_dataset = ImageFolder(root='data/train')
test_dataset = ImageFolder(root='data/test')

Since you don't have that structure, one obvious option is to create class-subfolders and put the images into them. Another option is to create a custom Dataset, see here and the sketch below.
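A minimal sketch of such a custom Dataset for the flat layout in the question, useful for example when the images carry no class labels at all (FlatImageDataset is just an illustrative name):

import os
from PIL import Image
from torch.utils.data import Dataset

class FlatImageDataset(Dataset):
    """Loads every image sitting directly under `root`, no class subfolders."""
    def __init__(self, root, transform=None):
        self.paths = [os.path.join(root, f) for f in sorted(os.listdir(root))
                      if f.lower().endswith(('.jpg', '.jpeg', '.png'))]
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img

train_dataset = FlatImageDataset('root/train')
test_dataset = FlatImageDataset('root/test')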
https://stackoverflow.com/questions/55435832/
ModuleNotFoundError: No module named 'torch._C'
I want to import torch, but the interpreter returns this result, and I have no idea how to deal with it:

Traceback (most recent call last):
  File "D:/Programing/tool/Python/learn_ml_the_hard_way/ML/scipy1.py", line 1, in <module>
    import torch
  File "D:\Programing\python\Anaconda3.5\lib\site-packages\torch\__init__.py", line 84, in <module>
    from torch._C import *
ModuleNotFoundError: No module named 'torch._C'
I had the same problem and followed the instructions in this link. You can also find the torch path with this command if needed:

sudo find / -iname torch
https://stackoverflow.com/questions/55441939/
Pytorch Not Updating Variables in .step()
I'm attempting to convert old code to PyTorch code as an experiment. Ultimately, I will be doing regression on a 10,000+ x 100 matrix, updating weights and whatnot appropriately. Trying to learn, I'm slowly scaling up on toy examples. I'm hitting a wall with the following sample code:

import torch
import torch.nn as nn
import torch.nn.functional as funct
from torch.autograd import Variable

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

x_data = Variable(torch.Tensor([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]]), requires_grad=True)
y_data = Variable(torch.Tensor([[2.0], [4.0], [6.0]]))

w = Variable(torch.randn(2, 1, requires_grad=True))
b = Variable(torch.randn(1, 1, requires_grad=True))

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(2, 1)  # 2 features per entry. 1 output

    def forward(self, x2, w2, b2):
        y_pred = x2 @ w2 + b2
        return y_pred

model = Model()
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    y_pred = model(x_data, w, b)      # Get prediction
    loss = criterion(y_pred, y_data)  # Calc loss
    print(epoch, loss.data.item())    # Print loss
    optimizer.zero_grad()             # Zero gradient
    loss.backward()                   # Calculate gradients
    optimizer.step()                  # Update w, b

However, doing so, my loss is always the same, and investigating shows my w and b never actually change. I'm a bit lost as to what's going on here. Ultimately, I'd like to be able to store the results of the "new" w and b to compare across iterations and datasets.
It looks like a case of cargo cult programming to me. Notice that your Model class doesn't make use of self in forward, so it is effectively a "regular" (non-method) function, and model is entirely stateless. The simplest fix to your code is to make the optimizer aware of w and b, by creating it as optimizer = torch.optim.SGD([w, b], lr=0.01). I also rewrite model to be a function:

import torch
import torch.nn as nn

# torch.autograd.Variable is roughly equivalent to requires_grad=True
# and is deprecated in PyTorch 1.0.
# Your code gives no reason to have requires_grad=True on x_data.
x_data = torch.tensor([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

w = torch.randn(2, 1, requires_grad=True)
b = torch.randn(1, 1, requires_grad=True)

def model(x2, w2, b2):
    return x2 @ w2 + b2

criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD([w, b], lr=0.01)

for epoch in range(10):
    y_pred = model(x_data, w, b)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.data.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

That being said, nn.Linear is built to simplify this procedure. It automatically creates an equivalent of both w and b, called self.weight and self.bias, respectively. Also, self.__call__(x) is equivalent to the definition of forward in your Model, in that it returns self.weight @ x + self.bias. In other words, you can also use this alternative code:

import torch
import torch.nn as nn

x_data = torch.tensor([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

model = nn.Linear(2, 1)
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.data.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

where model.parameters() can be used to enumerate the model parameters (equivalent to the manually created list [w, b] above). To access your parameters (load, save, print, whatever), use model.weight and model.bias.
https://stackoverflow.com/questions/55444804/
Querying an image with bilinear interpolation, i.e. finding the RGB value at fractional coordinates, using PyTorch
I have an input T1 of size (1, 256, 256, 3), i.e. an image/grid with batch size 1. I have another tensor T2 of size (1, N, 2), i.e. a tensor consisting of coordinates, e.g. [[10.5, 200.787], [150.568, 190.456], …]. How do I compute the values (using bilinear interpolation) at the coordinates in T2 from the T1 data? Thanks for any help. I have tested the same functionality in TensorFlow with the function "tf.contrib.resampler.resampler".
Try grid_sample:

torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros')

Given an input and a flow-field grid, it computes the output using input values and pixel locations from grid. For each output location output[n, :, h, w], the size-2 vector grid[n, h, w] specifies the input pixel locations x and y, which are used to interpolate the output value output[n, :, h, w]. The mode argument selects nearest or bilinear interpolation for sampling the input pixels. Coordinates should be in the range [-1, 1]; this is because the pixel locations are normalized by the input spatial dimensions. See the sampler git example and the pytorch documentation; a sketch for your shapes follows.
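A sketch for the exact shapes in the question, under the assumption that T2 stores coordinates in (x, y) pixel order and that the classic (align_corners=True style) normalization of PyTorch versions before 1.3 applies:

import torch
import torch.nn.functional as F

T1 = torch.rand(1, 256, 256, 3)  # image, channels-last as in the question
T2 = torch.tensor([[[10.5, 200.787], [150.568, 190.456]]])  # (1, N, 2)

img = T1.permute(0, 3, 1, 2)     # grid_sample expects (N, C, H, W)
H, W = img.shape[2], img.shape[3]

# Normalize pixel coordinates to [-1, 1]; grid's last dim is (x, y).
x = 2.0 * T2[..., 0] / (W - 1) - 1.0
y = 2.0 * T2[..., 1] / (H - 1) - 1.0
grid = torch.stack((x, y), dim=-1).unsqueeze(1)  # (1, 1, N, 2)

out = F.grid_sample(img, grid, mode='bilinear')  # (1, 3, 1, N)
values = out.squeeze(2).permute(0, 2, 1)         # (1, N, 3): RGB per coordinate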
https://stackoverflow.com/questions/55472629/
Iterate over two Pytorch tensors at once?
I have two Pytorch tensors (really, just 1-D lists), t1 and t2. Is it possible to iterate over them in parallel, i.e. do something like for a,b in zip(t1,t2) ? Thanks.
For me (Python version 3.7.3 and PyTorch version 1.0.0) the zip function works as expected with PyTorch tensors: >>> import torch >>> t1 = torch.ones(3) >>> t2 = torch.zeros(3) >>> list(zip(t1, t2)) [(tensor(1.), tensor(0.)), (tensor(1.), tensor(0.)), (tensor(1.), tensor(0.))] The list call is just needed to display the result. Iterating over zip works normally.
https://stackoverflow.com/questions/55486631/
Unpickling saved pytorch model throws AttributeError: Can't get attribute 'Net' on <module '__main__' despite adding class definition inline
I'm trying to serve a PyTorch model in a Flask app. This code was working when I ran it in a Jupyter notebook earlier, but now I'm running it within a virtualenv and apparently it can't get attribute 'Net', even though the class definition is right there. All the other similar questions tell me to add the class definition of the saved model in the same script. But it still doesn't work. The torch version is 1.0.1 (both where the saved model was trained and in the virtualenv). What am I doing wrong? Here's my code:

import os
import numpy as np
from flask import Flask, request, jsonify
import requests
import torch
from torch import nn
from torch.nn import functional as F

MODEL_URL = 'https://storage.googleapis.com/judy-pytorch-model/classifier.pt'

r = requests.get(MODEL_URL)
file = open("model.pth", "wb")
file.write(r.content)
file.close()

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = torch.sigmoid(self.fc1(x))
        x = torch.sigmoid(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=-1)

model = torch.load('model.pth')

app = Flask(__name__)

@app.route("/")
def hello():
    return "Binary classification example\n"

@app.route('/predict', methods=['GET'])
def predict():
    x_data = request.args['x_data']
    x_data = x_data.split()
    x_data = list(map(float, x_data))
    sample = np.array(x_data)
    sample_tensor = torch.from_numpy(sample).float()
    out = model(sample_tensor)
    _, predicted = torch.max(out.data, -1)
    if predicted.item() == 0:
        pred_class = "Has no liver damage - ", predicted.item()
    elif predicted.item() == 1:
        pred_class = "Has liver damage - ", predicted.item()
    return jsonify(pred_class)

Here's the full traceback:

Traceback (most recent call last):
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/bin/flask", line 10, in <module>
    sys.exit(main())
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/flask/cli.py", line 894, in main
    cli.main(args=args, prog_name=name)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/flask/cli.py", line 557, in main
    return super(FlaskGroup, self).main(*args, **kwargs)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/click/decorators.py", line 64, in new_func
    return ctx.invoke(f, obj, *args, **kwargs)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/flask/cli.py", line 767, in run_command
    app = DispatchingApp(info.load_app, use_eager_loading=eager_loading)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/flask/cli.py", line 293, in __init__
    self._load_unlocked()
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/flask/cli.py", line 317, in _load_unlocked
    self._app = rv = self.loader()
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/flask/cli.py", line 372, in load_app
    app = locate_app(self, import_name, name)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/flask/cli.py", line 235, in locate_app
    __import__(module_name)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/app.py", line 34, in <module>
    model = torch.load('model.pth')
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/torch/serialization.py", line 368, in load
    return _load(f, map_location, pickle_module)
  File "/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/lib/python3.6/site-packages/torch/serialization.py", line 542, in _load
    result = unpickler.load()
AttributeError: Can't get attribute 'Net' on <module '__main__' from '/Users/judyraj/Judy/pytorch-deployment/flask_app/liver_disease_finder/bin/flask'>

This doesn't solve my issue. I do not want to change the way I persist the model. torch.save() worked fine for me outside the virtualenv. I don't mind adding the class definition to the script. I'm trying to see what's causing the error despite that.
(This is a partial answer) I don't think torch.save(model, 'model.pt') works reliably from the command prompt, or when a model is saved by one script running as '__main__' and loaded from another. The reason is that unpickling tries to re-import the module the class was defined in, and at save time that module was recorded as __main__, which at load time is a different script (here, the flask launcher), so the Net attribute can't be found there. Now for the partial part: it's unclear how to fix this cleanly, especially when you have virtualenvs in the mix. Thanks to Jatentaki for starting the conversation in this direction.
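One common way to sidestep the class-pickling problem entirely is to persist only the state_dict (a sketch; note the asker said they would rather not change how the model is persisted, so this is offered for completeness, not as the accepted fix):

# Save only the parameters; no class object gets pickled, so nothing is
# looked up on __main__ at load time.
torch.save(model.state_dict(), 'model.pth')

# At load time, construct Net yourself and restore the weights.
model = Net()
model.load_state_dict(torch.load('model.pth'))
model.eval()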
https://stackoverflow.com/questions/55488795/
How to make the convolution in pytorch associative?
The discrete convolution is by definition associative. But when I try to verify this in pytorch, I cannot get a plausible result. The associative law is $f*(g*\psi)=(f * g)*\psi$, so I create three discrete functions centered at zero (as tensors) and convolve them with proper zero paddings so that every non-zero element in the result map is obtained. import torch import torch.nn as nn def test_conv_compst(): # $\psi$ inputs = torch.randn((1,4,7,7)) # $g$ a = torch.randn((7, 4, 3, 3)) # $f$ b = torch.randn((3, 7, 3, 3)) int_1 = torch.conv2d(inputs, a, padding=2) # results obtained by the first order res_1 = torch.conv2d(int_1, b, padding=2) comp_k = torch.conv2d(a.transpose(1, 0), b, padding=2).transpose(1, 0) print(comp_k.shape) # results obtained through the second order res_2 = torch.conv2d(inputs, comp_k, padding=4) print(res_1.shape) print(res_2.shape) print(torch.max(torch.abs(res_2-res_1))) The expected result is that the difference between the two results is negligible. But it returns: torch.Size([3, 4, 5, 5]) torch.Size([1, 3, 11, 11]) torch.Size([1, 3, 11, 11]) tensor(164.8044)
Long story short, this is because of batching. The first argument of torch.conv2d is interpreted as [batch, channel, height, width], the second as [out_channel, in_channel, height, width] and the output as [batch, channel, height, width]. So if you call conv2d(a, conv2d(b, c)), you treat b's leading dimension as batch, and if you call conv2d(conv2d(a, b), c), you treat it as out_channels. That being said, I get the impression that you're asking about math here, so let me expand. Your idea is correct in theory: convolutions are linear operators and should be associative. However, since we provide them with kernels rather than the actual matrices representing the linear operators, there is some "conversion" that needs to happen behind the scenes so that the kernels are interpreted as matrices properly. Classically, this can be done by constructing the corresponding circulant matrices (border conditions aside). If we denote the kernels with a, b, c and the circulant matrix creation operator with M, we get that M(a) @ [M(b) @ M(c)] = [M(a) @ M(b)] @ M(c), where @ denotes matrix-matrix multiplication. Convolution implementations return an image (vector, kernel, however you call it) and not the associated circulant matrix, which is ridiculously redundant and wouldn't fit in memory in most cases. Therefore we also need some circulant-to-vector operator V(matrix), which returns the first column of matrix and is therefore the inverse of M. In abstract mathematical terms, functions such as scipy.signal.convolve (actually correlate, since convolution requires an extra flip of one of the inputs, which I skip for clarity) are implemented as convolve = lambda a, b: V(M(a) @ M(b)) and thus convolve(a, convolve(b, c)) = V(M(a) @ M(V[M(b) @ M(c)])) = V(M(a) @ M(b) @ M(c)) = V(M(V[M(a) @ M(b)]) @ M(c)) = convolve(convolve(a, b), c) I hope I haven't lost you; this is just converting one into the other by making use of the fact that V is the inverse of M and of the associativity of matrix multiplication to move the parentheses. Note that the middle line is basically the "raw" ABC. We can verify with the following code: import numpy as np import scipy.signal as sig c2d = sig.convolve2d a = np.random.randn(7, 7) b = np.random.randn(3, 3) c = np.random.randn(3, 3) ab = c2d(a, b) ab_c = c2d(ab, c) bc = c2d(b, c) a_bc = c2d(a, bc) print((a_bc - ab_c).max()) The problem with PyTorch is that it interprets the first input as [batch, channel, height, width] and the second as [out_channels, in_channels, height, width]. This means that the "conversion" operator M is different for the first argument and the second argument. Let's call them M and N, respectively. Since there is only one output, there is only one V and it can be the inverse of either M or N, but not both (since they are different). If you rewrite the above equation taking care to distinguish between M and N, you will see that, depending on whether you choose V to invert one or the other, you're unable to write the equality either between lines 2 and 3 or 3 and 4. In practice, there is also the additional issue of the channel dimension, which is not there in the classic definition of convolutions; my first guess is that it could be dealt with using a single lifting operator M for both operands, unlike batching.
https://stackoverflow.com/questions/55499891/
How to change (assign) new value in FloatTensor, Pytorch?
I am trying to change/assign values in an array (a torch.cuda.FloatTensor). I tried several approaches but none of them work. Please help me! #1 #dis is [torch.cuda.FloatTensor of size 3185x1 (GPU 0)] s = dis.size(0) #3185 for i in range (0,s,1): if (dis[i,0] < 0): dis[i,0]== 0 #There is no error but it does not work. #2 #dis is [torch.cuda.FloatTensor of size 3185x1 (GPU 0)] s = dis.size(0) a = torch.zeros(s, 1).cuda() idx = (dis > a) dis[idx] = a[idx] AssertionError: can't compare Variable and tensor #3 #dis is [torch.cuda.FloatTensor of size 3185x1 (GPU 0)] s = dis.size(0) a = torch.zeros(s, 1).cuda() for i in range (0,s,1): if (dis[i,0] < a[i, 0]): dis[i,0]==a[i, 0] #RuntimeError: bool value of Variable objects containing non-empty torch.cuda.ByteTensor is ambiguous
IIUC, you need to replace values smaller than 0 with 0, Just use torch.clamp, which is meant for such use cases: dis = dis.clamp(min=0) Example: import torch dis = torch.tensor([[1], [-3], [0]]) #tensor([[ 1], # [-3], # [ 0]]) dis.clamp(min=0) #tensor([[1], # [0], # [0]])
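If you need to modify dis in place, as the loops in the question attempt, the in-place variant also exists:

dis.clamp_(min=0)  # trailing underscore: clamps dis itself rather than returning a copy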
https://stackoverflow.com/questions/55506634/
Differences between F.relu(X) and torch.max(X, 0)
I am trying to implement the following loss function To me, the most straightforward implementation would be using torch.max losses = torch.max(ap_distances - an_distances + margin, torch.Tensor([0])) However, I saw other implementations on github using F.relu losses = F.relu(ap_distances - an_distances + margin) They give essentially the same output, but I wonder if there's any fundamental difference between the two methods.
torch.max is not differentiable everywhere, according to this discussion: at the points where the two arguments are equal (here, where the expression crosses zero) the gradient is not uniquely defined. A loss function needs to be continuous and (sub)differentiable to do backprop. relu handles this by using the convention of a zero gradient at 0, and hence it is the conventional choice in a loss function.
https://stackoverflow.com/questions/55545354/
PyTorch will not fit straight line to two data points
I'm facing issues in fitting a simple y = 4x1 line with 2 data points using pytorch. While running the inference code, the model seems to output the same value for any input, which is strange. Please find the code attached along with the data files used by me. Appreciate any help here. import torch import numpy as np import pandas as pd df = pd.read_csv('data.csv') test_data = pd.read_csv('test_data.csv') inputs = df[['x1']] target = df['y'] inputs = torch.tensor(inputs.values).float() target = torch.tensor(target.values).float() test_data = torch.tensor(test_data.values).float() #Defining Network Architecture import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net,self).__init__() hidden1 = 3 # hidden2 = 5 self.fc1 = nn.Linear(1,hidden1) self.fc3 = nn.Linear(hidden1,1) def forward(self,x): x = F.relu(self.fc1(x)) x = self.fc3(x) return x #instantiate the model model = Net() print(model) criterion = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(),lr=0.01) model.train() #epochs epochs = 100 for x in range(epochs): #initialize the training loss to 0 train_loss = 0 #clear out gradients optimizer.zero_grad() #calculate the output output = model(inputs) #calculate loss loss = criterion(output,target) #backpropagate loss.backward() #update parameters optimizer.step() if ((x%5)==0): print('Training Loss after epoch {:2d} is {:2.6f}'.format(x,loss)) #set the model in evaluation mode model.eval() #Test the model on unseen data test_output = model(test_data) print(test_output) Below is the model output #model output tensor([[56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579], [56.7579]], grad_fn=<AddmmBackward>)
Your model is collapsing. You can probably see that based on the prints. You may want to use a lower learning rate (1e-5, 1e-6, etc.). Switching from SGD(...) to Adam(...) may be easier if you do not have experience and want less trouble fine-tuning these hparams. Also, maybe 100 epochs is not enough. As you did not share an MCVE, I cannot tell you for sure what it is. Here is an MCVE of linefitting using the same Net you used: import torch import numpy as np import torch.nn as nn import torch.nn.functional as F epochs = 1000 max_range = 40 interval = 4 # DATA x_train = torch.arange(0, max_range, interval).view(-1, 1).float() x_train += torch.rand(x_train.size(0), 1) - 0.5 # small noise y_train = (4 * x_train) y_train += torch.rand(x_train.size(0), 1) - 0.5 # small noise x_test = torch.arange(interval // 2, max_range, interval).view(-1, 1).float() y_test = 4 * x_test class Net(nn.Module): def __init__(self): super(Net, self).__init__() hidden1 = 3 self.fc1 = nn.Linear(1, hidden1) self.fc3 = nn.Linear(hidden1, 1) def forward(self, x): x = F.relu(self.fc1(x)) x = self.fc3(x) return x model = Net() print(model) criterion = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=1e-5) # TRAIN model.train() for epoch in range(epochs): optimizer.zero_grad() y_pred = model(x_train) loss = criterion(y_pred, y_train) loss.backward() optimizer.step() if epoch % 10 == 0: print('Training Loss after epoch {:2d} is {:2.6f}'.format(epoch, loss)) # TEST model.eval() y_pred = model(x_test) print(torch.cat((x_test, y_pred, y_test), dim=-1)) This is what the data looks like: And this is what the training looks like: Training Loss after epoch 0 is 7416.805664 Training Loss after epoch 10 is 6645.655273 Training Loss after epoch 20 is 5792.936523 Training Loss after epoch 30 is 4700.106445 Training Loss after epoch 40 is 3245.384277 Training Loss after epoch 50 is 1779.370728 Training Loss after epoch 60 is 747.418579 Training Loss after epoch 70 is 246.781311 Training Loss after epoch 80 is 68.635155 Training Loss after epoch 90 is 17.332235 Training Loss after epoch 100 is 4.280161 Training Loss after epoch 110 is 1.170808 Training Loss after epoch 120 is 0.453974 ... Training Loss after epoch 970 is 0.232296 Training Loss after epoch 980 is 0.232090 Training Loss after epoch 990 is 0.231888 And this is what the output looks like: | x_test | y_pred | y_test | |:-------:|:--------:|:--------:| | 2.0000 | 8.6135 | 8.0000 | | 6.0000 | 24.5276 | 24.0000 | | 10.0000 | 40.4418 | 40.0000 | | 14.0000 | 56.3303 | 56.0000 | | 18.0000 | 72.1884 | 72.0000 | | 22.0000 | 88.0465 | 88.0000 | | 26.0000 | 103.9047 | 104.0000 | | 30.0000 | 119.7628 | 120.0000 | | 34.0000 | 135.6210 | 136.0000 | | 38.0000 | 151.4791 | 152.0000 |
https://stackoverflow.com/questions/55558978/
How to do element wise multiplication for two 4D unequal size tensors in pytorch?
I have got a tensor A and a tensor B. Size of A = [2,64,56,56] Size of B = [2,64,29,29] How can I perform torch.mul(A,B)? The tensors are of unequal size, and I get: RuntimeError: shape [2, 64, 56, 56] is invalid for input of size 107648
You can check out the documentation here: https://pytorch.org/docs/stable/torch.html#torch.mul There, you can read: The shapes of input and other must be broadcastable. You can read about broadcastability here: https://pytorch.org/docs/stable/notes/broadcasting.html#broadcasting-semantics Lastly, it probably makes sense to think about what you want to achieve. I'm not sure what you want such an operation to do, since for a plain element-wise multiplication you need the tensors to have the same (or at least broadcastable) shapes.
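If the intent was, for example, to upsample B to A's spatial size before multiplying (this is an assumption; the question doesn't say what the 29x29 map represents), one possible sketch:

import torch
import torch.nn.functional as F

A = torch.randn(2, 64, 56, 56)
B = torch.randn(2, 64, 29, 29)

# Resize B to A's spatial dims, then multiply element-wise.
B_up = F.interpolate(B, size=A.shape[-2:], mode='bilinear', align_corners=False)
C = A * B_up  # shape [2, 64, 56, 56]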
https://stackoverflow.com/questions/55559610/
How can I solve backward() got an unexpected keyword argument 'retain_variables'?
I wrote the code below, but I got this error: TypeError: backward() got an unexpected keyword argument 'retain_variables' My code is: def learn(self, batch_state, batch_next_state, batch_reward, batch_action): outputs = self.model(batch_state).gather(1, batch_action.unsqueeze(1)).squeeze(1) next_outputs = self.model(batch_next_state).detach().max(1)[0] target = self.gamma*next_outputs + batch_reward td_loss = F.smooth_l1_loss(outputs, target) self.optimizer.zero_grad() td_loss.backward(retain_variables = True) self.optimizer.step()
I was having the same problem. This solution worked for me: td_loss.backward(retain_graph = True) The retain_variables keyword was renamed to retain_graph in newer versions of PyTorch, which is why the old name raises a TypeError.
https://stackoverflow.com/questions/55564676/
I don't understand the code for training a classifier in pytorch
I don't understand the line labels.size(0). I'm new to Pytorch and have been quite confused about the data structure. correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total))
labels is a Tensor with shape [N], where N is the number of samples in the batch. .size(...) returns a subclass of tuple (torch.Size) with the dimensions of the Tensor, and .size(0) returns an integer with the value of the first (0-based) dimension, i.e. N, the number of images in the current batch.
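A quick illustration:

import torch

labels = torch.tensor([3, 7, 1, 0])  # a batch of 4 labels
print(labels.size())   # torch.Size([4])
print(labels.size(0))  # 4 -- the number of samples in this batch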
https://stackoverflow.com/questions/55565687/
Explicit slicing across a particular dimension
I've got a 3D tensor x (e.g. 4x4x100). I want to obtain a subset of this by explicitly choosing elements across the last dimension. This would have been easy if I was choosing the same elements across the last dimension (e.g. x[:,:,30:50]), but I want to target different elements across that dimension using the 2D tensor indices, which specifies the starting index along the third dimension. Is there an easy way to do this in numpy? A simpler 2D example: x = [[1,2,3,4,5,6],[10,20,30,40,50,60]] indices = [1,3] Let's say I want to grab two elements across the last dimension of x starting from the points specified by indices. So my desired output is: [[2,3],[40,50]] Update: I think I could use a combination of take() and ravel_multi_index() but some of the platforms that are inspired by numpy (like PyTorch) don't seem to have ravel_multi_index, so I'm looking for alternative solutions
Iterating over the idx, and collecting the slices is not a bad option if the number of 'rows' isn't too large (and the size of the slices is relatively big). In [55]: x = np.array([[1,2,3,4,5,6],[10,20,30,40,50,60]]) In [56]: idx = [1,3] In [57]: np.array([x[j,i:i+2] for j,i in enumerate(idx)]) Out[57]: array([[ 2, 3], [40, 50]]) Joining the slices like this only works if they all are the same size. An alternative is to collect the indices into an array, and do one indexing. For example with a similar iteration: idxs = np.array([np.arange(i,i+2) for i in idx]) But broadcasted addition may be better: In [58]: idxs = np.array(idx)[:,None]+np.arange(2) In [59]: idxs Out[59]: array([[1, 2], [3, 4]]) In [60]: x[np.arange(2)[:,None], idxs] Out[60]: array([[ 2, 3], [40, 50]]) ravel_multi_index is not hard to replicate (if you don't need clipping etc): In [65]: np.ravel_multi_index((np.arange(2)[:,None],idxs),x.shape) Out[65]: array([[ 1, 2], [ 9, 10]]) In [66]: x.flat[_] Out[66]: array([[ 2, 3], [40, 50]]) In [67]: np.arange(2)[:,None]*x.shape[1]+idxs Out[67]: array([[ 1, 2], [ 9, 10]])
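For the PyTorch case mentioned in the update, the same broadcasted-indices idea works with torch.gather, which stands in for the ravel_multi_index/flat combination (a sketch):

import torch

x = torch.tensor([[1, 2, 3, 4, 5, 6], [10, 20, 30, 40, 50, 60]])
idx = torch.tensor([1, 3])

# Build a 2D index tensor: each row holds the two positions to grab.
idxs = idx[:, None] + torch.arange(2)  # tensor([[1, 2], [3, 4]])
x.gather(1, idxs)                      # tensor([[ 2,  3], [40, 50]])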
https://stackoverflow.com/questions/55572737/
What's the equivalent of tf.nn.softmax_cross_entropy_with_logits in pytorch?
I was trying to replicate some code, which was written in tensorflow, with pytorch. I came across a loss function in tensorflow, softmax_cross_entropy_with_logits. I was looking for an equivalent of it in pytorch and I found torch.nn.MultiLabelSoftMarginLoss, though I'm not quite sure it is the right function. Also I don't know how to measure the accuracy of my model when I use this loss function and no relu layer at the end of the network. Here is my code: # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): loss = torch.nn.MultiLabelSoftMarginLoss() return loss(Z3,Y) def model(net,X_train, y_train, X_test, y_test, learning_rate = 0.009, num_epochs = 100, minibatch_size = 64, print_cost = True): optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate) optimizer.zero_grad() total_train_acc=0 for epoch in range(num_epochs): for i, data in enumerate(train_loader, 0): running_loss = 0.0 inputs, labels = data inputs, labels = Variable(inputs), Variable(labels) Z3 = net(inputs) # Cost function cost = compute_cost(Z3, labels) # Backpropagation: Define the optimizer. # Use an AdamOptimizer that minimizes the cost. cost.backward() optimizer.step() running_loss += cost.item() # Measuring the accuracy of minibatch acc = (labels==Z3).sum() total_train_acc += acc.item() #Print every 10th batch of an epoch if epoch%1 == 0: print("Cost after epoch {} : {:.3f}".format(epoch,running_loss/len(train_loader)))
Use torch.nn.CrossEntropyLoss(). It combines both softmax and cross-entropy. From the documentation: This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class. Example: # define loss function loss_fn = torch.nn.CrossEntropyLoss(reduction='mean') # during training for (x, y) in train_loader: model.train() y_pred = model(x) # your input `torch.FloatTensor` loss_val = loss_fn(y_pred, y) print(loss_val.item()) # prints numpy value optimizer.zero_grad() loss_val.backward() optimizer.step() Make sure that the types of x and y are correct. Usually the conversion is done like this: loss_fn(y_pred.type(torch.FloatTensor), y.type(torch.LongTensor)). To measure accuracy you can define a custom function: def compute_accuracy(y_pred, y): if list(y_pred.size()) != list(y.size()): raise ValueError('Inputs have different shapes.', list(y_pred.size()), 'and', list(y.size())) result = [1 if y1==y2 else 0 for y1, y2 in zip(y_pred, y)] return sum(result) / len(result) And use both like this: model.train() y_pred = model(x) loss_val = loss_fn(y_pred.type(torch.FloatTensor), y.type(torch.LongTensor)) _, y_pred = torch.max(y_pred, 1) accuracy_val = compute_accuracy(y_pred, y) print(loss_val.item()) # print loss value print(accuracy_val) # print accuracy value # update step etc. If your input data is one-hot encoded you can convert it to regular encoding before you use loss_fn: _, targets = y.max(dim=1) y_pred = model(x) loss_val = loss_fn(y_pred, targets)
https://stackoverflow.com/questions/55577519/
Unable to save Pytorch model to Google Drive in Google Colab?
I am trying to save my model to my drive on google colab. I have used the following code to mount my Google Drive: from google.colab import drive drive.mount('/content/gdrive') After all the preprocessing, model definition and training, I want to save my model to the drive because training it will take a long time. So, I will save it to drive at regular intervals and reload from that point to continue. The code to save my model is: def save_model(model, model_name, iter): path = f'content/gdrive/My Drive/Machine Learning Models/kaggle_jigsaw_{model_name}_iter_{iter}.pth' print(f'Saving {model_name} model...') torch.save(model.state_dict(), path) print(f'{model_name} saved successfully.') EMBEDDING_DIMS = 128 HIDDEN_SIZE = 256 gru = GRU(vocab.n_words, EMBEDDING_DIMS, HIDDEN_SIZE, 2).to(device) save_model(gru, 'gru', 0) I am getting the following error: Saving gru model... --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-27-d2510611a9d4> in <module>() 9 10 gru = GRU(vocab.n_words, EMBEDDING_DIMS, HIDDEN_SIZE, 2).to(device) ---> 11 save_model(gru, 'gru', 0) <ipython-input-27-d2510611a9d4> in save_model(model, model_name, iter) 2 path = f'content/gdrive/My Drive/Machine Learning Models/kaggle_jigsaw_{model_name}_iter_{iter}.pth' 3 print(f'Saving {model_name} model...') ----> 4 torch.save(model.state_dict(), path) 5 print(f'{model_name} saved successfully.') 6 /usr/local/lib/python3.6/dist-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol) 217 >>> torch.save(x, buffer) 218 """ --> 219 return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol)) 220 221 /usr/local/lib/python3.6/dist-packages/torch/serialization.py in _with_file_like(f, mode, body) 140 (sys.version_info[0] == 3 and isinstance(f, pathlib.Path)): 141 new_fd = True --> 142 f = open(f, mode) 143 try: 144 return body(f) FileNotFoundError: [Errno 2] No such file or directory: 'content/gdrive/My Drive/Machine Learning Models/kaggle_jigsaw_gru_iter_0.pth' I have manually created the folder in my drive, so only the file needs to be created. Still, the error persists. Though I am sure that manually creating the folder was not required; the problem is something else. Where am I going wrong?
You likely need a leading / in your path. Try changing this line: path = f'content/gdrive/My Drive/Machine Learning Models/kaggle_jigsaw_{model_name}_iter_{iter}.pth' to: path = f'/content/gdrive/My Drive/Machine Learning Models/kaggle_jigsaw_{model_name}_iter_{iter}.pth'
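For robustness you could also create the folder from code instead of by hand, building on the same names used in the question (a small sketch):

import os

save_dir = '/content/gdrive/My Drive/Machine Learning Models'
os.makedirs(save_dir, exist_ok=True)  # no error if the folder already exists
path = os.path.join(save_dir, f'kaggle_jigsaw_{model_name}_iter_{iter}.pth')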
https://stackoverflow.com/questions/55596375/
How does the apply(fn) function in pytorch work with a function without a return statement as its argument?
I have some questions about the following code fragments: >>> def init_weights(m): print(m) if type(m) == nn.Linear: m.weight.data.fill_(1.0) print(m.weight) >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) >>> net.apply(init_weights) apply() is part of the pytorch.nn package. You find the code in the documentation of this package. The final questions: 1. Why does this code sample work, although there is no argument or brackets added to init_weights() when it is given to apply()? 2. Where does the function init_weights(m) get its argument m from, when it's given as a parameter to the function apply() without brackets and an m?
We find the answers to your questions in said documentation of torch.nn.Module.apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init). Why does this code sample work, although there is no argument or brackets added to init_weights() when it is given to apply()? The given function init_weights isn't called prior to the apply call, precisely because there are no parentheses; rather, a reference to init_weights is given to apply, and init_weights is only called later, from within apply. Where does the function init_weights(m) get its argument m from, when it's given as a parameter to the function apply() without brackets and an m? It gets its argument with each call from within apply, and, as the documentation says, it is called with m iterating over every submodule of (in this case) net as well as net itself, due to the method call net.apply(…).
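For intuition, here is a simplified sketch of what Module.apply does internally (the real implementation in torch/nn/modules/module.py is essentially this):

def apply(self, fn):
    for module in self.children():
        module.apply(fn)  # recurse into every submodule first
    fn(self)              # then call fn on self; this is where `m` comes from
    return self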
https://stackoverflow.com/questions/55613518/
Using Multiple GPUs outside of training in PyTorch
I'm calculating the accumulated distance between each pair of kernels inside a nn.Conv2d layer. However, for large layers it runs out of memory using a Titan X with 12 GB of memory. I'd like to know if it is possible to divide such calculations across two GPUs. The code follows: def ac_distance(layer): total = 0 for p in layer.weight: for q in layer.weight: total += distance(p,q) return total Where layer is an instance of nn.Conv2d and distance returns the sum of the differences between p and q. I can't detach the graph, however, for I need it later on. I tried wrapping my model in nn.DataParallel, but all calculations in ac_distance are done using only 1 GPU; however, it trains using both.
Parallelism while training neural networks can be achieved in two ways. Data Parallelism - Split a large batch into two and do the same set of operations but individually on two different GPUs respectively Model Parallelism - Split the computations and run them on different GPUs As you have asked in the question, you would like to split the calculation, which falls into the second category. There are no out-of-the-box ways to achieve model parallelism. PyTorch provides primitives for parallel processing using the torch.distributed package. This tutorial comprehensively goes through the details of the package and you can cook up an approach to achieve the model parallelism that you need. However, model parallelism can be very complex to achieve. The general way is to do data parallelism with either torch.nn.DataParallel or torch.nn.DistributedDataParallel. In both methods, you would run the same model on two different GPUs, however one huge batch would be split into two smaller chunks. The gradients will be accumulated on a single GPU and optimization happens. Optimization takes place on a single GPU in DataParallel and in parallel across GPUs in DistributedDataParallel by using multiprocessing. In your case, if you use DataParallel, the computation would still take place on two different GPUs. If you notice an imbalance in GPU usage, it could be because of the way DataParallel has been designed. You can try using DistributedDataParallel, which is the fastest way to train on multiple GPUs according to the docs. There are other ways to process very large batches too. This article goes through them in detail and I'm sure it would be helpful. A few important points: Do gradient accumulation for larger batches Use DataParallel If that doesn't suffice, go with DistributedDataParallel
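A minimal usage sketch of the data-parallel route (note it only parallelizes the module's forward pass, so it would not by itself split a free-standing function like ac_distance):

import torch.nn as nn

model = nn.DataParallel(model)  # replicates the module on all visible GPUs
output = model(input)           # the batch dimension of `input` is scattered across GPUs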
https://stackoverflow.com/questions/55624102/
How does one dynamically add new parameters to optimizers in Pytorch?
I was going through this post in the pytorch forum, and I also wanted to do this. The original post removes and adds layers, but I think my situation is not that different. I also want to add layers or more filters or word embeddings. My main motivation is that the AI agent does not know the whole vocabulary/dictionary in advance because it's large. I strongly prefer (for the moment) not to do character-by-character RNNs. So what will happen for me is that when the agent starts a forward pass it might find new words it has never seen and will need to add them to the embedding table (or perhaps add new filters before it starts the forward pass). So what I want to make sure is: embeddings are added correctly (at the right time, when a new computation graph is made) so that they are updatable by the optimizer; and no issues with stored info of past parameters, e.g. if it's using some sort of momentum. How does one do this? Any sample code that works?
Just to add an answer to the title of your question: "How does one dynamically add new parameters to optimizers in Pytorch?" You can append params at any time to the optimizer: import torch import torch.optim as optim model = torch.nn.Linear(2, 2) # Initialize optimizer (note: Adam takes betas rather than a momentum argument) optimizer = optim.Adam(model.parameters(), lr=0.001) extra_params = torch.randn(2, 2) optimizer.param_groups.append({'params': extra_params }) #then you can print your `extra_params` print("extra params", extra_params) print("optimizer params", optimizer.param_groups)
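Appending to optimizer.param_groups directly works, but the supported API for this is add_param_group, which also validates the new group:

optimizer.add_param_group({'params': extra_params})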
https://stackoverflow.com/questions/55640836/
How to convert BatchNorm weights of caffe to pytorch BatchNorm?
BatchNorm and Scale weights of a caffe model can be read from pycaffe: there are three blobs in BatchNorm and two blobs in Scale. I tried to copy those weights to pytorch BatchNorm with code like this: if 'conv3_final_bn' == name: assert len(blobs) == 3, '{} layer blob count: {}'.format(name, len(blobs)) torch_mod['conv3_final_bn.running_mean'] = blobs[0].data torch_mod['conv3_final_bn.running_var'] = blobs[1].data elif 'conv3_final_scale' == name: assert len(blobs) == 2, '{} layer blob count: {}'.format(name, len(blobs)) torch_mod['conv3_final_bn.weight'] = blobs[0].data torch_mod['conv3_final_bn.bias'] = blobs[1].data The two BatchNorms act differently. I also tried to set conv3_final_bn.weight=1 and conv3_final_bn.bias=0 to verify the BN layer of caffe; the results didn't match either. How should I deal with the wrong matching?
Got it! There is still a third blob in caffe's BatchNorm: a moving-average scale factor, by which the stored mean and variance must be divided. The code should be: if 'conv3_final_bn' == name: assert len(blobs) == 3, '{} layer blob count: {}'.format(name, len(blobs)) torch_mod['conv3_final_bn.running_mean'] = blobs[0].data / blobs[2].data[0] torch_mod['conv3_final_bn.running_var'] = blobs[1].data / blobs[2].data[0] elif 'conv3_final_scale' == name: assert len(blobs) == 2, '{} layer blob count: {}'.format(name, len(blobs)) torch_mod['conv3_final_bn.weight'] = blobs[0].data torch_mod['conv3_final_bn.bias'] = blobs[1].data
https://stackoverflow.com/questions/55644109/
How to use torch.nn.CrossEntropyLoss as an autoencoder's reconstruction loss?
I want to compute the reconstruction accuracy of my autoencoder using CrossEntropyLoss: ae_criterion = nn.CrossEntropyLoss() ae_loss = ae_criterion(X, Y) where X is the autoencoder's reconstruction and Y is the target (since it is an autoencoder, Y is the same as the original input X). Both X and Y have shape [42, 32, 130] = [batch_size, timesteps, number_of_classes]. When I run the code above I get the following error: ValueError: Expected target size (42, 130), got torch.Size([42, 32, 130]) After looking at the docs, I'm still unsure how I should call nn.CrossEntropyLoss() in the appropriate way. It seems that I should change Y to be of shape [42, 32, 1], with each element being a scalar in the interval [0, 129] (or [1, 130]), am I right? Is there a way to avoid this? Since X and Y are between 0 and 1, could I just use binary cross-entropy loss element-wise in an equivalent way?
For CrossEntropyLoss, the shape of Y must be (42, 32), and each element must be a Long scalar in the interval [0, 129]; the input X must then carry the class dimension in position 1, i.e. have shape (42, 130, 32). You may want to use BCELoss or BCEWithLogitsLoss for your problem instead.
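A sketch of the shape handling using the dimensions from the question (note the permute: nn.CrossEntropyLoss expects the class dimension in position 1):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
X = torch.randn(42, 32, 130)             # [batch, timesteps, classes] logits
Y = torch.randint(0, 130, (42, 32))      # [batch, timesteps] Long class indices
loss = criterion(X.permute(0, 2, 1), Y)  # input becomes [42, 130, 32]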
https://stackoverflow.com/questions/55651920/
How can I use KNN and Random Forest models in Pytorch?
This may seem like an XY problem, but initially I had huge data and I was not able to train within the given resources (RAM problem). So I thought I could use the batching feature of Pytorch. But I want to use methods like KNN, Random Forest and clustering, rather than deep learning. So is it possible, or can I use scikit libraries with PyTorch?
Update Currently, there are some sklearn alternatives utilizing GPU, the most prominent being cuML (link here) provided by rapidsai. Previous answer I would advise against using PyTorch solely for the purpose of using batches. The argumentation goes as follows: scikit-learn has docs about scaling where one can find MiniBatchKMeans, and there are other options like the partial_fit method or warm_start arguments (as is the case with RandomForest, check this approach). KNN cannot easily be used without a hand-made implementation with disk caching, as it stores the whole dataset in memory (and you lack RAM). This approach would be horribly inefficient either way; do not try. You most probably will not be able to create algorithms on par with those from scikit (at least not solo and not without a considerable amount of work). Your best bet is to go with quite battle-tested solutions (even though it's still 0.2x currently). It should be possible to get some speed improvements through numba, but that's beyond the scope of this question. Maybe you could utilize CUDA for different algorithms, but it's an even more non-trivial task. All in all, PyTorch is suited for deep learning computations with heavy CUDA usage. If you need neural networks, this framework is one of the best out there; otherwise go with something like sklearn or other frameworks allowing incremental training. You can always bridge the two easily with numpy() and a few other calls in pytorch. EDIT: I have found a KNN implementation possibly suiting your requirements in this github repository
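For instance, the MiniBatchKMeans route mentioned above looks roughly like this (`batches` is a hypothetical iterator over chunks you load from disk, not part of sklearn):

from sklearn.cluster import MiniBatchKMeans

km = MiniBatchKMeans(n_clusters=10)
for X_batch in batches:          # each X_batch is a small (n_samples, n_features) array
    km.partial_fit(X_batch)      # incremental update; the full dataset never sits in RAM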
https://stackoverflow.com/questions/55663672/
How to fix 'ImportError: /home/... .../lib/libtorch.so.1: undefined symbol: nvrtcGetProgramLogSize' in DGL?
I get an error when importing pytorch inside dgl (Deep Graph Library), concretely: ImportError: /home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/torch/lib/libtorch.so.1: undefined symbol: nvrtcGetProgramLogSize I tried to reinstall pytorch (uninstall and reinstall with conda). I also searched on google and I found this https://github.com/pytorch/pytorch/issues/14973. There, they solve it by linking in libnvrtc.so and libcuda.so, but I have no idea what that means. Does anyone know what that means? This is the basic code: import dgl from parseador import train_df g = dgl.DGLGraph() g.add_nodes(5) g.add_edges([0, 0, 0, 0], [1, 2, 3, 4]) g.ndata['h'] = th.randn(5, 3) g.edata['h'] = th.randn(4, 4) And this is the error: Traceback (most recent call last): File "/home/user/Documentos/Repo/grafos.py", line 1, in <module> import dgl File "/home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/dgl/__init__.py", line 2, in <module> from . import function File "/home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/dgl/function/__init__.py", line 5, in <module> from .message import * File "/home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/dgl/function/message.py", line 7, in <module> from .. import backend as F File "/home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/dgl/backend/__init__.py", line 46, in <module> load_backend(os.environ.get('DGLBACKEND', 'pytorch').lower()) File "/home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/dgl/backend/__init__.py", line 18, in load_backend mod = importlib.import_module('.%s' % mod_name, __name__) File "/home/user/anaconda3/envs/my_env/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/dgl/backend/pytorch/__init__.py", line 1, in <module> from .tensor import * File "/home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/dgl/backend/pytorch/tensor.py", line 5, in <module> import torch as th File "/home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/torch/__init__.py", line 102, in <module> from torch._C import * ImportError: /home/user/anaconda3/envs/my_env/lib/python3.7/site-packages/torch/lib/libtorch.so.1: undefined symbol: nvrtcGetProgramLogSize How can I fix this error? Some time ago I ran this code correctly on Windows 10; now I'm running Ubuntu 18.04.
I also ran into this, but I actually wanted to use GPU, so installing pytorch-cpu was not an option for me. Instead, installing pytorch package from pytorch channel (instead of defaults) solved the issue for me: conda install pytorch --channel pytorch
https://stackoverflow.com/questions/55665606/
Managing memory differently at train and test time in pytorch
Currently I'm writing a segmentation model based on U-net with pytorch, and I want to use something similar to the inverted residual introduced in mobilenet v2 to improve the model's speed on cpu. pytorch code for mobile netv2 Then I realized that the model uses a lot more memory in both the train and test phases. The model should use more memory in the train phase, because all the mid-step tensors (feature maps) are saved, and with separable convolution there are more tensors created for each "convolution" operation. But at run time, actually only a few last-step tensors must be saved to be used for skip connections, and all the other tensors can be deleted once their next step is created. The memory efficiency should be the same for u-net with normal convolution and u-net with separable convolution in the test phase. I'm a newbie to pytorch, so I don't know how to write code that prevents unnecessary memory cost at test time. Since pytorch is bound to python, I guess I can manually delete all the unnecessary tensors in the forward function with del. But I guess that if I just delete variables in the forward function, it will influence the training stage. Is there more advanced functionality in pytorch that is able to optimize test-phase memory usage with a 'network graph'? I'm also curious whether tensorflow deals with those problems automatically, since it has a more abstract and complex graph-building logic.
After reading the official pytorch code for resnet, I realized I shouldn't give every intermediate variable a name, i.e. I shouldn't write: conv1 = self.conv1(x) conv2 = self.conv2(conv1) I should just write: out = self.conv1(x) out = self.conv2(out) This way nothing refers to the object corresponding to conv1 after it is used, and python is able to clean it up. Because there are residual connections between blocks, I need one more python variable to refer to the intermediate value, i.e.: out = self.conv1(x) residual_connect = out out = self.conv2(out) out = residual_connect + out But in the upsampling stage only out is needed. So I deleted residual_connect at the beginning of the decoding stage: del residual_connect It seems like a hack and I'm surprised that it didn't cause problems in the training stage. The RAM usage for my model is greatly reduced now, but I feel there should be a more elegant way to solve the problem.
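For completeness, the usual idiom for test-time memory savings is to disable autograd for the whole forward pass; with no graph being recorded, every intermediate tensor is freed as soon as nothing references it:

import torch

model.eval()
with torch.no_grad():   # no autograd bookkeeping, so intermediates are freed eagerly
    out = model(x)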
https://stackoverflow.com/questions/55667005/
Torch.cuda.is_available() keeps switching to False
I have tried several solutions which hinted at what to do when the CUDA GPU is available and CUDA is installed but torch.cuda.is_available() returns False. They did help, but only temporarily, meaning torch.cuda.is_available() reported True, but after some time it switched back to False. I use CUDA 9.0.176 and a GTX 1080. What should I do to get the permanent effect? I tried the following methods: https://forums.fast.ai/t/torch-cuda-is-available-returns-false/16721/5 https://github.com/pytorch/pytorch/issues/15612 Note: When torch.cuda.is_available() works fine but then at some point switches to False, I have to restart the computer and then it works again (for some time).
The reason for torch.cuda.is_available() returning False is an incompatibility between the versions of pytorch and cudatoolkit. As of Jun-2022, the current version of pytorch is compatible with cudatoolkit=11.3, whereas the current cuda toolkit version is 11.7. Source Solution: Uninstall Pytorch for a fresh installation. You cannot install an old version on top of a new version without force installation (using pip install --upgrade --force-reinstall <package_name>). Run conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch to install pytorch. Install the CUDA 11.3 version from https://developer.nvidia.com/cuda-11.3.0-download-archive. You are good to go.
https://stackoverflow.com/questions/55717751/
PyTorch: Batch Outer-Addition
I have two PyTorch tensors: A and B, both of shape (b, c, 3). I want to compute the outer product C of A and B so that the resulting shape is (b, c, 3, 3), with the multiplication operation replaced by addition. How should I do it?
You can add a corresponding singleton dimension: C = A[..., None] + B[..., None, :] For example, with batch and channel dimensions equal to 1 (b=1, c=1): import torch A = torch.tensor([[[1, 2, 3.]]]) B = torch.tensor([[[4., 5., 6.]]]) A[..., None] + B[..., None, :] Out[ ]: tensor([[[[5., 6., 7.], [6., 7., 8.], [7., 8., 9.]]]])
https://stackoverflow.com/questions/55739993/
Missing Keys in state_dict
I am having problems loading my model on google colab. I have attached the code below. I have tried changing the name of the state dict and it does not help. Basically, I am trying to save my model for later use, but this is becoming extremely difficult since I am not able to properly save and load it. Please help me with the problem. After the code, you will also find the error that I have attached below. Here is the code: from zipfile import ZipFile file_name = 'data.zip' with ZipFile(file_name, 'r') as zip: zip.extractall() from zipfile import ZipFile file_name = 'results.zip' with ZipFile(file_name, 'r') as zip: zip.extractall() !pip install tensorflow-gpu from __future__ import print_function import torch import torch.nn as nn import torch.nn.parallel import torch.optim as optim import torch.utils.data import torchvision.datasets as dset import torchvision.transforms as transforms import torchvision.utils as vutils from torch.autograd import Variable batchSize = 64 imageSize = 64 transform = transforms.Compose([transforms.Resize(imageSize), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),]) dataset = dset.CIFAR10(root = './data', download = True, transform = transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size = batchSize, shuffle = True, num_workers = 2) def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: m.weight.data.normal_(0.0, 0.02) elif classname.find('BatchNorm') != -1: m.weight.data.normal_(1.0, 0.02) m.bias.data.fill_(0) class G(nn.Module): def __init__(self): super(G, self).__init__() self.main = nn.Sequential( nn.ConvTranspose2d(100, 512, 4, 1, 0, bias = False), nn.BatchNorm2d(512), nn.ReLU(True), nn.ConvTranspose2d(512, 256, 4, 2, 1, bias = False), nn.BatchNorm2d(256), nn.ReLU(True), nn.ConvTranspose2d(256, 128, 4, 2, 1, bias = False), nn.BatchNorm2d(128), nn.ReLU(True), nn.ConvTranspose2d(128, 64, 4, 2, 1, bias = False), nn.BatchNorm2d(64), nn.ReLU(True), nn.ConvTranspose2d(64, 3, 4, 2, 1, bias = False), nn.Tanh() ) def forward(self, input): output = self.main(input) return output netG = G() netG.load_state_dict(torch.load('generator.pth')) netG.eval() #netG.apply(weights_init) class D(nn.Module): def __init__(self): super(D, self).__init__() self.main = nn.Sequential( nn.Conv2d(3, 64, 4, 2, 1, bias = False), nn.LeakyReLU(0.2, inplace = True), nn.Conv2d(64, 128, 4, 2, 1, bias = False), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace = True), nn.Conv2d(128, 256, 4, 2, 1, bias = False), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace = True), nn.Conv2d(256, 512, 4, 2, 1, bias = False), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace = True), nn.Conv2d(512, 1, 4, 1, 0, bias = False), nn.Sigmoid() ) def forward(self, input): output = self.main(input) return output.view(-1) netD = D() netD.load_state_dict(torch.load('discriminator.pth')) netD.eval() #netD.apply(weights_init) criterion = nn.BCELoss() checkpoint = torch.load('discriminator.pth') optimizerD = optim.Adam(netD.parameters(), lr = 0.0002, betas = (0.5, 0.999)) optimizerD.load_state_dict(checkpoint['optimizer_state_dict']) epoch = checkpoint['epoch'] errD = checkpoint['loss'] checkpoint1 = torch.load('generator.pth') optimizerG = optim.Adam(netG.parameters(), lr = 0.0002, betas = (0.5, 0.999)) optimizerG.load_state_dict(checkpoint1['optimizer_state_dict']) errG = checkpoint1['loss'] k = epoch for j in range(k, 10): for i, data in enumerate(dataloader, 0): netD.zero_grad() real, _ = data input =
Variable(real) target = Variable(torch.ones(input.size()[0])) output = netD(input) errD_real = criterion(output, target) noise = Variable(torch.randn(input.size()[0], 100, 1, 1)) fake = netG(noise) target = Variable(torch.zeros(input.size()[0])) output = netD(fake.detach()) errD_fake = criterion(output, target) errD = errD_real + errD_fake errD.backward() optimizerD.step() netG.zero_grad() target = Variable(torch.ones(input.size()[0])) output = netD(fake) errG = criterion(output, target) errG.backward() optimizerG.step() print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (epoch+1, 10, i+1, len(dataloader), errD.data, errG.data)) if i % 100 == 0: vutils.save_image(real, '%s/real_samples.png' % "./results", normalize = True) fake = netG(noise) vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch+1), normalize = True) torch.save({ 'epoch': epoch, 'model_state_dict': netD.state_dict(), 'optimizer_state_dict': optimizerD.state_dict(), 'loss': errD }, 'discriminator.pth') torch.save({ 'epoch': epoch, 'model_state_dict': netG.state_dict(), 'optimizer_state_dict': optimizerG.state_dict(), 'loss': errG }, 'generator.pth') Here is the error: RuntimeError Traceback (most recent call last) <ipython-input-23-3e55546152c7> in <module>() 26 # Creating the generator 27 netG = G() ---> 28 netG.load_state_dict(torch.load('generator.pth')) 29 netG.eval() 30 #netG.apply(weights_init) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict) 767 if len(error_msgs) > 0: 768 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( --> 769 self.__class__.__name__, "\n\t".join(error_msgs))) 770 771 def _named_members(self, get_members_fn, prefix='', recurse=True): RuntimeError: Error(s) in loading state_dict for G: Missing key(s) in state_dict: "main.0.weight", "main.1.weight", "main.1.bias", "main.1.running_mean", "main.1.running_var", "main.3.weight", "main.4.weight", "main.4.bias", "main.4.running_mean", "main.4.running_var", "main.6.weight", "main.7.weight", "main.7.bias", "main.7.running_mean", "main.7.running_var", "main.9.weight", "main.10.weight", "main.10.bias", "main.10.running_mean", "main.10.running_var", "main.12.weight". Unexpected key(s) in state_dict: "epoch", "model_state_dict", "optimizer_state_dict", "loss".
You need to access the 'model_state_dict' key inside the loaded checkpoint. Try: netG.load_state_dict(torch.load('generator.pth')['model_state_dict']) You'll probably need to apply the same fix to the discriminator as well.
https://stackoverflow.com/questions/55744941/
Model learns with SGD but not Adam
I was going through a basic PyTorch MNIST example here and noticed that when I changed the optimizer from SGD to Adam the model did not converge. Specifically, I changed line 106 from optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum) to optimizer = optim.Adam(model.parameters(), lr=args.lr) I thought this would have no effect on the model. With SGD the loss quickly dropped to low values after about a quarter of an epoch. However, with Adam the loss did not drop at all, even after 10 epochs. I'm curious why this is happening; it seems to me these should have nearly identical performance. I ran this on Win10/Py3.6/PyTorch1.01/CUDA9 And to save you a tiny bit of code digging, here are the hyperparams: lr=0.01 momentum=0.5 batch_size=64
Adam is famous for working out of the box with its default parameters, which, in almost all frameworks, include a learning rate of 0.001 (see the default values in Keras, PyTorch, and Tensorflow), which is indeed the value suggested in the Adam paper. So, I would suggest changing to optimizer = optim.Adam(model.parameters(), lr=0.001) or simply optimizer = optim.Adam(model.parameters()) in order to leave lr at its default value (although I would say I am surprised, as MNIST is famous nowadays for working practically with whatever you may throw at it).
https://stackoverflow.com/questions/55770783/
What is the meaning of "x:" and of the following line?
What is the meaning of x: and of the following line? image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) Can somebody explain the syntax of this line? It is from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html # Data augmentation and normalization for training # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = 'data/hymenoptera_data' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_names = image_datasets['train'].classes device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} x comes from the for clause at the end of the expression: for x in ['train', 'val']. So for each value in ['train', 'val'], you are creating an entry in the dict in which that x is the key. This kind of construct, as Kabanus said, is a dictionary comprehension, a dictionary generator. You can learn more about generators here: https://docs.python.org/3/tutorial/classes.html#generators
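The comprehension is just shorthand for an explicit loop, e.g.:

image_datasets = {}
for x in ['train', 'val']:
    # one dict entry per split, keyed by the split name
    image_datasets[x] = datasets.ImageFolder(os.path.join(data_dir, x),
                                             data_transforms[x])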
https://stackoverflow.com/questions/55792839/
error: unrecognized arguments: - pytorch code in colab
I tried to run my code on google colab, but I get this message (error: unrecognized arguments) when I'm trying to call this function: def parse_opts(): parser = argparse.ArgumentParser() parser.add_argument( '--root_path', default='/root/data/ActivityNet', type=str, help='Root directory path of data') parser.add_argument( '--video_path', default='video_kinetics_jpg', type=str, help='Directory path of Videos') args = parser.parse_args() return args This fails and I get this error: tester_video.py: error: unrecognized arguments: cifar_comp_20_200_0.01_0.1 20 10 0.01 0.1 I tried to use Easydict but it seems it's not working. Thanks
The problem only appears in Jupyter notebook/lab/colab. Change args = parser.parse_args() to args = parser.parse_args(args=[]) and it should fix it.
https://stackoverflow.com/questions/55793112/
RuntimeError: CUDA out of memory. Problem when re-loading the model in a loop
I am running into the classic: CUDA out of memory. What I want to do: I want to load the same model using a different matrix of embeddings every time. I have to do that 300 times, one for each dimension of the word embeddings. I am not training the model; that is why I am using model.eval(). I thought that would be enough to keep Pytorch from creating a graph. Please notice that I never pass the model, nor the data, to cuda. In fact, I wanted to debug the code using cpu before sending the code to be performed by a GPU. The loop below is executed once; a RuntimeError is raised in the second iteration. My guess is that the code is loading a new model into GPU memory at each iteration (which I did not know was possible without explicitly telling it to do so). The emb_matrix is quite heavy and could cause the GPU memory to crash. emb_dim = 300 acc_dim = torch.zeros((emb_dim, 4)) for d in range(emb_dim): #create embeddings with one dimension shuffled emb_matrix = text_f.vocab.vectors.clone() #get a random permutation across one of the dimensions rand_index = torch.randperm(text_f.vocab.vectors.shape[0]) emb_matrix[:, d] = text_f.vocab.vectors[rand_index, d] #load model with the scrambled embeddings model = load_classifier(emb_matrix, encoder_type = encoder_type) model.eval() for batch in batch_iters["test"]: x_pre = batch.premise x_hyp = batch.hypothesis y = batch.label #perform forward pass y_pred = model.forward(x_pre, x_hyp) #calculate accuracies acc_dim[d] += accuracy(y_pred, y)/test_batches #avoid memory issues y_pred.detach() print(f"Dimension {d} accuracies: {acc_dim[d]}") I get the following error: RuntimeError: CUDA out of memory. Tried to allocate 146.88 MiB (GPU 0; 2.00 GiB total capacity; 374.63 MiB already allocated; 0 bytes free; 1015.00 KiB cached) I tried passing the model and the data to CPU, but I get precisely the same error. I looked around for how to fix the problem, but I could not find an obvious solution. Any suggestions on how to load the model and data in the correct place, or how to clean the GPU's memory after each iteration, are welcome.
It looks like acc_dim accumulates the grad history - see https://pytorch.org/docs/stable/notes/faq.html Because you're only doing inference, with torch.no_grad(): should be used. This will completely sidestep the possible issue of accumulating grad history. model.eval() doesn't prevent grad bookkeeping from happening; it just switches the behavior of some layers, like dropout. Both model.eval() and with torch.no_grad(): should be used together for inference.
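Adapted to the inner loop in the question (same names, just wrapped), that would look like:

model.eval()
with torch.no_grad():  # no graph is built, so acc_dim stops accumulating history
    for batch in batch_iters["test"]:
        y_pred = model.forward(batch.premise, batch.hypothesis)
        acc_dim[d] += accuracy(y_pred, batch.label) / test_batches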
https://stackoverflow.com/questions/55800592/
Check if the expected object is of backend CUDA or CPU?
I am trying to run code on both CPU and CUDA. The problem arises when I create objects, as I need to know what's expected. I need to determine whether the computer is expecting a CUDA or CPU tensor before it is created. Code: def initilize(self, input): self.x = torch.nn.Parameter(torch.zeros((1,M))) def run(self, h): B = torch.cat((self.x, h)) This outputs: Error: 'Expected object of backend CUDA but got backend CPU for argument #1' Code idea: def initilize(self, input): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") if (expecting_cuda == True): self.x = torch.nn.Parameter(torch.zeros((1,M)).to(device)) else: self.x = torch.nn.Parameter(torch.zeros((1,M))) def run(self, h): B = torch.cat((self.x, h)) Question: How to figure out what the computer expects? Limitations: I am running under a pre-defined "check" procedure, so I cannot pass an argument into the function 'initilize' with information about CUDA or CPU.
You can just use self.x = torch.nn.Parameter(torch.zeros((1,M)).to(device)), no need for if (expecting_cuda == True): because to(device) will also work for cpu.
https://stackoverflow.com/questions/55806248/
Error in loading state_dict for a custom model
I had problems when loading the weights of a model. Here are some parts of the model: class InceptionV4(nn.Module): def __init__(self, num_classes=1001): super(InceptionV4, self).__init__() # Special attributes self.input_space = None self.input_size = (299, 299, 3) self.mean = None self.std = None # Modules self.features = nn.Sequential( BasicConv2d(3, 32, kernel_size=3, stride=2), BasicConv2d(32, 32, kernel_size=3, stride=1), BasicConv2d(32, 64, kernel_size=3, stride=1, padding=1), Mixed_3a(), Mixed_4a(), Mixed_5a(), Inception_A(), Inception_A(), Inception_A(), ... ) self.avg_pool = nn.AvgPool2d(8, count_include_pad=False) self.last_linear = nn.Linear(1536, num_classes) I have tried to save the weights with something like torch.save(model.state_dict(), weight_name) and then reload them with model.load_state_dict(torch.load(weight_name)) but got these errors: Missing key(s) in state_dict: "features.0.conv.weight", "features.0.bn.weight", "features.0.bn.bias", "features.0.bn.running_mean", "features.0.bn.running_var", "features.1.conv.weight", "features.1.bn.weight", "features.1.bn.bias", "features.1.bn.running_mean", "features.1.bn.running_var", "features.2.conv.weight", "features.2.bn.weight and also: Unexpected key(s) in state_dict: "conv.0.conv1.0.weight", "conv.0.conv1.0.bias", "conv.0.conv1.2.weight", "conv.0.conv1.2.bias", "conv.0.conv1.2.running_mean", "conv.0.conv1.2.running_var", "conv.0.conv1.2.num_batches_tracked", "conv.0.conv2.0.weight", "conv.0.conv2.0.bias", "conv.0.conv2.2.weight", "conv.0.conv2.2.bias", "conv.0.conv2.2.running_mean", "conv.0.conv2.2.running_var", "conv.0.conv2.2.num_batches_tracked", "conv.1.conv1.0.weight", "conv.1.conv1.0.bias", "conv.1.conv1.2.weight", "conv.1.conv1.2.bias", "conv.1.conv1.2.running_mean", "conv.1.conv1.2.running_var", "conv.1.conv1.2.num_batches_tracked Any hints on this? Thanks in advance.
I faced this problem several times. The error indicates that your model state_dict has different names from the pre-trained weights that you load. I don't see the pretrained model for Inception_v4 in the torchvision model zoo, so it would be a little difficult to tell exactly where your InceptionV4 class has a problem with the mismatched dict. Regardless of where you got the pre-trained file, the key point is to define your model the same as the pre-trained model code; then you can load the weight file smoothly. Here are some indicators where your code is different from the model: # change self.features -> self.conv: This helps in solving mismatched names. self.conv = nn.Sequential(...) # Google how to change the BatchNorm in your current pytorch version # and the older pytorch version in which the pretrained model was defined. conv.1.conv1.2.num_batches_tracked # this buffer was introduced in pytorch version 0.4.1, so weights saved with newer pytorch carry it while older model definitions don't The hint is: # Define your model (or parts you want to reuse) the same as the original Hope this helps :)
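When the architecture truly matches and only the names differ, you can also rename the keys programmatically; a generic sketch (the 'conv.' -> 'features.' prefix swap is an assumption read off the error message, adapt it to your actual mismatch):

state = torch.load('pretrained.pth')
# Rename pretrained keys to match your model's naming scheme.
renamed = {k.replace('conv.', 'features.', 1): v for k, v in state.items()}
# Drop keys your model doesn't have (e.g. num_batches_tracked from newer PyTorch).
model_keys = set(model.state_dict().keys())
renamed = {k: v for k, v in renamed.items() if k in model_keys}
model.load_state_dict(renamed, strict=False)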
https://stackoverflow.com/questions/55825961/
Different `grad_fn` for similar looking operations in Pytorch (1.0)
I am working on an attention model, and before running the final model, I was going through the tensor shapes which flow through the code. I have an operation where I need to reshape the tensor. The tensor is of the shape torch.Size([30, 8, 9, 64]), where 30 is the batch_size, 8 is the number of attention heads (this is not relevant to my question), 9 is the number of words in the sentence, and 64 is some intermediate embedding representation of the word. I have to reshape the tensor to a size of torch.Size([30, 9, 512]) before processing it further. So I was looking into some reference online, and they have done the following: x.transpose(1, 2).contiguous().view(30, -1, 512), whereas I was thinking that this should work: x.transpose(1, 2).reshape(30, -1, 512). In the first case the grad_fn is <ViewBackward>, whereas in my case it is <UnsafeViewBackward>. Aren't these two the same operations? Will this result in a training error?
Aren't these two the same operations? No. While they produce effectively the same tensor, the operations are not the same, and they are not guaranteed to have the same storage. TensorShape.cpp: // _unsafe_view() differs from view() in that the returned tensor isn't treated // as a view for the purposes of automatic differentiation. (It's not listed in // VIEW_FUNCTIONS in gen_autograd.py). It's only safe to use if the `self` tensor // is temporary. For example, the viewed tensor here (a + b) is discarded immediately // after viewing: // // res = at::_unsafe_view(a + b, size); // // This is a hack because in-place operations on tensors treated like views // can be much more expensive than the same operations on non-view tensors. Note this can produce an error if applied to complex inputs, but this is generally not yet fully supported in PyTorch and not unique to this function.
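A quick way to see the difference for yourself (a sketch using the shapes from the question):

import torch

x = torch.randn(30, 8, 9, 64, requires_grad=True)
a = x.transpose(1, 2).contiguous().view(30, -1, 512)
b = x.transpose(1, 2).reshape(30, -1, 512)
print(a.grad_fn)          # a ViewBackward node
print(b.grad_fn)          # an UnsafeViewBackward node
print(torch.equal(a, b))  # True: the values are identical

Both variants backpropagate correctly here, since the transposed tensor is temporary, so neither should cause a training error.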
https://stackoverflow.com/questions/55835557/
Pytorch LSTM text-generator repeats same words
UPDATE: It was a mistake in the logic generating new characters. See answer below. ORIGINAL QUESTION: I built an LSTM for character-level text generation with Pytorch. The model trains well (loss decreases reasonably etc.) but the trained model ends up outputting the last handful of words of the input repeated over and over again (e.g. Input: "She told her to come back later, but she never did"; Output: ", but she never did, but she never did, but she never did" and so on). I have played around with the hyperparameters a bit, and the problem persists. I'm currently using: Loss function: BCE Optimizer: Adam Learning rate: 0.001 Sequence length: 64 Batch size: 32 Embedding dim: 128 Hidden dim: 512 LSTM layers: 2 I also tried not always choosing the top choice, but this only introduces incorrect words and doesn't break the loop. I've been looking at countless tutorials, and I can't quite figure out what I'm doing differently/wrong. The following is the code for training the model. training_data is one long string and I'm looping over it predicting the next character for each substring of length SEQ_LEN. I'm not sure if my mistake is here or elsewhere but any comment or direction is highly appreciated! loss_dict = dict() for e in range(EPOCHS): print("------ EPOCH {} OF {} ------".format(e+1, EPOCHS)) lstm.reset_cell() for i in range(0, DATA_LEN, BATCH_SIZE): if i % 50000 == 0: print(i/float(DATA_LEN)) optimizer.zero_grad() input_vector = torch.tensor([[ vocab.get(char, len(vocab)) for char in training_data[i+b:i+b+SEQ_LEN] ] for b in range(BATCH_SIZE)]) if USE_CUDA and torch.cuda.is_available(): input_vector = input_vector.cuda() output_vector = lstm(input_vector) target_vector = torch.zeros(output_vector.shape) if USE_CUDA and torch.cuda.is_available(): target_vector = target_vector.cuda() for b in range(BATCH_SIZE): target_vector[b][vocab.get(training_data[i+b+SEQ_LEN])] = 1 error = loss(output_vector, target_vector) error.backward() optimizer.step() loss_dict[(e, int(i/BATCH_SIZE))] = error.detach().item()
ANSWER: I had made a stupid mistake when producing the characters with the trained model: I got confused with the batch size and assumed that at each step the network would predict an entire batch of new characters when in fact it only predicts a single one… That's why it simply repeated the end of the input. Yikes! Anyways, if you run into this problem DOUBLE CHECK that you have the right logic for producing new output with the trained model (especially if you're using batches). If it's not that and the problem persists, you can try fine-tuning the following: sequence length greediness (e.g. probabilistic choice vs. top choice for next character) batch size epochs
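For reference, a minimal generation loop that appends one character per step (a sketch: vocab, inv_vocab and the exact model interface are assumptions based on the training code above):

import torch

def generate(lstm, seed_text, vocab, inv_vocab, n_chars=200, seq_len=64):
    text = seed_text
    lstm.eval()
    with torch.no_grad():
        for _ in range(n_chars):
            window = text[-seq_len:]
            x = torch.tensor([[vocab.get(c, len(vocab)) for c in window]])
            out = lstm(x)                       # one distribution over the vocabulary
            next_idx = int(out.argmax(dim=-1))  # or sample from it probabilistically
            text += inv_vocab[next_idx]
    return text

The key point is that each forward pass yields a single next character, not a whole batch of them.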
https://stackoverflow.com/questions/55861392/
PyTorch is installed, but importing it gives an error
I have installed Anaconda on Windows 8 and then installed PyTorch. But while importing torch I get an error. import torch AttributeError Traceback (most recent call last) <ipython-input-4-eb42ca6e4af3> in <module> ----> 1 import torch I:\ProgramData\Anaconda3\lib\site-packages\torch\__init__.py in <module> 51 from ctypes.wintypes import DWORD, HMODULE 52 ---> 53 AddDllDirectory = windll.kernel32.AddDllDirectory 54 AddDllDirectory.restype = DWORD 55 AddDllDirectory.argtypes = [c_wchar_p] I:\ProgramData\Anaconda3\lib\ctypes\__init__.py in __getattr__(self, name) 367 if name.startswith('__') and name.endswith('__'): 368 raise AttributeError(name) --> 369 func = self.__getitem__(name) 370 setattr(self, name, func) 371 return func I:\ProgramData\Anaconda3\lib\ctypes\__init__.py in __getitem__(self, name_or_ordinal) 372 373 def __getitem__(self, name_or_ordinal): --> 374 func = self._FuncPtr((name_or_ordinal, self)) 375 if not isinstance(name_or_ordinal, int): 376 func.__name__ = name_or_ordinal AttributeError: function 'AddDllDirectory' not found Also, when I use the following command, it shows pytorch in the Anaconda command prompt. (base) C:\Users\rk88>conda list pytorch # packages in environment at I:\ProgramData\Anaconda3: # # Name Version Build Channel pytorch 1.0.1 py3.7_cuda100_cudnn7_1 PyTorch Can anyone please help me resolve this, if you have faced this issue earlier? Thanks,
Try out the below in your anaconda prompt: conda create -n <env_name> python=3.6 conda activate <env_name> conda install pytorch-cpu torchvision-cpu -c pytorch If you are using a GPU: conda install pytorch torchvision cudatoolkit=9.0 -c pytorch Please refer to https://pytorch.org/
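After reinstalling, a quick sanity check (a sketch) confirms the fix:

import torch

print(torch.__version__)          # the import succeeding means the DLL error is gone
print(torch.cuda.is_available())  # True only on a working GPU build
print(torch.rand(2, 3))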
https://stackoverflow.com/questions/55880253/
How to get a Python pathlib Path from an Azure blob datastore?
I am trying to do some custom manipulation of a torch.utils.data.DataLoader in AzureML but cannot get it to instantiate directly from my azureml.core.Datastore: ws = Workspace( # ... etc ... ) ds = Datastore.get(ws, datastore_name='my_ds') am = ds.as_mount() # HOW DO I GET base_path, data_file from am? dataloader = DataLoader( ListDataset(base_path, data_file), #... etc... ) The value of am.path() is "$AZUREML_DATAREFERENCE_my_ds", but I cannot figure out how to go from that to a pathlib.Path as expected by the constructor of ListDataset. Things I've tried include Path(am.path()) and Path(os.environ[am.path()]), but they don't seem to work. It's clear that there's some answer, since: script_params = { '--base_path': ds.as_mount(), '--epochs': 30, '--batch_size' : 16, '--use_cuda': 'true' } torch = PyTorch(source_directory='./', script_params=script_params, compute_target=compute_target, entry_script='train.py', pip_packages=packages, use_gpu=True) seems to create a legit object.
You can perhaps try using the DataPath class. It exposes attributes such as path_on_datastore which might be the path you're looking for. To construct this class from your DataReference object i.e. variable am; you can use create_from_data_reference() method. Example: ds = Datastore.get(ws, datastore_name='my_ds') am = ds.as_mount() dp = DataPath().create_from_data_reference(am) base_path = dp.path_on_datastore
https://stackoverflow.com/questions/55884641/
Unexpected and missing keys in state_dict when converting pytorch to onnx
When I convert a '.pth' model from PyTorch to ONNX, errors about unexpected keys and missing keys occur. This is my model: 1 import torch 2 import torch.onnx 3 from mmcv import runner 4 import torch.nn as nn 5 from mobilenet import MobileNet 6 # A model class instance (class not shown) 7 md=MobileNet(1,2) 8 model = md 9 device_ids = [0,2,6,7,8] 10 model = nn.DataParallel(model,device_ids) 11 #torch.backends.cudnn.benchmark = True 12 # Load the weights from a file (.pth usually) 13 runner.load_checkpoint(model,'../mmdetection-master/work_dmobile/faster_rcnn_r50_fpn_1x/epoch_60.pth') 14 #model = MMDataParallel(model, device_ids=[0]) 15 #state_dict=torch.load('../mmdetection-master/r.pkl.json') 16 # Load the weights now into a model net architecture defined by our class 17 #model.load_state_dict(state_dict) 18 #model = runner.load_state_dict(state_dict) 19 model=runner.load_state_dict({k.replace('module.',' '):v for k,v in state_dict['state_dict'].items()}) 20 # Create the right input shape (e.g. for an image) 21 dummy_input = torch.randn(1, 64, 512, 256) 22 23 torch.onnx.export(model, dummy_input, "onnx_model_name.onnx") And this is the error: unexpected key in source state_dict: backbone.stem.0.conv.weight, backbone.stem.0.bn.weight, backbone.stem.0.bn.bias, backbone.stem.0.bn.running_mean, backbone.stem.0.bn.running_var, backbone.stem.0.bn.num_batches_tracked, backbone.stem.1.depthwise.0.weight, backbone.stem.1.depthwise.1.weight, backbone.stem.1.depthwise.1.bias, backbone.stem.1.depthwise.1.running_mean, backbone.stem.1.depthwise.1.running_var, backbone.stem.1.depthwise.1.num_batches_tracked, backbone.stem.1.pointwise.0.weight, backbone.stem.1.pointwise.0.bias, backbone.stem.1.pointwise.1.weight, backbone.stem.1.pointwise.1.bias, backbone.stem.1.pointwise.1.running_mean, backbone.stem.1.pointwise.1.running_var, backbone.stem.1.pointwise.1.num_batches_tracked, backbone.conv1.0.depthwise.0.weight, backbone.conv1.0.depthwise.1.weight, backbone.conv1.0.depthwise.1.bias, backbone.conv1.0.depthwise.1.running_mean, backbone.conv1.0.depthwise.1.running_var, backbone.conv1.0.depthwise.1.num_batches_tracked, backbone.conv1.0.pointwise.0.weight, backbone.conv1.0.pointwise.0.bias, backbone.conv1.0.pointwise.1.weight, backbone.conv1.0.pointwise.1.bias, backbone.conv1.0.pointwise.1.running_mean, backbone.conv1.0.pointwise.1.running_var, backbone.conv1.0.pointwise.1.num_batches_tracked, backbone.conv1.1.depthwise.0.weight, backbone.conv1.1.depthwise.1.weight, backbone.conv1.1.depthwise.1.bias, backbone.conv1.1.depthwise.1.running_mean, backbone.conv1.1.depthwise.1.running_var, backbone.conv1.1.depthwise.1.num_batches_tracked, backbone.conv1.1.pointwise.0.weight, backbone.conv1.1.pointwise.0.bias, backbone.conv1.1.pointwise.1.weight, backbone.conv1.1.pointwise.1.bias, backbone.conv1.1.pointwise.1.running_mean, backbone.conv1.1.pointwise.1.running_var, backbone.conv1.1.pointwise.1.num_batches_tracked, backbone.conv2.0.depthwise.0.weight, backbone.conv2.0.depthwise.1.weight, backbone.conv2.0.depthwise.1.bias, backbone.conv2.0.depthwise.1.running_mean, backbone.conv2.0.depthwise.1.running_var, backbone.conv2.0.depthwise.1.num_batches_tracked, backbone.conv2.0.pointwise.0.weight, backbone.conv2.0.pointwise.0.bias, backbone.conv2.0.pointwise.1.weight, backbone.conv2.0.pointwise.1.bias, backbone.conv2.0.pointwise.1.running_mean, backbone.conv2.0.pointwise.1.running_var, backbone.conv2.0.pointwise.1.num_batches_tracked, backbone.conv2.1.depthwise.0.weight, backbone.conv2.1.depthwise.1.weight,
backbone.conv2.1.depthwise.1.bias, backbone.conv2.1.depthwise.1.running_mean, backbone.conv2.1.depthwise.1.running_var, backbone.conv2.1.depthwise.1.num_batches_tracked, backbone.conv2.1.pointwise.0.weight, backbone.conv2.1.pointwise.0.bias, backbone.conv2.1.pointwise.1.weight, backbone.conv2.1.pointwise.1.bias, backbone.conv2.1.pointwise.1.running_mean, backbone.conv2.1.pointwise.1.running_var, backbone.conv2.1.pointwise.1.num_batches_tracked, backbone.conv3.0.depthwise.0.weight, backbone.conv3.0.depthwise.1.weight, backbone.conv3.0.depthwise.1.bias, backbone.conv3.0.depthwise.1.running_mean, backbone.conv3.0.depthwise.1.running_var, backbone.conv3.0.depthwise.1.num_batches_tracked, backbone.conv3.0.pointwise.0.weight, backbone.conv3.0.pointwise.0.bias, backbone.conv3.0.pointwise.1.weight, backbone.conv3.0.pointwise.1.bias, backbone.conv3.0.pointwise.1.running_mean, backbone.conv3.0.pointwise.1.running_var, backbone.conv3.0.pointwise.1.num_batches_tracked, backbone.conv3.1.depthwise.0.weight, backbone.conv3.1.depthwise.1.weight, backbone.conv3.1.depthwise.1.bias, backbone.conv3.1.depthwise.1.running_mean, backbone.conv3.1.depthwise.1.running_var, backbone.conv3.1.depthwise.1.num_batches_tracked, backbone.conv3.1.pointwise.0.weight, backbone.conv3.1.pointwise.0.bias, backbone.conv3.1.pointwise.1.weight, backbone.conv3.1.pointwise.1.bias, backbone.conv3.1.pointwise.1.running_mean, backbone.conv3.1.pointwise.1.running_var, backbone.conv3.1.pointwise.1.num_batches_tracked, backbone.conv3.2.depthwise.0.weight, backbone.conv3.2.depthwise.1.weight, backbone.conv3.2.depthwise.1.bias, backbone.conv3.2.depthwise.1.running_mean, backbone.conv3.2.depthwise.1.running_var, backbone.conv3.2.depthwise.1.num_batches_tracked, backbone.conv3.2.pointwise.0.weight, backbone.conv3.2.pointwise.0.bias, backbone.conv3.2.pointwise.1.weight, backbone.conv3.2.pointwise.1.bias, backbone.conv3.2.pointwise.1.running_mean, backbone.conv3.2.pointwise.1.running_var, backbone.conv3.2.pointwise.1.num_batches_tracked, backbone.conv3.3.depthwise.0.weight, backbone.conv3.3.depthwise.1.weight, backbone.conv3.3.depthwise.1.bias, backbone.conv3.3.depthwise.1.running_mean, backbone.conv3.3.depthwise.1.running_var, backbone.conv3.3.depthwise.1.num_batches_tracked, backbone.conv3.3.pointwise.0.weight, backbone.conv3.3.pointwise.0.bias, backbone.conv3.3.pointwise.1.weight, backbone.conv3.3.pointwise.1.bias, backbone.conv3.3.pointwise.1.running_mean, backbone.conv3.3.pointwise.1.running_var, backbone.conv3.3.pointwise.1.num_batches_tracked, backbone.conv3.4.depthwise.0.weight, backbone.conv3.4.depthwise.1.weight, backbone.conv3.4.depthwise.1.bias, backbone.conv3.4.depthwise.1.running_mean, backbone.conv3.4.depthwise.1.running_var, backbone.conv3.4.depthwise.1.num_batches_tracked, backbone.conv3.4.pointwise.0.weight, backbone.conv3.4.pointwise.0.bias, backbone.conv3.4.pointwise.1.weight, backbone.conv3.4.pointwise.1.bias, backbone.conv3.4.pointwise.1.running_mean, backbone.conv3.4.pointwise.1.running_var, backbone.conv3.4.pointwise.1.num_batches_tracked, backbone.conv3.5.depthwise.0.weight, backbone.conv3.5.depthwise.1.weight, backbone.conv3.5.depthwise.1.bias, backbone.conv3.5.depthwise.1.running_mean, backbone.conv3.5.depthwise.1.running_var, backbone.conv3.5.depthwise.1.num_batches_tracked, backbone.conv3.5.pointwise.0.weight, backbone.conv3.5.pointwise.0.bias, backbone.conv3.5.pointwise.1.weight, backbone.conv3.5.pointwise.1.bias, backbone.conv3.5.pointwise.1.running_mean, backbone.conv3.5.pointwise.1.running_var, 
backbone.conv3.5.pointwise.1.num_batches_tracked, backbone.conv4.0.depthwise.0.weight, backbone.conv4.0.depthwise.1.weight, backbone.conv4.0.depthwise.1.bias, backbone.conv4.0.depthwise.1.running_mean, backbone.conv4.0.depthwise.1.running_var, backbone.conv4.0.depthwise.1.num_batches_tracked, backbone.conv4.0.pointwise.0.weight, backbone.conv4.0.pointwise.0.bias, backbone.conv4.0.pointwise.1.weight, backbone.conv4.0.pointwise.1.bias, backbone.conv4.0.pointwise.1.running_mean, backbone.conv4.0.pointwise.1.running_var, backbone.conv4.0.pointwise.1.num_batches_tracked, backbone.conv4.1.depthwise.0.weight, backbone.conv4.1.depthwise.1.weight, backbone.conv4.1.depthwise.1.bias, backbone.conv4.1.depthwise.1.running_mean, backbone.conv4.1.depthwise.1.running_var, backbone.conv4.1.depthwise.1.num_batches_tracked, backbone.conv4.1.pointwise.0.weight, backbone.conv4.1.pointwise.0.bias, backbone.conv4.1.pointwise.1.weight, backbone.conv4.1.pointwise.1.bias, backbone.conv4.1.pointwise.1.running_mean, backbone.conv4.1.pointwise.1.running_var, backbone.conv4.1.pointwise.1.num_batches_tracked, neck.lateral_convs.0.conv.weight, neck.lateral_convs.0.conv.bias, neck.lateral_convs.1.conv.weight, neck.lateral_convs.1.conv.bias, neck.lateral_convs.2.conv.weight, neck.lateral_convs.2.conv.bias, neck.fpn_convs.0.conv.weight, neck.fpn_convs.0.conv.bias, neck.fpn_convs.1.conv.weight, neck.fpn_convs.1.conv.bias, neck.fpn_convs.2.conv.weight, neck.fpn_convs.2.conv.bias, rpn_head.rpn_conv.weight, rpn_head.rpn_conv.bias, rpn_head.rpn_cls.weight, rpn_head.rpn_cls.bias, rpn_head.rpn_reg.weight, rpn_head.rpn_reg.bias, bbox_head.fc_cls.weight, bbox_head.fc_cls.bias, bbox_head.fc_reg.weight, bbox_head.fc_reg.bias, bbox_head.shared_fcs.0.weight, bbox_head.shared_fcs.0.bias, bbox_head.shared_fcs.1.weight, bbox_head.shared_fcs.1.bias missing keys in source state_dict: conv2.1.depthwise.1.weight, conv4.0.depthwise.0.weight, conv4.1.pointwise.1.weight, conv3.2.depthwise.0.weight, conv3.1.pointwise.0.weight, conv3.4.pointwise.1.bias, conv3.5.depthwise.1.bias, conv2.1.pointwise.1.weight, stem.1.pointwise.1.running_mean, conv3.3.pointwise.1.weight, conv3.3.depthwise.1.running_mean, conv3.1.depthwise.1.num_batches_tracked, conv3.0.depthwise.1.num_batches_tracked, conv2.1.depthwise.1.running_var, conv1.0.depthwise.1.weight, conv3.5.depthwise.1.running_var, stem.0.bn.bias, conv3.2.depthwise.1.num_batches_tracked, conv2.0.depthwise.0.weight, conv2.1.pointwise.0.bias, conv3.1.pointwise.1.bias, conv3.2.pointwise.1.bias, conv2.0.pointwise.1.num_batches_tracked, stem.1.pointwise.0.weight, conv2.0.depthwise.1.weight, stem.1.depthwise.0.weight, conv1.1.pointwise.1.weight, conv3.5.pointwise.0.weight, conv3.4.depthwise.1.running_var, conv1.0.pointwise.0.bias, conv3.3.depthwise.1.running_var, conv3.0.pointwise.1.weight, conv4.0.pointwise.1.num_batches_tracked, conv4.1.depthwise.1.running_var, stem.1.depthwise.1.running_var, conv3.0.pointwise.1.running_var, conv3.4.depthwise.0.weight, conv3.4.pointwise.1.num_batches_tracked, conv4.0.depthwise.1.num_batches_tracked, conv3.0.depthwise.1.weight, conv3.3.pointwise.0.bias, conv3.0.depthwise.1.running_mean, conv3.2.pointwise.1.running_mean, conv3.1.pointwise.0.bias, conv3.5.depthwise.1.num_batches_tracked, conv3.5.pointwise.1.running_mean, conv3.1.pointwise.1.running_var, conv1.0.depthwise.1.running_mean, stem.1.pointwise.1.bias, conv1.0.depthwise.0.weight, conv3.2.pointwise.0.weight, conv4.0.pointwise.1.running_mean, conv2.1.pointwise.1.running_mean, stem.1.pointwise.1.weight, 
conv4.1.depthwise.1.weight, conv4.0.pointwise.0.weight, conv1.1.depthwise.1.bias, conv3.2.pointwise.1.num_batches_tracked, conv4.1.depthwise.0.weight, conv3.4.depthwise.1.running_mean, conv1.0.depthwise.1.bias, conv2.0.pointwise.0.bias, conv3.4.depthwise.1.num_batches_tracked, conv4.1.pointwise.1.running_mean, conv2.1.depthwise.1.bias, conv3.2.depthwise.1.weight, conv2.0.pointwise.1.weight, conv1.0.pointwise.0.weight, conv3.1.depthwise.1.running_var, conv2.0.pointwise.1.bias, conv4.0.depthwise.1.bias, conv3.3.pointwise.1.running_var, conv3.4.pointwise.1.weight, conv4.0.pointwise.0.bias, conv3.4.depthwise.1.bias, conv4.1.depthwise.1.num_batches_tracked, conv2.0.pointwise.1.running_mean, conv1.1.depthwise.1.weight, conv2.0.pointwise.1.running_var, stem.1.depthwise.1.running_mean, conv3.4.pointwise.1.running_var, stem.1.depthwise.1.num_batches_tracked, conv3.3.depthwise.1.weight, stem.1.pointwise.1.running_var, conv4.1.depthwise.1.bias, conv3.0.pointwise.1.bias, conv2.0.depthwise.1.running_mean, conv1.1.pointwise.1.bias, conv4.1.pointwise.0.bias, conv3.2.pointwise.0.bias, conv1.1.pointwise.0.weight, conv1.0.pointwise.1.weight, conv1.0.pointwise.1.running_mean, stem.0.conv.weight, stem.1.depthwise.1.bias, conv3.3.depthwise.0.weight, conv1.1.depthwise.1.num_batches_tracked, conv3.3.pointwise.1.num_batches_tracked, conv3.2.pointwise.1.running_var, conv3.2.depthwise.1.running_mean, conv3.3.depthwise.1.bias, conv4.1.pointwise.1.num_batches_tracked, conv2.0.depthwise.1.num_batches_tracked, conv3.0.pointwise.0.bias, conv3.1.depthwise.1.running_mean, conv3.1.depthwise.1.weight, conv3.0.pointwise.1.num_batches_tracked, conv3.1.pointwise.1.weight, conv4.0.pointwise.1.bias, conv3.3.depthwise.1.num_batches_tracked, conv3.4.pointwise.0.weight, stem.1.pointwise.0.bias, conv3.0.depthwise.1.bias, conv1.1.pointwise.0.bias, conv4.0.pointwise.1.running_var, stem.0.bn.weight, conv1.0.pointwise.1.num_batches_tracked, conv2.1.depthwise.1.running_mean, conv4.1.depthwise.1.running_mean, conv1.1.pointwise.1.running_var, conv2.1.pointwise.1.num_batches_tracked, conv2.0.depthwise.1.running_var, conv3.5.depthwise.1.weight, conv3.0.depthwise.0.weight, conv4.0.depthwise.1.running_mean, stem.0.bn.num_batches_tracked, conv3.3.pointwise.1.running_mean, conv2.1.pointwise.1.running_var, conv3.0.pointwise.1.running_mean, conv1.1.depthwise.1.running_var, conv3.0.depthwise.1.running_var, conv1.0.depthwise.1.running_var, stem.1.pointwise.1.num_batches_tracked, conv4.0.pointwise.1.weight, conv1.1.pointwise.1.running_mean, conv2.1.depthwise.0.weight, conv1.0.depthwise.1.num_batches_tracked, conv1.0.pointwise.1.running_var, conv3.5.pointwise.1.weight, conv3.5.depthwise.1.running_mean, conv3.1.depthwise.1.bias, conv3.1.depthwise.0.weight, conv1.1.depthwise.1.running_mean, conv2.0.pointwise.0.weight, conv4.1.pointwise.1.bias, conv3.2.depthwise.1.running_var, conv3.5.pointwise.0.bias, conv3.4.depthwise.1.weight, conv3.2.depthwise.1.bias, stem.0.bn.running_mean, conv4.0.depthwise.1.running_var, conv1.1.depthwise.0.weight, stem.0.bn.running_var, conv4.1.pointwise.0.weight, conv2.1.pointwise.1.bias, conv3.4.pointwise.0.bias, conv1.0.pointwise.1.bias, conv3.5.pointwise.1.running_var, conv1.1.pointwise.1.num_batches_tracked, conv3.1.pointwise.1.running_mean, conv2.1.depthwise.1.num_batches_tracked, conv2.1.pointwise.0.weight, stem.1.depthwise.1.weight, conv3.5.pointwise.1.bias, conv3.5.pointwise.1.num_batches_tracked, conv3.1.pointwise.1.num_batches_tracked, conv3.2.pointwise.1.weight, conv3.5.depthwise.0.weight, conv3.3.pointwise.0.weight, 
conv2.0.depthwise.1.bias, conv3.0.pointwise.0.weight, conv3.3.pointwise.1.bias, conv3.4.pointwise.1.running_mean, conv4.0.depthwise.1.weight, conv4.1.pointwise.1.running_var
In line 19, try using model = runner.load_state_dict(..., strict=False). The parameter strict=False tells the load_state_dict function that there might be missing keys in the checkpoint, which, as in this case, often come from the BatchNorm layers.
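In plain PyTorch the equivalent looks like the sketch below (model here is your network instance); in recent PyTorch versions, load_state_dict returns the lists of missing and unexpected keys so you can inspect exactly what did not match:

import torch

checkpoint = torch.load('epoch_60.pth', map_location='cpu')
# checkpoints saved by training frameworks often nest the weights
state_dict = checkpoint.get('state_dict', checkpoint)
result = model.load_state_dict(state_dict, strict=False)
print(result.missing_keys)
print(result.unexpected_keys)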
https://stackoverflow.com/questions/55898666/
How to convert tensor to image array?
I would like to convert a tensor to an image array using the tensor.data() method. But it doesn't work. #include <torch/script.h> // One-stop header. #include <iostream> #include <memory> #include <sstream> #include <string> #include <vector> #include "itkImage.h" #include "itkImageFileReader.h" #include "itkImageFileWriter.h" #include "itkImageRegionIterator.h" ////////////////////////////////////////////////////// //Goal: load jit script model and segment myocardium //Step: 1. load jit script model // 2. load input image // 3. predict by model // 4. save the result to file ////////////////////////////////////////////////////// typedef short PixelType; const unsigned int Dimension = 3; typedef itk::Image<PixelType, Dimension> ImageType; typedef itk::ImageFileReader<ImageType> ReaderType; typedef itk::ImageRegionIterator<ImageType> IteratorType; bool itk2tensor(ImageType::Pointer itk_img, torch::Tensor &tensor_img) { typename ImageType::RegionType region = itk_img->GetLargestPossibleRegion(); const typename ImageType::SizeType size = region.GetSize(); std::cout << "Input size: " << size[0] << ", " << size[1] << ", " << size[2] << std::endl; int len = size[0] * size[1] * size[2]; short rowdata[len]; int count = 0; IteratorType iter(itk_img, itk_img->GetRequestedRegion()); // convert itk to array for (iter.GoToBegin(); !iter.IsAtEnd(); ++iter) { rowdata[count] = iter.Get(); count++; } std::cout << "Convert itk to array DONE!" << std::endl; // convert array to tensor tensor_img = torch::from_blob(rowdata, {1, 1, (int)size[0], (int)size[1], (int)size[2]}, torch::kShort).clone(); tensor_img = tensor_img.toType(torch::kFloat); tensor_img = tensor_img.to(torch::kCUDA); tensor_img.set_requires_grad(0); return true; } bool tensor2itk(torch::Tensor &t, ImageType::Pointer itk_img) { std::cout << "tensor dtype = " << t.dtype() << std::endl; std::cout << "tensor size = " << t.sizes() << std::endl; t = t.toType(torch::kShort); short * array = t.data<short>(); ImageType::IndexType start; start[0] = 0; // first index on X start[1] = 0; // first index on Y start[2] = 0; // first index on Z ImageType::SizeType size; size[0] = t.size(2); size[1] = t.size(3); size[2] = t.size(4); ImageType::RegionType region; region.SetSize( size ); region.SetIndex( start ); itk_img->SetRegions( region ); itk_img->Allocate(); int len = size[0] * size[1] * size[2]; IteratorType iter(itk_img, itk_img->GetRequestedRegion()); int count = 0; // convert array to itk std::cout << "start!" << std::endl; for (iter.GoToBegin(); !iter.IsAtEnd(); ++iter) { short temp = *array++; // ERROR! std::cout << temp << " "; iter.Set(temp); count++; } std::cout << "end!" << std::endl; return true; } int main(int argc, const char* argv[]) { int a, b, c; if (argc != 4) { std::cerr << "usage: automyo input jitmodel output\n"; return -1; } std::cout << "========= jit start =========\n"; // 1. load jit script model std::cout << "Load script module: " << argv[2] << std::endl; std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[2]); module->to(at::kCUDA); // assert(module != nullptr); std::cout << "Load script module DONE" << std::endl; // 2. load input image const char* img_path = argv[1]; std::cout << "Load image: " << img_path << std::endl; ReaderType::Pointer reader = ReaderType::New(); if (!img_path) { std::cout << "Load input file error!" << std::endl; return false; } reader->SetFileName(img_path); reader->Update(); std::cout << "Load image DONE!" << std::endl; ImageType::Pointer itk_img = reader->GetOutput(); torch::Tensor tensor_img; if (!itk2tensor(itk_img, tensor_img)) { std::cerr << "itk2tensor ERROR!" << std::endl; } else { std::cout << "Convert array to tensor DONE!" << std::endl; } std::vector<torch::jit::IValue> inputs; inputs.push_back(tensor_img); // 3. predict by model torch::Tensor y = module->forward(inputs).toTensor(); std::cout << "Inference DONE!" << std::endl; // 4. save the result to file torch::Tensor seg = y.gt(0.5); // std::cout << seg << std::endl; ImageType::Pointer out_itk_img = ImageType::New(); if (!tensor2itk(seg, out_itk_img)) { std::cerr << "tensor2itk ERROR!" << std::endl; } else { std::cout << "Convert tensor to itk DONE!" << std::endl; } std::cout << out_itk_img << std::endl; return true; } The runtime log is shown below: Load script module:model_myo_jit.pt Load script module DONE Load image: patch_6.nii.gz Load image DONE! Input size: 128, 128, 128 Convert itk to array DONE! Convert array to tensor DONE! Inference DONE! tensor dtype = unsigned char tensor size = [1, 1, 96, 96, 96] start! Segmentation fault (core dumped) Why and how to convert?
I have found the solution: when I convert y to kCPU before reading its data pointer, it works, because it was on CUDA before. Something like t = t.to(torch::kCPU); in tensor2itk, before calling t.data<short>(), fixes the segmentation fault, since the data pointer of a CUDA tensor points to device memory that cannot be dereferenced on the host.
https://stackoverflow.com/questions/55899140/
Why am I getting different results after saving and loading model weights in pytorch?
I have written a model; the architecture is as follows: CNNLSTM( (cnn): CNNText( (embed): Embedding(19410, 300, padding_idx=0) (convs1): ModuleList( (0): Conv2d(1, 32, kernel_size=(3, 300), stride=(1, 1)) (1): Conv2d(1, 32, kernel_size=(5, 300), stride=(1, 1)) (2): Conv2d(1, 32, kernel_size=(7, 300), stride=(1, 1)) ) (dropout): Dropout(p=0.6) (fc1): Linear(in_features=96, out_features=1, bias=True) ) (lstm): RNN( (embedding): Embedding(19410, 300, padding_idx=0) (rnn): LSTM(300, 150, batch_first=True, bidirectional=True) (attention): Attention( (dense): Linear(in_features=300, out_features=1, bias=True) (tanh): Tanh() (softmax): Softmax() ) (fc1): Linear(in_features=300, out_features=50, bias=True) (dropout): Dropout(p=0.5) (fc2): Linear(in_features=50, out_features=1, bias=True) ) (fc1): Linear(in_features=146, out_features=1, bias=True) ) I have used the RNN and the CNN separately on the same dataset and I have the weights saved. In the mixed model, I load the weights using the following function: def load_pretrained_weights(self, model='cnn', path=None): if model not in ['cnn', 'rnn']: raise AttributeError("Model must be either rnn or cnn") if model == 'cnn': self.cnn.load_state_dict(torch.load(path)) if model == 'rnn': self.lstm.load_state_dict(torch.load(path)) And I freeze the submodules using the function: def freeze(self): for p in self.cnn.parameters(): p.requires_grad = False for p in self.lstm.parameters(): p.requires_grad = False Then I train the model and get a better result compared to each submodule trained and evaluated alone. I used an early-stopping technique in my epoch loop to save the best parameters. After training, I made a new instance of the same class, and when I load the saved "best" parameters I am not getting a similar result. I tried the same thing with each submodule (RNN and CNNText here) alone, and it worked. But in this case it is not giving the same performance. Please help me understand what is happening here. I am new to Deep Learning concepts. Thank you. A few experiments I tried: I loaded the saved weights of each submodule and then loaded the best parameters, and got somewhat close to the best result. I took the hidden layer from each submodule before applying the dropout, which was better than the previous attempt, but not the best! EDIT The init function of my class is as follows. The RNN and CNN are just usual implementations. class CNNLSTM(nn.Module): def __init__(self, vocab_size, embedding_dim, embedding_weight, rnn_arch, isCuda=True, class_num=1, kernel_num=32, kernel_sizes=[3,4,5],train_wv=False, rnn_num_layers=1, rnn_bidirectional=True, rnn_use_attention=True): super(CNNLSTM, self).__init__() self.cnn = CNNText(vocab_size, embedding_dim, embedding_weight, class_num, kernel_num = kernel_num, kernel_sizes=kernel_sizes, static=train_wv,dropout=0.6) self.lstm = RNN(rnn_arch, vocab_size, embedding_dim, embedding_weight, num_layers=rnn_num_layers, rnn_unit='lstm', embedding_train=train_wv, isCuda=isCuda, bidirectional=rnn_bidirectional, use_padding=True, use_attention=rnn_use_attention, num_class=class_num) self.fc1 = nn.Linear(rnn_arch[-1] + len(kernel_sizes) * kernel_num , class_num) After declaring the object, I loaded the individual pre-trained submodules with model.load_pretrained_weights('rnn', 'models/bilstm_2_atten.pth') model.load_pretrained_weights('cnn', 'models/cnn2.pth') model.freeze() Then I trained the last linear layer. I saved the model parameter values with torch.save(model.state_dict(), path) At the 3rd/4th-from-last epoch I got the 'best' result. And after training, I loaded the parameters for the best result with: state_dict = torch.load(MODEL_PATH) model.load_state_dict(state_dict)
After loading the model, you need to write model.eval(). state_dict = torch.load(MODEL_PATH) model.load_state_dict(state_dict) model.eval() Reference : Pytorch Documentation This is what it says: When saving a model for inference, it is only necessary to save the trained model’s learned parameters. Saving the model’s state_dict with the torch.save() function will give you the most flexibility for restoring the model later, which is why it is the recommended method for saving models. A common PyTorch convention is to save models using either a .pt or .pth file extension. Remember that you must call model.eval() to set dropout and batch normalization layers to evaluation mode before running inference. Failing to do this will yield inconsistent inference results.
https://stackoverflow.com/questions/55900754/
What is the difference between unsqueeze_ in PyTorch and expand_dims in Keras, and what will be the output shape after using it?
I am a beginner in Keras, and I have a PyTorch code that I need to change to Keras, but I could not understand some parts of it. I especially have problems with the sizes of the output shapes. The shape of the image is (:, 3, 32, 32), and the first dimension of the image is the size of the batch. Now, my questions are: What does this line do, and what is the output shape: image_yuv_ch = image[:, channel, :, :].unsqueeze_(1) It adds a dimension in position 1? What is the output shape? :( The size of filters was (64,8,8), and then we have filters.unsqueeze_(1); does this mean the new shape of filters is (64,1,8,8)? What does this line do? image_conv = F.conv2d(image_yuv_ch, filters, stride=8) Is it the same as conv2d in Keras, and what is the shape of the output tensor from it? I also could not understand what view does. I know it tries to show the tensor in a new shape, but in the code below I could not understand the output shape after each unsqueeze_, permute or view. Could you please tell me what the output shape of each line is? Thank you in advance. import torch.nn.functional as F def apply_conv(self, image, filter_type: str): if filter_type == 'dct': filters = self.dct_conv_weights elif filter_type == 'idct': filters = self.idct_conv_weights else: raise('Unknown filter_type value.') image_conv_channels = [] for channel in range(image.shape[1]): image_yuv_ch = image[:, channel, :, :].unsqueeze_(1) image_conv = F.conv2d(image_yuv_ch, filters, stride=8) image_conv = image_conv.permute(0, 2, 3, 1) image_conv = image_conv.view(image_conv.shape[0], image_conv.shape[1], image_conv.shape[2], 8, 8) image_conv = image_conv.permute(0, 1, 3, 2, 4) image_conv = image_conv.contiguous().view(image_conv.shape[0], image_conv.shape[1]*image_conv.shape[2], image_conv.shape[3]*image_conv.shape[4]) image_conv.unsqueeze_(1) # image_conv = F.conv2d() image_conv_channels.append(image_conv) image_conv_stacked = torch.cat(image_conv_channels, dim=1) return image_conv_stacked
It seems like you are a Keras or Tensorflow user trying to learn PyTorch. You should go to the PyTorch documentation website to understand more about each operation. unsqueeze expands the tensor by one dimension. The underscore in unsqueeze_() means it is an in-place function. view() can be understood as .reshape() in Keras. permute() switches multiple dimensions of a tensor. For example: x = torch.randn(1,2,3) # shape [1,2,3] x = x.permute(2,0,1) # shape [3,1,2] In order to know the shape of the tensor after each operation, simply add print(x.size()). For example: image_conv = image_conv.permute(0, 2, 3, 1) print(image_conv.size()) image_conv = image_conv.view(image_conv.shape[0], image_conv.shape[1], image_conv.shape[2], 8, 8) print(image_conv.size()) image_conv = image_conv.permute(0, 1, 3, 2, 4) print(image_conv.size()) The big difference between PyTorch and Tensorflow (the back-end of Keras) is that PyTorch generates a dynamic graph rather than a static graph as Tensorflow does. Your way of defining a model would not work properly in PyTorch, since the weights of conv will not be saved in model.parameters() and therefore can't be optimized during backpropagation. One more comment: please check this link to learn how to define a proper model using PyTorch: import torch.nn as nn import torch.nn.functional as F class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5) self.conv2 = nn.Conv2d(20, 20, 5) def forward(self, x): x = F.relu(self.conv1(x)) return F.relu(self.conv2(x)) The code for the comment: import torch x = torch.randn(8, 3, 32, 32) print(x.shape) torch.Size([8, 3, 32, 32]) channel = 1 y = x[:, channel, :, :] print(y.shape) torch.Size([8, 32, 32]) y = y.unsqueeze_(1) print(y.shape) torch.Size([8, 1, 32, 32]) Hope this helps and enjoy your learning!
https://stackoverflow.com/questions/55910278/
PyTorch Design: Why does torch.distributions.multivariate_normal have methods outside of its class?
I'm trying to understand the design of pytorch a little bit better. I was trying to draw samples from a multivariate normal, and found torch.distributions.multivariate_normal, which to my surprise is a module with many protected functions defined outside of its MultivariateNormal() class. I was confused as to why this was the case. Why not just define all of these functions as class methods inside the MultivariateNormal() class? That way, we could instantiate an object of this class by torch.distributions.multivariate_normal(mu,sigma) rather than torch.distributions.multivariate_normal.MultivariateNormal(mu,sigma). Any thoughts? Thanks.
You can call MultivariateNormal directly: import torch gaussian = torch.distributions.MultivariateNormal(torch.ones(2), torch.eye(2)) The class MultivariateNormal is implemented in the file "torch/distributions/multivariate_normal.py" and re-exported at the package level, so both spellings refer to the same class and both calls are correct.
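A quick usage sketch:

import torch

gaussian = torch.distributions.MultivariateNormal(torch.ones(2), torch.eye(2))
samples = gaussian.sample((5,))     # 5 draws, shape (5, 2)
log_p = gaussian.log_prob(samples)  # log-density of each draw, shape (5,)
print(samples.shape, log_p.shape)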
https://stackoverflow.com/questions/55927149/
Why does the loss keep decreasing while performance stays unchanged?
I am using a BERT-LSTM-CRF model, with the BERT model from https://github.com/huggingface/pytorch-pretrained-BERT/ and the LSTM and CRF parts written by myself. After training the BERT-LSTM-CRF model for 25 epochs, the performance on the training set, dev set and test set stays unchanged, but the loss keeps decreasing. Where should I make a change? Here is the performance: 25th epoch: tensor(10267.6279, device='cuda:0') (0.42706720346856614, 0.4595134955014995, 0.4426966292134832) (0.43147208121827413, 0.4271356783919598, 0.42929292929292934) (0.4460093896713615, 0.4668304668304668, 0.4561824729891957) 26th epoch: tensor(10219.3398, device='cuda:0') (0.44544364508393286, 0.4951682772409197, 0.46899163642101943) (0.4469135802469136, 0.4547738693467337, 0.45080946450809467) (0.45871559633027525, 0.4914004914004914, 0.4744958481613286) 27th epoch: tensor(10169.0742, device='cuda:0') (0.44544364508393286, 0.4951682772409197, 0.46899163642101943) (0.4469135802469136, 0.4547738693467337, 0.45080946450809467) (0.45871559633027525, 0.4914004914004914, 0.4744958481613286) more epochs: lower loss with same performance: (0.44544364508393286, 0.4951682772409197, 0.46899163642101943) (0.4469135802469136, 0.4547738693467337, 0.45080946450809467) (0.45871559633027525, 0.4914004914004914, 0.4744958481613286) It is really a weird problem, and I have no idea how to handle it. Any suggestion would be of great help. Here is the related code: for epoch in tqdm(range(200)): loss = train_one_epoch(dataloader=source_train_dataloader, model=model, optimizer=optimizer) train_perf = test_one_epoch(dataloader=source_train_dataloader_for_test, model=model) dev_perf = test_one_epoch(dataloader=source_dev_dataloader, model=model) test_perf = test_one_epoch(dataloader=source_test_dataloader, model=model) base_result_loc = "bert_char_ps/bert_char_result" # store performance result add_model_result(base_result_loc, epoch, loss, train_perf, dev_perf, test_perf) The performance should change with the loss, but it does not.
I have modified this PyTorch implementation of a BERT-NER model and added CRF. I have the following class, which works fine in my case. class BertWithCRF(BertPreTrainedModel): def __init__(self, config, labels, dropout=0.1): super(BertWithCRF, self).__init__(config) self.tagset_size = len(labels) self.tag_to_ix = {k: v for v, k in enumerate(labels)} self.bert = BertModel(config) self.dropout = nn.Dropout(dropout) self.classifier = nn.Linear(config.hidden_size, self.tagset_size) self.apply(self.init_bert_weights) self.transitions = nn.Parameter( torch.zeros(self.tagset_size, self.tagset_size)) self.transitions.data[self.tag_to_ix[START_TAG], :] = -10000 self.transitions.data[:, self.tag_to_ix[STOP_TAG]] = -10000 def _batch_forward_alg(self, feats, mask): assert mask is not None # calculate in log domain # feats is batch_size * len(sentence) * tagset_size # initialize alpha with a Tensor with values all equal to -10000. score = torch.Tensor(feats.size(0), self.tagset_size).fill_(-10000.) score[:, self.tag_to_ix[START_TAG]] = 0. if feats.is_cuda: score = score.cuda() mask = mask.float() trans = self.transitions.unsqueeze(0) # [1, C, C] for t in range(feats.size(1)): # recursion through the sequence mask_t = mask[:, t].unsqueeze(1) emit_t = feats[:, t].unsqueeze(2) # [B, C, 1] score_t = score.unsqueeze(1) + emit_t + trans # [B, 1, C] -> [B, C, C] score_t = batch_log_sum_exp(score_t) # [B, 1, C] -> [B, C, C] score = score_t * mask_t + score * (1 - mask_t) score = batch_log_sum_exp(score + self.transitions[self.tag_to_ix[STOP_TAG]]) return score # partition function def _batch_score_sentence(self, feats, tags, mask): assert mask is not None score = torch.Tensor(feats.size(0)).fill_(0.) if feats.is_cuda: score = score.cuda() feats = feats.unsqueeze(3) mask = mask.float() trans = self.transitions.unsqueeze(2) add_start_tags = torch.empty(tags.size(0), 1).fill_(self.tag_to_ix[START_TAG]).type_as(tags) tags = torch.cat([add_start_tags, tags], dim=-1) for t in range(feats.size(1)): # recursion through the sequence mask_t = mask[:, t] emit_t = torch.cat([h[t, y[t + 1]] for h, y in zip(feats, tags)]) trans_t = torch.cat([trans[y[t + 1], y[t]] for y in tags]) score += (emit_t + trans_t) * mask_t last_tag = tags.gather(1, mask.sum(1).long().unsqueeze(1)).squeeze(1) score += self.transitions[self.tag_to_ix[STOP_TAG], last_tag] return score def _batch_viterbi_decode(self, feats, mask): # initialize backpointers and viterbi variables in log space bptr = torch.LongTensor() score = torch.Tensor(feats.size(0), self.tagset_size).fill_(-10000.) score[:, self.tag_to_ix[START_TAG]] = 0. if feats.is_cuda: score = score.cuda() bptr = bptr.cuda() mask = mask.float() for t in range(feats.size(1)): # recursion through the sequence mask_t = mask[:, t].unsqueeze(1) score_t = score.unsqueeze(1) + self.transitions # [B, 1, C] -> [B, C, C] score_t, bptr_t = score_t.max(2) # best previous scores and tags score_t += feats[:, t] # plus emission scores bptr = torch.cat((bptr, bptr_t.unsqueeze(1)), 1) score = score_t * mask_t + score * (1 - mask_t) score += self.transitions[self.tag_to_ix[STOP_TAG]] best_score, best_tag = torch.max(score, 1) # back-tracking bptr = bptr.tolist() best_path = [[i] for i in best_tag.tolist()] for b in range(feats.size(0)): x = best_tag[b] # best tag y = int(mask[b].sum().item()) # no.
of non-pad tokens for bptr_t in reversed(bptr[b]): x = bptr_t[x] best_path[b].append(x) best_path[b].pop() best_path[b].reverse() best_path = torch.LongTensor(best_path) if feats.is_cuda: best_path = best_path.cuda() return best_path def _get_bert_features(self, input_ids, token_type_ids, attention_mask): sequence_output, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) sequence_output = self.dropout(sequence_output) bert_feats = self.classifier(sequence_output) return bert_feats def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None): bert_feats = self._get_bert_features(input_ids, token_type_ids, attention_mask) if labels is not None: forward_score = self._batch_forward_alg(bert_feats, attention_mask) gold_score = self._batch_score_sentence(bert_feats, labels, attention_mask) return (forward_score - gold_score).mean() else: tag_seq = self._batch_viterbi_decode(bert_feats, attention_mask) return tag_seq
https://stackoverflow.com/questions/55929458/
Why don't the Discriminator's and Generator's losses change?
I'm trying to implement a Generative Adversarial Network (GAN) for the MNIST dataset. I use Pytorch for this. My problem is that after one epoch the Discriminator's and the Generator's losses don't change. I already tried two other ways of building the network, but they all cause the same problem :/ import os import torch import matplotlib.pyplot as plt import matplotlib.gridspec as grd import numpy as np import torch.optim as optim import torch.nn as nn import torch.nn.functional as F import torchvision #Datasets from torchvision.utils import save_image import torchvision.transforms as transforms from torch.autograd import Variable import pylab #Parameter batch_size = 64 epochs = 50000 image_size = 784 hidden_size = 392 sample_dir = 'samples' save_dir = 'save' noise_size = 100 lr = 0.001 # Image processing transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.5,),(0.5,))]) # Discriminator D = nn.Sequential( nn.Linear(image_size, hidden_size), nn.ReLU(), nn.Linear(hidden_size, 1), nn.Sigmoid() ) # Generator G = nn.Sequential( nn.Linear(noise_size, hidden_size), nn.ReLU(), nn.Linear(hidden_size, image_size), nn.Sigmoid() ) # Loss function and optimizer (sigmoid cross entropy with logits and Adam) criterion = nn.BCEWithLogitsLoss() d_optimizer = torch.optim.Adam(D.parameters(), lr = lr) g_optimizer = torch.optim.Adam(G.parameters(), lr = lr) def reset_grad(): d_optimizer.zero_grad() g_optimizer.zero_grad() # Statistics to be saved d_losses = np.zeros(epochs) g_losses = np.zeros(epochs) real_scores = np.zeros(epochs) fake_scores = np.zeros(epochs) # Start training total_step = len(data_loader) for epoch in range(epochs): for i, (images, _) in enumerate(data_loader): if images.shape[0] != 64: continue images = images.view(batch_size, -1).cuda() images = Variable(images) # Create the labels which are later used as input for the BCE loss real_labels = torch.ones(batch_size, 1).cuda() real_labels = Variable(real_labels) fake_labels = torch.zeros(batch_size, 1).cuda() fake_labels = Variable(fake_labels) # Train discriminator # Compute BCE_WithLogitsLoss using real images outputs = D(images) d_loss_real = criterion(outputs, real_labels) real_score = outputs # Compute BCE_WithLogitsLoss using fake images # First term of the loss is always zero since fake_labels == 0 z = torch.randn(batch_size, noise_size).cuda() z = Variable(z) fake_images = G(z) outputs = D(fake_images) d_loss_fake = criterion(outputs, fake_labels) fake_score = outputs # Backprop and optimize # If D is trained so well, then don't update d_loss = d_loss_real + d_loss_fake reset_grad() d_loss.backward() d_optimizer.step() # Train generator # Compute loss with fake images z = torch.randn(batch_size, noise_size).cuda() z = Variable(z) fake_images = G(z) outputs = D(fake_images) # We train G to maximize log(D(G(z)) instead of minimizing log(1 -D(G(z))) # For the reason, see the last paragraph of section 3. https://arxiv.org/pdf/1406.2661.pdf g_loss = criterion(outputs, real_labels) # Backprop and optimize # if G is trained so well, then don't update reset_grad() g_loss.backward() g_optimizer.step() # Update statistics d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.item()*(1./(i+1.)) g_losses[epoch] = g_losses[epoch]*(i/(i+1.)) + g_loss.item()*(1./(i+1.)) real_scores[epoch] = real_scores[epoch]*(i/(i+1.)) + real_score.mean().item()*(1./(i+1.)) fake_scores[epoch] = fake_scores[epoch]*(i/(i+1.)) + fake_score.mean().item()*(1./(i+1.)) # print results print('Epoch [{}/{}], d_loss: {:.4f}, g_loss: {:.4f}, D(x): {:.2f}, D(G(z)): {:.2f}' .format(epoch, epochs, d_loss.item(), g_loss.item(), real_score.mean().item(), fake_score.mean().item())) The Generator's and Discriminator's losses should change from epoch to epoch, but they don't. Epoch [0/50000], d_loss: 1.0069, g_loss: 0.6927, D(x): 1.00, D(G(z)): 0.00 Epoch [1/50000], d_loss: 1.0065, g_loss: 0.6931, D(x): 1.00, D(G(z)): 0.00 Epoch [2/50000], d_loss: 1.0064, g_loss: 0.6931, D(x): 1.00, D(G(z)): 0.00 Epoch [3/50000], d_loss: 1.0064, g_loss: 0.6931, D(x): 1.00, D(G(z)): 0.00 Epoch [4/50000], d_loss: 1.0064, g_loss: 0.6931, D(x): 1.00, D(G(z)): 0.00 Epoch [5/50000], d_loss: 1.0064, g_loss: 0.6931, D(x): 1.00, D(G(z)): 0.00 Thanks for your help.
I found the solution to the problem. BCEWithLogitsLoss() and Sigmoid() don't work together, because BCEWithLogitsLoss() already applies the sigmoid internally. So you can either use BCEWithLogitsLoss() without a final Sigmoid() layer, or keep the Sigmoid() and use BCELoss().
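Concretely, the two valid pairings look like this (a minimal sketch using the layer sizes from the question):

import torch.nn as nn

# Option 1: no Sigmoid in the network; BCEWithLogitsLoss applies it internally
D_logits = nn.Sequential(
    nn.Linear(784, 392), nn.ReLU(),
    nn.Linear(392, 1),
)
criterion_logits = nn.BCEWithLogitsLoss()

# Option 2: keep the Sigmoid and use plain BCELoss
D_sigmoid = nn.Sequential(
    nn.Linear(784, 392), nn.ReLU(),
    nn.Linear(392, 1), nn.Sigmoid(),
)
criterion_sigmoid = nn.BCELoss()

Applying a sigmoid twice squashes the discriminator's outputs into a narrow range, which is why the losses barely move.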
https://stackoverflow.com/questions/55936611/
PyTorch: parallelize a cross-validation for loop
I have a cuda9 docker with tensorflow and pytorch installed, and I am doing cross validation on an image dataset. Currently I am using a for loop to do the cross validation, something like: for data_train, data_test in sklearn.kfold(5, all_data): train(data_train) test(data_test) But the for loop takes too long; will the following code work to parallelize it? Maybe there is already a solution. But this is not data parallelization. from multiprocessing import Pool def f(trainset, testset): train_result = train(trainset) test_result = test(testset) save_train_result() save_test_result() if __name__ == '__main__': with Pool(5) as p: print(p.map(f, sklearn.cvfold(5, all_data))) I am not sure if the multiprocessing will only parallelize the CPU, or both the CPU and the GPU? This might be easier than parallelizing inside a model, I guess, like https://discuss.pytorch.org/t/parallelize-simple-for-loop-for-single-gpu/33701, since in my case there is no need to communicate across processes.
You can try horovod with PyTorch. A ResNet50 example is here: https://github.com/horovod/horovod/blob/master/examples/pytorch/pytorch_imagenet_resnet50.py The horovod-related changes should be small and isolated.
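The core changes are roughly as follows (a sketch; build_model is a placeholder for your own model constructor, and you would launch the script with something like horovodrun -np 4 python train.py):

import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())  # pin one GPU per process
model = build_model().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
# wrap the optimizer so gradients are averaged across processes
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters())
# start all workers from the same weights
hvd.broadcast_parameters(model.state_dict(), root_rank=0)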
https://stackoverflow.com/questions/55938914/
FastAI v1 PyTorch Custom Model
I have been trying to use fastai with a custom torch model. My code is as follows: X_train = np.load(dirpath + 'X_train.npy') X_valid = np.load(dirpath + 'X_valid.npy') Y_train = np.load(dirpath + 'Y_train.npy') Y_valid = np.load(dirpath + 'Y_valid.npy') X_train's shape is (240, 122, 96), and Y_train's shape is (240,1). Then I convert these to torch tensors: # Converting data to torch tensors def to_torch_data(x,np_type,tch_type): return torch.from_numpy(x.astype(np_type)).to(tch_type) X_train = to_torch_data(X_train,float,torch.float32) X_valid = to_torch_data(X_valid,float,torch.float32) Y_train = to_torch_data(Y_train,float,torch.float32) Y_valid = to_torch_data(Y_valid,float,torch.float32) Creating TensorDatasets for the fastai DataBunch wrapper: # Creating torch tensor datasets so that data can be used # on ImageDataBunch function for fastai train_ds = tdatautils.TensorDataset(X_train,Y_train) valid_ds = tdatautils.TensorDataset(X_valid,Y_valid) # Creating DataBunch object to be used as data in fastai methods. batch_size = 24 my_data_bunch = DataBunch.create(train_ds,valid_ds,bs=batch_size) And this is my custom torch model: # Creating corresponding torch model import torch.nn.functional as F class Net(nn.Module): def __init__(self,droprate=0,activationF=None): super(Net, self).__init__() self.lstm_0 = nn.LSTM(96, 720) self.activation_0 = nn.ELU() self.dropout_0 = nn.Dropout(p=droprate) self.lstm_1 = nn.LSTM(720,480) self.activation_1 = nn.ELU() self.batch_norm_1 = nn.BatchNorm1d(122) self.fc_2 = nn.Linear(480,128) self.dropout_2 = nn.Dropout(p=droprate) self.last = nn.Linear(128,1) self.last_act = nn.ReLU() def forward(self, x): out,hid1 = self.lstm_0(x) out = self.dropout_0(self.activation_0(out)) out,hid2 = self.lstm_1(out) out = out[:,-1,:] out = self.batch_norm_1(self.activation_1(out)) out = self.dropout_2(self.fc_2(out)) out = self.last_act(self.last(out)) return out #create instance of model net = Net(droprate=train_droprate,activationF=train_activation) #.cuda() print(net) After all this, I run the learner's lr_find method. And I get this error: Empty Traceback (most recent call last) C:\Anaconda3\envs\fastai\lib\site-packages\torch\utils\data\dataloader.py in _try_get_batch(self, timeout) 510 try: --> 511 data = self.data_queue.get(timeout=timeout) 512 return (True, data) C:\Anaconda3\envs\fastai\lib\queue.py in get(self, block, timeout) 171 if remaining <= 0.0: --> 172 raise Empty 173 self.not_empty.wait(remaining) Empty: During handling of the above exception, another exception occurred: RuntimeError Traceback (most recent call last) <ipython-input-35-e4b7603c0a82> in <module> ----> 1 my_learner.lr_find() ~\Desktop\fastai\fastai\fastai\train.py in lr_find(learn, start_lr, end_lr, num_it, stop_div, wd) 30 cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div) 31 epochs = int(np.ceil(num_it/len(learn.data.train_dl))) ---> 32 learn.fit(epochs, start_lr, callbacks=[cb], wd=wd) 33 34 def to_fp16(learn:Learner, loss_scale:float=None, max_noskip:int=1000, dynamic:bool=True, clip:float=None, ~\Desktop\fastai\fastai\fastai\basic_train.py in fit(self, epochs, lr, wd, callbacks) 197 callbacks = [cb(self) for cb in self.callback_fns + listify(defaults.extra_callback_fns)] + listify(callbacks) 198 if defaults.extra_callbacks is not None: callbacks += defaults.extra_callbacks --> 199 fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks) 200 201 def create_opt(self, lr:Floats, wd:Floats=0.)->None: ~\Desktop\fastai\fastai\fastai\basic_train.py in fit(epochs, learn, callbacks, metrics) 97 cb_handler.set_dl(learn.data.train_dl) 98 cb_handler.on_epoch_begin() ---> 99 for xb,yb in progress_bar(learn.data.train_dl, parent=pbar): 100 xb, yb = cb_handler.on_batch_begin(xb, yb) 101 loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler) C:\Anaconda3\envs\fastai\lib\site-packages\fastprogress\fastprogress.py in __iter__(self) 70 self.update(0) 71 try: ---> 72 for i,o in enumerate(self._gen): 73 if i >= self.total: break 74 yield o ~\Desktop\fastai\fastai\fastai\basic_data.py in __iter__(self) 73 def __iter__(self): 74 "Process and returns items from `DataLoader`." ---> 75 for b in self.dl: yield self.proc_batch(b) 76 77 @classmethod C:\Anaconda3\envs\fastai\lib\site-packages\torch\utils\data\dataloader.py in __next__(self) 574 while True: 575 assert (not self.shutdown and self.batches_outstanding > 0) --> 576 idx, batch = self._get_batch() 577 self.batches_outstanding -= 1 578 if idx != self.rcvd_idx: C:\Anaconda3\envs\fastai\lib\site-packages\torch\utils\data\dataloader.py in _get_batch(self) 541 elif self.pin_memory: 542 while self.pin_memory_thread.is_alive(): --> 543 success, data = self._try_get_batch() 544 if success: 545 return data C:\Anaconda3\envs\fastai\lib\site-packages\torch\utils\data\dataloader.py in _try_get_batch(self, timeout) 517 if not all(w.is_alive() for w in self.workers): 518 pids_str = ', '.join(str(w.pid) for w in self.workers if not w.is_alive()) --> 519 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) 520 if isinstance(e, queue.Empty): 521 return (False, None) RuntimeError: DataLoader worker (pid(s) 9584, 7236, 5108, 932, 13228, 13992, 4576, 13204) exited unexpectedly I have researched DataLoader but couldn't find anything useful.
Although I didn't understand the error message you posted, I see one problem in your code. out = out[:,-1,:] # batch_size x 480 out = self.batch_norm_1(self.activation_1(out)) But you declared batch_norm_1 as: self.batch_norm_1 = nn.BatchNorm1d(122) Which should be: self.batch_norm_1 = nn.BatchNorm1d(480)
https://stackoverflow.com/questions/55943259/
Training and testing CNN with pytorch. With and without model.eval()
I have two questions: I am trying to train a convolutional neural network initialized with some pre-trained weights (the network contains batch normalization layers as well), taking reference from here. Before training I want to calculate a validation error using loss_fn = torch.nn.MSELoss().cuda(). In the reference, the author uses model.eval() before calculating the validation error. But with that, the CNN model's output is off from what it should be; however, when I comment out model.eval(), the output is good (what it should be with pre-trained weights). What could be the reason behind this, given that I have read in many posts that model.eval() should be used before testing the model and model.train() before training it? While calculating the validation error with pre-trained weights and the above-mentioned loss function, what should the batch size be? Shouldn't it be 1, as I want an output for each of my inputs, calculate the error against the ground truth, and in the end take the average of all results? If I use a higher batch size, the error increases. So the question is: can I use a higher batch size, and if yes, what is the right way? In the given code I compute err = float(loss_local) / num_samples, but I also observed it without averaging, i.e. err = float(loss_local); the error is different for different batch sizes. I am doing this without model.eval right now. batch_size = 1 data_path = 'path_to_data' dtype = torch.FloatTensor weight_file = 'path_to_weight_file' val_loader = torch.utils.data.DataLoader(NyuDepthLoader(data_path, val_lists),batch_size=batch_size, shuffle=True, drop_last=True) model = Model(batch_size) model.load_state_dict(load_weights(model, weight_file, dtype)) loss_fn = torch.nn.MSELoss().cuda() # model.eval() with torch.no_grad(): for input, depth in val_loader: input_var = Variable(input.type(dtype)) depth_var = Variable(depth.type(dtype)) output = model(input_var) input_rgb_image = input_var[0].data.permute(1, 2, 0).cpu().numpy().astype(np.uint8) input_gt_depth_image = depth_var[0][0].data.cpu().numpy().astype(np.float32) pred_depth_image = output[0].data.squeeze().cpu().numpy().astype(np.float32) print (format(type(depth_var))) pred_depth_image_resize = cv2.resize(pred_depth_image, dsize=(608, 456), interpolation=cv2.INTER_LINEAR) target_depth_transform = transforms.Compose([flow_transforms.ArrayToTensor()]) pred_depth_image_tensor = target_depth_transform(pred_depth_image_resize) #both inputs to loss_fn are 'torch.Tensor' loss_local += loss_fn(pred_depth_image_tensor, depth_var) num_samples += 1 print ('num_samples {}'.format(num_samples)) err = float(loss_local) / num_samples print('val_error before train:', err)
What could be the reason behind it, as I have read in many posts that model.eval() should be used before testing the model and model.train() before training it? Note: testing the model is called inference. As explained in the official documentation: Remember that you must call model.eval() to set dropout and batch normalization layers to evaluation mode before running inference. Failing to do this will yield inconsistent inference results. So this code must be present once you load the model from a file and do inference:

# Model class must be defined somewhere
model = torch.load(PATH)
model.eval()

This is because dropout works as a regularization to prevent overfitting during training; it is not needed for inference. The same goes for the batch norms. Calling eval() just sets the module's training flag to False, and it affects only certain types of modules, in particular Dropout and BatchNorm.
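A minimal sketch of the full pattern during a validation pass (val_input is a placeholder name):

model.eval()               # Dropout/BatchNorm switch to inference behavior
with torch.no_grad():      # also stop tracking gradients for speed
    val_output = model(val_input)
model.train()              # switch back before resuming training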
https://stackoverflow.com/questions/55948551/
Transfer learning with OpenNMT
I'm training a transformer model with OpenNMT-py on MIDI music files, but results are poor because I only have access to a small dataset pertaining to the style I want to study. To help the model learn something useful, I would like to use a much larger dataset of other styles of music for pre-training and then fine-tune the result using the small dataset. I was thinking of freezing the encoder side of the transformer after pre-training and leaving the decoder free to do the fine-tuning. How would one do this with OpenNMT-py?
Please be more specific about your questions and show some code, which will help you get a productive response from the SO community. If I were in your place and wanted to freeze a neural network component, I would simply do:

for name, param in self.encoder.named_parameters():
    param.requires_grad = False

Here I assume you have an NN module like the following (TransformerEncoder comes from OpenNMT-py):

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, params):
        super(Net, self).__init__()
        self.encoder = TransformerEncoder(num_layers, d_model, heads, d_ff,
                                          dropout, embeddings, max_relative_positions)

    def forward(self):
        # write your code
        pass
https://stackoverflow.com/questions/55954232/
GAN, discriminator output only 0 or 1
I'm trying to train SRGAN (Super Resolution GAN). However, the discriminator's output converges to 0 or 1 whatever the input is. The discriminator's loss function is simply D_loss = 0.5*(D_net(fake) + 1 - D_net(real)). D_net(fake) and D_net(real) both become 0 or 1 (sigmoid). How can I fix it?

for epoch_idx in range(epoch_num):
    for batch_idx, data in enumerate(data_loader):
        D_net.zero_grad()

        #### make real, low, fake
        real = data[0]
        for img_idx in range(batch_size):
            low[img_idx] = trans_low_res(real[img_idx])
        fake = G_net(Variable(low).cuda())

        #### get Discriminator loss and train Discriminator
        real_D_out = D_net(Variable(real).cuda()).mean()
        fake_D_out = D_net(Variable(fake).cuda()).mean()
        D_loss = 0.5*(fake_D_out + 1 - real_D_out)
        D_loss.backward()
        D_optim.step()

        #### train Generator
        G_net.zero_grad()
        #### get new fake D out with updated Discriminator
        fake_D_out = D_net(Variable(fake).cuda()).mean()
        G_loss = generator_criterion(fake_D_out.cuda(), fake.cuda(), real.cuda())
        G_loss.backward()
        G_optim.step()

Batch : [10/6700] Discriminator_Loss: 0.0860 Generator_Loss : 0.1393
Batch : [20/6700] Discriminator_Loss: 0.0037 Generator_Loss : 0.1282
Batch : [30/6700] Discriminator_Loss: 0.0009 Generator_Loss : 0.0838
Batch : [40/6700] Discriminator_Loss: 0.0002 Generator_Loss : 0.0735
Batch : [50/6700] Discriminator_Loss: 0.0001 Generator_Loss : 0.0648
Batch : [60/6700] Discriminator_Loss: 0.5000 Generator_Loss : 0.0634
Batch : [70/6700] Discriminator_Loss: 0.5000 Generator_Loss : 0.0706
Batch : [80/6700] Discriminator_Loss: 0.5000 Generator_Loss : 0.0691
Batch : [90/6700] Discriminator_Loss: 0.5000 Generator_Loss : 0.0538
...
I am not sure if I understand your problem correctly. You mean that the sigmoid output from the discriminator is either 0 or 1? In your loss function, D_loss = 0.5 * (fake_D_out + 1 - real_D_out), you are directly optimizing on the sigmoid output, and it looks like the discriminator overfits to your data so that it can accurately predict 0 and 1 for fake and real examples respectively. There are some GAN hacks suggested by experts in this subject matter. You can find a list of tips and tricks here. I suggest using soft labels rather than hard labels (see ref). You can use BCEWithLogitsLoss() and compute the loss based on soft labels instead of hard labels; a sketch is shown below. Difference between hard and soft labels:

# hard labels
real = 1
fake = 0
# soft labels
real = np.random.uniform(0.7, 1.0)  # 1
fake = np.random.uniform(0.0, 0.3)  # 0
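A minimal sketch of that idea (this assumes D_net returns raw logits, i.e. without the final sigmoid, so it is not a drop-in change to the model above):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # applies the sigmoid internally

real_logits = D_net(real.cuda()).view(-1)
fake_logits = D_net(fake.detach().cuda()).view(-1)  # detach for the D step

# soft labels instead of hard 1/0 targets
real_targets = torch.empty_like(real_logits).uniform_(0.7, 1.0)
fake_targets = torch.empty_like(fake_logits).uniform_(0.0, 0.3)

D_loss = criterion(real_logits, real_targets) + criterion(fake_logits, fake_targets)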
https://stackoverflow.com/questions/55957396/
Is there a difference between "torch.nn.CTCLoss" supported by PYTORCH and "CTCLoss" supported by torch_baidu_ctc?
Is there a difference between the "torch.nn.CTCLoss" supported by PyTorch and the "CTCLoss" supported by torch_baidu_ctc? I didn't notice any difference when I compared the tutorial code. Does anyone know for sure? The tutorial code is located below.

import torch
from torch_baidu_ctc import ctc_loss, CTCLoss

# Activations. Shape T x N x D.
# T -> max number of frames/timesteps
# N -> minibatch size
# D -> number of output labels (including the CTC blank)
x = torch.rand(10, 3, 6)
# Target labels
y = torch.tensor([
    # 1st sample
    1, 1, 2, 5, 2,
    # 2nd
    1, 5, 2,
    # 3rd
    4, 4, 2, 3,
    ],
    dtype=torch.int,
)
# Activations lengths
xs = torch.tensor([10, 6, 9], dtype=torch.int)
# Target lengths
ys = torch.tensor([5, 3, 4], dtype=torch.int)

# By default, the costs (negative log-likelihood) of all samples are summed.
# This is equivalent to:
#   ctc_loss(x, y, xs, ys, average_frames=False, reduction="sum")
loss1 = ctc_loss(x, y, xs, ys)

# You can also average the cost of each sample among the number of frames.
# The averaged costs are then summed.
loss2 = ctc_loss(x, y, xs, ys, average_frames=True)

# Instead of summing the costs of each sample, you can perform
# other `reductions`: "none", "sum", or "mean"
#
# Return an array with the loss of each individual sample
losses = ctc_loss(x, y, xs, ys, reduction="none")
#
# Compute the mean of the individual losses
loss3 = ctc_loss(x, y, xs, ys, reduction="mean")
#
# First, normalize loss by number of frames, later average losses
loss4 = ctc_loss(x, y, xs, ys, average_frames=True, reduction="mean")

# Finally, there's also a nn.Module to use this loss.
ctc = CTCLoss(average_frames=True, reduction="mean", blank=0)
loss4_2 = ctc(x, y, xs, ys)

# Note: the `blank` option is also available for `ctc_loss`.
# By default it is 0.

torch.nn.CTCLoss:

import torch
import torch.nn as nn

T = 50      # Input sequence length
C = 20      # Number of classes (excluding blank)
N = 16      # Batch size
S = 30      # Target sequence length of longest target in batch
S_min = 10  # Minimum target length, for demonstration purposes

# Initialize random batch of input vectors, for *size = (T,N,C)
input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()

# Initialize random batch of targets (0 = blank, 1:C+1 = classes)
target = torch.randint(low=1, high=C+1, size=(N, S), dtype=torch.long)

input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)
ctc_loss = nn.CTCLoss()
loss = ctc_loss(input, target, input_lengths, target_lengths)
loss.backward()

I am Korean. English is not my first language, so I'm not good at it. If there's anything that hasn't come across well, please leave a comment. I'll fix the sentence as soon as possible.
CTC loss has only been part of PyTorch since version 1.0, and it is the better way to go because it is natively part of PyTorch. If you are using PyTorch 1.0 or newer, use torch.nn.CTCLoss. warp-ctc does not seem to be maintained; the last commits changing the core code are from 2017. Later, they only fixed bindings for (an already obsolete version of) TensorFlow.
https://stackoverflow.com/questions/55962112/
Best Way to Overcome Early Convergence for Machine Learning Model
I have a machine learning model that tries to predict weather data; in this case I am predicting whether or not it will rain tomorrow (a binary prediction of Yes/No). The dataset has about 50 input variables, and I have 65,000 entries. I am currently running an RNN with a single hidden layer, with 35 nodes in the hidden layer. I am using PyTorch's NLLLoss as my loss function, and Adaboost for the optimization function. I've tried many different learning rates, and 0.01 seems to work fairly well. After running for 150 epochs, I notice that I start to converge around .80 accuracy for my test data. I would like this to be even higher, but it seems like the model is stuck oscillating around some sort of saddle point or local minimum. (A graph of this is below.) What are the most effective ways to get out of this "valley" that the model seems to be stuck in?
I am not sure exactly why you are using only one hidden layer, or what the shape of your history data is, but here are things you can try: Try more than one hidden layer. Experiment with LSTM and GRU layers, and with combinations of these layers together with the RNN. Reconsider the shape of your data, i.e. how much history you look at to predict the weather. Make sure your features are scaled properly, since you have about 50 input variables. A minimal sketch of a deeper recurrent model is shown below.
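A minimal sketch of such a deeper recurrent model (the class name, sizes, and the input shape (batch, seq_len, 50) are assumptions for illustration):

import torch.nn as nn
import torch.nn.functional as F

class WeatherNet(nn.Module):
    def __init__(self, n_features=50, hidden=64, n_layers=2):
        super().__init__()
        # two stacked LSTM layers instead of a single vanilla RNN layer
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, x):                     # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)
        logits = self.out(h[:, -1])           # use the last timestep
        return F.log_softmax(logits, dim=1)   # pairs with NLLLoss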
https://stackoverflow.com/questions/55973335/
How to fine tune BERT on its own tasks?
I wanted to pre-train BERT with data from my own language, since the multilingual BERT model (which includes my language) is not successful. Since full pre-training costs a lot, I decided to fine-tune it on its own 2 tasks: masked language modeling and next sentence prediction. There are previous implementations for different tasks (NER, sentiment analysis, etc.), but I couldn't find any fine-tuning on its own tasks. Is there an implementation that I missed? If not, where should I start? I need some initial help.
A wonderful resource for BERT is: https://github.com/huggingface/pytorch-pretrained-BERT. This repository contains op-for-op PyTorch reimplementations, pre-trained models and fine-tuning examples for Google's BERT model. You can find the language model fine-tuning examples in the following link. The three example scripts in this folder can be used to fine-tune a pre-trained BERT model using the pretraining objective (the combination of masked language modeling and next sentence prediction loss). https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning By the way, BERT multilingual is available for 104 languages (ref), and it is found to be surprisingly effective in many cross-lingual NLP tasks (ref). So, make sure you use BERT appropriately in your task.
https://stackoverflow.com/questions/55973414/
How to normalize set of images between (-1,1)
I have a dataset of images and I would like to normalize them between (-1, 1) before feeding them to a NN. How can I do that?

x = sample
# Normalized Data
normalized = (x - min(x)) / (max(x) - min(x))

# Histogram of example data and normalized data
par(mfrow=c(1,2))
hist(x, breaks=10, xlab="Data", col="lightblue", main="")
hist(normalized, breaks=10, xlab="Normalized Data", col="lightblue", main="")

I found this code online (it is R, not Python), but it did not solve my problem, since I have an image dataset.
Assuming your image img_array is an np.array:

normalized_input = (img_array - np.amin(img_array)) / (np.amax(img_array) - np.amin(img_array))

will normalize your data between 0 and 1. Then, 2*normalized_input - 1 will shift it between -1 and 1. If you want to normalize multiple images, you can make it a function:

def normalize_negative_one(img):
    normalized_input = (img - np.amin(img)) / (np.amax(img) - np.amin(img))
    return 2*normalized_input - 1

Then iterate over, e.g., a list or tuple of images called imgs:

for i, img in enumerate(imgs):
    imgs[i] = normalize_negative_one(img)
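If all images share the same shape, the loop can also be replaced by one vectorized call; a sketch assuming imgs is a list of equally sized arrays:

import numpy as np

batch = np.stack(imgs)                           # (N, H, W) or (N, H, W, C)
axes = tuple(range(1, batch.ndim))               # reduce over everything but N
mins = batch.min(axis=axes, keepdims=True)
maxs = batch.max(axis=axes, keepdims=True)
batch = 2 * (batch - mins) / (maxs - mins) - 1   # per-image values in [-1, 1]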
https://stackoverflow.com/questions/55980579/
Why TensorBoard summary is not updating?
I use TensorBoard with PyTorch 1.1 to log loss values. I call writer.add_scalar("loss", loss.item(), global_step) in every for-loop body. However, the plotted graph does not update while training is in progress. Every time I want to see the latest loss, I have to restart the TensorBoard server. The code is here:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets, transforms

# Writer will output to ./runs/ directory by default
writer = SummaryWriter()

transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]
)
trainset = datasets.MNIST("mnist_train", train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
model = torchvision.models.resnet50(False)
# Have ResNet model take in grayscale rather than RGB
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(2048, 10, True)
criterion = nn.CrossEntropyLoss()
epochs = 100
opt = torch.optim.Adam(model.parameters())
niter = 0
for epoch in range(epochs):
    for step, (x, y) in enumerate(trainloader):
        yp = model(x)
        loss = criterion(yp, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        writer.add_scalar("loss", loss.item(), niter)
        niter += 1
        print(loss.item())
grid = torchvision.utils.make_grid(images)
writer.add_image("images", grid, 0)
writer.add_graph(model, images)
writer.close()

The training is still going on, and the global step has already reached 3594. However, TensorBoard still shows around 1900.
Also, for those who have multiple event log files for a single run: you need to start TensorBoard with --reload_multifile True.
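For example, assuming the default ./runs log directory, the invocation would look something like:

tensorboard --logdir runs --reload_multifile true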
https://stackoverflow.com/questions/55980785/
why can't I reimplement my tensorflow model with pytorch?
I am developing a model in TensorFlow and find that it performs well on my specific evaluation metric. But when I port it to PyTorch, I can't achieve the same results. I have checked the model architecture, the weight initialization method, the lr schedule, the weight decay, the momentum and epsilon used in the BN layers, the optimizer, and the data preprocessing. Everything is the same. But I can't get the same results as in TensorFlow. Has anybody met the same problem?
I did a similar conversion recently. First you need to make sure that the forward path produces the same results: disable all randomness, initialize both models with the same values, give them a very small input, and compare the outputs. If there is a discrepancy, disable parts of the network and compare again, enabling layers one by one. When the forward path is confirmed, check the loss, the gradients, and the updates after one forward-backward cycle. A sketch of the comparison step follows.
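A minimal sketch of the comparison step (tf_out and pt_out are assumed to be NumPy arrays exported from the two frameworks for the same fixed input and identical weights):

import numpy as np

# tf_out, pt_out: outputs exported from TensorFlow and PyTorch
print("max abs diff:", np.abs(tf_out - pt_out).max())
assert np.allclose(tf_out, pt_out, atol=1e-5)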
https://stackoverflow.com/questions/55980918/
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed
I get: RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /opt/conda/conda-bld/pytorch_1550796191843/work/aten/src/THNN/generic/ClassNLLCriterion.c:93 When running this code:

criterion = nn.CrossEntropyLoss()
# Define the optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
epochs = 20
for epoch in range(epochs):
    print("epoch #", epoch)
    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        # train
        output = net(inputs)
        loss = criterion(output, labels)
        print("loss: ", loss.item())
        running_loss += loss.item()
        loss.backward()
        optimizer.step()
print('Finished Training')
The exception says that one of your labels is out of bounds. Maybe they start from 1 instead of 0? Try printing them out.
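A quick check along those lines (assuming labels is the LongTensor from the training loop):

print(labels.min().item(), labels.max().item())  # must lie in [0, n_classes - 1]
labels = labels - 1  # only if your labels run from 1 to n_classes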
https://stackoverflow.com/questions/55981101/
How to correctly access elements in a 3D-Pytorch-Tensor?
I am trying to access multiple elements in a 3D-Pytorch-Tensor, but the number of elements that are returned is wrong. This is my code: import torch a = torch.FloatTensor(4,3,2) print("a = {}".format(a)) print("a[:][:][0] = {}".format(a[:][:][0])) This is the output: a = tensor([[[-4.8569e+36, 3.0760e-41], [ 2.7953e+20, 1.6928e+22], [ 3.1692e-40, 7.2945e-15]], [[ 2.5011e+24, 1.3173e-39], [ 1.7229e-07, 4.1262e-08], [ 4.1490e-08, 6.4103e-10]], [[ 3.1728e-40, 5.8258e-40], [ 2.8776e+32, 6.7805e-10], [ 3.1764e-40, 5.4229e+08]], [[ 7.2424e-37, 1.3697e+07], [-2.0362e-33, 1.8146e+11], [ 3.1836e-40, 1.9670e+34]]]) a[:][:][0] = tensor([[-4.8569e+36, 3.0760e-41], [ 2.7953e+20, 1.6928e+22], [ 3.1692e-40, 7.2945e-15]]) I would expect something like this: a[:][:][0] = tensor([[-4.8569e+36, 2.7953e+20, 3.1692e-40, 2.5011e+24, 1.7229e-07, 4.1490e-08, 3.1728e-40, 2.8776e+32, 3.1764e-40, 7.2424e-37, -2.0362e-33, 3.1836e-40]]) Can anyone explain to me how I can come to this result? Thank you very much in advance! I get exactly the expected result on performing: for i in range(4): for j in range(3): print("a[{}][{}][0] = {}".format(i,j, a[i][j][0]))
Short answer, you need to use a[:, :, 0] More explanation: When you do a[:] it returns a itself. So a[:][:][0] is same as doing a[0] which will give you the elements at the 0th position of the first axis (hence the size is (3,2)). What you want are elements from the 0th position of the last axis for which you need to do a[:, :, 0].
https://stackoverflow.com/questions/55982067/
conv2d function in pytorch
I'm trying to use the function torch.conv2d from PyTorch but can't get a result I understand... Here is a simple example where the kernel (filt) is the same size as the input (im), to explain what I'm looking for.

import torch
filt = torch.rand(3, 3)
im = torch.rand(3, 3)
im_height = filt_height = 3

I want to compute a simple convolution with no padding, so the result should be a scalar (i.e. a 1x1 tensor). I tried this with conv2d:

# I have to convert image and kernel to 4-dimensional tensors to use conv2d
im_torch = im.reshape((im_height, filt_height, 1, 1))
filt_torch = filt.reshape((filt_height, im_height, 1, 1))
out = torch.nn.functional.conv2d(im_torch, filt_torch, stride=1, padding=0)
print(out)

But the result is not what I expected:

tensor([[[[0.6067]], [[0.3564]], [[0.5397]]],
        [[[0.2557]], [[0.0493]], [[0.2562]]],
        [[[0.6067]], [[0.3564]], [[0.5397]]]])

To give an idea of what I'd like, I want to reproduce scipy's convolve2d behavior:

import scipy.signal
out_scipy = scipy.signal.convolve2d(im.detach().numpy(), filt.detach().numpy(), 'valid')
print(out_scipy)

which prints:

array([[1.195723]], dtype=float32)
The tensor shapes of your input and the filter should be (batch, channels, height, width) and NOT (width, height, 1, 1). E.g.

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
y = torch.randn(1, 1, 4, 4)
z = F.conv2d(x, y)

Output shape of z: torch.Size([1, 1, 1, 1]). A sketch of how to reproduce the scipy number follows.
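Note also that F.conv2d computes a cross-correlation, while scipy.signal.convolve2d flips the kernel; so to match the scipy result you need both the right shapes and a flipped filter. A sketch using the question's im and filt:

import torch
import torch.nn.functional as F

x = im.reshape(1, 1, 3, 3)                        # (batch, channels, H, W)
w = torch.flip(filt, [0, 1]).reshape(1, 1, 3, 3)  # flip both spatial dims
out = F.conv2d(x, w)                              # (1, 1, 1, 1), matches convolve2d(..., 'valid')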
https://stackoverflow.com/questions/55994955/
Inplace arithmetic operation versus normal arithmetic operation in PyTorch Tensors
I am trying to build linear regression using the PyTorch framework, and while implementing gradient descent I observed two different outputs based on how I write the arithmetic operation in the Python code. Below is the code:

# X and Y are input and target labels respectively
X = torch.randn(100, 1)*10
Y = X + 3*torch.randn(100, 1) + 2
plt.scatter(X.numpy(), Y.numpy())

# Initialization of weight and bias
w = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)

# forward pass
def forward_feed(x):
    y = w*x + b
    return y

# Parameters Learning
epochs = 100
lr = 0.00008
loss_list = []
for epoch in range(epochs):
    print('epoch', epoch)
    Y_pred = forward_feed(X)
    loss = torch.sum((Y - Y_pred)**2)
    loss_list.append(loss)
    loss.backward()
    with torch.no_grad():
        w -= lr*w.grad
        b -= lr*b.grad
        w.grad.zero_()
        b.grad.zero_()

If I use this code, I get the expected results, i.e. my code is able to estimate the weight and bias. However, if I change the gradient descent lines like below:

w = w - lr*w.grad
b = b - lr*b.grad

I get the below error:

AttributeError Traceback (most recent call last)
<ipython-input-199-84b86804d4d5> in <module>()
---> 41 w.grad.zero_()
     42 b.grad.zero_()

AttributeError: 'NoneType' object has no attribute 'zero_'

Can anyone please help me with this? I did try checking answers on Google and found a related link: https://github.com/pytorch/pytorch/issues/7731. But it is exactly the opposite of what I am facing. As per that link, they say that in-place assignment causes a problem because tensors share the same storage. However, for my code, the in-place operation works, not the normal operation.
I think the reason is simple. When you do:

w = w - lr * w.grad
b = b - lr * b.grad

the w and b on the left-hand side are two new tensors, and their .grad is None. However, when you do the in-place operation, you do not create any new tensor; you just update the value of the tensor concerned. So, in this scenario, the in-place operation is required. A sketch of a re-assignment-style workaround follows.
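If you prefer the re-assignment style, one workaround (a sketch; the in-place version remains the idiomatic choice) is to create a fresh leaf tensor on every step:

with torch.no_grad():
    w = (w - lr * w.grad).requires_grad_()
    b = (b - lr * b.grad).requires_grad_()
# the fresh tensors start with .grad == None, so the
# w.grad.zero_() / b.grad.zero_() calls must be dropped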
https://stackoverflow.com/questions/56019560/
Vectorize to Apply Function to 3d Array
I am trying to apply a function to a 3D torch tensor, where the function operates on 2D tensors read along axis 1 of the 3D tensor. For example, I have a torch tensor of shape (51, 128, 20100) (a variable named autoencode_logprobs), and the function (rawids2sentence) runs on input of shape (51, 20100). Right now my code uses a naive for loop, iterating one by one with range(128). However, it's too slow. Following is the code part that matters: autoencode_logprobs is the 3D tensor, and I need to apply the rawids2sentence function along its second axis. Any help to vectorize it?

for i in range(128):
    output_sent = self.dictionary.rawids2sentence(
        autoencode_logprobs[:, i].max(1)[1].data.cpu().numpy(),
        oov_dicts[i],
    )
    output_sent_encoding = ifst_model.encode([output_sent])
Since I do not know what the rawids2sentence or encode functions do, I can help you do the max operation. In the following statement,

autoencode_logprobs[:, i].max(1)[1]

you identify the indices of the maximum values along dim=1 for one 51 x 20100 slice, so the output is a vector of size 51. You can perform the same operation on your full tensor of shape 51 x 128 x 20100 and get the output as a 128 x 51 tensor:

autoencode_logprobs.transpose(0, 1).max(2)[1]  # 128 x 51

So, if your rawids2sentence or encode methods can handle batched inputs, the above change should work for you without any loop.
https://stackoverflow.com/questions/56027238/
'An attempt has been made to start' in lr_find with fast.ai
I am running this small piece of code to identify learning rate: import cv2 from fastai.vision import * from fastai.callbacks.hooks import * path = untar_data(URLs.CAMVID) path_lbl = path/'labels' path_img = path/'images' fnames = get_image_files(path_img) lbl_names = get_image_files(path_lbl) img_f = fnames[0] img = open_image(img_f) get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' mask = open_mask(get_y_fn(img_f)) src_size = np.array(mask.shape[1:]) src_size,mask.data codes = np.loadtxt(path/'codes.txt', dtype=str); codes size = src_size//2 bs=4 src = (SegmentationItemList.from_folder(path_img) .split_by_fname_file('../valid.txt') .label_from_func(get_y_fn, classes=codes)) data = (src.transform(get_transforms(), size=size, tfm_y=True) .databunch(bs=bs) .normalize(imagenet_stats)) name2id = {v:k for k,v in enumerate(codes)} void_code = name2id['Void'] def acc_camvid(input, target): target = target.squeeze(1) mask = target != void_code return (input.argmax(dim=1)[mask]==target[mask]).float().mean() wd=1e-2 learn = unet_learner(data, models.resnet34, metrics=acc_camvid, wd=wd) lr_find(learn) print("end") And I get this error: RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. and also this one: Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.4\helpers\pydev\pydevd.py", line 1664, in &lt;module&gt; main() File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.4\helpers\pydev\pydevd.py", line 1658, in main globals = debugger.run(setup['file'], None, None, is_module) File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.4\helpers\pydev\pydevd.py", line 1068, in run pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.4\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/Users/steve/Project/fastai_unet/main.py", line 32, in &lt;module&gt; lr_find(learn) File "C:\Users\steve\Miniconda3\lib\site-packages\fastai\train.py", line 32, in lr_find learn.fit(epochs, start_lr, callbacks=[cb], wd=wd) File "C:\Users\steve\Miniconda3\lib\site-packages\fastai\basic_train.py", line 199, in fit fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks) File "C:\Users\steve\Miniconda3\lib\site-packages\fastai\basic_train.py", line 99, in fit for xb,yb in progress_bar(learn.data.train_dl, parent=pbar): File "C:\Users\steve\Miniconda3\lib\site-packages\fastprogress\fastprogress.py", line 72, in __iter__ for i,o in enumerate(self._gen): File "C:\Users\steve\Miniconda3\lib\site-packages\fastai\basic_data.py", line 75, in __iter__ for b in self.dl: yield self.proc_batch(b) File "C:\Users\steve\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 193, in __iter__ return _DataLoaderIter(self) File "C:\Users\steve\Miniconda3\lib\site-packages\torch\utils\data\dataloader.py", line 469, in __init__ w.start() File "C:\Users\steve\Miniconda3\lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File 
"C:\Users\steve\Miniconda3\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\steve\Miniconda3\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "C:\Users\steve\Miniconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__ reduction.dump(process_obj, to_child) File "C:\Users\steve\Miniconda3\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) BrokenPipeError: [Errno 32] Broken pipe how can I fix this?
Oh, the solution is to wrap the code in a function and invoke it from an if __name__ == '__main__': guard (note that img_f = fnames[0] must be kept, since it is used below):

import cv2
from fastai.vision import *
from fastai.callbacks.hooks import *

def main():
    path = untar_data(URLs.CAMVID)
    path_lbl = path/'labels'
    path_img = path/'images'
    fnames = get_image_files(path_img)
    lbl_names = get_image_files(path_lbl)
    img_f = fnames[0]
    get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
    mask = open_mask(get_y_fn(img_f))
    src_size = np.array(mask.shape[1:])
    codes = np.loadtxt(path/'codes.txt', dtype=str)
    size = src_size//2
    bs = 4
    src = (SegmentationItemList.from_folder(path_img)
           .split_by_fname_file('../valid.txt')
           .label_from_func(get_y_fn, classes=codes))
    data = (src.transform(get_transforms(), size=size, tfm_y=True)
            .databunch(bs=bs)
            .normalize(imagenet_stats))
    name2id = {v: k for k, v in enumerate(codes)}
    void_code = name2id['Void']

    def acc_camvid(input, target):
        target = target.squeeze(1)
        mask = target != void_code
        return (input.argmax(dim=1)[mask] == target[mask]).float().mean()

    wd = 1e-2
    learn = unet_learner(data, models.resnet34, metrics=acc_camvid, wd=wd)
    lr_find(learn)
    print("end")

if __name__ == '__main__':
    main()
https://stackoverflow.com/questions/56028489/
Validation loss not moving with MLP in Regression
Given input features as such, just raw numbers: tensor([0.2153, 0.2190, 0.0685, 0.2127, 0.2145, 0.1260, 0.1480, 0.1483, 0.1489, 0.1400, 0.1906, 0.1876, 0.1900, 0.1925, 0.0149, 0.1857, 0.1871, 0.2715, 0.1887, 0.1804, 0.1656, 0.1665, 0.1137, 0.1668, 0.1168, 0.0278, 0.1170, 0.1189, 0.1163, 0.2337, 0.2319, 0.2315, 0.2325, 0.0519, 0.0594, 0.0603, 0.0586, 0.0067, 0.0624, 0.2691, 0.0617, 0.2790, 0.2805, 0.2848, 0.2454, 0.1268, 0.2483, 0.2454, 0.2475], device='cuda:0') And the expected output is a single real number output, e.g. tensor(-34.8500, device='cuda:0') Full code on https://www.kaggle.com/alvations/pytorch-mlp-regression I've tried creating a simple 2 layer network with: class MLP(nn.Module): def __init__(self, input_size, output_size, hidden_size): super(MLP, self).__init__() self.linear = nn.Linear(input_size, hidden_size) self.classifier = nn.Linear(hidden_size, output_size) def forward(self, inputs, hidden=None, dropout=0.5): inputs = F.dropout(inputs, dropout) # Drop-in. # First Layer. output = F.relu(self.linear(inputs)) # Matrix manipulation magic. batch_size, sequence_len, hidden_size = output.shape # Technically, linear layer takes a 2-D matrix as input, so more manipulation... output = output.contiguous().view(batch_size * sequence_len, hidden_size) # Apply dropout. output = F.dropout(output, dropout) # Put it through the classifier # And reshape it to [batch_size x sequence_len x vocab_size] output = self.classifier(output).view(batch_size, sequence_len, -1) return output And training as such: # Training routine. def train(num_epochs, dataloader, valid_dataset, model, criterion, optimizer): losses = [] valid_losses = [] learning_rates = [] plt.ion() x_valid, y_valid = valid_dataset for _e in range(num_epochs): for batch in tqdm(dataloader): # Zero gradient. optimizer.zero_grad() #print(batch) this_x = torch.tensor(batch['x'].view(len(batch['x']), 1, -1)).to(device) this_y = torch.tensor(batch['y'].view(len(batch['y']), 1, 1)).to(device) # Feed forward. output = model(this_x) prediction, _ = torch.max(output, dim=1) loss = criterion(prediction, this_y.view(len(batch['y']), -1)) loss.backward() optimizer.step() losses.append(torch.sqrt(loss.float()).data) with torch.no_grad(): # Zero gradient. optimizer.zero_grad() output = model(x_valid.view(len(x_valid), 1, -1)) prediction, _ = torch.max(output, dim=1) loss = criterion(prediction, y_valid.view(len(y_valid), -1)) valid_losses.append(torch.sqrt(loss.float()).data) clear_output(wait=True) plt.plot(losses, label='Train') plt.plot(valid_losses, label='Valid') plt.legend() plt.pause(0.05) Tuning several hyperparameters, it looks like the model doesn't train well, the validation loss doesn't move at all e.g. hyperparams = Hyperparams(input_size=train_dataset.x.shape[1], output_size=1, hidden_size=150, loss_func=nn.MSELoss, learning_rate=1e-8, optimizer=optim.Adam, batch_size=500) And it's loss curve: Any idea what's wrong with the network? Am I training the regression model with the wrong loss? Or I've just not yet found the right hyperparameters? Or am I validating the model wrongly?
From the code you provided, it is tough to say why the validation loss is constant, but I see several problems in your code. Why do you validate on each training mini-batch? Instead, you should validate your model after training for one complete epoch (iterating over your full dataset once). So, the skeleton should be like:

for _e in range(num_epochs):
    for batch in tqdm(train_dataloader):
        # training code

    with torch.no_grad():
        for batch in tqdm(valid_dataloader):
            # validation code

    # plot your loss values

Also, you can plot after each epoch, not after each mini-batch. Did you check whether the model parameters are getting updated after optimizer.step() during training? How many validation examples do you have? Why don't you use mini-batch computation during validation? Why do you call optimizer.zero_grad() during validation? It doesn't make sense because, during validation, you are not going to do anything related to optimization. You should use model.eval() during validation to turn off the dropouts. See the PyTorch documentation to learn about the .train() and .eval() methods. The learning rate is set to 1e-8; isn't it too small? Why don't you use the default learning rate for Adam (1e-3)? The following requires some reasoning. Why are you using such a large batch size? What is your training dataset size? You can directly plot the MSELoss, instead of taking the square root. My suggestion would be: use some existing resources for MLPs in PyTorch. Don't do it from scratch if you do not have sufficient knowledge at this point; it would make you suffer a lot.
https://stackoverflow.com/questions/56069685/
How to convert keras LSTM to pytorch LSTM?
I have a very simple LSTM example written in Keras that I am trying to port to PyTorch. But it does not seem to be able to learn at all. I am an absolute beginner, so any advice is appreciated. KERAS: X_train_lmse has shape (1691, 1, 1); I am essentially running X(t) with X(t-1) as a single feature.

lstm_model = Sequential()
lstm_model.add(LSTM(7, input_shape=(1, X_train_lmse.shape[1]), activation='relu', kernel_initializer='lecun_uniform', return_sequences=False))
lstm_model.add(Dense(1))
lstm_model.compile(loss='mean_squared_error', optimizer='adam')
early_stop = EarlyStopping(monitor='loss', patience=2, verbose=1)
history_lstm_model = lstm_model.fit(X_train_lmse, y_train, epochs=100, batch_size=1, verbose=1, shuffle=False, callbacks=[early_stop])

Output:

Epoch 1/100 1691/1691 [==============================] - 10s 6ms/step - loss: 0.0236
Epoch 2/100 1691/1691 [==============================] - 9s 5ms/step - loss: 0.0076
Epoch 3/100
...

PYTORCH: X_train_tensor has the same shape as in Keras, (1691, 1, 1). I am specifying batch_first to be True below, so I think it should be OK.

class LSTM_model(nn.Module):
    def __init__(self):
        super(LSTM_model, self).__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=7, num_layers=1, batch_first=True)
        self.dense = nn.Linear(7, 1)

    def forward(self, x):
        out, states = self.lstm(x)
        out = self.dense(out)
        return out

lstm_model = LSTM_model()
loss_function = nn.MSELoss()
optimizer = optim.Adam(lstm_model.parameters())

for t in range(100):
    y_pred = lstm_model(X_train_tensor)
    loss = loss_function(y_pred, Y_train_tensor)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print('Train Epoch ', t, ' Loss = ', loss)

Output:

Train Epoch 0 Loss = tensor(0.2834, grad_fn=<MseLossBackward>)
Train Epoch 1 Loss = tensor(0.2812, grad_fn=<MseLossBackward>)
Train Epoch 2 Loss = tensor(0.2790, grad_fn=<MseLossBackward>)
Train Epoch 3 Loss = tensor(0.2768, grad_fn=<MseLossBackward>)
Train Epoch 4 Loss = tensor(0.2746, grad_fn=<MseLossBackward>)
Train Epoch 5 Loss = tensor(0.2725, grad_fn=<MseLossBackward>)
Train Epoch 6 Loss = tensor(0.2704, grad_fn=<MseLossBackward>)
Train Epoch 7 Loss = tensor(0.2683, grad_fn=<MseLossBackward>)
...

As you can see, the error barely moves in PyTorch. Also, each epoch runs much, much faster than in Keras. I must be doing something stupid. I checked the input data and it looks identical in both implementations. Thanks!
You are missing the ReLU activation function in your PyTorch model (see the ReLU layer in PyTorch); a sketch of the fix follows. Also, you seem to be using a customized kernel_initializer for the weights in Keras; in PyTorch you would initialize the parameters explicitly (e.g. via torch.nn.init). If you want to provide an initial hidden state, that can be passed in the model call:

...
y_pred = lstm_model(X_train_tensor, (hn, cn))
...
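A sketch of the activation fix applied to the forward method above (note: Keras' activation argument replaces the LSTM cell's internal tanh, which nn.LSTM does not expose, so applying ReLU to the outputs is only an approximation):

import torch.nn.functional as F

def forward(self, x):
    out, states = self.lstm(x)
    out = F.relu(out)        # approximate Keras activation='relu'
    out = self.dense(out)
    return out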
https://stackoverflow.com/questions/56084625/
Windows installing pytorch 0.3
I need to install pytorch==0.3 (I'm using conda), but when I run the install command, conda says that the needed packages are not available. Is there a way I can install it (possibly without using Ubuntu)?
peterjc123 released a version for Windows here: https://anaconda.org/peterjc123/pytorch
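If memory serves, the install command for that channel was along these lines (check the linked page for the exact package name and version):

conda install -c peterjc123 pytorch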
https://stackoverflow.com/questions/56087752/
Input Tensors not being moved to GPU in pytorch
When running my code, I get the error: Input and parameter tensors are not at the same device, found input tensor at cpu and parameter tensor at cuda:0 even though I'm using .cuda() on my inputs. Google Colab link Code: use_cuda = True if use_cuda and torch.cuda.is_available(): model.cuda() def test(): model.eval() avgLoss = 0 for dataPoint in range(len(testData)): lstmInput = testData[dataPoint][0] lstmInput = torch.Tensor(lstmInput) lstmInput = lstmInput.view(len(testData[dataPoint][0]), 1, 5) label = testData[dataPoint][1] label = torch.Tensor(label) lstmInput = Variable(lstmInput) label = Variable(label) if use_cuda and torch.cuda.is_available(): lstmInput.cuda() label.cuda() pred_label = model(lstmInput) loss = loss_fn(label, pred_label) avgLoss += loss.item() return avgLoss / len(testData) def train(num_epochs): model.train() for epoch in range(num_epochs): avgLoss = 0.0 for datapoint in range(len(trainData)): model.hidden = model.init_hidden() optimizer.zero_grad() lstmInput = trainData[datapoint][0] lstmInput = torch.Tensor(lstmInput) lstmInput = lstmInput.view(len(trainData[datapoint][0]), 1, 5) label = torch.Tensor(trainData[datapoint][1]) label = label.view(1, 5) lstmInput = Variable(lstmInput) label = Variable(label) if use_cuda and torch.cuda.is_available(): print("happens") lstmInput.cuda() label.cuda() pred_label = model(lstmInput) loss = loss_fn(pred_label, label) # print(label, pred_label) avgLoss += loss.item() loss.backward() optimizer.step() print("Epoch: ", epoch, "MSELoss: ", avgLoss / len(trainData), "Test Acc: ", test())
The cuda() method returns the tensor on the right GPU, so you need to assign it back to your input variables:

lstmInput, label = lstmInput.cuda(), label.cuda()
https://stackoverflow.com/questions/56105521/
How to calculate gradients on a tensor in PyTorch?
I want to calculate the gradient of a tensor; however, it gives an error: RuntimeError: grad can be implicitly created only for scalar outputs. Here is what I am trying to run:

x = torch.full((2,3), 4, requires_grad=True)
y = (2*x**2+3)
y.backward()

And at this point, it throws the error.
Since the output y is not reduced to a scalar (e.g. with .sum()), the issue can be fixed by:

y.backward(torch.ones_like(x))

which performs a Jacobian-vector product with a tensor of all ones and gets the gradient.
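Equivalently, you can reduce y to a scalar first:

x = torch.full((2, 3), 4.0, requires_grad=True)  # float fill value
y = 2 * x ** 2 + 3
y.sum().backward()
print(x.grad)  # dy/dx = 4x, so every entry is 16.0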
https://stackoverflow.com/questions/56111340/
What goes wrong when I load the state_dict of resnet50.pth with PyTorch?
i load the resnet50.pth and KeyError of 'state_dict' pytorch version is 0.4.1 i tried delete/add torch.nn.parallel but it didn't help and resnet50.pth loaded from pytorch API related code model = ResNet(len(CLASSES), pretrained=args.use_imagenet_weights) if cuda_is_available: model = nn.DataParallel(model, device_ids=[2]).cuda() if args.model: print("Loading model " + args.model) state_dict = torch.load(args.model)['state_dict'] model.load_state_dict(state_dict) Traceback Loading model resnet50-19c8e357.pth Traceback (most recent call last): File "train.py", line 67, in &lt;module&gt; state_dict = torch.load(args.model)['state_dict'] KeyError: 'state_dict' when print(torch.load(args.model).keys()) odict_keys(['conv1.weight', 'bn1.running_mean', 'bn1.running_var', 'bn1.weight', 'bn1.bias', 'layer1.0.conv1.weight', 'layer1.0.bn1.running_mean', 'layer1.0.bn1.running_var', 'layer1.0.bn1.weight', 'layer1.0.bn1.bias', 'layer1.0.conv2.weight', 'layer1.0.bn2.running_mean', 'layer1.0.bn2.running_var', 'layer1.0.bn2.weight', 'layer1.0.bn2.bias', 'layer1.0.conv3.weight', 'layer1.0.bn3.running_mean', 'layer1.0.bn3.running_var', 'layer1.0.bn3.weight', 'layer1.0.bn3.bias', 'layer1.0.downsample.0.weight', 'layer1.0.downsample.1.running_mean', 'layer1.0.downsample.1.running_var', 'layer1.0.downsample.1.weight', 'layer1.0.downsample.1.bias', 'layer1.1.conv1.weight', 'layer1.1.bn1.running_mean', 'layer1.1.bn1.running_var', 'layer1.1.bn1.weight', 'layer1.1.bn1.bias', 'layer1.1.conv2.weight', 'layer1.1.bn2.running_mean', 'layer1.1.bn2.running_var', 'layer1.1.bn2.weight', 'layer1.1.bn2.bias', 'layer1.1.conv3.weight', 'layer1.1.bn3.running_mean', 'layer1.1.bn3.running_var', 'layer1.1.bn3.weight', 'layer1.1.bn3.bias', 'layer1.2.conv1.weight', 'layer1.2.bn1.running_mean', 'layer1.2.bn1.running_var', 'layer1.2.bn1.weight', 'layer1.2.bn1.bias', 'layer1.2.conv2.weight', 'layer1.2.bn2.running_mean', 'layer1.2.bn2.running_var', 'layer1.2.bn2.weight', 'layer1.2.bn2.bias', 'layer1.2.conv3.weight', 'layer1.2.bn3.running_mean', 'layer1.2.bn3.running_var', 'layer1.2.bn3.weight', 'layer1.2.bn3.bias', 'layer2.0.conv1.weight', 'layer2.0.bn1.running_mean', 'layer2.0.bn1.running_var', 'layer2.0.bn1.weight', 'layer2.0.bn1.bias', 'layer2.0.conv2.weight', 'layer2.0.bn2.running_mean', 'layer2.0.bn2.running_var', 'layer2.0.bn2.weight', 'layer2.0.bn2.bias', 'layer2.0.conv3.weight', 'layer2.0.bn3.running_mean', 'layer2.0.bn3.running_var', 'layer2.0.bn3.weight', 'layer2.0.bn3.bias', 'layer2.0.downsample.0.weight', 'layer2.0.downsample.1.running_mean', 'layer2.0.downsample.1.running_var', 'layer2.0.downsample.1.weight', 'layer2.0.downsample.1.bias', 'layer2.1.conv1.weight', 'layer2.1.bn1.running_mean', 'layer2.1.bn1.running_var', 'layer2.1.bn1.weight', 'layer2.1.bn1.bias', 'layer2.1.conv2.weight', 'layer2.1.bn2.running_mean', 'layer2.1.bn2.running_var', 'layer2.1.bn2.weight', 'layer2.1.bn2.bias', 'layer2.1.conv3.weight', 'layer2.1.bn3.running_mean', 'layer2.1.bn3.running_var', 'layer2.1.bn3.weight', 'layer2.1.bn3.bias', 'layer2.2.conv1.weight', 'layer2.2.bn1.running_mean', 'layer2.2.bn1.running_var', 'layer2.2.bn1.weight', 'layer2.2.bn1.bias', 'layer2.2.conv2.weight', 'layer2.2.bn2.running_mean', 'layer2.2.bn2.running_var', 'layer2.2.bn2.weight', 'layer2.2.bn2.bias', 'layer2.2.conv3.weight', 'layer2.2.bn3.running_mean', 'layer2.2.bn3.running_var', 'layer2.2.bn3.weight', 'layer2.2.bn3.bias', 'layer2.3.conv1.weight', 'layer2.3.bn1.running_mean', 'layer2.3.bn1.running_var', 'layer2.3.bn1.weight', 'layer2.3.bn1.bias', 
'layer2.3.conv2.weight', 'layer2.3.bn2.running_mean', 'layer2.3.bn2.running_var', 'layer2.3.bn2.weight', 'layer2.3.bn2.bias', 'layer2.3.conv3.weight', 'layer2.3.bn3.running_mean', 'layer2.3.bn3.running_var', 'layer2.3.bn3.weight', 'layer2.3.bn3.bias', 'layer3.0.conv1.weight', 'layer3.0.bn1.running_mean', 'layer3.0.bn1.running_var', 'layer3.0.bn1.weight', 'layer3.0.bn1.bias', 'layer3.0.conv2.weight', 'layer3.0.bn2.running_mean', 'layer3.0.bn2.running_var', 'layer3.0.bn2.weight', 'layer3.0.bn2.bias', 'layer3.0.conv3.weight', 'layer3.0.bn3.running_mean', 'layer3.0.bn3.running_var', 'layer3.0.bn3.weight', 'layer3.0.bn3.bias', 'layer3.0.downsample.0.weight', 'layer3.0.downsample.1.running_mean', 'layer3.0.downsample.1.running_var', 'layer3.0.downsample.1.weight', 'layer3.0.downsample.1.bias', 'layer3.1.conv1.weight', 'layer3.1.bn1.running_mean', 'layer3.1.bn1.running_var', 'layer3.1.bn1.weight', 'layer3.1.bn1.bias', 'layer3.1.conv2.weight', 'layer3.1.bn2.running_mean', 'layer3.1.bn2.running_var', 'layer3.1.bn2.weight', 'layer3.1.bn2.bias', 'layer3.1.conv3.weight', 'layer3.1.bn3.running_mean', 'layer3.1.bn3.running_var', 'layer3.1.bn3.weight', 'layer3.1.bn3.bias', 'layer3.2.conv1.weight', 'layer3.2.bn1.running_mean', 'layer3.2.bn1.running_var', 'layer3.2.bn1.weight', 'layer3.2.bn1.bias', 'layer3.2.conv2.weight', 'layer3.2.bn2.running_mean', 'layer3.2.bn2.running_var', 'layer3.2.bn2.weight', 'layer3.2.bn2.bias', 'layer3.2.conv3.weight', 'layer3.2.bn3.running_mean', 'layer3.2.bn3.running_var', 'layer3.2.bn3.weight', 'layer3.2.bn3.bias', 'layer3.3.conv1.weight', 'layer3.3.bn1.running_mean', 'layer3.3.bn1.running_var', 'layer3.3.bn1.weight', 'layer3.3.bn1.bias', 'layer3.3.conv2.weight', 'layer3.3.bn2.running_mean', 'layer3.3.bn2.running_var', 'layer3.3.bn2.weight', 'layer3.3.bn2.bias', 'layer3.3.conv3.weight', 'layer3.3.bn3.running_mean', 'layer3.3.bn3.running_var', 'layer3.3.bn3.weight', 'layer3.3.bn3.bias', 'layer3.4.conv1.weight', 'layer3.4.bn1.running_mean', 'layer3.4.bn1.running_var', 'layer3.4.bn1.weight', 'layer3.4.bn1.bias', 'layer3.4.conv2.weight', 'layer3.4.bn2.running_mean', 'layer3.4.bn2.running_var', 'layer3.4.bn2.weight', 'layer3.4.bn2.bias', 'layer3.4.conv3.weight', 'layer3.4.bn3.running_mean', 'layer3.4.bn3.running_var', 'layer3.4.bn3.weight', 'layer3.4.bn3.bias', 'layer3.5.conv1.weight', 'layer3.5.bn1.running_mean', 'layer3.5.bn1.running_var', 'layer3.5.bn1.weight', 'layer3.5.bn1.bias', 'layer3.5.conv2.weight', 'layer3.5.bn2.running_mean', 'layer3.5.bn2.running_var', 'layer3.5.bn2.weight', 'layer3.5.bn2.bias', 'layer3.5.conv3.weight', 'layer3.5.bn3.running_mean', 'layer3.5.bn3.running_var', 'layer3.5.bn3.weight', 'layer3.5.bn3.bias', 'layer4.0.conv1.weight', 'layer4.0.bn1.running_mean', 'layer4.0.bn1.running_var', 'layer4.0.bn1.weight', 'layer4.0.bn1.bias', 'layer4.0.conv2.weight', 'layer4.0.bn2.running_mean', 'layer4.0.bn2.running_var', 'layer4.0.bn2.weight', 'layer4.0.bn2.bias', 'layer4.0.conv3.weight', 'layer4.0.bn3.running_mean', 'layer4.0.bn3.running_var', 'layer4.0.bn3.weight', 'layer4.0.bn3.bias', 'layer4.0.downsample.0.weight', 'layer4.0.downsample.1.running_mean', 'layer4.0.downsample.1.running_var', 'layer4.0.downsample.1.weight', 'layer4.0.downsample.1.bias', 'layer4.1.conv1.weight', 'layer4.1.bn1.running_mean', 'layer4.1.bn1.running_var', 'layer4.1.bn1.weight', 'layer4.1.bn1.bias', 'layer4.1.conv2.weight', 'layer4.1.bn2.running_mean', 'layer4.1.bn2.running_var', 'layer4.1.bn2.weight', 'layer4.1.bn2.bias', 'layer4.1.conv3.weight', 'layer4.1.bn3.running_mean', 
'layer4.1.bn3.running_var', 'layer4.1.bn3.weight', 'layer4.1.bn3.bias', 'layer4.2.conv1.weight', 'layer4.2.bn1.running_mean', 'layer4.2.bn1.running_var', 'layer4.2.bn1.weight', 'layer4.2.bn1.bias', 'layer4.2.conv2.weight', 'layer4.2.bn2.running_mean', 'layer4.2.bn2.running_var', 'layer4.2.bn2.weight', 'layer4.2.bn2.bias', 'layer4.2.conv3.weight', 'layer4.2.bn3.running_mean', 'layer4.2.bn3.running_var', 'layer4.2.bn3.weight', 'layer4.2.bn3.bias', 'fc.weight', 'fc.bias']) I just want to get it running, please.
From your edit (the printed keys), it seems that the file you are loading is the raw state dict itself; there is no 'state_dict' key wrapping it. So just use

state_dict = torch.load(args.model)

and pass that directly to model.load_state_dict. A caveat about the DataParallel wrapper follows.
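One more caveat for the code in the question: since the model there is wrapped in nn.DataParallel, the wrapper expects every key to be prefixed with module., so a raw state dict like this one should be loaded into the underlying module (a sketch):

state_dict = torch.load(args.model)
model.module.load_state_dict(state_dict)  # model is the DataParallel wrapper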
https://stackoverflow.com/questions/56115736/
Adding parameters to a pretrained model
In Pytorch, we load the pretrained model as follows: net.load_state_dict(torch.load(path)['model_state_dict']) Then the network structure and the loaded model have to be exactly the same. However, is it possible to load the weights but then modify the network/add an extra parameter? Note: If we add an extra parameter to the model earlier before loading the weights, e.g. self.parameter = Parameter(torch.ones(5),requires_grad=True) we will get Missing key(s) in state_dict: error when loading the weights.
Let's create a model and save its state.

class Model1(nn.Module):
    def __init__(self):
        super(Model1, self).__init__()
        self.encoder = nn.LSTM(100, 50)

    def forward(self):
        pass

model1 = Model1()
torch.save(model1.state_dict(), 'filename.pt')  # saving model

Then create a second model which has a few layers in common with the first model. Load the state of the first model and copy it into the common layers of the second model.

class Model2(nn.Module):
    def __init__(self):
        super(Model2, self).__init__()
        self.encoder = nn.LSTM(100, 50)
        self.linear = nn.Linear(50, 200)

    def forward(self):
        pass

model1_dict = torch.load('filename.pt')
model2 = Model2()
model2_dict = model2.state_dict()

# 1. filter out unnecessary keys
filtered_dict = {k: v for k, v in model1_dict.items() if k in model2_dict}
# 2. overwrite entries in the existing state dict
model2_dict.update(filtered_dict)
# 3. load the new state dict
model2.load_state_dict(model2_dict)
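Alternatively, load_state_dict has a strict flag that replaces the filtering boilerplate by ignoring missing and unexpected keys:

model2.load_state_dict(model1_dict, strict=False)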
https://stackoverflow.com/questions/56116892/
How to convert a label list under the multi-label classification context into one-hot encoding with PyTorch?
I have a list with one batch of data, with multiple labels for every sample. How can I convert it into a torch.Tensor in one-hot encoding? For example, with batch_size=5 and class_num=6:

label = [
    [1,2,3],
    [4,6],
    [1],
    [1,4,5],
    [4]
]

How do I turn it into the following one-hot encoding in PyTorch?

label_tensor = tensor([
    [1,1,1,0,0,0],
    [0,0,0,1,0,1],
    [1,0,0,0,0,0],
    [1,0,0,1,1,0],
    [0,0,0,1,0,0]
])
If the batch size can be derived from len(labels): def to_onehot(labels, n_categories, dtype=torch.float32): batch_size = len(labels) one_hot_labels = torch.zeros(size=(batch_size, n_categories), dtype=dtype) for i, label in enumerate(labels): # Subtract 1 from each LongTensor because your # indexing starts at 1 and tensor indexing starts at 0 label = torch.LongTensor(label) - 1 one_hot_labels[i] = one_hot_labels[i].scatter_(dim=0, index=label, value=1.) return one_hot_labels and you have 6 categories and want the output to be a tensor of integers: to_onehot(labels, n_categories=6, dtype=torch.int64) tensor([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0], [1, 0, 0, 1, 1, 0], [0, 0, 0, 1, 0, 0]]) I would stick to torch.float32 in case you want to use label smoothing, mix-up or something along those lines later.
https://stackoverflow.com/questions/56123419/
Saving Pytorch model.state_dict() to s3
I am trying to save a trained Pytorch model to S3. However, the torch.save(model.state_dict(), file_name) seems to support only local files. How can the state dict be saved to an S3 file? I'm using Torch 0.4.0
As discussed by Soumith Chintala, PyTorch doesn't have a custom API to do this job. However, you can use the boto3 or Petastorm libraries to solve the problem. Here's a concrete example that writes to an S3 object directly, by serializing the state dict to an in-memory buffer with torch.save:

import io
import boto3
import torch

# Serialize the state dict into an in-memory buffer
buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)
buffer.seek(0)

# Write the buffer to S3 as "model.pt"
client = boto3.client('s3')
client.put_object(Body=buffer.getvalue(), Bucket='BUCKET_NAME', Key='model.pt')
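Loading the weights back works the same way in reverse, since torch.load also accepts a file-like object (a sketch reusing the client from above):

obj = client.get_object(Bucket='BUCKET_NAME', Key='model.pt')
buffer = io.BytesIO(obj['Body'].read())
model.load_state_dict(torch.load(buffer))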
https://stackoverflow.com/questions/56144895/
Strange behavior of Inception_v3
I am trying to create a generative network based on the pre-trained Inception_v3. 1) I fix all the weights in the model 2) create a Variable whose size is (2, 3, 299, 299) 3) create targets of size (2, 1000) that I want my final layer activations to become as close as possible to by optimizing the Variable. (I do not set the batchsize of 1, because unlike VGG16, Inception_v3 doesn't take batchsize=1, but that's not the point). The following code should work, but gives me the error: «RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation». # minimalist code with Inception_v3 that throws the error: import torch from torch.autograd import Variable import torch.optim as optim import torch.nn as nn import torchvision torch.set_default_tensor_type('torch.FloatTensor') Iv3 = torchvision.models.inception_v3(pretrained=True) for i in Iv3.parameters(): i.requires_grad = False criterion = nn.CrossEntropyLoss() x = Variable(torch.randn(2, 3, 299, 299), requires_grad=True) target = torch.empty(2, dtype=torch.long).random_(1000) output = Iv3(x) loss = criterion(output[0], target) loss.backward() print(x.grad) This is very strange, because if I do the same thing with VGG16, everything works fine: # minimalist working code with VGG16: import torch from torch.autograd import Variable import torch.optim as optim import torch.nn as nn import torchvision # torch.cuda.empty_cache() # vgg16 = torchvision.models.vgg16(pretrained=True).cuda() # torch.set_default_tensor_type('torch.cuda.FloatTensor') torch.set_default_tensor_type('torch.FloatTensor') vgg16 = torchvision.models.vgg16(pretrained=True) for i in vgg16.parameters(): i.requires_grad = False criterion = nn.CrossEntropyLoss() x = Variable(torch.randn(2, 3, 229, 229), requires_grad=True) target = torch.empty(2, dtype=torch.long).random_(1000) output = vgg16(x) loss = criterion(output, target) loss.backward() print(x.grad) Please help.
Thanks to @iacolippo, the issue is solved. It turns out the problem was due to PyTorch 1.0.0; there is no problem with PyTorch 0.4.1, though.
https://stackoverflow.com/questions/56146262/
How to use torch to speed up some common computations?
I am trying to perform some common computations, like matrix multiplication, but without gradient computation. An example of my computation is:

import numpy as np
from scipy.special import logsumexp

var = 1e-8
a = np.random.randint(0, 10, (128, 20))
result = logsumexp(a, axis=1) / 2. + np.log(np.pi * var)

I want to use torch (GPU) to speed up the computation. Here is the code:

import numpy as np
import torch

var = 1e-8
a = np.random.randint(0, 10, (128, 20))
a = torch.from_numpy(a).cuda()
result = torch.logsumexp(a, dim=1) / 2. + np.log(np.pi * var)

But I have some questions: Could the above code speed up the computation? I don't know if it works. Do I need to convert all values into torch.Tensor, e.g. from var to torch.tensor(var).cuda() and from np.log(np.pi*var) to a torch.tensor? Do I need to move all tensors to the GPU by myself, especially for intermediate variables? If the above code doesn't work, how can I speed up the computation with the GPU?
You could use torch only to do the computations.

import torch

# optimization by passing the device argument: the tensor is created on the gpu,
# and hence a move operation is saved
# convert to float to use with logsumexp
a = torch.randint(0, 10, (128, 20), device="cuda").float()
result = torch.logsumexp(a, dim=1) / 2.

Answers to some of your questions: Could the above code speed up the computation? It depends. If you have many matrix multiplications, using a GPU can give a speed-up. Do I need to convert all values into torch.tensor, like from var to torch.tensor(var).cuda() and from np.log(np.pi*var) to a torch.tensor? Yes. Do I need to move all tensors to the GPU by myself, especially for intermediate variables? Only leaf variables need to be converted; intermediate variables will be placed on the device on which the operations are done. For example: if a and b are on the GPU, then as a result of the operation c = a + b, c will also be on the GPU. A sketch completing the original expression follows.
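Putting it together with the constant term from the original snippet (a sketch; math.log is used for the scalar constant, so no extra tensor is needed):

import math
import torch

var = 1e-8
a = torch.randint(0, 10, (128, 20), device="cuda").float()
result = torch.logsumexp(a, dim=1) / 2. + math.log(math.pi * var)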
https://stackoverflow.com/questions/56166455/
How to convert some tensorflow code into pytorch version
I recently started using PyTorch; I used the TensorFlow framework before. I have a piece of code that I implemented in TensorFlow, which I now want to convert to the PyTorch version. I'm new to PyTorch, I'm not familiar with its functions, and the conversion is not going smoothly, so I'd like to ask for help. Here's the code I want to convert:

def kl_loss_compute(logits1, logits2):
    """ KL loss """
    pred1 = tf.nn.softmax(logits1)
    pred2 = tf.nn.softmax(logits2)
    loss = tf.reduce_mean(tf.reduce_sum(pred2 * tf.log(1e-8 + pred2 / (pred1 + 1e-8)), 1))
    return loss

python: 3.6, ubuntu: 16.04. logits1 and logits2 are FC layers' outputs. Their shape is [batch, n].
Here is my Implementation (I am taking an example of logits of dimension [3,5]): Tensorflow Version: import tensorflow as tf def kl_loss_compute(logits1, logits2): """ KL loss """ pred1 = tf.nn.softmax(logits1) print(pred1.eval()) pred2 = tf.nn.softmax(logits2) print(pred2.eval()) loss = tf.reduce_mean(tf.reduce_sum(pred2 * tf.log(1e-8 + pred2 / (pred1 + 1e-8)), 1)) return loss x1 = tf.random.normal([3, 5], dtype=tf.float32) x2 = tf.random.normal([3, 5], dtype=tf.float32) with tf.Session() as sess: x1 = sess.run(x1) print(x1) x2 = sess.run(x2) print(x2) print(30*'=') print(sess.run(kl_loss_compute(x1, x2))) Output: [[ 0.9801388 -0.2514422 -0.28299806 0.85130763 0.4565948 ] [-1.0744809 0.20301117 0.21026622 1.0385195 0.41147012] [ 1.2385081 1.1003486 -2.0818367 -1.0446491 1.8817908 ]] [[ 0.04036871 0.82306993 0.82962424 0.5209219 -0.10473887] [ 1.7777447 -0.6257034 -0.68985045 -1.1191329 -0.2600192 ] [ 0.03387258 0.44405013 0.08010675 0.9131149 0.6422863 ]] ============================== [[0.32828477 0.09580362 0.09282765 0.2886025 0.19448158] [0.04786159 0.17170973 0.17296004 0.39596024 0.21150835] [0.2556382 0.22265059 0.00923886 0.02606533 0.48640704]] [[0.12704821 0.27790183 0.27972925 0.20543297 0.10988771] [0.7349108 0.06644011 0.062312 0.04056362 0.09577343] [0.12818882 0.19319147 0.13425465 0.30881628 0.23554876]] 0.96658206 PyTorch Version: def kl_loss_compute(logits1, logits2): """ KL loss """ pred1 = torch.softmax(logits1, dim=-1, dtype=torch.float32) print(pred1) pred2 = torch.softmax(logits2, dim=-1, dtype=torch.float32) print(pred2) loss = torch.mean(torch.sum(pred2 * torch.log(1e-8 + pred2 / (pred1 + 1e-8)), -1)) return loss # same inputs are used here as above(see the inputs used in tensorflow code in the output) x = torch.Tensor([[ 0.9801388, -0.2514422 , -0.28299806 , 0.85130763, 0.4565948 ], [-1.0744809 , 0.20301117, 0.21026622, 1.0385195, 0.41147012], [ 1.2385081 , 1.1003486, -2.0818367, -1.0446491, 1.8817908 ]]) y = torch.Tensor([[ 0.04036871 , 0.82306993, 0.82962424, 0.5209219, -0.10473887], [ 1.7777447 ,-0.6257034, -0.68985045, -1.1191329, -0.2600192 ], [ 0.03387258 , 0.44405013 , 0.08010675, 0.9131149, 0.6422863 ]]) print(kl_loss_compute(x, y)) Output: tensor([[0.3283, 0.0958, 0.0928, 0.2886, 0.1945], [0.0479, 0.1717, 0.1730, 0.3960, 0.2115], [0.2556, 0.2227, 0.0092, 0.0261, 0.4864]]) tensor([[0.1270, 0.2779, 0.2797, 0.2054, 0.1099], [0.7349, 0.0664, 0.0623, 0.0406, 0.0958], [0.1282, 0.1932, 0.1343, 0.3088, 0.2355]]) tensor(0.9666)
https://stackoverflow.com/questions/56168482/
How to use multiprocessing in PyTorch?
I'm trying to use PyTorch with a complex loss function. In order to accelerate the code, I hope that I can use the PyTorch multiprocessing package. In the first trial, I put 10x1 features into the NN and get a 10x4 output. After that, I want to pass the 10x4 parameters into a function to do some calculation. (The calculation will be complex in the future.) After calculating, the function will return a 10x1 array in total. This array will be set as NN_energy and used to calculate the loss function. Besides, I also want to know if there is another method to create a backward-able array to store the NN_energy array, instead of using NN_energy = net(Data_in)[0:10,0] Thanks a lot. Full Code: import torch import numpy as np from torch.autograd import Variable from torch import multiprocessing def func(msg,BOP): ans = (BOP[msg][0]+BOP[msg][1]/BOP[msg][2])*BOP[msg][3] return ans class Net(torch.nn.Module): def __init__(self, n_feature, n_hidden_1, n_hidden_2, n_output): super(Net, self).__init__() self.hidden_1 = torch.nn.Linear(n_feature , n_hidden_1) # hidden layer self.hidden_2 = torch.nn.Linear(n_hidden_1, n_hidden_2) # hidden layer self.predict = torch.nn.Linear(n_hidden_2, n_output ) # output layer def forward(self, x): x = torch.tanh(self.hidden_1(x)) # activation function for hidden layer x = torch.tanh(self.hidden_2(x)) # activation function for hidden layer x = self.predict(x) # linear output return x if __name__ == '__main__': # apply_async Data_in = Variable( torch.from_numpy( np.asarray(list(range( 0,10))).reshape(10,1) ).float() ) Ground_truth = Variable( torch.from_numpy( np.asarray(list(range(20,30))).reshape(10,1) ).float() ) net = Net( n_feature=1 , n_hidden_1=15 , n_hidden_2=15 , n_output=4 ) # define the network optimizer = torch.optim.Rprop( net.parameters() ) loss_func = torch.nn.MSELoss() # this is for regression mean squared loss NN_output = net(Data_in) args = range(0,10) pool = multiprocessing.Pool() return_data = pool.map( func, zip(args, NN_output) ) pool.close() pool.join() NN_energy = net(Data_in)[0:10,0] for i in range(0,10): NN_energy[i] = return_data[i] loss = torch.sqrt( loss_func( NN_energy , Ground_truth ) ) # must be (1. nn output, 2. target) print(loss) Error messages: File "C:\ProgramData\Anaconda3\lib\site-packages\torch\multiprocessing\reductions.py", line 126, in reduce_tensor raise RuntimeError("Cowardly refusing to serialize non-leaf tensor which requires_grad, " RuntimeError: Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries. If you just want to transfer the data, call detach() on the tensor before serializing (e.g., putting it on the queue).
First of all, the Torch Variable API has been deprecated for a very long time; just don't use it. Next, torch.from_numpy( np.asarray(list(range( 0,10))).reshape(10,1) ).float() is wrong at many levels: np.asarray of a list is useless since a copy will be performed anyway, and np.array takes a list as input by design. Then, np.arange is available to return a range as a numpy array, and it is also available in Torch. Next, specifying both dimensions for reshape is unnecessary and error-prone; you could simply do reshape((-1, 1)), or even better unsqueeze(-1). Here is the simplified expression: torch.arange(10, dtype=torch.float32, requires_grad=True).unsqueeze(-1). Using a multiprocessing pool is bad practice if batch processing is possible. Batching will be both more efficient and more readable. Indeed, performing N small algebraic operations in parallel is always slower than one larger single algebraic operation, and even more so on GPU. More importantly, computing the gradient is not supported by multiprocessing, hence the error that you get. Yet, this is only partially true, because it is supported for tensors on CPU since 1.6.0. Have a look at the official release changelog. Could you post a more representative example of what the func method could be, to make sure you really need it? NB: Distributed autograd, which seems to be what you are looking for, is now available in PyTorch as an experimental feature, in beta since 1.6.0. Have a look at the official documentation.
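To illustrate the batching point: assuming func really is just the row-wise formula shown in the question, a vectorized sketch that stays inside autograd could look like this:
import torch

# hypothetical batched replacement for the pool.map(func, ...) loop;
# BOP is the (10, 4) network output and each row yields one scalar
def func_batched(BOP):
    return (BOP[:, 0] + BOP[:, 1] / BOP[:, 2]) * BOP[:, 3]

# NN_energy = func_batched(net(Data_in))  # shape (10,), fully differentiable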
https://stackoverflow.com/questions/56174874/
DataParallel multi-gpu RuntimeError: chunk expects at least a 1-dimensional tensor
I am trying to run my model on multiple gpus using DataParallel by setting model = nn.DataParallel(model).cuda(), but everytime getting this error - RuntimeError: chunk expects at least a 1-dimensional tensor (chunk at /pytorch/aten/src/ATen/native/TensorShape.cpp:184). My code is correct. Does anyone know what's wrong? I have tried setting device_ids=[0,1] parameter and also CUDA_VISIBLE_DEVICES on the terminal. Also tried different batch sizes.
To identify the problem, you should check the shape of your input data for each mini-batch. The documentation says nn.DataParallel splits the input tensor along dim0 and sends each chunk to one of the specified GPUs. From the error message, it seems you are trying to pass a 0-dimensional tensor. One possible cause: if you have a mini-batch with n examples and you are running your program on more than n GPUs, you will get this error. Let's consider the following scenario. Total training examples = 161, batch size = 80, total mini-batches = 3. Number of GPUs specified for DataParallel = 3. In the above scenario, the 3rd mini-batch will contain only 1 example, so it is not possible to send a chunk to every specified GPU, and you will receive the error message. So, please check whether you are affected by this issue.
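If that is indeed the cause, one simple workaround (a sketch; the original data pipeline isn't shown, so the names here are assumptions) is to drop the incomplete final mini-batch:
from torch.utils.data import DataLoader

# drop_last=True discards the final, smaller mini-batch so every batch
# can be chunked across all GPUs used by nn.DataParallel
loader = DataLoader(dataset, batch_size=80, shuffle=True, drop_last=True)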
https://stackoverflow.com/questions/56177305/
How to calculate correct batch size for LSTM?
I have daily time series data like below. CashIn CashOut Date 2016-01-01 0.0 6500.0 2016-01-02 0.0 23110.0 2016-01-03 0.0 7070.0 2016-01-04 0.0 18520.0 2016-01-05 20840.0 22200.0 . . . 2019-03-25 59880.0 25500.0 2019-03-26 49270.0 17860.0 2019-03-27 45160.0 48600.0 2019-03-28 39480.0 22840.0 2019-03-29 70260.0 25950.0 2019-03-30 19250.0 24350.0 2019-03-31 46870.0 14400.0 My total data size is 1186. I want to forecast the CashIn and CashOut values between 2019-04-01 and 2019-04-30 by using LSTM. I wrote a batch calculator like below. def get_batches(arr, batch_size, seq_length): batch_size_total = batch_size * seq_length n_batches = len(arr)//batch_size_total arr = arr[:n_batches * batch_size_total] arr = arr.reshape((batch_size, -1)) for n in range(0, arr.shape[1], seq_length): x = arr[:, n:n+seq_length] y = np.zeros_like(x) try: y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+seq_length] except IndexError: y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0] yield x, y I am trying to divide this dataset into batches with sequence length 30 using the get_batches function, because I have a daily time series and I want to predict the next 30 days. batches = get_batches(np.array(data_cashIn), 40, 30) If I write 39 instead of 40 as a parameter in the get_batches function, then I lose the last 16 days of data, but I do not want to lose this data. How can I do this correctly?
I think you'll always end up with a number that doesn't work out evenly, because this approach isn't best practice. I suggest that you use the DataLoader, which will easily load batches for you (and here's how you can have a custom dataset fed to the DataLoader). By giving the batch_size to the DataLoader, it will split your dataset into the maximum possible number of batches of batch_size, with the last batch being <=batch_size. In regard to the LSTM, use batch_first=True and have your batch in this shape (batch, seq, feature). This will spare you the headache of choosing a specific size, and input_size must be equal to feature.
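A minimal sketch of that suggestion (the window length of 30 is taken from the question; the class and variable names are assumptions):
import torch
from torch.utils.data import Dataset, DataLoader

class CashDataset(Dataset):
    # each item is a window of seq_length days plus the following day as target
    def __init__(self, series, seq_length=30):
        self.series = torch.tensor(series, dtype=torch.float32)
        self.seq_length = seq_length

    def __len__(self):
        return len(self.series) - self.seq_length

    def __getitem__(self, idx):
        x = self.series[idx : idx + self.seq_length]
        y = self.series[idx + self.seq_length]
        return x, y

# loader = DataLoader(CashDataset(data_cashIn), batch_size=40, shuffle=False)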
https://stackoverflow.com/questions/56201842/
Can’t convert np.ndarray of type numpy.bool_
I am wondering why I am getting this error when I used: Y_train_class = torch.tensor(Y_train_class.values) TypeError: can't convert np.ndarray of type numpy.bool_. The only supported types are: double, float, float16, int64, int32, and uint8. I have tried to convert my data to float, but it seems to have failed X_train = pd.read_csv('c:/Data/x_train_set_yu.csv', header= None) Y_train = pd.read_csv('c:/Data/y_train_set_yu.1.csv', header= None) Y_train_class = (Y_train >= 550) print (Y_train_class) X_test = pd.read_csv('c:/Data/X_test.csv',header= None) X_train = torch.tensor(X_train.values) Y_train.astype(np.float32) Y_train_class.astype(np.float32) Y_train_class = torch.tensor(Y_train_class.values) TypeError: can't convert np.ndarray of type numpy.bool_. The only supported types are: double, float, float16, int64, int32, and uint8.
I got this type of error when my Y_train had string values instead of integers. After replacing the strings with integers, my error was resolved.
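For the boolean case in the question specifically, the likely culprit (an assumption based on the posted snippet) is that astype returns a new object rather than modifying in place, so the cast was silently discarded; a sketch of the fix:
import numpy as np
import torch

# astype() returns a copy, so the result must be reassigned
Y_train_class = Y_train_class.astype(np.float32)
Y_train_class = torch.tensor(Y_train_class.values)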
https://stackoverflow.com/questions/56203536/
Getting the last layer from a pretrained pytorch model for transfer learning?
This is what I did: list(tmp.state_dict().keys())[-1].split('.')[0] What is the proper way? My goal is to replace the last layer for the purpose of transfer learning.
You can simply follow these steps to remove the last layer from a pretrained pytorch model: Get the layers by using model.children(). Convert this into a list by calling list() on it. Remove the last layer by indexing the list. Finally, use the PyTorch function nn.Sequential() to stack this modified list together into a new model. nn.Sequential(*list(model.children())[:-1]) You can read more about this here.
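Since the stated goal is transfer learning, it is often enough to replace (rather than drop) the final layer; a sketch assuming a torchvision ResNet, whose head is named fc, and a hypothetical num_classes:
import torch.nn as nn
import torchvision.models as models

model = models.resnet50(pretrained=True)
# freeze the pretrained backbone
for param in model.parameters():
    param.requires_grad = False
# swap the classification head for the new task (num_classes is hypothetical)
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)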
https://stackoverflow.com/questions/56212588/
How to upload and read a zip file containing training and testing image data in Google Colab from my PC
I am new to google colab. I am implementing a pretrained vgg16 and resnet50 model using pytorch, but I am unable to load my file and read it, as it returns an error that no directory is found. I have uploaded the data through the Files tab, and I have also tried uploading it using from google.colab import files uploaded = files.upload() The file got uploaded, but when I tried to unzip it (because it is a zip file) using !unzip content/cropped_months then it says no file found import torch import torch.nn as nn import torch.optim as optim from torchvision.transforms import * from torch.optim import lr_scheduler from torch.autograd import Variable import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import copy from google.colab import files uploaded = files.upload() !unzip content/cropped_months data_dir = 'content/cropped_months' #Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30),transforms.RandomResizedCrop(224),transforms.RandomHorizontalFlip(),transforms.ToTensor(),transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) test_transforms = transforms.Compose([transforms.Resize(256),transforms.CenterCrop(224),transforms.ToTensor(),transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225])]) #pass transform here-in train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) #data loaders trainloader = torch.utils.data.DataLoader(train_data, batch_size=8, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=8, shuffle=True) print("Classes: ") class_names = train_data.classes print(class_names) first error unzip: cannot find or open content/cropped_months, content/cropped_months.zip or content/cropped_months.ZIP. second error --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) in () 16 17 #pass transform here-in ---> 18 train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) 19 test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) 20 2 frames /usr/local/lib/python3.6/dist-packages/torchvision/datasets/folder.py in _find_classes(self, dir) 114 if sys.version_info >= (3, 5): 115 # Faster and available in Python 3.5 and above --> 116 classes = [d.name for d in os.scandir(dir) if d.is_dir()] 117 else: 118 classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))] FileNotFoundError: [Errno 2] No such file or directory: 'content/cropped_months (1)/train'
You are probably trying to access the wrong path. In my notebook, the file was uploaded to the working directory. Use google.colab.files to upload the zip. from google.colab import files files.upload() Upload your file. Google Colab will display where it was saved: Saving dummy.zip to dummy.zip Then just run !unzip: !unzip dummy.zip
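Applied to the question's setup, the fix would presumably look like this (assuming the archive is named cropped_months.zip and lands in the working directory, as Colab does by default):
from google.colab import files

files.upload()                 # upload cropped_months.zip to the working directory
!unzip cropped_months.zip      # note: no 'content/' prefix
data_dir = './cropped_months'  # point ImageFolder at the extracted folder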
https://stackoverflow.com/questions/56218134/
RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor
I keep getting the error message below. I cannot seem to pinpoint the tensor mentioned. Below you'll find the trainer.py and main.py modules. The model I am developing is a GAN on the CelebA dataset. I am running the code on a remote server, so I have spent a fair amount of time debugging my model. This is the full error message: Traceback (most recent call last): File "main.py", line 52, in <module> main(opt) File "main.py", line 47, in main trainer.train(train_loader) File "/home/path/trainer.py", line 45, in train d_loss_cls = F.binary_cross_entropy_with_logits(out_cls, label_org, size_average=False) / out_cls.size(0) File "/home/path/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2077, in binary_cross_entropy_with_logits return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum) RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor trainer.py from tqdm import tqdm import torch import torch.nn.functional as F from model import Discriminator, Generator from tensorboardX import SummaryWriter class Trainer(): def __init__(self, opt): # Generator self.G = Generator(64, 5, 6) # Discriminator self.D = Discriminator(128, 64, 5, 6) # Generator optimizer self.g_optimizer = torch.optim.Adam(self.G.parameters(), opt.lr) self.d_optimizer = torch.optim.Adam(self.D.parameters(), opt.lr) self.opt = opt if self.opt.cuda: self.G = self.G.cuda() self.D = self.D.cuda() def train(self, data_loader): """Function to train the model """ print('Training model') writer_d = SummaryWriter('runs/disc') # discriminator writer writer_g = SummaryWriter('runs/gen') # generator writer print('Start training...') for epoch in tqdm(range(self.opt.epochs)): for x_real, label_org in tqdm(data_loader): pass # Generate target domain labels randomly.
rand_idx = torch.randperm(label_org.size(0)) label_trg = label_org[rand_idx] c_org = label_org.clone() c_trg = label_org.clone() if self.opt.cuda: x_real = x_real.cuda() # Input images c_org = c_org.cuda() # Original domain labels c_trg = c_trg.cuda() # Target domain labels label_org = label_org.cuda() # Labels for computing classification loss label_trg = label_trg.cuda() # Labels for computing classification loss out_src, out_cls = self.D(x_real) d_loss_real = - torch.mean(out_src) d_loss_cls = F.binary_cross_entropy_with_logits(out_cls, label_org, size_average=False) / out_cls.size(0) # Compute loss with fake images x_fake = self.G(x_real, c_trg) out_src, out_cls = self.D(x_fake.detach()) d_loss_fake = torch.mean(out_src) # Compute loss for gradient penalty alpha = torch.rand(x_real.size(0), 1, 1, 1).cuda() x_hat = (alpha * x_real.data + (1 - alpha) * x_fake.data).requires_grad_(True) out_src, _ = self.D(x_hat) # Backward and optimize d_loss = d_loss_real + d_loss_fake + d_loss_cls self.g_optimizer.zero_grad() self.d_optimizer.zero_grad() d_loss.backward() self.d_optimizer.step() if (i + 1) % 2 == 0: # Original-to-target domain x_fake = self.G(x_real, c_trg) out_src, out_cls = self.D(x_fake) g_loss_fake = - torch.mean(out_src) g_loss_cls = F.binary_cross_entropy_with_logits(out_cls, label_trg, size_average=False) / out_cls.size(0) # Target-to-original domain x_reconst = self.G(x_fake, c_org) g_loss_rec = torch.mean(torch.abs(x_real - x_reconst)) # Backward and optimize g_loss = g_loss_fake + g_loss_rec self.g_optimizer.zero_grad() self.d_optimizer.zero_grad() g_loss.backward() self.g_optimizer.step() # write loss to tensorboard writer_d.add_scalar('data/loss', d_loss, epoch) writer_d.add_scalar('data/loss', g_loss, epoch) print('Finished Training') def test(self, data_loader): with torch.no_grad(): for i, (x_real, c_org) in enumerate(data_loader): # Prepare input images and target domain labels. if self.opt.cuda: x_real = x_real.cuda() # Translate images. 
x_fake_list = [x_real] for c_trg in c_trg_list: x_fake_list.append(self.G(x_real, c_trg)) main.py import argparse import random import torch from torch import nn import torch.optim as optim from torch.utils.data import DataLoader import torchvision.transforms as transforms from preprocess import pre_process from celeb_dataset import CelebDataset from trainer import Trainer # Setting up the argument parser parser = argparse.ArgumentParser() parser.add_argument('--workers', type=int, help='number of data loading workers', default=4) parser.add_argument('--batchSize', type=int, default=8, help='input batch size') parser.add_argument('--epochs', type=int, default=20, help='number of epochs to train') parser.add_argument('--lr', type=float, default=0.0002, help='learning rate') parser.add_argument('--cuda', action='store_true', help='enables cuda') parser.add_argument('--manualSeed', type=int, help='manual seed') parser.add_argument('--dataset_path', type=str, default='./data/celeba', help='dataset path') opt = parser.parse_args() print(opt) if opt.manualSeed is None: opt.manualSeed = random.randint(1, 10000) print("Random Seed: ", opt.manualSeed) def main(opt): # Setup the parameters for the training/testing params = { 'batch_size': opt.batchSize, 'shuffle': True, 'num_workers': opt.workers } # preprocess and setup dataset and datalader processed_data = pre_process(opt.dataset_path) train_dataset = CelebDataset(processed_data[:-2000]) test_dataset = CelebDataset(processed_data[2000:]) train_loader = DataLoader(train_dataset, **params) test_loader = DataLoader(test_dataset, **params) trainer = Trainer(opt) trainer.train(train_loader) trainer.test(test_loader) if __name__ == "__main__": main(opt)
You are getting that error because one of out_cls, label_org is not on the GPU. Where does your code actually enable the parser.add_argument('--cuda', action='store_true', help='enables cuda') option? The Trainer only moves the models and tensors to the GPU when opt.cuda is true, so make sure the script is launched with the flag set: python main.py --cuda Note that Trainer is a plain Python class, not an nn.Module, so calling something like trainer.cuda() would not work; the .cuda() moves have to happen inside Trainer, as they already do under the opt.cuda guard.
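A defensive pattern (a sketch, not taken from the posted code) is to move the labels onto whatever device the model output lives on, so a missing flag cannot cause a device mismatch:
# inside Trainer.train, just before the classification loss
label_org = label_org.to(out_cls.device)  # follow the model output's device
d_loss_cls = F.binary_cross_entropy_with_logits(out_cls, label_org, size_average=False) / out_cls.size(0)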
https://stackoverflow.com/questions/56240001/
AttributeError: module 'torch' has no attribute '_six'. Bert model in Pytorch
I tried to load a pre-trained model by using the BertModel class in pytorch. I have _six.py under torch, but it still shows module 'torch' has no attribute '_six' import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM # Load pre-trained model (weights) model = BertModel.from_pretrained('bert-base-uncased') model.eval() ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __setattr__(self, name, value) 551 .format(torch.typename(value), name)) 552 modules[name] = value --> 553 else: 554 buffers = self.__dict__.get('_buffers') 555 if buffers is not None and name in buffers: ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in register_parameter(self, name, param) 140 raise KeyError("parameter name can't be empty string \"\"") 141 elif hasattr(self, name) and name not in self._parameters: --> 142 raise KeyError("attribute '{}' already exists".format(name)) 143 144 if param is None: AttributeError: module 'torch' has no attribute '_six'
In a Jupyter notebook, simply restarting the kernel works fine; the error typically appears when torch was upgraded or reinstalled while the old version was still loaded in the running session.
https://stackoverflow.com/questions/56241856/
How can I force torch.jit.trace to compile my module by ignoring hooks?
I have a module containing hooks, and I would like to compile it with jit's trace: compiled_model = torch.jit.trace(model, torch.rand(1, 3, 256, 256)) But I get the error: ValueError: Modules that have hooks assigned can't be compiled How can I force trace to ignore the hooks?
If you want to bypass trace's check, you can recursively remove all the hooks from your model. This can be done by iterating over the children: from collections import OrderedDict def remove_hooks(model): model._backward_hooks = OrderedDict() model._forward_hooks = OrderedDict() model._forward_pre_hooks = OrderedDict() for child in model.children(): remove_hooks(child) Then you can force the compilation: remove_hooks(model) compiled_model = torch.jit.trace(model, torch.rand(1, 3, 256, 256)) But if the hooks are actually doing real work and you want to keep them in the trace (which was my case), you can comment out torch's raise in torch/jit/__init__.py, i.e. the lines: if orig._backward_hooks or orig._forward_hooks or orig._forward_pre_hooks: raise ValueError("Modules that have hooks assigned can't be compiled") It worked for me, and I managed to compile a fastai model.
https://stackoverflow.com/questions/56242857/
Expected target size (50, 88), got torch.Size([50, 288, 88])
I am trying to train my neural network. Training the model works correctly, but I can't calculate the loss. The output and the target have the same dimension. I had tried to use torch.stack, but I can't because the size of each input is (252, x), where x is the same within the 252 elements but is different across inputs. I use a custom Dataset: class MusicDataSet(Dataset): def __init__(self, transform=None): self.ms, self.target, self.tam = sd.cargarDatos() self.mean, self.std = self.NormalizationValues() def __len__(self): return self.tam def __getitem__(self, idx): #Normalize inp = (self.ms[idx]-self.mean)/self.std inp = torch.from_numpy(inp).float() inp = inp.t() inp = inp.to('cuda') target= torch.from_numpy(self.target[idx]) target = target.long() target = target.t() target = target.to('cuda') return inp, target I must say that the list can't be cast with something like target = torch.Tensor() or torch.stack(), because of this (252, x) shape, as I have already said. def music_collate_fn(batch): data = [item[0] for item in batch] data = pad_sequence(data, batch_first=True) target = [item[0] for item in batch] target = pad_sequence(target, batch_first=True) return data, target musicSet = mds.MusicDataSet() train_loader = torch.utils.data.DataLoader(musicSet,batch_size=50, collate_fn = music_collate_fn, shuffle=False) input_dim = 252 hidden_dim = (512,1024,512) output_dim = 88 mlp = rn.MLP(input_dim, hidden_dim, output_dim).to(device) optimizer = torch.optim.RMSprop(mlp.parameters(), lr = learning_rate) criterion = nn.CrossEntropyLoss() for batch_idx, (x,y) in enumerate(train_loader): outputs = mlp(x.to(device)) loss = criterion(outputs, y) optimizer.zero_grad() loss.backward() optimizer.step() The size of output and target is the same, output: torch.Size([50, 288, 88]) target: torch.Size([50, 288, 88]) But the following error appears when I try to calculate the loss: File "<ipython-input-205-3c47d7aa11a4>", line 32, in <module> loss = criterion(outputs, y) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\loss.py", line 904, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1970, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1800, in nll_loss out_size, target.size())) ValueError: Expected target size (50, 88), got torch.Size([50, 288, 88])
I think you are using CrossEntropyLoss incorrectly. See the documentation here. In particular, if the input is of shape [N x C x d], then the target should be of shape [N x d], and the values in the target are integers between 0 and C-1; i.e. you can just provide the class labels, and it is not required to one-hot encode the target variable. The error message states the same.
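Applied to the shapes in the question, a sketch of the fix (assuming the current target is a one-hot encoding over the 88 classes) could be:
# outputs: [50, 288, 88] -> CrossEntropyLoss wants the class dim second: [50, 88, 288]
outputs = outputs.permute(0, 2, 1)
# y: one-hot [50, 288, 88] -> integer class indices [50, 288]
y = y.argmax(dim=-1)
loss = criterion(outputs, y)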
https://stackoverflow.com/questions/56243672/
Reshape and pad a tensor given a list of lengths
I have given a 2d tensor in of shape a x b like the following (where a = 9 and each of A1, A2, ..., C2 represents a b-dimensional vector): Furthermore, I have an array of lengths, where sum(lengths) = a and each entry is a positive integer: Then I would like to obtain a 3d output tensor out, where the first lengths[0] entries of in form the first row, the next lengths[1] entries of in form the second row, and so on. That is, the output tensor should have the shape len(lengths) x max(lengths) x b, and be padded with zeros (each 0 in the below picture represents a b-dimensional zero vector): As this is part of a neural network that is trained using backpropagation, all operations used must be differentiable. How can this be achieved (ideally, with good performance) using PyTorch?
Here is my implementation using torch.nn.utils.rnn.pad_sequence(): in_tensor = torch.rand((9, 3)) print(in_tensor) print(36*'=') lengths = torch.tensor([3, 4, 2]) cum_len = 0 y = [] for idx, val in enumerate(lengths): y.append(in_tensor[cum_len : cum_len+val]) cum_len += val print(torch.nn.utils.rnn.pad_sequence(y, batch_first=True)) output: # in_tensor of shape (9 x 3) tensor([[0.9169, 0.3549, 0.6211], [0.4832, 0.5475, 0.8862], [0.8708, 0.5462, 0.9374], [0.4605, 0.1167, 0.5842], [0.1670, 0.2862, 0.0378], [0.2438, 0.5742, 0.4907], [0.1045, 0.5294, 0.5262], [0.0805, 0.2065, 0.2080], [0.6417, 0.4479, 0.0688]]) ==================================== # out tensor of shape (len(lengths) x max(lengths) x b), in this case b is 3 tensor([[[0.9169, 0.3549, 0.6211], [0.4832, 0.5475, 0.8862], [0.8708, 0.5462, 0.9374], [0.0000, 0.0000, 0.0000]], [[0.4605, 0.1167, 0.5842], [0.1670, 0.2862, 0.0378], [0.2438, 0.5742, 0.4907], [0.1045, 0.5294, 0.5262]], [[0.0805, 0.2065, 0.2080], [0.6417, 0.4479, 0.0688], [0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000]]])
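As a side note, the slicing loop can also be written with torch.split, which accepts a list of chunk sizes directly (a sketch equivalent to the loop above):
import torch
from torch.nn.utils.rnn import pad_sequence

# split in_tensor into chunks of the given lengths, then pad to max(lengths)
chunks = torch.split(in_tensor, lengths.tolist())
out = pad_sequence(chunks, batch_first=True)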
https://stackoverflow.com/questions/56258174/
How to understand this on pytorch website?
I noticed this on the pytorch official website: https://pytorch.org/docs/stable/nn.html If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) V100 GPU is used, and 5) input data is not in PackedSequence format. Then, persistent algorithm can be selected to improve performance. Could anyone explain it? Thanks.
This refers to a very low-level performance optimization of GPU cache usage, which is explained more in depth here (note: this is not PyTorch material, but I believe it does a good enough job of explaining). In other words, if all the conditions are satisfied, PyTorch will default to a different algorithm under the hood, hopefully providing higher RNN performance.
https://stackoverflow.com/questions/56285876/
Pytorch: no CUDA-capable device is detected on Linux
When trying to run some Pytorch code I get this error: THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=74 error=38 : no CUDA-capable device is detected Traceback (most recent call last): File "demo.py", line 173, in test pca = torch.FloatTensor( np.load('../basics/U_lrw1.npy')[:,:6]).cuda() RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:74 I am running a cloud virtual machine using the 'Google Deep Learning VM' Version: tf-gpu.1-13.m25 Based on: Debian GNU/Linux 9.9 (stretch) (GNU/Linux 4.9.0-9-amd64 x86_64\n) Linux tf-gpu-interruptible 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1 (2019-04-12) x86_64 Environment info: $ nvidia-smi Sun May 26 05:32:33 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 410.72 Driver Version: 410.72 CUDA Version: 10.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 | | N/A 42C P0 74W / 149W | 0MiB / 11441MiB | 100% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ $ echo $CUDA_PATH $ echo $LD_LIBRARY_PATH /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64 $ env | grep CUDA CUDA_VISIBLE_DEVICES=0 $ pip freeze DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. audioread==2.1.7 backports.functools-lru-cache==1.5 certifi==2019.3.9 chardet==3.0.4 cloudpickle==1.1.1 cycler==0.10.0 dask==1.2.2 decorator==4.4.0 dlib==19.17.0 enum34==1.1.6 filelock==3.0.12 funcsigs==1.0.2 future==0.17.1 gdown==3.8.1 idna==2.8 joblib==0.13.2 kiwisolver==1.1.0 librosa==0.6.3 llvmlite==0.28.0
I couldn't identify the main reason for your problem, but I noticed one thing: GPU-Util is at 100% while no processes are running. You can try the following. sudo nvidia-smi -pm 1 enables persistence mode, which might solve your problem; the combination of ECC with non-persistence mode can lead to 100% GPU utilization. You can also disable ECC with the command nvidia-smi -e 0 Or, best of all, restart the whole process from the beginning, i.e. reboot the operating system once again. Note: I'm not sure whether this will work for you or not; I faced a similar issue earlier, so I am just speaking from my experience. Hope this helps.
https://stackoverflow.com/questions/56311034/
How to initialize the weights of different layers of nn.Sequential block in different styles in pytorch?
Let's suppose I have an nn.Sequential block with 2 linear layers. I want to initialize the weights of the first layer from a uniform distribution, but I want to initialize the weights of the second layer as the constant 2.0. net = nn.Sequential() net.add_module('Linear_1', nn.Linear(2, 5, bias = False)) net.add_module('Linear_2', nn.Linear(5, 5, bias = False))
Here is one way of doing so: import torch import torch.nn as nn net = nn.Sequential() ll1 = nn.Linear(2, 5, bias = False) torch.nn.init.uniform_(ll1.weight, a=0, b=1) # a: lower_bound, b: upper_bound net.add_module('Linear_1', ll1) print(ll1.weight) ll2 = nn.Linear(5, 5, bias = False) torch.nn.init.constant_(ll2.weight, 2.0) net.add_module('Linear_2', ll2) print(ll2.weight) print(net) Output: Parameter containing: tensor([[0.2549, 0.7823], [0.3439, 0.4721], [0.0709, 0.6447], [0.3969, 0.7849], [0.7631, 0.5465]], requires_grad=True) Parameter containing: tensor([[2., 2., 2., 2., 2.], [2., 2., 2., 2., 2.], [2., 2., 2., 2., 2.], [2., 2., 2., 2., 2.], [2., 2., 2., 2., 2.]], requires_grad=True) Sequential( (Linear_1): Linear(in_features=2, out_features=5, bias=False) (Linear_2): Linear(in_features=5, out_features=5, bias=False) )
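An alternative sketch (one option among several) is to dispatch on the registered module names via named_modules, which scales better for longer Sequential blocks:
import torch.nn as nn

net = nn.Sequential()
net.add_module('Linear_1', nn.Linear(2, 5, bias=False))
net.add_module('Linear_2', nn.Linear(5, 5, bias=False))

# dispatch initialization on the registered module name
for name, module in net.named_modules():
    if name == 'Linear_1':
        nn.init.uniform_(module.weight, a=0, b=1)
    elif name == 'Linear_2':
        nn.init.constant_(module.weight, 2.0)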
https://stackoverflow.com/questions/56312886/