LSTM output just a variation of input data
I'm building a LSTM and I want to predict s_max with variable q_max but the network just seem to alter the input data and give that as an output. I've tried increasing hidden size and epochs but was not successful. I assume there's a problem in the way I've structured the data or the way the network is set up. Here is the figure of the prediction my model makes: I literally just want to fit to training data so that I know it can learn a simple problem. Here is my model: class LSTM(nn.Module): def __init__(self, num_classes, input_size, hidden_size, num_layers): super(LSTM, self).__init__() self.num_classes = num_classes self.num_layers = num_layers self.input_size = input_size self.hidden_size = hidden_size self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, batch_first=True) self.fc = nn.Linear(hidden_size, num_classes) def forward(self, x): h_0 = Variable(torch.zeros(self.num_layers, x.size(0), self.hidden_size)) c_0 = Variable(torch.zeros(self.num_layers, x.size(0), self.hidden_size)) ula, (h_out, _) = self.lstm(x, (h_0, c_0)) h_out = h_out.view(-1, self.hidden_size) out = self.fc(h_out) return out Data preprocessing: def data_manipulator(data): df = pd.read_hdf(data) df = df.iloc[:, [1, 4]] scaler = MinMaxScaler() scaler = scaler.fit_transform(df) df = scaler return pd.DataFrame(df) def sliding_windows(data, seq_length): y = np.ones([len(data)-seq_length-1,1]) x = np.ones([len(data)-seq_length-1,seq_length,1]) for i in range(len(data)-seq_length-1): x[i] = np.array(data.iloc[i:i + seq_length,0]).reshape(-1,1) # ex. [1406, 5, 1] y[i] = data.iloc[i + seq_length, 1] # ex. [1406, 1] return torch.tensor(x, dtype=torch.float), torch.tensor(y, dtype=torch.float) Setup, training, plot: data_files = glob.glob('data/*.hdf') seq_length = 5 df = data_manipulator(data_files[0]) x, y = sliding_windows(df, seq_length) lstm = LSTM(num_classes= 1,input_size=1, hidden_size = 1, num_layers = 1) criterion = torch.nn.MSELoss() optimizer = torch.optim.Adam(lstm.parameters(), lr=0.001) num_epochs = 2000 for epoch in range(num_epochs): optimizer.zero_grad() outputs = lstm(x) loss = criterion(outputs, y) loss.backward() optimizer.step() if epoch % 100 == 0: print("Epoch: %d, loss: %1.5f" % (epoch, loss.item())) lstm.eval() output2 = lstm(x).detach().numpy() plt.plot(df[0], label='q_max train') plt.plot(df[1], label='s_max train') plt.plot(output2, label='s_max output with q_max train as input') plt.legend() plt.show() Train output: Epoch: 0, loss: 0.52164 Epoch: 100, loss: 0.10143 Epoch: 200, loss: 0.04956 Epoch: 300, loss: 0.02736 Epoch: 400, loss: 0.02732 Epoch: 500, loss: 0.02727 Epoch: 600, loss: 0.02722 Epoch: 700, loss: 0.02714 Epoch: 800, loss: 0.02704 Epoch: 900, loss: 0.02689 Epoch: 1000, loss: 0.02663
After speaking to my project supervisor, there are a couple of things I hadn't thought about. First of all, the forward pass returns the hidden state h_out instead of the predicted sequence ula. Secondly, my function def sliding_windows(data, seq_length): sets the data up as "many to one", while what he was after was "many to many", which better suits this application, so I'll be working to change the data input and output architecture. A sketch of the corrected forward pass is below.
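A minimal sketch (not the asker's final code) of a forward pass that uses the per-timestep LSTM outputs ula instead of the last hidden state, which is the first fix mentioned above and naturally extends to a many-to-many setup; the layer names mirror the question, the hidden size is arbitrary:

import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, num_classes, input_size, hidden_size, num_layers):
        super().__init__()
        self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # h_0 and c_0 default to zeros, so they don't need to be built by hand
        ula, _ = self.lstm(x)   # (batch, seq_len, hidden_size)
        return self.fc(ula)     # (batch, seq_len, num_classes) -> one prediction per timestep

out = LSTM(num_classes=1, input_size=1, hidden_size=8, num_layers=1)(torch.rand(4, 5, 1))
print(out.shape)  # torch.Size([4, 5, 1])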
https://stackoverflow.com/questions/68508314/
pytorch - reciprocal of torch.gather
Given an input tensor x and a tensor of indices idxs, I want to retrieve all elements of x whose index is not present in idxs. That is, taking the opposite of the torch.gather function output. Example with torch.gather: >>> x = torch.arange(30).reshape(3,10) >>> idxs = torch.tensor([[1,2,3], [4,5,6], [7,8,9]], dtype=torch.long) >>> torch.gather(x, 1, idxs) tensor([[ 1, 2, 3], [14, 15, 16], [27, 28, 29]]) What I actually want to achieve is tensor([[ 0, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26]]) What would be an effective and efficient implementation, possibly employing torch utilities? I wouldn't like to use any for-loops. I'm assuming idxs has only unique elements in its deepest dimension; for example, idxs could be the result of calling torch.topk.
You could be looking to construct a tensor of shape (x.size(0), x.size(1)-idxs.size(1)) (here (3, 7)). Which would correspond to the complementary indices of idxs, with regard to the shape of x, i.e.: tensor([[0, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6]]) I propose to first build a tensor shaped like x that would reveal the positions we want to keep and those we want to discard, a sort of mask. This can be done using torch.scatter. This essentially scatters 0s at desired location, namely m[i, idxs[i][j]] = 0: >>> m = torch.ones_like(x).scatter(1, idxs, 0) tensor([[1, 0, 0, 0, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0, 0, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]]) Then grab the non zeros (the complementary part of idxs). Select the 2nd indices on axis=1, and reshape according to the target tensor: >>> idxs_ = m.nonzero()[:, 1].reshape(-1, x.size(1) - idxs.size(1)) tensor([[0, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6]]) Now you know what to do, right? Same as for the torch.gather example you gave, but this time with idxs_: >>> torch.gather(x, 1, idxs_) tensor([[ 0, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26]]) In summary: >>> idxs_ = torch.ones_like(x).scatter(1, idxs, 0) \ .nonzero()[:, 1].reshape(-1, x.size(1) - idxs.size(1)) >>> torch.gather(x, 1, idxs_)
https://stackoverflow.com/questions/68510107/
Why does torch.nn.Conv2d give different results for '(n, n)' and 'n' arguments?
input = torch.randn(8, 3, 50, 100) m = nn.Conv2d(3, 3, kernel_size=(3, 3), padding=(1, 1)) m2 = nn.Conv2d(3, 3, kernel_size=3, padding=1) output = m(input) output2 = m2(input) torch.equal(output, output2) >> False I suppose the m and m2 Conv2d layers above should produce exactly the same output, but in practice they don't. What is the reason?
You have initialized two nn.Conv2d with identical settings, that's true. Initialization of the weights, however, is done randomly! You have here two different layers m and m2: m.weight and m2.weight have different components, and the same goes for m.bias and m2.bias. One way to get the same results is to copy the underlying parameters of the model: >>> m.weight = m2.weight >>> m.bias = m2.bias Which, of course, results in torch.equal(m(input), m2(input)) being True.
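A slightly more general alternative (a sketch, not part of the original answer) is to copy the whole state dict, which transfers all parameters and buffers in one call:

import torch
import torch.nn as nn

m = nn.Conv2d(3, 3, kernel_size=(3, 3), padding=(1, 1))
m2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)

m2.load_state_dict(m.state_dict())  # copy every parameter/buffer of m into m2

x = torch.randn(8, 3, 50, 100)
print(torch.equal(m(x), m2(x)))  # True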
https://stackoverflow.com/questions/68511961/
Weighted Random Samplers in pytorch
I am new to samplers and don't understand why we should use a weighted random sampler. Can anyone explain it to me? Also, should we use a weighted random sampler for the validation set?
This is very much a PyTorch-independent question and, as such, might appear a bit off-topic. Take a classification task: your dataset may contain more instances of a certain class, making that class overrepresented. This can lead to issues. Indeed, during training, your model will be presented with more instances from one class than from the others, so it can become biased towards that prominent class. To counter that, you can use a weighted sampler that effectively levels the unequal number of instances such that, on average, during one epoch the model will have seen as many examples belonging to each of your classes. This allows balanced learning with respect to your classes, independently of the fact that you may have different numbers of instances per class. To answer your second question, I don't think you should be using a weighted sampler on your validation set. There is no need to adopt a specific sampling policy there: the point of validation is to see the performance of your fixed model on unseen data, similar to the test set, where you won't have access to the class statistics to use a weighted sampler anyway. A minimal construction of such a sampler is sketched below.
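A minimal sketch of building such a sampler from class counts (the dataset here is a made-up toy example, not from the question):

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# toy imbalanced dataset: 90 samples of class 0, 10 samples of class 1
features = torch.randn(100, 4)
labels = torch.cat([torch.zeros(90, dtype=torch.long), torch.ones(10, dtype=torch.long)])
dataset = TensorDataset(features, labels)

# weight each sample by the inverse frequency of its class
class_counts = torch.bincount(labels)        # tensor([90, 10])
sample_weights = 1.0 / class_counts[labels]  # one weight per sample
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset), replacement=True)

loader = DataLoader(dataset, batch_size=16, sampler=sampler)
# on average, a batch now contains roughly as many class-0 as class-1 samples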
https://stackoverflow.com/questions/68515188/
Excessive CPU RAM being used by Pytorch even inside .cuda() mode
I am having an issue with excessive CPU RAM usage with this code, even when running in .cuda() mode. Could anyone advise?
The problem is now solved using this GitHub commit.
https://stackoverflow.com/questions/68517398/
Embedding: argument indices must be a Tensor, not a list
I am trying to train a RNN, but I am having trouble with my embedding. I am getting the following error message: TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list The code in the forward method starts like that: def forward(self, word_indices: [int]): print("sentences") print(len(word_indices)) print(word_indices) word_ind_tensor = torch.tensor(word_indices, device="cpu") print(word_ind_tensor) print(word_ind_tensor.size()) embeds_word = self.embedding_word(word_indices) The output of all of that is: sentences 29 [261, 15, 5149, 44, 287, 688, 1125, 4147, 9874, 582, 15, 9875, 3, 2, 6732, 34, 2, 6733, 9, 2, 485, 7, 6734, 3, 741, 2, 2179, 1571, 1] tensor([ 261, 15, 5149, 44, 287, 688, 1125, 4147, 9874, 582, 15, 9875, 3, 2, 6732, 34, 2, 6733, 9, 2, 485, 7, 6734, 3, 741, 2, 2179, 1571, 1]) torch.Size([29]) Traceback (most recent call last): File "/home/lukas/Documents/HU/Materialen/21SoSe-Studienprojekt/flair-Studienprojekt/TestModel.py", line 68, in <module> embeddings_storage_mode = "CPU") #auf cuda ändern File "/home/lukas/Documents/HU/Materialen/21SoSe-Studienprojekt/flair-Studienprojekt/flair/trainers/trainer.py", line 423, in train loss = self.model.forward_loss(batch_step) File "/home/lukas/Documents/HU/Materialen/21SoSe-Studienprojekt/flair-Studienprojekt/flair/models/sandbox/srl_tagger.py", line 122, in forward_loss features = self.forward(word_indices = sent_word_ind, frame_indices = sent_frame_ind) File "/home/lukas/Documents/HU/Materialen/21SoSe-Studienprojekt/flair-Studienprojekt/flair/models/sandbox/srl_tagger.py", line 147, in forward embeds_word = self.embedding_word(word_indices) File "/home/lukas/miniconda3/envs/studienprojekt/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/lukas/miniconda3/envs/studienprojekt/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/lukas/miniconda3/envs/studienprojekt/lib/python3.7/site-packages/torch/nn/functional.py", line 1724, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list I originally initialised the embedding the following way: self.embedding_word = torch.nn.Embedding(self.word_dict_size, embedding_size) word_dict_size and embedding_size are both integers. Is there something obviously I did wrong or is that a deeper mistake?
You're passing a list to self.embedding_word: word_indices, not the tensor you just created for that purpose, word_ind_tensor.
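A standalone reproduction of the fix (a sketch; the vocabulary and embedding sizes are made up for illustration):

import torch
import torch.nn as nn

embedding_word = nn.Embedding(10000, 64)       # hypothetical sizes
word_indices = [261, 15, 5149, 44]             # plain Python list -> TypeError if passed directly
word_ind_tensor = torch.tensor(word_indices)   # LongTensor of shape (4,)

embeds_word = embedding_word(word_ind_tensor)  # works: shape (4, 64)
print(embeds_word.shape)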
https://stackoverflow.com/questions/68518393/
Python multiprocessing on multiple CPUs, GPUs
I have 8 GPUs, 64 CPU cores (multiprocessing.cpu_count()=64) I am trying to get inference of multiple video files using a deep learning model. I want some files to get processed on each of the 8 GPUs. For each GPU, I want a different 6 CPU cores utilized. Below python filename: inference_{gpu_id}.py Input1: GPU_id Input2: Files to process for GPU_id from torch.multiprocessing import Pool, Process, set_start_method try: set_start_method('spawn', force=True) except RuntimeError: pass model = load_model(device='cuda:' + gpu_id) def pooling_func(file): preds = [] cap = cv2.VideoCapture(file) while(cap.isOpened()): ret, frame = cap.read() count += 1 if ret == True: frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) pred = model(frame)[0] preds.append(pred) else: break cap.release() np.save(file[:-4]+'.npy', preds) def process_files(): # all files to process on gpu_id files = np.load(gpu_id + '_files.npy') # I am hoping to use 6 cores for this gpu_id, # and a different 6 cores for a different GPU id pool = Pool(6) r = list(tqdm(pool.imap(pooling_func, files), total = len(files))) pool.close() pool.join() if __name__ == '__main__': import multiprocessing multiprocessing.freeze_support() process_files() I am hoping to run inference_{gpu_id}.py files on all GPUs simultaneously Currently, I am able to successfully run it on one GPU, 6 cores, But when I try to run it on all GPUs together, only GPU 0 runs, all others stop giving below error message. RuntimeError: CUDA error: invalid device ordinal. The script I am running: CUDA_VISIBLE_DEVICES=0 inference_0.py CUDA_VISIBLE_DEVICES=1 inference_1.py ... CUDA_VISIBLE_DEVICES=7 inference_7.py
The following is originally an answer to a question you asked but later deleted. Consider this: if you are not using the CUDA_VISIBLE_DEVICES flag, then all GPUs will be available to your PyTorch process. This means torch.cuda.device_count() will return 8 (assuming your setup is valid). And you will be able to access each one of those 8 GPUs with torch.device, via torch.device('cuda:0'), torch.device('cuda:1'), ..., and torch.device('cuda:7'). Now, if you are only planning on using one GPU and want to restrict your process to it, then CUDA_VISIBLE_DEVICES=i (where i is the device ordinal) will make it so. In this case torch.cuda will only have access to a single device, through torch.device('cuda:0'). It doesn't matter what the actual device ordinal is; the way you access it is through torch.device('cuda:0'). If you allow access to more than one device, let's say n°0, n°4, and n°2, then you would use CUDA_VISIBLE_DEVICES=0,4,2. Consequently you refer to your cuda devices via d0 = torch.device('cuda:0'), d1 = torch.device('cuda:1'), and d2 = torch.device('cuda:2'), in the same order as you defined them with the flag, i.e.: d0 -> GPU n°0, d1 -> GPU n°4, and d2 -> GPU n°2. This makes it so you can use the same code and run it on different GPUs without having to change the underlying code where you refer to the device ordinal. In summary, what you need to look at is the number of devices you need to run your code. In your case: 1 is enough. You will refer to it with torch.device('cuda:0'). When running your code, however, you will need to specify what that cuda:0 device is, with the flag: > CUDA_VISIBLE_DEVICES=0 inference.py > CUDA_VISIBLE_DEVICES=1 inference.py ... > CUDA_VISIBLE_DEVICES=7 inference.py Do note 'cuda' will default to 'cuda:0'.
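A quick way to confirm what each launched process actually sees (a sketch, assuming the machine has CUDA available):

import os
import torch

# launched e.g. as: CUDA_VISIBLE_DEVICES=3 python inference.py
print(os.environ.get("CUDA_VISIBLE_DEVICES"))  # "3" (or None if the flag is unset)
print(torch.cuda.device_count())               # 1 -> only one device is visible to this process
device = torch.device("cuda:0")                # always index 0 inside this process
print(torch.cuda.get_device_name(device))      # reports the physical GPU n°3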
https://stackoverflow.com/questions/68518683/
RuntimeError: expand(torch.cuda.FloatTensor{[3, 3, 3, 3]}, size=[]): the number of sizes provided (0) must be >= number of dimensions in the tensor(4)
Why [3, 3, 3, 3] for the variable w?
The problem is solved using this GitHub commit.
https://stackoverflow.com/questions/68525151/
Error "__init__() takes 1 positional argument but 2 were given" when trying to build a CNN model using PyTorch
I have recently started to learn coding with PyTorch. While I was trying to build a CNN model for the FashionMNIST dataset, I encountered the following problem : TypeError Traceback (most recent call last) in () ----> 1 model = CNN (K) TypeError: init() takes 1 positional argument but 2 were given I have read the answers to similar questions but still, I am not able to solve my problem. I would be deeply grateful if anyone could help me in this regard. Here is the code: train_dataset = torchvision.datasets.FashionMNIST (root = '.', train = True, transform = transforms.ToTensor (), download= True) test_dataset = torchvision.datasets.FashionMNIST (root = '.', train= False, transform = transforms.ToTensor (), download = True) K = len (set (train_dataset.targets.numpy ())) class CNN (nn.Module): def __int__ (self, K): super (CNN, self).__int__ () self.conv_layers = nn.Sequential ( nn.Conv2d (in_channels= 1, out_channels= 32, kernel_size= 3, stride = 2), nn.ReLU (), nn.Conv2d (in_channels= 32, out_channels= 64, kernel_size= 3, stride = 2), nn.ReLU (), nn.Conv2d (in_channels= 64, out_channels= 128, kernel_size= 3, stride= 2), nn.ReLU () ) self.dense_layers = nn.Sequential (nn.Dropout (0.2), nn.Linear (128 * 2 * 2, 512), nn.ReLU (), nn.Dropout (0.2), nn.Linear (512, K) ) def forward (self, x): out = self.conv_layers (x) out = out.view (out.size (0), -1) out = self.dense_layers (out) return out
There is a typo in your initializer method head: it should be def __init__, not def __int__.
https://stackoverflow.com/questions/68527455/
Pytorch dimension change
Are there any methods to change a [1,512,1,1] tensor to [1,512,2,2]? I know it is not possible just by reshaping the dimensions. Are there any ways using concat or stack with PyTorch (torch.stack, torch.cat)? I make the tensor with the following code: a = torch.rand([1,512,1,1]) How can I change this to a tensor with dimensions [1,512,2,2]?
That would be Tensor.repeat, which copies the data: >>> a = a.repeat(1, 1, 2, 2) If you do not wish to copy the data, then use Tensor.expand, which returns a view instead: >>> a = a.expand(-1, -1, 2, 2) Note that expand only works on singleton dimensions, which is fine here since the last two dimensions have size 1.
https://stackoverflow.com/questions/68528008/
When I used PyTorch, I got the error: "IndexError: index 4 is out of bounds for axis 0 with size 4"
the following code shows IndexError: index 4 is out of bounds for axis 0 with size 4: I don't know what's wrong with it. How can I solve this error? Thanks! The following is the problem when I run Pycharm: Epoch 1/10 Traceback (most recent call last): File "C:/Users/ABCDfile/PycharmProjects/pythonProject1/main.py", line 376, in train(model, train_loader, torch.optim.Adam(model.parameters(), lr=0.01), nn.BCEWithLogitsLoss(), 10, "_bce_e10") File "C:/Users/ABCDfile/PycharmProjects/pythonProject1/main.py", line 251, in train iou = IOU(output, y) File "C:/Users/ABCDfile/PycharmProjects/pythonProject1/main.py", line 301, in IOU if prediction[l][0][n][m] == 1 and groundtruth[l][0][n][m] == 1: IndexError: index 4 is out of bounds for axis 0 with size 4 import torch from torch import nn from torch.utils.data import Dataset, DataLoader from os import listdir import numpy as np import cv2 as cv import torch.nn.functional as F class NucleusDataset(Dataset): def __init__(self, tot): # tot = train or test super().__init__() self.root = 'C:/Users/Desktop/Nucleus dataset/' self.allfolderlist = listdir(self.root) print("There are " + str(len(self.allfolderlist)) + " data totally") self.folderlist = [] self.tot = tot if self.tot == 'train': print("Get training dataset") for n in range(int(len(self.allfolderlist) / 2)): self.folderlist.append(self.allfolderlist[n]) print("There are " + str(len(self.folderlist)) + " data in training dataset") elif self.tot == 'test': print("Get testing dataset") for n in range(int(len(self.allfolderlist) / 2), int(len(self.allfolderlist))): self.folderlist.append(self.allfolderlist[n]) print("There are " + str(len(self.folderlist)) + " data in testing dataset") else: print("Choose train or test") def __len__(self): return len(self.folderlist) def __getitem__(self, index): foldername = self.folderlist[index] filename = foldername.split(".")[0] + ".png" img = cv.imread(self.root + foldername + "/images/" + filename) img = cv.resize(img, (224, 224), interpolation=cv.INTER_LINEAR) img_np = np.array(img, dtype=np.float32) flat_img_np = np.empty(shape=(3, 224, 224)) for x in range(224): for y in range(224): flat_img_np[0][x][y] = (img_np[x][y][0] + img_np[x][y][1] + img_np[x][y][2]) / 765 sum = 0 for x in range(224): for y in range(224): sum += flat_img_np[0][x][y] flat_img_np = flat_img_np * 0.5 / sum * 224 * 224 outputpath = self.root + foldername + "/masks/" isfirst = True for objectpic in listdir(outputpath): obimg = cv.imread(outputpath + objectpic) obimg = cv.resize(obimg, (224, 224), interpolation=cv.INTER_LINEAR) if isfirst: obimg_np = np.array(obimg, dtype=np.float32) isfirst = False else: obimg_np += np.array(obimg, dtype=np.float32) obimg_np / 2 flat_obimg_np = np.empty(shape=(1, 224, 224)) for x in range(224): for y in range(224): if obimg_np[x][y][0] == 255: flat_obimg_np[0][x][y] = 1 else: flat_obimg_np[0][x][y] = 0 return flat_img_np, flat_obimg_np class SegNet(nn.Module): def __init__(self,input_nbr,label_nbr): super(SegNet, self).__init__() batchNorm_momentum = 0.1 self.conv11 = nn.Conv2d(input_nbr, 64, kernel_size=3, padding=1) self.bn11 = nn.BatchNorm2d(64, momentum= batchNorm_momentum) self.conv12 = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.bn12 = nn.BatchNorm2d(64, momentum= batchNorm_momentum) self.conv21 = nn.Conv2d(64, 128, kernel_size=3, padding=1) self.bn21 = nn.BatchNorm2d(128, momentum= batchNorm_momentum) self.conv22 = nn.Conv2d(128, 128, kernel_size=3, padding=1) self.bn22 = nn.BatchNorm2d(128, momentum= batchNorm_momentum) self.conv31 = 
nn.Conv2d(128, 256, kernel_size=3, padding=1) self.bn31 = nn.BatchNorm2d(256, momentum= batchNorm_momentum) self.conv32 = nn.Conv2d(256, 256, kernel_size=3, padding=1) self.bn32 = nn.BatchNorm2d(256, momentum= batchNorm_momentum) self.conv33 = nn.Conv2d(256, 256, kernel_size=3, padding=1) self.bn33 = nn.BatchNorm2d(256, momentum= batchNorm_momentum) self.conv41 = nn.Conv2d(256, 512, kernel_size=3, padding=1) self.bn41 = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv42 = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn42 = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv43 = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn43 = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv51 = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn51 = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv52 = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn52 = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv53 = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn53 = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv53d = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn53d = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv52d = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn52d = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv51d = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn51d = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv43d = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn43d = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv42d = nn.Conv2d(512, 512, kernel_size=3, padding=1) self.bn42d = nn.BatchNorm2d(512, momentum= batchNorm_momentum) self.conv41d = nn.Conv2d(512, 256, kernel_size=3, padding=1) self.bn41d = nn.BatchNorm2d(256, momentum= batchNorm_momentum) self.conv33d = nn.Conv2d(256, 256, kernel_size=3, padding=1) self.bn33d = nn.BatchNorm2d(256, momentum= batchNorm_momentum) self.conv32d = nn.Conv2d(256, 256, kernel_size=3, padding=1) self.bn32d = nn.BatchNorm2d(256, momentum= batchNorm_momentum) self.conv31d = nn.Conv2d(256, 128, kernel_size=3, padding=1) self.bn31d = nn.BatchNorm2d(128, momentum= batchNorm_momentum) self.conv22d = nn.Conv2d(128, 128, kernel_size=3, padding=1) self.bn22d = nn.BatchNorm2d(128, momentum= batchNorm_momentum) self.conv21d = nn.Conv2d(128, 64, kernel_size=3, padding=1) self.bn21d = nn.BatchNorm2d(64, momentum= batchNorm_momentum) self.conv12d = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.bn12d = nn.BatchNorm2d(64, momentum= batchNorm_momentum) self.conv11d = nn.Conv2d(64, label_nbr, kernel_size=3, padding=1) def forward(self, x): # Stage 1 x11 = F.relu(self.bn11(self.conv11(x))) x12 = F.relu(self.bn12(self.conv12(x11))) x1p, id1 = F.max_pool2d(x12,kernel_size=2, stride=2,return_indices=True) # Stage 2 x21 = F.relu(self.bn21(self.conv21(x1p))) x22 = F.relu(self.bn22(self.conv22(x21))) x2p, id2 = F.max_pool2d(x22,kernel_size=2, stride=2,return_indices=True) # Stage 3 x31 = F.relu(self.bn31(self.conv31(x2p))) x32 = F.relu(self.bn32(self.conv32(x31))) x33 = F.relu(self.bn33(self.conv33(x32))) x3p, id3 = F.max_pool2d(x33,kernel_size=2, stride=2,return_indices=True) # Stage 4 x41 = F.relu(self.bn41(self.conv41(x3p))) x42 = F.relu(self.bn42(self.conv42(x41))) x43 = F.relu(self.bn43(self.conv43(x42))) x4p, id4 = F.max_pool2d(x43,kernel_size=2, stride=2,return_indices=True) # Stage 5 x51 = F.relu(self.bn51(self.conv51(x4p))) x52 = F.relu(self.bn52(self.conv52(x51))) x53 = F.relu(self.bn53(self.conv53(x52))) x5p, id5 = 
F.max_pool2d(x53,kernel_size=2, stride=2,return_indices=True) # Stage 5d x5d = F.max_unpool2d(x5p, id5, kernel_size=2, stride=2) x53d = F.relu(self.bn53d(self.conv53d(x5d))) x52d = F.relu(self.bn52d(self.conv52d(x53d))) x51d = F.relu(self.bn51d(self.conv51d(x52d))) # Stage 4d x4d = F.max_unpool2d(x51d, id4, kernel_size=2, stride=2) x43d = F.relu(self.bn43d(self.conv43d(x4d))) x42d = F.relu(self.bn42d(self.conv42d(x43d))) x41d = F.relu(self.bn41d(self.conv41d(x42d))) # Stage 3d x3d = F.max_unpool2d(x41d, id3, kernel_size=2, stride=2) x33d = F.relu(self.bn33d(self.conv33d(x3d))) x32d = F.relu(self.bn32d(self.conv32d(x33d))) x31d = F.relu(self.bn31d(self.conv31d(x32d))) # Stage 2d x2d = F.max_unpool2d(x31d, id2, kernel_size=2, stride=2) x22d = F.relu(self.bn22d(self.conv22d(x2d))) x21d = F.relu(self.bn21d(self.conv21d(x22d))) # Stage 1d x1d = F.max_unpool2d(x21d, id1, kernel_size=2, stride=2) x12d = F.relu(self.bn12d(self.conv12d(x1d))) x11d = self.conv11d(x12d) return x11d def load_from_segnet(self, model_path): s_dict = self.state_dict()# create a copy of the state dict th = torch.load(model_path).state_dict() # load the weigths # for name in th: # s_dict[corresp_name[name]] = th[name] self.load_state_dict(th) def train(model, dataloader, optimizer, loss_fn, epochs, filename): model.cuda() model.train(True) # Set trainind mode = true f = open("C:/Users/Desktop/limf" + filename + ".txt", 'a') for epoch in range(epochs): print('-' * 10) print('Epoch {}/{}'.format(epoch + 1, epochs)) step = 0 eloss = 0 eiou = 0 emae = 0 ef = 0 for x, y in dataloader: x = x.float() x.requires_grad = True x = x.cuda() y = y.float() y = y.cuda() step += 1 optimizer.zero_grad() output = model(x) output = output.cuda() loss = loss_fn(output, y) iou = IOU(output, y) mae = MAE(output, y) fmeasure = Fmeasure(output, y) y = y.cpu().detach().numpy() output = output.cpu().detach().numpy() output = 1 * (output[:, :, :, :] > 0.5) ys = "" os = "" for n in range(224): for m in range(224): ys += str(int(y[0][0][n][m])) os += str(int(output[0][0][n][m])) print(ys + " " + os, file=f) ys = "" os = "" print("----------", file=f) eloss += loss eiou += iou emae += mae ef += fmeasure loss.backward() optimizer.step() print('Current step: {} Loss: {} IOU: {} MAE: {} F: {}'.format(step, loss, iou, mae, fmeasure), file=f) eloss /= step eiou /= step emae /= step ef /= step print('-----Epoch {} finish Loss: {} IOU: {} MAE: {} F: {}'.format(epoch, eloss, eiou, emae, ef), file=f) print('-----Epoch {} finish Loss: {} IOU: {} MAE: {} F: {}'.format(epoch, eloss, eiou, emae, ef)) f.close() def IOU(prediction, groundtruth, bs=15): prediction = prediction.cpu().detach().numpy() groundtruth = groundtruth.cpu().detach().numpy() prediction = 1 * (prediction[:, :, :, :] > 0.5) intersection = 0 union = 0 for l in range(bs): for n in range(224): for m in range(224): if prediction[l][0][n][m] == 1 and groundtruth[l][0][n][m] == 1: intersection += 1 if prediction[l][0][n][m] == 1 or groundtruth[l][0][n][m] == 1: union += 1 iou_score = intersection / union return iou_score def MAE(prediction, groundtruth, bs=15): prediction = prediction.cpu().detach().numpy() groundtruth = groundtruth.cpu().detach().numpy() prediction = 1 * (prediction[:, :, :, :] > 0.5) error = 0 for l in range(bs): for x in range(224): for y in range(224): if prediction[l][0][x][y] != groundtruth[l][0][x][y]: error += 1 return error / 224 / 224 / bs def Fmeasure(prediction, groundtruth, bs=15, b=1): prediction = prediction.cpu().detach().numpy() groundtruth = 
groundtruth.cpu().detach().numpy() prediction = 1 * (prediction[:, :, :, :] > 0.5) TP = 0 FP = 0 FN = 0 for l in range(bs): for x in range(224): for y in range(224): if prediction[l][0][x][y] == 1 and groundtruth[l][0][x][y] == 1: TP += 1 if prediction[l][0][x][y] == 1 and groundtruth[l][0][x][y] == 0: FP += 1 if prediction[l][0][x][y] == 0 and groundtruth[l][0][x][y] == 1: FN += 1 if (TP + FP) == 0: precision = 0 else: precision = TP / (TP + FP) if (TP + FN) == 0: recall = 0 else: recall = TP / (TP + FN) if precision + recall == 0: return 0 return ((1 + b * b) * precision * recall) / (b * b * (precision + recall)) if __name__ == '__main__': # see whether gpu can be used # device = torch.device('cuda' if torch.cuda.is_available() else 'CPU') # print("device: "+str(device)) # load dataset train_dataset = NucleusDataset("train") bs = 4 # batch size train_loader = DataLoader(train_dataset, batch_size=bs, num_workers=1, drop_last=True, shuffle=False) model = SegNet(3,1) train(model, train_loader, torch.optim.Adam(model.parameters(), lr=0.01), nn.BCEWithLogitsLoss(), 10, "_bce_e10") How can I solve this error?
Your batch size is 4, and you are using the default value for bs when calling IOU, which means bs=15, not 4. Therefore call IOU by passing the batch size: IOU(output, y, bs=bs). Better yet, you could remove the bs argument, and define bs as y.size(0) inside the function.
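A sketch of the second suggestion, reading the batch size from the tensor itself; as a side effect it also replaces the per-pixel Python loops with array operations, which is much faster:

import torch

def IOU(prediction, groundtruth):
    # batch size is implicit: whatever the first dimension of the tensors is
    pred = (prediction.detach().cpu().numpy() > 0.5).astype(int)
    gt = groundtruth.detach().cpu().numpy().astype(int)
    intersection = ((pred == 1) & (gt == 1)).sum()
    union = ((pred == 1) | (gt == 1)).sum()
    return intersection / union if union > 0 else 0.0

# example with batch size 4, matching the DataLoader in the question
print(IOU(torch.rand(4, 1, 224, 224), (torch.rand(4, 1, 224, 224) > 0.5).float()))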
https://stackoverflow.com/questions/68531579/
Calculating gradients in Custom training loop, difference in performace TF vs Torch
I have attempted to translate pytorch implementation of a NN model which calculates forces and energies in molecular structures to TensorFlow. This needed a custom training loop and custom loss function so I implemented to different one step training functions below. First using Nested Gradient Tapes. def calc_gradients(D_train_batch, E_train_batch, F_train_batch, opt): #set up gradient tape scope in order to track gradients of both d(Loss)/d(Weights) #and d(output)/d(input) with tf.GradientTape() as tape1: with tf.GradientTape() as tape2: #set gradient tape to watch Tensor tape2.watch(D_train_batch) #pass D thru model to get predicted energy vals E_pred = model(D_train_batch, training=True) df_dD_train_batch = tape2.gradient(E_pred, D_train_batch) #matrix mult of -Grad_D(f) x Grad_r(D) F_pred = -tf.einsum('ijkl,il->ijk', dD_dr_train_batch, df_dD_train_batch) #calculate loss value loss = force_energy_loss(E_pred, F_pred, E_train_batch, F_train_batch) grads = tape1.gradient(loss, model.trainable_weights) opt.apply_gradients(zip(grads, model.trainable_weights)) Other attempt with gradient tape (persistent = true) def calc_gradients_persistent(D_train_batch, E_train_batch, F_train_batch, opt): #set up gradient tape scope in order to track gradients of both d(Loss)/d(Weights) #and d(output)/d(input) with tf.GradientTape(persistent = True) as outer: #set gradient tape to watch Tensor outer.watch(D_train_batch) #output values from model, set trainable to be true to get #model.trainable_weights out E_pred = model(D_train_batch, training=True) #set gradient tape to watch trainable weights outer.watch(model.trainable_weights) #get gradient of output (f/E_pred) w.r.t input (D/D_train_batch) and cast to double df_dD_train_batch = outer.gradient(E_pred, D_train_batch) #matrix mult of -Grad_D(f) x Grad_r(D) F_pred = -tf.einsum('ijkl,il->ijk', dD_dr_train_batch, df_dD_train_batch) #calculate loss value loss = force_energy_loss(E_pred, F_pred, E_train_batch, F_train_batch) #get gradient of loss w.r.t to trainable weights for back propogation grads = outer.gradient(loss, model.trainable_weights) #updates weights using the optimizer and the gradients (grads) opt.apply_gradients(zip(grads, model.trainable_weights)) These were attempted translations of the pytorch code # Forward pass: Predict energies from the descriptor input E_train_pred_batch = model(D_train_batch) # Get derivatives of model output with respect to input variables. The # torch.autograd.grad-function can be used for this, as it returns the # gradients of the input with respect to outputs. It is very important # to set the create_graph=True in this case. Without it the derivatives # of the NN parameters with respect to the loss from the force error # will not be populated (=the force error will not affect the # training), but the model will still run fine without errors. df_dD_train_batch = torch.autograd.grad( outputs=E_train_pred_batch, inputs=D_train_batch, grad_outputs=torch.ones_like(E_train_pred_batch), create_graph=True, )[0] # Get derivatives of input variables (=descriptor) with respect to atom # positions = forces F_train_pred_batch = -torch.einsum('ijkl,il->ijk', dD_dr_train_batch, df_dD_train_batch) # Zero gradients, perform a backward pass, and update the weights. 
# D_train_batch.grad.data.zero_() optimizer.zero_grad() loss = energy_force_loss(E_train_pred_batch, E_train_batch, F_train_pred_batch, F_train_batch) loss.backward() optimizer.step() which is from the tutorial for the Dscribe library at https://singroup.github.io/dscribe/latest/tutorials/machine_learning/forces_and_energies.html Question Using either versions of the TF implementation there is a huge loss in prediction accuracy compared to running the pytorch version. I was wondering, have I maybe misunderstood the pytorch code and translated incorrectly and if so where is my discrepancy? P.S Model directly computes energies E, from which we use the gradient of E w.r.t D in order to calculate the forces F. The loss function is a weighted sum of MSE of both Force and energies.
These methods are in fact the same; my error was somewhere else and was creating the differing results. For anyone who's trying to implement the TensorFlow versions: the nested gradient tapes are about 2x faster, at least in this scenario. Also make sure to wrap the functions in @tf.function in order to use graphs instead of eager execution; the speed-up from that is about 10x.
https://stackoverflow.com/questions/68532424/
What's the purpose of torch.positive?
From the documentation: torch.positive(input) → Tensor Returns input. Throws a runtime error if input is a bool tensor. It just returns the input and throws an error if it's a bool tensor, but that's neither an efficient nor a readable way of checking whether a tensor is bool.
It seems like PyTorch added torch.positive for parity with NumPy, which has a function of the same name. So, back to your question: positive is a unary operator which basically multiplies everything by +1. This is not a particularly useful operation, but it is symmetric to negative, which multiplies everything by -1. >>> import numpy as np >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.positive(a) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.negative(a) array([ 0, -1, -2, -3, -4, -5, -6, -7, -8, -9]) So why have a positive function that effectively returns (a copy of) the array you passed? It can be used to write general code where e.g. numpy.positive and numpy.negative are selected dynamically based on some condition.
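The same pattern in PyTorch (a sketch; the flip condition is made up for illustration):

import torch

def apply_sign(t: torch.Tensor, flip: bool) -> torch.Tensor:
    # pick the unary op dynamically; torch.positive simply returns its input
    op = torch.negative if flip else torch.positive
    return op(t)

x = torch.arange(5)
print(apply_sign(x, flip=False))  # tensor([0, 1, 2, 3, 4])
print(apply_sign(x, flip=True))   # tensor([ 0, -1, -2, -3, -4])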
https://stackoverflow.com/questions/68532510/
PyTorch random_split() is returning wrong sized loader
I have a custom dataset loader for my dataset. I want to split the dataset into 70% train data, 20% validation data, and 10% test data. I have 16,488 data. So, my train data is supposed to be 11,542. But it's becoming 770 train data, 220 validation data, and 110 test data. I've tried but couldn't figure out the problem. class Dataset(Dataset): def __init__(self, directory, transform, preload=False, device: torch.device = torch.device('cpu'), **kwargs): self.device = device self.directory = directory self.transform = transform self.labels = [] self.images = [] self.preload = preload for i, file in enumerate(os.listdir(self.directory)): file_labels = parse('{}_{}_{age}_{gender}.jpg', file) if file_labels is None: continue if self.preload: image = Image.open(os.path.join(self.directory, file)).convert('RGB') if self.transform is not None: image = self.transform(image).to(self.device) else: image = os.path.join(self.directory, file) self.images.append(image) gender_to_class_id = { 'm': 0, 'f': 1 } gender = gender_to_class_id[file_labels['gender']] age = int(file_labels['age']) self.labels.append({ 'age': age, 'gender': gender }) pass def __len__(self): return len(self.labels) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() image = self.images[idx] if not self.preload: image = Image.open(image).convert('RGB') if self.transform is not None: image = self.transform(image).to(self.device) labels = { 'age': self.labels[idx]['age'], 'gender': self.labels[idx]['gender'], } return image.to(self.device), labels def get_loaders(self, transform, train_size=0.7, validate_size=0.2, test_size=0.1, batch_size=15, **kwargs): if round(train_size + validate_size + test_size, 1) > 1.0: sys.exit("Sum of the percentages should be less than 1. it's " + str( train_size + validate_size + test_size) + " now!") train_len = int(len(self) * train_size) validate_len = int(len(self) * validate_size) test_len = int(len(self) * test_size) others_len = len(self) - train_len - validate_len - test_len self.trainDataset, self.validateDataset, self.testDataset, _ = torch.utils.data.random_split( self, [train_len, validate_len, test_len, others_len] ) train_loader = DataLoader(self.trainDataset, batch_size=batch_size) validate_loader = DataLoader(self.validateDataset, batch_size=batch_size) test_loader = DataLoader(self.testDataset, batch_size=batch_size) return train_loader, validate_loader, test_loader
It seems that you are using batch_size=15. Since a DataLoader iterates over batches, calling len() on it returns the number of batches, not the number of samples. That explains why you are getting 770 for the train data where you expected 11,542: 16488 / 15 * 0.7 = 769.44 ≈ 770. If you want the number of samples, check len(loader.dataset) instead; alternatively, with batch_size = 1 the two numbers coincide: 16488 / 1 * 0.7 = 11541.6 ≈ 11542.
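A small self-contained illustration of the difference (a sketch with dummy data, not the asker's dataset):

import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

full = TensorDataset(torch.randn(16488, 3), torch.randn(16488))
train_len = int(len(full) * 0.7)
train_set, _ = random_split(full, [train_len, len(full) - train_len])

loader = DataLoader(train_set, batch_size=15)
print(len(loader.dataset))  # 11541 -> number of samples
print(len(loader))          # 770   -> number of batches, i.e. ceil(11541 / 15)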
https://stackoverflow.com/questions/68536085/
Why does Pytorch autograd need a scalar?
I am working through "Deep Learning for Coders with fastai & Pytorch". Chapter 4 introduces the autograd function from the PyTorch library on a trivial example. x = tensor([3.,4.,10.]).requires_grad_() def f(q): return sum(q**2) y = f(x) y.backward() My question boils down to this: the result of y = f(x) is tensor(125., grad_fn=AddBackward0), but what does that even mean? Why would I sum the values of three completely different inputs? I get that using .backward() in this case is shorthand for .backward(tensor[1.,1.,1.]) in this scenario, but I don't see how summing 3 unrelated numbers in a list helps get the gradient for anything. What am I not understanding? I'm not looking for a grad-level explanation here. The subtitle for the book I'm using is AI Applications Without a Ph.D. My experience with gradients is from school is that I should be getting a FUNCTION back, but I understand that isn't the case with Autograd. A graph of this short example would be helpful, but the ones I see online usually include too many parameters or weights and biases to be useful, my mind gets lost in the paths.
TL;DR: the derivative of a sum of functions is the sum of their derivatives. Let x be your input vector made of components x_i (where i in [0,n]), y = x**2 and L = sum(y_i). You are looking to compute dL/dx, a vector of the same size as x whose components are the dL/dx_j (where j in [0,n]). For j in [0,n], dL/dx_j is simply dy_j/dx_j (the derivative of the sum is the sum of the derivatives, and only one of them is different from zero), which is d(x_j**2)/dx_j, i.e. 2*x_j. Therefore, dL/dx = [2*x_j for j in [0,n]]. This is the result you get in x.grad, whether you compute the gradient through the sum: y = f(x) y.backward() or the gradient of each component of x separately: y = x**2 y.backward(torch.ones_like(x))
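A quick runnable check of that claim, using the same numbers as the book's example:

import torch

x = torch.tensor([3., 4., 10.], requires_grad=True)

# scalar route: backward on the summed loss
loss = (x ** 2).sum()
loss.backward()
print(x.grad)        # tensor([ 6.,  8., 20.]) == 2 * x

x.grad = None        # reset before the second pass

# vector route: backward on y with an explicit vector of ones
y = x ** 2
y.backward(torch.ones_like(x))
print(x.grad)        # tensor([ 6.,  8., 20.]) -- same result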
https://stackoverflow.com/questions/68536392/
AttributeError: module 'torch.nn' has no attribute 'ReflectionPad3d'
I am practicing on padding layers in PyTorch. 1d and 2d reflection padding works well. When I try to run the example given in 3d padding, the error, given in the title, happened. m = nn.ReflectionPad3d(1) input = torch.arange(8, dtype=torch.float).reshape(1, 1, 2, 2, 2) m(input) What can be the reason for this error?
Unfortunately there is no ReflectionPad3d in the official release yet. The documentation you are referring to belongs to the unstable developer preview. Have a look at the padding layers section of the latest stable version (1.9.0) to see which layers are available. Since the official issue on that topic is already closed, I am sure it will make its way into the next release.
https://stackoverflow.com/questions/68537302/
When modifying a pre-trained model in pytorch, does the old weight get re-initialized?
I'm modifying a pretrained efficient-net model in pytorch. I'm doing the following in order: Create the default model, load the imagenet weights. Then, change the number of channels in the first layer, and delete few layers while adding few. from efficientnet_pytorch import EfficientNet from efficientnet_pytorch.utils import Conv2dStaticSamePadding PATH = "../input/efficientnet-pytorch/efficientnet-b0-08094119.pth" model = EfficientNet.from_name('efficientnet-b0') model.load_state_dict(torch.load(PATH)) # augment model with 4 channels model._conv_stem = Conv2dStaticSamePadding(4, 32, kernel_size = (3,3), stride = (2,2), bias = False, image_size = 512) model._fc = torch.nn.Linear(in_features=1280, out_features=2, bias=True) My question is: what will happen to the original weights that I loaded? Will it be there or all of the model will get randomly initialized?
If you are re-defining some of the layers, which you seem to be doing with model._conv_stem and model._fc, then yes, those will be randomly initialized, meaning the loaded weights will no longer be used for those layers. The rest of the model will of course stay untouched and will use the loaded weights.
https://stackoverflow.com/questions/68537629/
RuntimeError: mat1 and mat2 shapes cannot be multiplied (28x28 and 784x64)
import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super().__init__() self.fc1=nn.Linear(28*28,64) self.fc2=nn.Linear(64,64) self.fc3=nn.Linear(64,64) self.fc4=nn.Linear(64,10) def forward(self,x): x=F.relu(self.fc1(x)) x=F.relu(self.fc2(x)) x=F.relu(self.fc3(x)) x=self.fc4(x) return F.log_softmax(x,dim=1) net=Net() print(net) X=torch.rand((28,28)) X=X.unsqueeze(0) output=net(X) print(output) RuntimeError: mat1 and mat2 shapes cannot be multiplied (28x28 and 784x64) When i use X=X.view(-1,28*28) in place of X=X.unsqueeze(0), it gives me the desired result. I cannot completely understand the differnce between using unsqueeze() and view() here.
Your network expects two dimensions only: axis=0 is your batch and axis=1 is your features. Reshaping with X.view(-1, 28*28) flattens all axes but the last, leaving you with a shape of the form (batch_size, feature_size). X.unsqueeze(0), on the other hand, just adds an additional axis without affecting the overall form of your tensor X; it is not a reshape. By that I mean: if X is shaped (batch_size, channel, height, width), which is likely for an input image, then X.unsqueeze(0) will be shaped (1, batch_size, channel, height, width). Since you are using a fully-connected network, you should be using the former, that is, reshaping X into a 2-dimensional tensor.
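A short shape comparison for the exact tensor in the question:

import torch

X = torch.rand(28, 28)
print(X.unsqueeze(0).shape)       # torch.Size([1, 28, 28]) -> 3D, nn.Linear sees only 28 features
print(X.view(-1, 28 * 28).shape)  # torch.Size([1, 784])    -> 2D, nn.Linear sees 784 features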
https://stackoverflow.com/questions/68542593/
How to efficiently draw a plot of a torch.nn model?
I'm exploring neural networks, and I want to model some pictures with neural network. Picture is a function that maps pixel coordinates to color, so I make my network also with 2 input variables (x, y) and 1 (shade) to 3 (R, G, B) output coordinates. For example, like this: import torch.nn as nn net = nn.Sequential( nn.Linear(2, 2), nn.Sigmoid(), nn.Linear(2, 1), ) Now, I plot it like this: import matplotlib.pyplot as plt import numpy as np def draw_image1(f): image = [] y = 1 delta = 0.005 while y > 0: x = 0 row = [] while x < 1: row.append(f(x, y)) x += delta image.append(row) y -= delta plt.imshow(image, extent=[0, 1, 0, 1], cmap='winter') plt.draw() draw_image1(lambda x, y: net(torch.Tensor([x, y])).item()) But it looks ugly and is slow because it uses Python lists instead of numpy arrays or tensors. I have another version of code that draws images from functions, which looks better and is 100x faster: def draw_image2(f): x = np.linspace(0, 1, num = 200) y = np.linspace(0, 1, num = 200) X, Y = np.meshgrid(x, y) image = f(X, Y) plt.imshow(image, extent=[0, 1, 0, 1], cmap='winter') plt.draw() It works for functions that use numpy operations (like lambda x: x + y), but when I plug in my net in the same way as for previous function (draw_image2(lambda x, y: net(torch.Tensor([x, y])).item())), I get RuntimeError: mat1 and mat2 shapes cannot be multiplied (400x200 and 2x2), which I understand as my neural net complaining that it wants to be fed data in smaller pieces. Is there any proper way to plot pytorch neural network output?
To feed a whole batch into nn.Linear(i, o), the input typically has the shape (b, i) where b is the size of the batch. If we take a look at the documentation you can actually use additional "batch"-dimensions in between. Actually since pytorch was primarily made for deep learning that is based on stochastic gradietn descent, pretty much all modules of pytorch require you to have at least one batch dimension. So you could easily modify your second plotting function to something like: import torch import torch.nn as nn import matplotlib.pyplot as plt net = nn.Sequential( nn.Linear(2, 2), nn.Sigmoid(), nn.Linear(2, 1), ) def draw_image2(f): device = torch.device('cpu') # or use your gpu alternatively with torch.no_grad(): # disable building evaluation graph if you don't need it x = torch.linspace(0, 1, 200) y = torch.linspace(0, 1, 200) X, Y = torch.meshgrid(x, y) # the data dimension should be the last (2), as per documentation inp = torch.stack([X, Y], dim=2).to(device) # shape = (200, 200, 2) image = f(inp) # shape = (200, 200, 1) image = image[..., 0].detach().cpu() # shape (200, 200) plt.imshow(image, extent=[0, 1, 0, 1], cmap='winter') plt.show() return image draw_image2(net) Note that the with torch.no_grad() is not necessary for it to work, but it will save you some time. Depending on your network architecture it might also be worth to set your network to eval mode (net.eval()) first. Finally the .to(device)/.cpu() is also not necessary if you're not using your GPU.
https://stackoverflow.com/questions/68543275/
Learnable weighted sum of tensors
I'm trying to implement a deep supervision strategy in an encoder-decoder architecture using PyTorch. The idea is to take the weighted sum of the results of three convolution layers (with learnable parameters w_i). Suppose we have three tensors A, B and C of identical shape (64, 48, 48, 48). My goal is to compute a weighted linear sum of these three tensors, (w0 * A + w1 * B + w2 * C), where w0, w1, w2 should be parameters learnable by the network. Maybe I have to use torch.nn.Linear(in_features, out_features), but I don't know what the in and out features would be in this case. Any suggestions please?
You could define a custom parameter tensor and store the w_i in it, then compute the weighted sum of the matrices with those weights. Register the custom parameter like so: W = nn.Parameter(torch.rand(3)) You can either compute the sum by hand: w0, w1, w2 = W res = w0*A + w1*B + w2*C Or instead use torch.einsum for conciseness. Do note this approach doesn't depend on the number of components in your linear sum: X = torch.stack([A, B, C]) res = torch.einsum('mbchw,m->bchw', X, W) Good thing you pointed out nn.Linear, you can actually pull it off with this layer. Notice that nn.Linear can take an n-dimensional tensor as input: (batch_size, *, in_features) and will output (batch_size, *, out_features), where * can be any number of dimensions. In your case in_features is the number of weights and out_features is 1. Looking at it differently: you only require 1 neuron to compute the weighted sum. One important thing though is that the "feature" dimension must be last, i.e. the stack must be done on the last dimension: W = nn.Linear(3, 1) res = W(torch.stack([A, B, C], dim=-1)) And the output shape will be: >>> res.shape torch.Size([64, 48, 48, 48, 1])
https://stackoverflow.com/questions/68547474/
Understanding pytorch graph generation
If I run the code: import torch x = torch.ones(5) # input tensor y = torch.zeros(3) # expected output w = torch.randn(5, 3, requires_grad=True) b = torch.randn(3, requires_grad=True) z = torch.matmul(x, w)+b loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y) loss.backward() loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y) loss.backward() pytorch spits the error "Trying to backward through the graph a second time" at me. My understanding is that calling the loss calculation line again doesn't actually change the computational graph, which is why I get this error. However, when I call the code: import torch x = torch.ones(5) # input tensor y = torch.zeros(3) # expected output w = torch.randn(5, 3, requires_grad=True) b = torch.randn(3, requires_grad=True) z = torch.matmul(x, w)+b loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y) loss.backward() z = torch.matmul(x, w)+b loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y) loss.backward() it works fine (without error), and I don't understand why this is the case, in either case, I haven't made any change to the computational graph?
This is a good question. In my opinion, this is particularly important in order to fully grasp this feature of PyTorch. Which is paramount when dealing with complex setups, whether it involves multiple backward passes or partial backward passes. In both examples your computational graph is: y ---------------------------->| b ----------->| | w ------->| | x --> x @ w + b = z --> BCE(z, y) = loss However, the "computational graph" as we call it is just a representation of the dependencies that exist in the computation of that result. The way this result is tied to the tensors that lead to the final computation, i.e. the intermediate results of the graph. When you compute loss, a link remains between loss and all other tensors, this is needed in order to compute the backward pass. First scenario In your first example you compute loss, which by itself creates a "computational graph". Notice the grad_fn attribute appearing on your loss variable. This is the callback function used to navigate back up the graph. In your case F.binary_cross_entropy_with_logits will output a grad_fn=<BinaryCrossEntropyWithLogitsBackward>. This being said, you successfully compute the backward pass by calling backward(), doing so backpropagates up the graph using the graph_fn's functions and updating the parameters' grad attribute. Then you define loss using the same z, the one that is tied to the previous graph. You're essentially going from the previous computational graph above to the following one: y ---------------------------->| b ----------->| | w ------->| | x --> x @ w + b = z --> BCE(z, y) = loss \--> BCE(z, y) = loss # 2nd definition of loss The second definition of loss overwrites the previous value for loss, yes. However, it won't affect the first portion of the graph which still exists: as I explained z is still tied to the initial tensors x, w, and b. By default, during a backward pass, the activations are freed. This means you won't be able to perform a second pass. To sum up your first example, the second loss.backward() will go through loss's (the new one) grad_fn, then reach the initial z whose activations have already been freed. This results in the error you've encountered: Trying to backward pass through the graph a second time Second scenario In the second example, you redefine the whole network by recomputing z from the leaf tensor x and consequently loss with intermediate output z and leaf tensor y. Conceptually, the state of the computation graphs is: y ---------------------------->| b ----------->| | w ------->| | x --> x @ w + b = z --> BCE(z, y) = loss \-> x @ w + b = z --> BCE(z, y) = loss # 2nd definition of loss This means that by calling loss.backward a first time you do a backward pass on the initial graph. Then, after having redefined both z and loss, you end up creating a new graph altogether: 2nd branch of the illustration above. The 2nd backward pass ends up working since you're not on the same graph.
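If you genuinely need two backward passes through the same graph (as in the first scenario), the usual remedy, not shown above, is to keep the intermediate activations alive with retain_graph=True; a minimal sketch based on the question's code:

import torch

x = torch.ones(5)
y = torch.zeros(3)
w = torch.randn(5, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)

z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)

loss.backward(retain_graph=True)  # keep activations so the graph can be reused
loss.backward()                   # second pass now works; gradients accumulate in w.grad and b.grad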
https://stackoverflow.com/questions/68547874/
Is there a way to use torch.nn.DataParallel with CPU?
I'm trying to change some PyTorch code so that it can run on the CPU. The model was trained with torch.nn.DataParallel() so when I load the pre-trained model and try using it I must use nn.DataParallel() which I am currently doing like this: device = torch.device("cuda:0") net = nn.DataParallel(net, device_ids=[0]) net.load_state_dict(torch.load(PATH)) net.to(device) However after I switched my torch device to cpu like this: device = torch.device('cpu') net = nn.DataParallel(net, device_ids=[0]) net.load_state_dict(torch.load(PATH)) net.to(device) I got this error: File "C:\My\Program\win-py362-venv\lib\site-packages\torch\nn\parallel\data_parallel.py", line 156, in forward "them on device: {}".format(self.src_device_obj, t.device)) RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu I'm assuming that it's still looking for CUDA because that's what device_ids is set to but is there a way to make it use the CPU? This post from the PyTorch repo makes me think that I can but it doesn't explain how. If not is there any other way to use a model trained with DataParallel on your CPU?
When you use torch.nn.DataParallel() it implements data parallelism at the module level. According to the doc: The parallelized module must have its parameters and buffers on device_ids[0] before running this DataParallel module. So even though you are doing .to(torch.device('cpu')) it is still expecting to pass the data to a GPU. However since DataParallel is a container you can bypass it and get just the original module by doing this: net = net.module.to(device) Now it will access the original module you defined before you applied the DataParallel container.
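An alternative that skips the DataParallel wrapper entirely on a CUDA-less machine (a sketch using the question's net and PATH; it assumes the checkpoint was saved from a DataParallel-wrapped model, so its keys carry a "module." prefix):

import torch

device = torch.device("cpu")

# load the checkpoint onto the CPU and strip the "module." prefix added by DataParallel
state_dict = torch.load(PATH, map_location=device)
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

net.load_state_dict(state_dict)  # net is the plain, unwrapped model
net.to(device)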
https://stackoverflow.com/questions/68551032/
Convert tuple of arrays into tensors to then stack them in pytorch
I have this tuple called train, containing 2 arrays, first (10000,10), second (1000): (array([[0.0727882 , 0.82148589, 0.9932996 , ..., 0.9604997 , 0.48725072, 0.87095636], [0.28299425, 0.94904277, 0.69887889, ..., 0.59392614, 0.96375439, 0.23708264], [0.44746802, 0.46455956, 0.99537243, ..., 0.03077313, 0.60441346, 0.5284877 ], ..., [0.74851845, 0.59469311, 0.20880812, ..., 0.82080042, 0.16033365, 0.94729764], [0.56686195, 0.35784948, 0.15531381, ..., 0.95415527, 0.88907735, 0.39981913], [0.61606041, 0.30158736, 0.65476444, ..., 0.0637397 , 0.76772078, 0.85285724]]), array([ 9.78050432, 21.84804394, 13.14748592, ..., 17.86811178, 14.94744237, 9.80791838])) I've tried this to them stack them but there is a shape mismatch seq = torch.as_tensor(train[0], dtype=None, device=None) label = torch.as_tensor(train[1], dtype=None, device=None) #seq.size() = torch.Size([10000,10]) #label.size() = torch.Size([10000]) My goal is to stack 10000 tensors of len(10) with the 10000 tensors label. Be able to treat a seq as single tensor like people do with images. Where one instance would look like this like this: [tensor(0.0727882 , 0.82148589, 0.9932996 , ..., 0.9604997 , 0.48725072, 0.87095636]), tensor(9.78050432)] Thanks you,
Where/what is your error exactly? Because, to get your desired output it looks like you could just run: stack = [[seq[i],label[i]] for i in range(seq.shape[0])] But, if you want a sequence of size [10000,11], then you need to expand the dims of the label tensor to be concatenatable (made that word up) along the second axis: label = torch.unsqueeze(label,1) stack = torch.cat([seq,label],1)
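If the end goal is to iterate over (sequence, label) pairs the way one does with images, torch.utils.data.TensorDataset may be the more idiomatic route (a sketch with random data standing in for the asker's train tuple):

import torch
from torch.utils.data import DataLoader, TensorDataset

seq = torch.rand(10000, 10)   # stands in for torch.as_tensor(train[0])
label = torch.rand(10000)     # stands in for torch.as_tensor(train[1])

dataset = TensorDataset(seq, label)  # pairs seq[i] with label[i]
print(dataset[0])                    # (tensor of shape (10,), scalar tensor)

loader = DataLoader(dataset, batch_size=32, shuffle=True)
x_batch, y_batch = next(iter(loader))
print(x_batch.shape, y_batch.shape)  # torch.Size([32, 10]) torch.Size([32])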
https://stackoverflow.com/questions/68551961/
ValueError: Expected input batch_size (24) to match target batch_size (8)
Got many links to solve this read different stackoverflow answer related to this but not able to figure it out . My image size is torch.Size([8, 3, 16, 16]). My architechture is as below class Net(nn.Module): def __init__(self): super(Net, self).__init__() # linear layer (784 -> 1 hidden node) self.fc1 = nn.Linear(16 * 16, 768) self.fc2 = nn.Linear(768, 64) self.fc3 = nn.Linear(64, 10) self.dropout = nn.Dropout(p=.5) def forward(self, x): # flatten image input x = x.view(-1, 16 * 16) # add hidden layer, with relu activation function x = self.dropout(F.relu(self.fc1(x))) x = self.dropout(F.relu(self.fc2(x))) x = F.log_softmax(self.fc3(x), dim=1) return x # specify loss function criterion = nn.NLLLoss() # specify optimizer optimizer = torch.optim.Adam(model.parameters(), lr=.003) # number of epochs to train the model n_epochs = 30 # suggest training between 20-50 epochs model.train() # prep model for training for epoch in range(n_epochs): # monitor training loss train_loss = 0.0 ################### # train the model # ################### for data, target in trainloader: # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update running training loss train_loss += loss.item()*data.size(0) # print training statistics # calculate average loss over an epoch train_loss = train_loss/len(trainloader.dataset) print('Epoch: {} \tTraining Loss: {:.6f}'.format( epoch+1, train_loss )) i am getting value error as ValueError: Expected input batch_size (24) to match target batch_size (8). how to fix it . My batch size is 8 and input image size is (16*16).And i have 10 class classification here .
Your input images have 3 channels, therefore your input feature size is 16*16*3, not 16*16. Currently, you treat each channel as a separate instance, leading to a classifier output - after the x.view(-1, 16*16) flattening - of shape (24, 16*16). Clearly, the batch size doesn't match: it is supposed to be 8, not 8*3 = 24. You could either: Switch to a CNN to handle multi-channel inputs (here 3 channels). Use a self.fc1 with 16*16*3 input features (see the sketch below). If the input is RGB, maybe even convert it to a 1-channel grayscale map.
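A minimal sketch of the second option (flattening all three channels together); only the flattening and fc1 change, the rest of your training loop can stay as it is.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(16 * 16 * 3, 768)   # 3 channels flattened together
        self.fc2 = nn.Linear(768, 64)
        self.fc3 = nn.Linear(64, 10)
        self.dropout = nn.Dropout(p=.5)

    def forward(self, x):
        x = x.view(x.size(0), -1)                # (8, 3, 16, 16) -> (8, 768), batch size preserved
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        return F.log_softmax(self.fc3(x), dim=1)

model = Net()
out = model(torch.randn(8, 3, 16, 16))           # out.shape == (8, 10)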
https://stackoverflow.com/questions/68552575/
How to add a custom localization loss function in PyTorch?
I have a PyTorch network that predicts the location of devices using Wi-Fi RSS data. So the output layer contains two neurons corresponding to x and y coordinates. I want to use mean localization error as the loss function. ie. loss = mean(sqrt((x_predicted - X_real)^2 + (y_predicted - y_real)^2)) The equation finds the error distance between predicted and real locations. How can I include this instead of MSE?
As you can see in the tutorial, just implement a criterion function (you can name it however you like) and use that: def custom_loss(output, label): return torch.mean(torch.sqrt(torch.sum((output - label)**2, dim=1))) (the dim=1 makes the sum run over the x/y coordinates of each sample before the square root, matching your formula) and then in the code (stolen from the linked tutorial): for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = custom_loss(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') HTH
https://stackoverflow.com/questions/68556396/
Setting `remove_unused_columns=False` causes error in HuggingFace Trainer class
I am training a model using HuggingFace Trainer class. The following code does a decent job: !pip install datasets !pip install transformers from datasets import load_dataset from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer, AutoTokenizer dataset = load_dataset('glue', 'mnli') model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=3) tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True) def preprocess_function(examples): return tokenizer(examples["premise"], examples["hypothesis"], truncation=True, padding=True) encoded_dataset = dataset.map(preprocess_function, batched=True) args = TrainingArguments( "test-glue", learning_rate=3e-5, per_device_train_batch_size=8, num_train_epochs=3, remove_unused_columns=True ) trainer = Trainer( model, args, train_dataset=encoded_dataset["train"], tokenizer=tokenizer ) trainer.train() However, setting remove_unused_columns=False results in the following error: ValueError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis) 704 if not is_tensor(value): --> 705 tensor = as_tensor(value) 706 ValueError: too many dimensions 'str' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) 8 frames /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis) 720 ) 721 raise ValueError( --> 722 "Unable to create tensor, you should probably activate truncation and/or padding " 723 "with 'padding=True' 'truncation=True' to have batched tensors with the same length." 724 ) ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Any suggestions are highly appreciated.
It fails because the value in line 705 is a list of str, which points to hypothesis. And hypothesis is one of the ignored_columns in trainer.py. /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis) 704 if not is_tensor(value): --> 705 tensor = as_tensor(value) See the below snippet from trainer.py for the remove_unused_columns flag: def _remove_unused_columns(self, dataset: "datasets.Dataset", description: Optional[str] = None): if not self.args.remove_unused_columns: return dataset if self._signature_columns is None: # Inspect model forward signature to keep only the arguments it accepts. signature = inspect.signature(self.model.forward) self._signature_columns = list(signature.parameters.keys()) # Labels may be named label or label_ids, the default data collator handles that. self._signature_columns += ["label", "label_ids"] columns = [k for k in self._signature_columns if k in dataset.column_names] ignored_columns = list(set(dataset.column_names) - set(self._signature_columns)) There could be a potential pull request on HuggingFace to provide a fallback option in case the flag is False. But in general, it looks like that the flag implementation is not complete for e.g. it can't be used with Tensorflow. On the contrary, it doesn't hurt to keep it True, unless there is some special need.
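If keeping remove_unused_columns=False is really needed, one workaround (a sketch, not verified against every datasets version) is to drop the raw text columns yourself before handing the dataset to the Trainer, so the data collator only sees tensorizable fields; the column names below follow the GLUE MNLI schema and the variables from the question.

# drop the raw string columns manually instead of relying on the Trainer to do it
encoded_dataset = encoded_dataset.remove_columns(["premise", "hypothesis", "idx"])

trainer = Trainer(
    model,
    args,
    train_dataset=encoded_dataset["train"],
    tokenizer=tokenizer,
)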
https://stackoverflow.com/questions/68557028/
PyTorch: element-wise max over all data points with the same output bin index
I am using PyTorch (1.8). Is there a clever way to take an element-wise max over all data points with the same output index? Let's say I have a data tensor of size (N, M), and an index tensor of size (N,) containing indices [0, K). Now I want to bin the data tensor into a tensor of size (K, M) according to the index values, but if two or more datapoints are binned into the same slot, then I want to keep an element-wise max. I've seen a naive approach like the one below, but doesn't give the element-wise max but just stores whatever is binned last. data = torch.randn((N, M)) index = torch.randint(K, (N,)) output = torch.zeros((K, M)) output[index] = data At the moment I am implementing a custom cuda kernel to solve this issue, but would like to know if this can be solved with standard PyTorch. Edit: Minimal example: data = torch.tensor([[10,1],[9,2],[8,3],[7,4],[6,5]]) index = torch.tensor([2,1,0,1,2], dtype=torch.long) # something happens # expected output: # [[8, 3], [9, 4], [10, 5]]
PyTorch doesn't seem to have a native implementation for this yet, but there is a repository which does exactly this. PyTorch Scatter What I was describing seems to correspond with scatter_max. from torch_scatter import scatter_max scatter_max(data, index, dim=0)
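For reference, more recent PyTorch releases (roughly 1.12 and later, so newer than the 1.8 in the question) also ship a built-in that can do this without the extra package; a sketch on the question's minimal example, assuming such a version is available:

import torch

data = torch.tensor([[10., 1.], [9., 2.], [8., 3.], [7., 4.], [6., 5.]])
index = torch.tensor([2, 1, 0, 1, 2], dtype=torch.long)
K = 3

idx = index.unsqueeze(1).expand_as(data)          # broadcast the bin index over the feature dim
out = torch.zeros(K, data.size(1))
out.scatter_reduce_(0, idx, data, reduce="amax", include_self=False)
print(out)  # tensor([[ 8.,  3.], [ 9.,  4.], [10.,  5.]])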
https://stackoverflow.com/questions/68559006/
Adam Optimizer not Updating Values
I am trying to use Adam optimizer to obtain certain values outside of a neural network. My technique wasn't working so I created a simple example to see if it works: a = np.array([[0.0,1.0,2.0,3.0,4.0], [0.0,1.0,2.0,3.0,4.0]]) b = np.array([[0.1,0.2,0.0,0.0,0.0], [0.0,0.5,0.0,0.0,0.0]]) a = torch.from_numpy(a) b = torch.from_numpy(b) a.requires_grad = True b.requires_grad = True optimizer = torch.optim.Adam( [b], lr=0.01, weight_decay=0.001 ) iterations = 200 for i in range(iterations ): loss = torch.sqrt(((a.detach() - b.detach()) ** 2).sum(1)).mean() loss.requires_grad = True optimizer.zero_grad() loss.backward() optimizer.step() if i % 10 == 0: print(b) print("loss:", loss) My intuition was b should get close to a as much as possible to reduce loss. But I see no change in any of the values of b and loss stays exactly the same. What am I missing here? Thanks.
You are detaching b, meaning the gradient won't flow all the way to b when backpropagating, i.e. b won't change! Additionally, you don't need to state requires_grad = True on the loss, as this is done automatically since one of the operands has the requires_grad flag on. loss = torch.sqrt(((a.detach() - b) ** 2).sum(1)).mean()
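Putting it together, a minimal sketch of the corrected loop (b kept attached to the graph, no detach on it, no requires_grad forced on the loss):

import numpy as np
import torch

a = torch.from_numpy(np.array([[0.0, 1.0, 2.0, 3.0, 4.0],
                               [0.0, 1.0, 2.0, 3.0, 4.0]]))
b = torch.from_numpy(np.array([[0.1, 0.2, 0.0, 0.0, 0.0],
                               [0.0, 0.5, 0.0, 0.0, 0.0]]))
b.requires_grad = True                       # only b needs gradients, a is a fixed target

optimizer = torch.optim.Adam([b], lr=0.01, weight_decay=0.001)

for i in range(200):
    loss = torch.sqrt(((a - b) ** 2).sum(1)).mean()   # b stays in the graph
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# b now moves towards a and the loss decreases over the iterations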
https://stackoverflow.com/questions/68562067/
Operate loss with respect to every single datapoint in batch
When using a batch of size 64, I need to work with the loss value of every single data point. I know I can use reduction='none' when creating a loss function object and then get the per-sample loss values. But it would be better to keep a regular loss object without setting reduction='none', to stay consistent with other code. Is there any way to obtain per-sample loss values without reduction='none'?
Why don't you wrap the function with your predefined options? def custom_loss(*args, **kwargs): return some_builtin(*args, **kwargs, reduction='none') Where some_builtin would be a builtin PyTorch loss, e.g. torch.nn.functional.l1_loss, torch.nn.functional.mse_loss, ... (see the usage sketch below).
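A small usage sketch of this wrapper, showing how the per-sample values can still be collapsed to the usual scalar when needed (the names here are made up for illustration):

import torch
import torch.nn.functional as F

def custom_loss(*args, **kwargs):
    # same as F.mse_loss but always element-wise
    return F.mse_loss(*args, **kwargs, reduction='none')

pred = torch.randn(64, 10)
target = torch.randn(64, 10)

elementwise = custom_loss(pred, target)      # shape (64, 10)
per_sample = elementwise.mean(dim=1)         # one loss value per data point, shape (64,)
scalar = per_sample.mean()                   # identical to F.mse_loss(pred, target)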
https://stackoverflow.com/questions/68570166/
how to replace torch.Tensor to a value in python
My predictions in PyTorch are coming out as torch([0]), torch([1]), ..., torch([25]) for the respective 26 letters of the alphabet, i.e. A, B, C, ..., Z. A prediction of torch([0]) should become A, and so on. Any idea how to do this conversion?
To convert indices of the alphabet to the actual letters, you can: alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' # the Alphabet pred = torch.randint(0, 26, (30,)) # your prediction, int tensor with values in range [0, 25] # convert to characters pred_string = ''.join(alphabet[c_] for c_ in pred) the output would be something like: 'KEFOTIJBNTAPWHSBXUIQKTTJNSCNDF' This will also work for pred with a single element, in which case the conversion can be done more compactly: alphabet[pred.item()]
https://stackoverflow.com/questions/68573682/
GAN Model Code Modification — 3 Channels to 1 Channel
This model is designed for processing 3-channel images (RGB) while I need to handle some black and white image data (grayscale), so I’d like to change the “ch” parameter to “1” instead of “3.” The full code is available here — https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html If we just change this parameter — “nc = 3” --> “nc = 1” — without adjusting generator’s and discriminator’s code blocks, executing just gives an error message: RuntimeError: Given groups=1, weight of size [64, 1, 4, 4], expected input[128, 3, 64, 64] to have 1 channels, but got 3 channels instead Is there a guide on how to modify this or, perhaps, calculate these values manually using this formula (shape section)? Please advise.
A grayscale image is a "special case" of a color image: a pixel has a gray color iff the red, green, and blue channels are all equal. Thus a pixel with values [200, 10, 30] will be red-ish in color, while a pixel with values [180, 180, 180] will have a gray color. Therefore, the simplest way to process grayscale images using a pre-trained RGB model is to duplicate the single channel of the grayscale image 3 times to generate an RGB-like image with three channels that only has gray colors (see the sketch below).
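A minimal sketch of that channel duplication for a batch of grayscale tensors (the shapes follow the DCGAN tutorial's 64x64 setup and are only an example):

import torch

gray_batch = torch.randn(128, 1, 64, 64)        # B x 1 x H x W grayscale batch
rgb_like = gray_batch.repeat(1, 3, 1, 1)        # B x 3 x H x W, all three channels equal

If the data is loaded with torchvision, transforms.Grayscale(num_output_channels=3) should achieve the same thing at load time.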
https://stackoverflow.com/questions/68573859/
CUDA: Out of memory error on 128 images dataset
I'm trying to train YOLOR on coco128 dataset in Google Colab on coco128 dataset. The training set contains 112 images. The validation set contains 8 images. The testing set contains 8 images. But, it throws cuda out of memory error. How could it be?? the dataset has only 128 images in total. Using torch 1.7.0 CUDA:0 (Tesla T4, 15109MB) Namespace(adam=False, batch_size=8, bucket='', cache_images=False, cfg='cfg/yolor_p6.cfg', data='data/coco128.yaml', device='0', epochs=300, evolve=False, exist_ok=False, global_rank=-1, hyp='./data/hyp.scratch.1280.yaml', image_weights=False, img_size=[1280, 1280], local_rank=-1, log_imgs=16, multi_scale=False, name='yolor_p6', noautoanchor=False, nosave=False, notest=False, project='runs/train', rect=False, resume=False, save_dir='runs/train/yolor_p613', single_cls=False, sync_bn=False, total_batch_size=8, weights='', workers=8, world_size=1) Start Tensorboard with "tensorboard --logdir runs/train", view at http://localhost:6006/ 2021-07-29 13:35:48.259076: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 Hyperparameters {'lr0': 0.01, 'lrf': 0.2, 'momentum': 0.937, 'weight_decay': 0.0005, 'warmup_epochs': 3.0, 'warmup_momentum': 0.8, 'warmup_bias_lr': 0.1, 'box': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.5, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0, 'flipud': 0.0, 'fliplr': 0.5, 'mosaic': 1.0, 'mixup': 0.0} Model Summary: 665 layers, 37265016 parameters, 37265016 gradients, 81.564040600 GFLOPS Optimizer groups: 145 .bias, 145 conv.weight, 149 other Scanning labels ../coco128/train2017.cache3 (110 found, 0 missing, 2 empty, 0 duplicate, for 112 images): 112it [00:00, 11214.18it/s] Scanning labels ../coco128/val2017.cache3 (8 found, 0 missing, 0 empty, 0 duplicate, for 8 images): 8it [00:00, 4100.00it/s] NumExpr defaulting to 2 threads. Image sizes 1280 train, 1280 test Using 2 dataloader workers Logging results to runs/train/yolor_p613 Starting training for 300 epochs... Epoch gpu_mem box obj cls total targets img_size 0% 0/14 [00:00<?, ?it/s]Traceback (most recent call last): File "train.py", line 539, in <module> train(hyp, opt, device, tb_writer, wandb) File "train.py", line 289, in train pred = model(imgs) # forward File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/content/drive/MyDrive/YOLOR/yolor/models/models.py", line 543, in forward return self.forward_once(x) File "/content/drive/MyDrive/YOLOR/yolor/models/models.py", line 604, in forward_once x = module(x) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 117, in forward input = module(input) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/activation.py", line 394, in forward return F.silu(input, inplace=self.inplace) File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 1741, in silu return torch._C._nn.silu(input) RuntimeError: CUDA out of memory. 
Tried to allocate 100.00 MiB (GPU 0; 14.76 GiB total capacity; 13.70 GiB already allocated; 67.75 MiB free; 13.76 GiB reserved in total by PyTorch) 0% 0/14 [00:03<?, ?it/s]
VRAM usage has nothing to do with how many train/val examples there are; it depends on the model, the image size, and the batch size. 1280x1280 is a massive image size - on a 16 GB GPU you will probably only be able to train at a batch size of 1 or 2. Either use a lower resolution/smaller model, a GPU with more VRAM, or decrease your batch size. Also try NVIDIA AMP (automatic mixed precision); a sketch of it is below.
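A hedged sketch of what enabling AMP typically looks like in a plain PyTorch loop; the model, criterion, optimizer and loader below are tiny stand-ins, not YOLOR's actual training code.

import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, 3, padding=1).cuda()          # stand-in for the real detector
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)) for _ in range(4)]

scaler = torch.cuda.amp.GradScaler()
for imgs, targets in loader:
    imgs, targets = imgs.cuda(), targets.cuda()
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                   # forward + loss in mixed precision
        loss = criterion(model(imgs), targets)
    scaler.scale(loss).backward()                     # scaled backward to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()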
https://stackoverflow.com/questions/68577169/
pytorch summary fails with huggingface model
I want a summary of a PyTorch model downloaded from huggingface. Am I doing something wrong here? from torchinfo import summary from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2) summary(model, input_size=(16, 512)) Gives the error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs) 257 if isinstance(x, (list, tuple)): --> 258 _ = model.to(device)(*x, **kwargs) 259 elif isinstance(x, dict): 11 frames /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1530 output_hidden_states=output_hidden_states, -> 1531 return_dict=return_dict, 1532 ) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 988 inputs_embeds=inputs_embeds, --> 989 past_key_values_length=past_key_values_length, 990 ) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 214 if inputs_embeds is None: --> 215 inputs_embeds = self.word_embeddings(input_ids) 216 token_type_embeddings = self.token_type_embeddings(token_type_ids) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input) 159 input, self.weight, self.padding_idx, self.max_norm, --> 160 self.norm_type, self.scale_grad_by_freq, self.sparse) 161 /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2044 RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding) The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) <ipython-input-8-4f70d4e6fa82> in <module>() 5 
else: 6 # Can't get this working ----> 7 summary(model, input_size=(16, 512)) #, device='cpu') 8 #print(model) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs) 190 ) 191 summary_list = forward_pass( --> 192 model, x, batch_dim, cache_forward_pass, device, **kwargs 193 ) 194 formatting = FormattingOptions(depth, verbose, col_names, col_width, row_settings) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs) 268 "Failed to run torchinfo. See above stack traces for more details. " 269 f"Executed layers up to: {executed_layers}" --> 270 ) from e 271 finally: 272 if hooks is not None: RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
There's a bug [also reported] in torchinfo library [torchinfo.py] in the last line shown. When dtypes is None, it is by default creating torch.float tensors whereas forward method of bert model uses torch.nn.embedding which expects only int/long tensors. def process_input( input_data: Optional[INPUT_DATA_TYPE], input_size: Optional[INPUT_SIZE_TYPE], batch_dim: Optional[int], device: Union[torch.device, str], dtypes: Optional[List[torch.dtype]] = None, ) -> Tuple[CORRECTED_INPUT_DATA_TYPE, Any]: """Reads sample input data to get the input size.""" if input_size is not None: if dtypes is None: dtypes = [torch.float] * len(input_size) If you try modifying the line to the following, it works fine. dtypes = [torch.int] * len(input_size) EDIT (Direct solution w/o changing their internal code): from torchinfo import summary from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2) summary(model, input_size=(2, 512), dtypes=['torch.IntTensor']) Alternate: For a simple summary, you could use print(model) instead of summary function.
https://stackoverflow.com/questions/68577198/
When to use torch.no_grad() is safe in forward propagation? Why does it hurt my model badly?
I have trained a CNN model whose forward-prop is like: *Part1*: learnable preprocess​ *Part2*: Mixup which does not need to calculate gradient *Part3*: CNN backbone and classifier head Both part1 and part3 need to calculate the gradient and need update weights when back-prop, but part2 is just a simple mixup and don't need gradient, so I tried wrapped this Mixup with torch.no_grad() to save computational resource and speed up training, which it indeed speed my training a lot, but the model`s prediction accuracy drops a lot. I'm wondering if Mixup does not need to calculate the gradient, why wrap it with torch.no_grad() hurt the model`s ability so much, is it due to loss of the learned weights of Part1, or something like break the chain between Part1 and Part2? Edit: Thanks @Ivan for your reply and it sounds reasonable, I also have the same thought but don't know how to prove it. In my experiment when I apply torch.no_grad() on Part2, the GPU memory consumption drops a lot, and training is much faster, so I guess this Part2 still needs gradient even it does not have learnable parameters. So can we conclude that torch.no_grad() should not be applied between 2 or more learnable blocks, otherwise it would drop the learning ability of blocks before this no_grad() part?
but part2 is just simple mixup and don't need gradient It actually does! In order to compute the gradient flow and backpropagate successfully to part1 of your model (which is learnable, according to you) you need to compute the gradients on part2 as well. Even though there are no learnable parameters on part2 of your model. What I'm assuming happened when you applied torch.no_grad() on part2 is that only part3 of your model was able to learn while part1 stayed untouched. Edit So can we conclude that torch.no_grad() should not be applied between 2 or more learnable blocks, otherwise it would drop the learning ability of blocks before this no_grad() part? The reasoning is simple: to compute the gradient on part1 you need to compute the gradient on intermediate results, irrespective of the fact that you won't use those gradients to update the tensors on part2. So indeed, you are correct.
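A tiny sketch that reproduces the effect: a parameter-free middle step wrapped in torch.no_grad() cuts the graph, so the first block never receives a gradient (the two Linear layers are stand-ins for part1 and part3).

import torch
import torch.nn as nn

part1 = nn.Linear(4, 4)        # learnable "preprocess"
part3 = nn.Linear(4, 1)        # learnable head

x = torch.randn(8, 4)
h = part1(x)
with torch.no_grad():          # parameter-free "part2" (e.g. a mixup-like op)
    h = h * 2                  # graph is cut here: h no longer requires grad
out = part3(h)
out.sum().backward()

print(part1.weight.grad)        # None: part1 got no gradient
print(part3.weight.grad.shape)  # torch.Size([1, 4]): part3 still learns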
https://stackoverflow.com/questions/68579174/
Pytorch model gradients no updating with some custom code
I've put together some computation which I'm trying to compute a loss on the result, and compute the gradients of all the parameters of the model w.r.t. that loss. The problem is that nestled in the computation is a tunable model that I want to be able to tune (eventually). Right now I am just trying to confirm that I can see the gradients of the model parameters when they are updated with backward(), which I cannot, This is the problem. Below I post code, the output, and the desired output. class ExpModelTunable(torch.nn.Module): def __init__(self): super(ExpModelTunable, self).__init__() self.alpha = torch.nn.Parameter( torch.tensor(1.0, requires_grad=True) ) self.beta = torch.nn.Parameter( torch.tensor(1.0, requires_grad=True) ) def forward(self, t): return self.alpha * torch.exp( - self.beta * t ) def func_f(t, t_list): mu = torch.tensor(0.13191110355, requires_grad=True) running_sum = torch.sum( torch.tensor( [ f(t-ti) for ti in t_list ], requires_grad=True ) ) return mu + running_sum def pytorch_objective_tunable(u, t_list): global U steps = torch.linspace(t_list[-1].item(),u.item(),100, requires_grad=True) func_values = torch.tensor( [ func_f(steps[i], t_list) for i in range(len(steps)) ], requires_grad=True ) return torch.log(U) + torch.trapz(func_values, steps) def newton_method(function, func, initial, t_list, iteration=200, convergence=0.0001): for i in range(iteration): previous_data = initial.clone() value = function(initial, t_list) initial.data -= (value / func(initial.item(), t_list)).data if torch.abs(initial - previous_data) < torch.tensor(convergence): return initial return initial # return our final after iteration # call starts f = ExpModelTunable() U = torch.rand(1, requires_grad=True) initial_x = torch.tensor([.1], requires_grad=True) t_list = torch.tensor([0.0], requires_grad=True) result = newton_method(pytorch_objective_tunable, func_f, initial_x, t_list) print("Next Arrival at ", result.item()) This prints, the output is correct, all good here: Next Arrival at 4.500311374664307. My problem occures here: loss = result - torch.tensor(1) loss.backward() print( result.grad ) for param in f.parameters(): print(param.grad) output: tensor([1.]) None #this should not be None None #this should not be None So we can see the result variable's gradient is updating, but the model f's parameters' gradients aren't getting updated. I tried to go back through all the computation, all the code is here, and make sure any and everything has requires_grad=True but still I can't get it to work. This should work right? Anyone have any tips? Thanks.
There are a few issues with your code. Straight off you can tell if the model can at least initiate a backpropagation by looking at your output tensor: >>> result tensor([...], requires_grad=True) It doesn't have a grad_fn, so you already know it's not connected to a graph. Now for debugging the issues, here are some tips: First, you should never mutate .data or use .item if you're planning on backpropagating. This will essentially kill the graph! As any operation performed after won't be attached to a graph. You actually don't need to use requires_grad most of the time. Do note nn.Parameter will assign requires_grad=True to the tensor by default. When working with list comprehensions inside your PyTorch pipeline, you can wrap the list with a torch.stack which is very effective to keep it tidy. I wouldn't use a global if I was you... Here is the corrected version: class ExpModelTunable(nn.Module): def __init__(self): super(ExpModelTunable, self).__init__() self.alpha = nn.Parameter(torch.ones(1)) self.beta = nn.Parameter(torch.ones(1)) def forward(self, t): return self.alpha * torch.exp(-self.beta*t) f = ExpModelTunable() def func_f(t, t_list): mu = torch.tensor(0.13191110355) running_sum = torch.stack([f(t-ti) for ti in t_list]).sum() return mu + running_sum def pytorch_objective_tunable(u, t_list): global U steps = torch.linspace(t_list[-1].item(), u.item(), 100) func_values = torch.stack([func_f(steps[i], t_list) for i in range(len(steps))]) return torch.log(U) + torch.trapz(func_values, steps) # return torch.trapz(func_values, steps) def newton_method(function, func, initial, t_list, iteration=1, convergence=0.0001): for i in range(iteration): previous_data = initial.clone() value = function(initial, t_list) initial -= (value / func(initial, t_list)) if torch.abs(initial - previous_data) < torch.tensor(convergence): return initial return initial # return our final after iteration U = torch.rand(1, requires_grad=True) initial_x = torch.tensor([.1]) t_list = torch.tensor([0.0], requires_grad=True) result = newton_method(pytorch_objective_tunable, func_f, initial_x, t_list) Notice now the grad_fn attached to result: >>> result tensor([...], grad_fn=<SubBackward0>)
https://stackoverflow.com/questions/68581464/
Load model by from_pretrained() then model.train()?
I got one question about torch. I load pre-training model like: model_name = "bert-base-uncased" model = BertTokenizer.from_pretrained(model_name) and I read To train the model, you should first set it back in training mode with model.train(). but I don't understand how it does work. when I read document of from_pretrained(), there isn't any explanation about train(). How it works?
.train() is a method of torch.nn.Module. It notifies the module to switch to training mode, see documentation. What exactly happens under the hood is up to the actual Module; in many modules it doesn't change anything, so without knowing your network we cannot say what exactly happens. But, for instance, in the torch.nn.BatchNorm layers (BatchNorm1d/2d/3d) and in torch.nn.Dropout it has an effect (see the small demonstration below).
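A quick way to see the switch in action, using Dropout as an example (a sketch, nothing BERT-specific):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))   # roughly half the entries zeroed, the survivors scaled to 2.0

drop.eval()
print(drop(x))   # identity: dropout is disabled in eval mode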
https://stackoverflow.com/questions/68585059/
pytorch summary fails with huggingface model II: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
I want a summary of a PyTorch model downloaded from huggingface: from torchinfo import summary from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2) summary(model, input_size=(16, 512), dtypes=['torch.IntTensor']) (See SO for why the dtypes is needed.) However, I am getting the error Expected all tensors to be on the same device, ... even though I have not provided any tensors. See the output below. How can I fix this? --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs) 257 if isinstance(x, (list, tuple)): --> 258 _ = model.to(device)(*x, **kwargs) 259 elif isinstance(x, dict): 11 frames /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1530 output_hidden_states=output_hidden_states, -> 1531 return_dict=return_dict, 1532 ) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 988 inputs_embeds=inputs_embeds, --> 989 past_key_values_length=past_key_values_length, 990 ) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 214 if inputs_embeds is None: --> 215 inputs_embeds = self.word_embeddings(input_ids) 216 token_type_embeddings = self.token_type_embeddings(token_type_ids) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1070 -> 1071 result = forward_call(*input, **kwargs) 1072 if _global_forward_hooks or self._forward_hooks: /usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input) 159 input, self.weight, self.padding_idx, self.max_norm, --> 160 self.norm_type, self.scale_grad_by_freq, self.sparse) 161 /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2044 RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! 
(when checking arugment for argument index in method wrapper_index_select) The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) <ipython-input-13-d6f4e53beef7> in <module>() 3 else: 4 # Can't get this working. See https://stackoverflow.com/questions/68577198/pytorch-summary-fails-with-huggingface-model ----> 5 summary(model, input_size=(16, 512), dtypes=['torch.IntTensor']) 6 print(model) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs) 190 ) 191 summary_list = forward_pass( --> 192 model, x, batch_dim, cache_forward_pass, device, **kwargs 193 ) 194 formatting = FormattingOptions(depth, verbose, col_names, col_width, row_settings) /usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs) 268 "Failed to run torchinfo. See above stack traces for more details. " 269 f"Executed layers up to: {executed_layers}" --> 270 ) from e 271 finally: 272 if hooks is not None: RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: [] Output from transformers-cli: - `transformers` version: 4.9.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
A working solution (or workaround?) is kind of obvious: summary(model, input_size=(16, 512), dtypes=['torch.IntTensor'], device='cpu')
https://stackoverflow.com/questions/68585678/
Specify GCC version for nvcc without root priviledges
I am using a GPU cluster where the submitted jobs are managed by Slurm. I don't have admin / root priviledges on that server. I am currently trying to build a project that contains .cpp and .cu files. I do that by calling TORCH_CUDA_ARCH_LIST=7.2 CC=gcc-7 CXX=g++-7 python setup.py install, as the cluster uses CUDA 10.1 and runs V100 GPUs (hence the gencode is sm_70). However, the build crashes with the following error message: building <filename> extension gcc-7 -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes (...): error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ error: command '/<somepath>/anaconda3/envs/pytorch14/bin/nvcc' failed with exit status 1 So, as one can see by the gcc-7 call in the 2nd line, the python script is using the right compiler, but unfortunately, the nvcc call uses the system-wide gcc symlink, which is: /usr/bin/gcc: symbolic link to gcc-9. I have found a couple of answers online (including this and this) and have tried the suggested steps. But: as I don't have root access, I cannot create a new symlink / change the existing symlink to another installed gcc version, e.g. /usr/bin/gcc-7: doing ln -s /usr/bin/gcc-7 /usr/bin/gcc gives me a ln: failed to create symbolic link '/usr/bin/gcc': File exists error, and copying the files into /usr/local/bin, as suggested in other answers on SO, wont work either because of the missing priviledges. I'm really at a loss here and feel that this might be a dead end. Does anybody have any suggestions? For reference, this is what my setup.py looks like: from setuptools import setup from torch.utils.cpp_extension import BuildExtension, CUDAExtension setup( name='noise_cuda', ext_modules=[ CUDAExtension('noise_cuda', [ 'noise_cuda.cpp', 'noise_cuda_kernel.cu', ]), ], cmdclass={ 'build_ext': BuildExtension })
I'm not a pytorch user, but if I read the docs right, this should work: import sysconfig from setuptools import setup from torch.utils.cpp_extension import BuildExtension, CUDAExtension setup( name='noise_cuda', ext_modules=[ CUDAExtension('noise_cuda', [ 'noise_cuda.cpp', 'noise_cuda_kernel.cu', ], extra_compile_args={'cxx': sysconfig.get_config_var('CFLAGS').split(), 'nvcc': ['-ccbin=/usr/bin/gcc-7']}), ], cmdclass={ 'build_ext': BuildExtension })
https://stackoverflow.com/questions/68587087/
How do I find the number of elements taking up memory for an expanded view of a pytorch tensor?
Tensor.expand() returns a new view of an underlying tensor, but doesn't actually allocate more memory for the expanded view. If I have a tensor that is the result of calling expand() (some unknown number of times), how can I tell how many cells are actually allocated in memory for the tensor (in my actual use-case, I really just care about knowing whether or not that number is 1)? Is there something like what I'm calling elements_in_memory as used in the following?: import torch t = torch.tensor(4.0) t2 = t.expand(3, 4) t3 = t2.unsqueeze(0).expand(5, 3, 4) # I'm looking for something like this (which doesn't work) assert t.elements_in_memory == 1 assert t2.elements_in_memory == 1 assert t3.elements_in_memory == 1 Some things I've tried: t.data_ptr refers to the first element of the underlying tensor in memory, so t.data_ptr == t2.data_ptr, but that doesn't tell me how many elements.
It looks like t.storage().size() is what I'm after. From the torch.Tensor documentation: Each tensor has an associated torch.Storage, which holds its data. The tensor class also provides multi-dimensional, strided view of a storage and defines numeric operations on it. Tensor.storage() returns a reference to the storage used for the tensor: import torch t = torch.tensor(4.0) t2 = t.expand(3, 4) t3 = t2.unsqueeze(0).expand(5, 3, 4) assert t.storage().size() == 1 assert t2.storage().size() == 1 assert t3.storage().size() == 1 t4 = torch.ones(3, 4) t5 = t4.unsqueeze(0).expand(5, 3, 4) assert t4.storage().size() == 12 assert t5.storage().size() == 12 Note that the underlying storage might also include more elements than are exposed by some particular view (this wasn't relevant to my use-case). For example torch.ones(10)[3:6].storage().size() == 10.
https://stackoverflow.com/questions/68596433/
Reduce torch tensor
For a boolean tensor of shape (15,10), I want to perform bitwise_or along axis 0 so that the resulting tensor would be of shape 10. torch.bitwise_or does not support this. I know it can done in numpy using np.bitwise_or.reduce(x,axis=0). I did not find something similar in torch. How to reduce torch tensor?
Hi, I figured out the problem here. If you look at the docstring for the reduce function, it's essentially just a for loop accumulating the result starting from the identity element: # ufunc docstring # op.identity is 0 r = op.identity # op = ufunc for i in range(len(A)): r = op(r, A[i]) return r So to solve and fix your problem: import numpy as np import torch bool_arr = np.random.randint(0, 2, (15, 10), dtype=np.bool) # create a bool arr tensor_bool_arr = torch.tensor(bool_arr) # Create torch version output_np = np.bitwise_or.reduce(bool_arr, axis=0) # array([ True, True, True, True, True, True, True, True, True, True]) # Create a pytorch equivalent of bitwise reduce r = torch.tensor(0) for i in range(len(tensor_bool_arr)): r = torch.bitwise_or(r, tensor_bool_arr[i]) torch_output = r.type(torch.bool) # tensor([True, True, True, True, True, True, True, True, True, True]) assert torch_output.shape[0] == output_np.shape[0]
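For boolean tensors specifically, a shorter route (a sketch, assuming the input really is dtype torch.bool) is torch.any along dim 0, which is exactly an OR-reduction:

import torch

x = torch.randint(0, 2, (15, 10), dtype=torch.bool)
reduced = x.any(dim=0)                 # shape (10,), True where any row is True
# matches np.bitwise_or.reduce(x.numpy(), axis=0)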
https://stackoverflow.com/questions/68596639/
I get this error using PyTorch: RuntimeError: gather_out_cpu(): Expected dtype int64 for index
I'm trying to make an AI with PyTorch, but I get this error: RuntimeError: gather_out_cpu(): Expected dtype int64 for index And this is my function: def learn(self, batch_state, batch_next_state, batch_reward, batch_action): outputs = self.model(batch_state).gather(1, batch_action.unsqueeze(1)).squeeze(1) next_outputs = self.model(batch_next_state).detach().max(1)[0] target = self.gamma * next_outputs + batch_reward td_loss = F.smooth_l1_loss(outputs, target) self.optimizer.zero_grad() td_loss.backward(retain_variables = True) self.optimizer.step()
You need to change the data type of your batch_action tensor before passing it to torch.gather. def learn(...): batch_action = batch_action.type(torch.int64) outputs = ... ... # or outputs = self.model(batch_state).gather(1, batch_action.type(torch.int64).unsqueeze(1)).squeeze(1)
https://stackoverflow.com/questions/68598000/
Torch mv behavior not understandable
The following screenshots show that torch.mv is unusable in a situation that obviously seem to be correct... how is this possible, any idea what can be the problem? this first image shows the correct situation, where the vector has 10 rows for a matrix of 10 columns, but I showed the other also just in case. Also swapping w.mv(x) for x.mv(w) does not make a difference. However, the @ operator works... the thing is that for my own reasons I want to use mv, so I would like to know what the problem is.
According to documentation: torch.mv(input, vec, *, out=None) → Tensor If input is a (n×m) tensor, vec is a 1-D tensor of size m, out will be 1-D of size n. The x here should be 1-D, but in your case it's 10x1 (2D). You can remove extra dimension (or create a single dimension x) >>> w.mv(x.squeeze()) tensor([ 0.1432, -2.0639, -2.1871, -1.8837, 0.7333, -0.4000, 0.4023, -1.1318, 0.0423, -1.2136]) >>> w @ x tensor([[ 0.1432], [-2.0639], [-2.1871], [-1.8837], [ 0.7333], [-0.4000], [ 0.4023], [-1.1318], [ 0.0423], [-1.2136]])
https://stackoverflow.com/questions/68598003/
Need to implement Deep Learning architecture quite similar to Siamese Network
I must implement this network: Similar to a siamese network with a contrastive loss. My problem is S1/F1. The paper tells this: "F1 and S1 are neural networks that we use to learn the unit-normalized embeddings for the face and speech modalities, respectively. In Figure 1, we depict F1 and S1 in both training and testing routines. They are composed of 2D convolutional layers (purple), max-pooling layers (yellow), and fully connected layers (green). ReLU non-linearity is used between all layers. The last layer is a unit-normalization layer (blue). For both face and speech modalities, F1 and S1 return 250-dimensional unit-normalized embeddings". My question is: How can apply a 2D convolutional layer (purple) to input with shape (number of videos, number of frames, features)? What is the last layer? Batch norm? F.normalize?
I will give an answer to your two questions without going too much into details: If you're working with a CNN, you're most likely having spatial information in your input, that is your input is a two dimensional multi-channel tensor (*, channels, height, width), not a feature vector (*, features). You simply won't be able to apply a convolution on your input (at least a 2D conv), if you don't retain two-dimensionality. The last layer is described as a "unit-normalization" layer. This is merely the operation of making the vector's norm unit (equal to 1). You can do this by dividing the said vector by its norm.
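For point 2, a minimal sketch of such a unit-normalization layer using F.normalize (the 250-d size comes from the paper quote; the rest is generic):

import torch
import torch.nn as nn
import torch.nn.functional as F

class UnitNorm(nn.Module):
    def forward(self, x):
        return F.normalize(x, p=2, dim=1)   # divide each embedding by its L2 norm

emb = torch.randn(32, 250)                  # batch of 250-d embeddings
out = UnitNorm()(emb)
print(out.norm(dim=1))                      # all (approximately) ones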
https://stackoverflow.com/questions/68600336/
RuntimeError: Given groups=721, expected weight to be at least 721 at dimension 0, but got weight of size [3, 1, 5, 5] instead
Here is the complete error: It is worth mentioning that the input image is of size 480 by 721. Traceback (most recent call last): File "/home/amir/PycharmProjects/LPTN/loadPretrainedModel.py", line 222, in <module> output = model(images) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/amir/PycharmProjects/LPTN/loadPretrainedModel.py", line 180, in forward pyr_A = self.lap_pyramid.pyramid_decom(img=real_A_full) File "/home/amir/PycharmProjects/LPTN/loadPretrainedModel.py", line 65, in pyramid_decom filtered = self.conv_gauss(current, self.kernel) File "/home/amir/PycharmProjects/LPTN/loadPretrainedModel.py", line 58, in conv_gauss out = torch.nn.functional.conv2d(img, kernel, groups=img.shape[1]) RuntimeError: Given groups=721, expected weight to be at least 721 at dimension 0, but got weight of size [3, 1, 5, 5] instead I am trying to run inference on the LPTN (Laplacian Pyramid Translation Network) model. model = LPTN() state_dict = torch.load('/home/amir/PycharmProjects/LPTN/experiments/pretrained_models/net_g_FiveK_numhigh3.pth', map_location='cpu') model.load_state_dict(state_dict, strict=False) model.eval() img = cv2.imread("/home/amir/PycharmProjects/LPTN/scripts/data_preparation/datasets/FiveK/FiveK_480p/train/A/2.jpg") img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) images = torch.from_numpy(np.asarray([img])).float() output = model(images) prediction = torch.argmax(output) Here is the function where the error occurs: def conv_gauss(self, img, kernel): padding = (2, 2, 2, 2) img = torch.nn.functional.pad(img, padding, mode='reflect') out = torch.nn.functional.conv2d(img, kernel, groups=img.shape[1])
Well, I don't really have your full code, but I am making a guess based on what you have shown: import cv2 import torch img = cv2.imread("/home/amir/PycharmProjects/LPTN/scripts/data_preparation/datasets/FiveK/FiveK_480p/train/A/2.jpg") img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # H, W, C img_tensor = torch.tensor(img, dtype=torch.float32) # Convert to torch tensor img_tensor = img_tensor / 255. # Normalize to [0 - 1] range (but depends on the model) img_tensor = img_tensor.permute(2, 0, 1) # Reorder to C, H, W (torch requires this format) img_tensor = img_tensor.unsqueeze(0) # Becomes this format B, C, H, W # Set model to eval mode model.eval() # Run forward pass with torch.no_grad(): # Don't track gradients; speeds up inference predictions = model(img_tensor) # Get back predictions from model
https://stackoverflow.com/questions/68603870/
AttributeError: module transformers has no attribute TFGPTNeoForCausalLM
I cloned this repository/documentation https://huggingface.co/EleutherAI/gpt-neo-125M I get the below error whether I run it on google collab or locally. I also installed transformers using this pip install git+https://github.com/huggingface/transformers and made sure the configuration file is named as config.json 5 tokenizer = AutoTokenizer.from_pretrained("gpt-neo-125M/",from_tf=True) ----> 6 model = AutoModelForCausalLM.from_pretrained("gpt-neo-125M",from_tf=True) 7 8 3 frames /usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getattr__(self, name) AttributeError: module transformers has no attribute TFGPTNeoForCausalLM Full code: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M",from_tf=True) model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M",from_tf=True) transformers-cli env results: transformers version: 4.10.0.dev0 Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.29 Python version: 3.8.5 PyTorch version (GPU?): 1.9.0+cpu (False) Tensorflow version (GPU?): 2.5.0 (False) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Using distributed or parallel set-up in script?: Both collab and locally have TensorFlow 2.5.0 version
My solution was to first edit the source code to remove the line that prepends "TF" to the class name, since the correct transformers class is GPTNeoForCausalLM, but somewhere in the source code a "TF" was manually added in front of it. Secondly, before cloning the repository you must run git lfs install. This link helped me install git lfs properly: https://askubuntu.com/questions/799341/how-to-install-git-lfs-on-ubuntu-16-04
https://stackoverflow.com/questions/68604289/
How to efficiently calculate distance matrix in pytorch for two sets 3D tensors with different sizes?
I have tensors X of shape BxCxHxW and Y of shape NxCxHxW. B is batch size, C is channels, H is height, W is width, and N will be constant for any batch. Basically I want the BxN distance matrix of distances between a set of B images and another set of N images. I tried using torch.cdist by reshaping X as 1xBx(C*H*W) and Y as 1xNx(C*H*W) by unsqueezing a dimension and flattening the last 3 channels, but I did a sanity check and got wrong answers with this method. I want L2 distance.
According to the documentation page for torch.cdist, the two inputs and outputs are shaped in the following manner: x1: (B, P, M), x2: (B, R, M), and output: (B, P, R). To match your case: B=1, P=B, R=N, while M=C*H*W (i.e. flattened). As you just explained. So you are basically going for: >>> torch.cdist(X[None].flatten(2), Y[None].flatten(2)) If you're not convinced, you can check with the following method: >>> dist = [] >>> for x in X: ... for y in Y: ... dist.append((x-y).norm()) And compare the torch.cdist result with torch.tensor(dist).reshape(len(X), len(Y)).
https://stackoverflow.com/questions/68604861/
What is difference between nn.Module and nn.Sequential
I am just learning to use PyTorch as a beginner. If anyone is familiar with PyTorch, would you tell me the difference between nn.Module and nn.Sequential? My questions are What is the advantage to use nn.Module instead of nn.Sequential? Which is regularly utilised to build the model? How we should select nn.Module or nn.Sequential?
TLDR; answering your questions What is the advantage to use nn.Module instead of nn.Sequential? While nn.Module is the base class to implement PyTorch models, nn.Sequential is a quick way to define sequential neural network structures inside or outside an existing nn.Module. Which is regularly utilized to build the model? Both are widely used. How we should select nn.Module or nn.Sequential? All neural networks are implemented with nn.Module. If the layers are sequentially used (self.layer3(self.layer2(self.layer1(x)))), you can leverage nn.Sequential to not have to define the forward function of the model. I should start by mentioning that nn.Module is the base class for all neural network modules in PyTorch. As such nn.Sequential is actually a direct subclass of nn.Module, you can look for yourself on this line. When creating a new neural network, you would usually go about creating a new class and inheriting from nn.Module, and defining two methods: __init__ (the initializer, where you define your layers) and forward (the inference code of your module, where you use your layers). That's all you need, since PyTorch will handle the backward pass with Autograd. Here is an example of a module: class NN(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(10, 4) self.fc2 = nn.Linear(4, 2) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) return x If the model you are defining is sequential, i.e. the layers are called sequentially on the input, one by one, then you can simply use nn.Sequential. As I explained earlier, nn.Sequential is a special kind of nn.Module made for this particular widespread type of neural network. The equivalent here is: class NN(nn.Sequential): def __init__(self): super().__init__( nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 2), nn.ReLU()) Or a simpler way of putting it is: NN = nn.Sequential( nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 2), nn.ReLU()) The objective of nn.Sequential is to quickly implement sequential modules such that you are not required to write the forward definition, it being implicitly known because the layers are sequentially called on the outputs. In a more complicated module though, you might need to use multiple sequential submodules. For instance, take a CNN classifier: you could define a nn.Sequential for the CNN part, then define another nn.Sequential for the fully connected classifier section of the model.
https://stackoverflow.com/questions/68606661/
How to calculate correct Cross Entropy between 2 tensors in Pytorch when target is not one-hot?
I am confused about the calculation of cross entropy in Pytorch. If I want to calculate the cross entropy between 2 tensors and the target tensor is not a one-hot label, which loss should I use? It is quite common to calculate the cross entropy between 2 probability distributions instead of the predicted result and a determined one-hot label. The basic loss function CrossEntropyLoss forces the target as the index integer and it is not eligible in this case. BCELoss seems to work but it gives an unexpected result. The expected formula to calculate the cross entropy is But BCELoss calculates the BCE of each dimension, which is expressed as -yi*log(pi)-(1-yi)*log(1-pi) Compared with the first equation, the term -(1-yi)*log(1-pi) should not be involved. Here is an example using BCELoss and we can see the second term is involved in each dimension's result. And that make the result different from the correct one. import torch.nn as nn import torch from math import log a = torch.Tensor([0.1,0.2,0.7]) y = torch.Tensor([0.2,0.2,0.6]) L = nn.BCELoss(reduction='none') y1 = -0.2 * log(0.1) - 0.8 * log(0.9) print(L(a, y)) print(y1) And the result is tensor([0.5448, 0.5004, 0.6956]) 0.5448054311250702 If we sum the results of all the dimensions, the final cross entropy doesn't correspond to the expected one. Because each one of these dimensions involves the -(1-yi)*log(1-pi) term. In constrast, Tensorflow can calculate the correct cross entropy value with CategoricalCrossentropy. Here is the example with the same setting and we can see the cross entropy is calculated in the same way as the first formula. import tensorflow as tf from math import log L = tf.losses.CategoricalCrossentropy() a = tf.convert_to_tensor([0.1,0.2,0.7]) y = tf.convert_to_tensor([0.2,0.2,0.6]) y_ = -0.2* log(0.1) - 0.2 * log(0.2) - 0.6 * log(0.7) print(L(y,a), y_) tf.Tensor(0.9964096, shape=(), dtype=float32) 0.9964095674488687 Is there any function can calculate the correct cross entropy in Pytorch, using the first formula, just like CategoricalCrossentropy in Tensorflow?
The fundamental problem is that you are incorrectly using the BCELoss function. Cross-entropy loss is what you want. It is used to compute the loss between two arbitrary probability distributions. Indeed, its definition is exactly the equation that you provided: where p is the target distribution and q is your predicted distribution. See this StackOverflow post for more information. In your example where you provide the line y = tf.convert_to_tensor([0.2, 0.2, 0.6]) you are implicitly modeling a multi-class classification problem where the target class can be one of three classes (the length of that tensor). More specifically, that line is saying that for this one data instance, class 0 has probably 0.2, class 1 has probability 0.2, and class 2 has probability 0.6. The problem you are having is that PyTorch's BCELoss computes the binary cross-entropy loss, which is formulated differently. Binary cross-entropy loss computes the cross-entropy for classification problems where the target class can be only 0 or 1. In binary cross-entropy, you only need one probability, e.g. 0.2, meaning that the probability of the instance being class 1 is 0.2. Correspondingly, class 0 has probability 0.8. If you give the same tensor [0.2, 0.2, 0.6] to BCELoss, you are modeling a situation where there are three data instances, where data instance 0 has probability 0.2 of being class 1, data instance 1 has probability 0.2 of being class 1, and data instance 2 has probability 0.6 of being class 1. Now, to your original question: If I want to calculate the cross entropy between 2 tensors and the target tensor is not a one-hot label, which loss should I use? Unfortunately, PyTorch does not have a cross-entropy function that takes in two probability distributions. See this question: https://discuss.pytorch.org/t/how-should-i-implement-cross-entropy-loss-with-continuous-target-outputs/10720 The recommendation is to implement your own function using its equation definition. Here is code that works: def cross_entropy(input, target): return torch.mean(-torch.sum(target * torch.log(input), 1)) y = torch.Tensor([[0.2, 0.2, 0.6]]) yhat = torch.Tensor([[0.1, 0.2, 0.7]]) cross_entropy(yhat, y) # tensor(0.9964) It provides the answer that you wanted.
https://stackoverflow.com/questions/68609414/
How do I insert a 1D Torch tensor into an existing 2D Torch tensor into a specific row?
I want to insert a 1D Torch tensor at a specific row number of a 2D Torch tensor (using Pytorch). The 1D tensor and the rows of the 2D tensor will always have the same length, so you can easily visualize this as a table with rows and columns. The 2D tensor is the existing table and I would like to be able to specify the row number at which the 1D tensor (or row) will be inserted. When I say I'd like to use Pytorch, I mean that I don't want to turn anything into a non-Pytorch list and send the computations back and forth over the CPU and GPU. The tensors are all already on my CUDA device and I would like to keep them there for the sake of time. The 2D tensor three_by_four tensor([[0.7421, 0.1584, 0.3231, 0.4840], [0.4065, 0.7646, 0.9677, 0.4537], [0.5226, 0.6216, 0.9420, 0.0605]], device='cuda:1') The 1D tensor one_by_three tensor([[0.3095, 0.8460, 0.2900, 0.9683]], device='cuda:1') The best I was able to do was get the new row (the 1D tensor) appended to the bottom or top of the 2D tensor with torch.cat depending on the order. The 1D tensor added to the top. torch.cat([one_by_three, three_by_four]) tensor([[0.3095, 0.8460, 0.2900, 0.9683], [0.7421, 0.1584, 0.3231, 0.4840], [0.4065, 0.7646, 0.9677, 0.4537], [0.5226, 0.6216, 0.9420, 0.0605]], device='cuda:1') The 1D tensor added to the bottom torch.cat([three_by_four, one_by_three]) tensor([[0.7421, 0.1584, 0.3231, 0.4840], [0.4065, 0.7646, 0.9677, 0.4537], [0.5226, 0.6216, 0.9420, 0.0605], [0.3095, 0.8460, 0.2900, 0.9683]], device='cuda:1') What I would like, for example, is to be able to put it at position 1 or 2, as in this example: tensor([[0.7421, 0.1584, 0.3231, 0.4840], [0.4065, 0.7646, 0.9677, 0.4537], [0.3095, 0.8460, 0.2900, 0.9683], [0.5226, 0.6216, 0.9420, 0.0605]], device='cuda:1')
As of now the best I could find from torch import tensor, cat x = tensor([[0.7421, 0.1584, 0.3231, 0.4840], [0.4065, 0.7646, 0.9677, 0.4537], [0.5226, 0.6216, 0.9420, 0.0605]]) y = tensor([[0.3095, 0.8460, 0.2900, 0.9683]]) cat([x[:1], y, x[1:]]) ''' tensor([[0.7421, 0.1584, 0.3231, 0.4840], [0.3095, 0.8460, 0.2900, 0.9683], [0.4065, 0.7646, 0.9677, 0.4537], [0.5226, 0.6216, 0.9420, 0.0605]]) ''' https://discuss.pytorch.org/t/is-there-a-way-to-insert-a-tensor-into-an-existing-tensor/14642
https://stackoverflow.com/questions/68610135/
pos_weight in binary cross entropy calculation
When we deal with imbalanced training data (there are more negative samples and less positive samples), usually pos_weight parameter will be used. The expectation of pos_weight is that the model will get higher loss when the positive sample gets the wrong label than the negative sample. When I use the binary_cross_entropy_with_logits function, I found: bce = torch.nn.functional.binary_cross_entropy_with_logits pos_weight = torch.FloatTensor([5]) preds_pos_wrong = torch.FloatTensor([0.5, 1.5]) label_pos = torch.FloatTensor([1, 0]) loss_pos_wrong = bce(preds_pos_wrong, label_pos, pos_weight=pos_weight) preds_neg_wrong = torch.FloatTensor([1.5, 0.5]) label_neg = torch.FloatTensor([0, 1]) loss_neg_wrong = bce(preds_neg_wrong, label_neg, pos_weight=pos_weight) However: >>> loss_pos_wrong tensor(2.0359) >>> loss_neg_wrong tensor(2.0359) The losses derived from wrong positive samples and negative samples are the same, so how does pos_weight work in the imbalanced data loss calculation?
TLDR; both losses are identical because you are computing the same quantity: both inputs are identical, the two batch elements and labels are just switched. Why are you getting the same loss? I think you got confused in the usage of F.binary_cross_entropy_with_logits (you can find a more detailed documentation page with nn.BCEWithLogitsLoss). In your case your input shape (aka the output of your model) is one-dimensional, which means you only have a single logit x per element, not two. In your example you have preds_pos_wrong = torch.FloatTensor([0.5, 1.5]) label_pos = torch.FloatTensor([1, 0]) This means your batch size is 2, and since by default the function is averaging the losses of the batch elements, you end up with the same result for BCE(preds_pos_wrong, label_pos) and BCE(preds_neg_wrong, label_neg). The two elements of your batch are just switched. You can verify this very easily by not averaging the loss over the batch-elements with the reduction='none' option: >>> F.binary_cross_entropy_with_logits(preds_pos_wrong, label_pos, pos_weight=pos_weight, reduction='none') tensor([2.3704, 1.7014]) >>> F.binary_cross_entropy_with_logits(preds_neg_wrong, label_neg, pos_weight=pos_weight, reduction='none') tensor([1.7014, 2.3704]) Looking into F.binary_cross_entropy_with_logits: That being said the formula for the binary cross-entropy is: bce = -[y*log(sigmoid(x)) + (1-y)*log(1- sigmoid(x))] Where y (respectively sigmoid(x)) is for the positive class associated with that logit, and 1 - y (resp. 1 - sigmoid(x)) is the negative class. The documentation could be more precise on the weighting scheme for pos_weight (not to be confused with weight, which is the weighting of the different logits output). The idea with pos_weight, as you said, is to weigh the positive term, not the whole term. bce = -[w_p*y*log(sigmoid(x)) + (1-y)*log(1- sigmoid(x))] Where w_p is the weight for the positive term, to compensate for the positive to negative sample imbalance. In practice, this should be w_p = #negative/#positive. Therefore: >>> w_p = torch.FloatTensor([5]) >>> preds = torch.FloatTensor([0.5, 1.5]) >>> label = torch.FloatTensor([1, 0]) With the builtin loss function, >>> F.binary_cross_entropy_with_logits(preds, label, pos_weight=w_p, reduction='none') tensor([2.3704, 1.7014]) Compared with the manual computation: >>> z = torch.sigmoid(preds) >>> -(w_p*label*torch.log(z) + (1-label)*torch.log(1-z)) tensor([2.3704, 1.7014])
https://stackoverflow.com/questions/68611397/
Pytorch: best practice to save list of tensors?
I use tensors to do transformations and then I save them in a list. Later, I will make it a dataset using Dataset, then finally a DataLoader to train my model. To do it, I can simply use: l = [tensor1, tensor2, tensor3,...] dataset = Dataset.TensorDataset(l) dataloader = DataLoader(dataset) I wonder what the best practice is for doing so, to avoid RAM overflow if the size of l grows? Can something like an iterator avoid it?
Save the tensors to disk (dataloader0 here stands for whatever currently produces your tensors and my_folder for the directory you save them into): for idx, tensor in enumerate(dataloader0): torch.save(tensor, f"{my_folder}/tensor{idx}.pt") Create a dataset that loads them lazily from disk (this needs import os, import torch and from torch.utils.data import Dataset): class FolderDataset(Dataset): def __init__(self, folder): self.files = os.listdir(folder) self.folder = folder def __len__(self): return len(self.files) def __getitem__(self, idx): return torch.load(f"{self.folder}/{self.files[idx]}") And then you can build your dataloader on top of it. If you can't hold the whole dataset in memory, some file system loading like this is required.
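A minimal usage sketch on top of that (assuming the saved tensors all have the same shape so the default collate function can stack them; my_folder is the same placeholder directory as above):

from torch.utils.data import DataLoader

dataset = FolderDataset(my_folder)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
for batch in loader:
    ...   # only the tensors of the current batch are read from disk and kept in RAM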
https://stackoverflow.com/questions/68617340/
Roll rows with variable step value on a 2D tensor
I have a tensor a that I would like to first mask using mask and then discard the remaining frames. To ensure the output tensor is of the correct shape, padding should fill in the remaining values at the end. I can assume there is only a single continuous sequence of True's in each row of the mask. e.g. a = torch.arange(1,17).reshape(4,4) # tensor([[ 1, 2, 3, 4], # [ 5, 6, 7, 8], # [ 9, 10, 11, 12], # [13, 14, 15, 16]]) mask = torch.tensor([[False, True, True, False], [False, True, True, True], [ True, False, False, False], [ True, True, True, True]]) # desired output (assuming padding value is 0): # tensor([[ 2, 3, 0, 0], # [ 6, 7, 8, 0], # [ 9, 0, 0, 0], # [13, 14, 15, 16]]) I can achieve the desired output by applying torch.masked_select followed by torch.nn.functional.pad on each row in a loop but I am struggling to think of a way to do this more efficiently in batches. I have also looked into starting by using torch.roll and zeroing after appropriate indexes, but this function can only be applied across an entire dimension and not a custom amount of roll per row.
By applying torch.sort on the mask itself you can achieve the desired result. Indeed, if you sort the boolean values you can manage to move the False values to the end of each row and keep the True values at the beginning. Do note the result might vary depending on the sorting algorithm, since an unstable sort may shuffle elements that compare equal. As @Seraf Fej pointed out, you can use the stable=True option on torch.sort so that the order of equivalent items is preserved. Then use the indices of the sorting to gather the values of a with torch.gather. Finally, you will need to mask the resulting matrix to replace the discarded values with the appropriate padding. >>> a tensor([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]]) >>> mask tensor([[False, True, True, False], [False, True, True, True], [ True, False, False, False], [ True, True, True, True]]) Sort the mask: >>> values, indices = mask.sort(1, descending=True, stable=True) >>> values tensor([[ True, True, False, False], [ True, True, True, False], [ True, False, False, False], [ True, True, True, True]]) >>> indices tensor([[1, 2, 0, 3], [1, 2, 3, 0], [0, 1, 2, 3], [0, 1, 2, 3]]) Gather from indices and mask with values: >>> a.gather(1, indices)*values tensor([[ 2, 3, 0, 0], [ 6, 7, 8, 0], [ 9, 0, 0, 0], [13, 14, 15, 16]]) You can easily extend to any padding value using torch.where: >>> torch.where(values, a.gather(1, indices), -1) tensor([[ 2, 3, -1, -1], [ 6, 7, 8, -1], [ 9, -1, -1, -1], [13, 14, 15, 16]]) Or using the inverse mask ~values, weighted by the padding value: >>> a.gather(1, indices)*values -1*~values tensor([[ 2, 3, -1, -1], [ 6, 7, 8, -1], [ 9, -1, -1, -1], [13, 14, 15, 16]])
https://stackoverflow.com/questions/68621175/
RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
I am trying to train PeleeNet in PyTorch and I am getting the error in the title at train.py line 80, using the pelee_voc train configuration.
Turning the shuffle parameter off in the dataloader solved it. Got the answer from here.
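For reference, a minimal sketch of two possible workarounds (dataset and batch size are placeholders; which one applies depends on where the generator/device mismatch comes from in your setup):

import torch
from torch.utils.data import DataLoader

# 1) avoid the random sampler's CPU generator entirely
loader = DataLoader(dataset, batch_size=32, shuffle=False)

# 2) or keep shuffling but hand the DataLoader a generator on the expected device
loader = DataLoader(dataset, batch_size=32, shuffle=True,
                    generator=torch.Generator(device='cuda'))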
https://stackoverflow.com/questions/68621210/
Create custom gradient descent in pytorch
I am trying to use PyTorch autograd to implement my own batch gradient descent algorithm. I want to create a simple one-layer neural net with a linear activation function and the mean squared error as the loss function. I can't seem to get my head around what exactly is happening in the backward pass and how PyTorch understands my outputs. I have coded one class specifying the linear function in the forward pass, and in the backward pass, I calculated the gradients with respect to each variable. I also coded a class for the MSE function and specified the gradients with respect to ITS variables in the backward pass. When I run a simple gradient descent algorithm, I get no errors, but the MSE only goes down in the first iteration, and after that, it continually goes up. This leads me to believe that I have made a mistake, but I am not sure, where. Does anybody see the error in my code? Also, if somebody could explain to me what exactly the grad_output stands for, that would be amazing. Here are the functions: import torch from torch.autograd import Function from torch.autograd import gradcheck class Context: def __init__(self): self._saved_tensors = () def save_for_backward(self, *args): self._saved_tensors = args @property def saved_tensors(self): return self._saved_tensors class MSE(Function): @staticmethod def forward(ctx, yhat, y): ctx.save_for_backward(yhat, y) q = yhat.size()[0] mse = torch.sum((yhat-y)**2)/q return mse @staticmethod def backward(ctx, grad_output): yhat, y = ctx.saved_tensors q = yhat.size()[0] return 2*grad_output*(yhat-y)/q, -2*grad_output*(yhat-y)/q class Linear(Function): @staticmethod def forward(ctx, X, W, b): rows = X.size()[0] yhat = torch.mm(X,W) + b.repeat(rows,1) ctx.save_for_backward(yhat, X, W) return yhat @staticmethod def backward(ctx, grad_output): yhat, X, W = ctx.saved_tensors q = yhat.size()[0] p = yhat.size()[1] return torch.transpose(X, 0, 1), W, torch.ones(p) And here is my gradient descent: import torch from torch.utils.tensorboard import SummaryWriter from tp1moi import MSE, Linear, Context x = torch.randn(50, 13) y = torch.randn(50, 3) w = torch.randn(13, 3, requires_grad=True) b = torch.randn(3, requires_grad=True) epsilon = 0.05 writer = SummaryWriter() for n_iter in range(100): linear = Linear.apply mse = MSE.apply loss = mse(linear(x, w, b), y) writer.add_scalar('Loss/train', loss, n_iter) print(f"Itérations {n_iter}: loss {loss}") loss.backward() with torch.no_grad(): w -= epsilon*w.grad b -= epsilon*b.grad w.grad.zero_() b.grad.zero_() Here is one output I got (they all look similar to this one): Itérations 0: loss 72.99712371826172 Itérations 1: loss 7.509067535400391 Itérations 2: loss 7.309497833251953 Itérations 3: loss 7.124927997589111 Itérations 4: loss 6.955358982086182 Itérations 5: loss 6.800788402557373 Itérations 6: loss 6.661219596862793 Itérations 7: loss 6.536648750305176 Itérations 8: loss 6.427078723907471 Itérations 9: loss 6.3325090408325195 Itérations 10: loss 6.252938747406006 Itérations 11: loss 6.188369274139404 Itérations 12: loss 6.138798713684082 Itérations 13: loss 6.104228973388672 Itérations 14: loss 6.084658145904541 Itérations 15: loss 6.0800886154174805 Itérations 16: loss 6.090517520904541 Itérations 17: loss 6.115947723388672 Itérations 18: loss 6.156377792358398 Itérations 19: loss 6.2118072509765625 Itérations 20: loss 6.2822370529174805 Itérations 21: loss 6.367666721343994 Itérations 22: loss 6.468096733093262 Itérations 23: loss 6.583526611328125 Itérations 24: loss 6.713956356048584 Itérations 25: 
loss 6.859385967254639 Itérations 26: loss 7.019815444946289 Itérations 27: loss 7.195245742797852 Itérations 28: loss 7.385674953460693 Itérations 29: loss 7.591104507446289 Itérations 30: loss 7.811534881591797 Itérations 31: loss 8.046965599060059 Itérations 32: loss 8.297393798828125 Itérations 33: loss 8.562823295593262 Itérations 34: loss 8.843254089355469 Itérations 35: loss 9.138683319091797 Itérations 36: loss 9.449112892150879 Itérations 37: loss 9.774543762207031 Itérations 38: loss 10.114972114562988 Itérations 39: loss 10.470401763916016 Itérations 40: loss 10.840831756591797 Itérations 41: loss 11.226261138916016 Itérations 42: loss 11.626690864562988 Itérations 43: loss 12.042119979858398 Itérations 44: loss 12.472548484802246 Itérations 45: loss 12.917980194091797 Itérations 46: loss 13.378408432006836 Itérations 47: loss 13.853838920593262 Itérations 48: loss 14.344267845153809 Itérations 49: loss 14.849695205688477 Itérations 50: loss 15.370124816894531 Itérations 51: loss 15.905555725097656 Itérations 52: loss 16.455984115600586 Itérations 53: loss 17.02141571044922 Itérations 54: loss 17.601844787597656 Itérations 55: loss 18.19727325439453 Itérations 56: loss 18.807701110839844 Itérations 57: loss 19.43313217163086 Itérations 58: loss 20.07356071472168 Itérations 59: loss 20.728988647460938 Itérations 60: loss 21.3994197845459 Itérations 61: loss 22.084848403930664 Itérations 62: loss 22.7852783203125 Itérations 63: loss 23.50070571899414 Itérations 64: loss 24.23113441467285 Itérations 65: loss 24.9765625 Itérations 66: loss 25.73699188232422 Itérations 67: loss 26.512422561645508 Itérations 68: loss 27.302854537963867 Itérations 69: loss 28.108285903930664 Itérations 70: loss 28.9287166595459 Itérations 71: loss 29.764144897460938 Itérations 72: loss 30.614578247070312 Itérations 73: loss 31.48000717163086 Itérations 74: loss 32.36043930053711 Itérations 75: loss 33.2558708190918 Itérations 76: loss 34.16630172729492 Itérations 77: loss 35.091732025146484 Itérations 78: loss 36.032161712646484 Itérations 79: loss 36.98759460449219 Itérations 80: loss 37.95802307128906 Itérations 81: loss 38.943458557128906 Itérations 82: loss 39.943885803222656 Itérations 83: loss 40.959320068359375 Itérations 84: loss 41.98974609375 Itérations 85: loss 43.03517532348633 Itérations 86: loss 44.09561538696289 Itérations 87: loss 45.171043395996094 Itérations 88: loss 46.261474609375 Itérations 89: loss 47.366905212402344 Itérations 90: loss 48.487335205078125 Itérations 91: loss 49.62276840209961 Itérations 92: loss 50.773197174072266 Itérations 93: loss 51.93863296508789 Itérations 94: loss 53.11906433105469 Itérations 95: loss 54.31448745727539 Itérations 96: loss 55.524925231933594 Itérations 97: loss 56.75035095214844 Itérations 98: loss 57.990787506103516 Itérations 99: loss 59.2462158203125```
Let's take a look at the implementation of MSE: the forward pass is MSE(y, y_hat) = (y_hat-y)², which is straightforward. For the backward pass, we are looking to compute the derivative of the output with regard to the input, as well as the derivative with regard to each of the parameters. Here MSE does not have any learned parameters, so we just want to compute dMSE/dy*dz/dMSE using the chain rule, which is d(y_hat-y)²/dy*dz/dMSE, i.e. -2(y_hat-y)*dz/dMSE. To avoid confusion here: I wrote dz/dMSE as the incoming gradient. It corresponds to the gradient flowing backward into the MSE layer. From your notation grad_output is dz/dMSE. Therefore the backward pass is simply -2*(y_hat-y)*grad_output, then normalized by the batch size q, retrieved from y_hat.size(0). The same thing goes for the Linear layer. It will involve some more computation since, this time, the layer is parametrized by w and b. The forward pass is essentially x@w + b, while the backward pass consists of calculating dz/dx, dz/dw, and dz/db. Writing f as x@w + b, after some work you can find that: dz/dx = d(x@w + b)/dx * dz/df = dz/df*W.T, dz/dw = d(x@w + b)/dw * dz/df = X.T*dz/df, dz/db = d(x@w + b)/db * dz/df = 1 * dz/df. In terms of implementation this would look like: output_grad@w.T for the gradient w.r.t x, x.T@output_grad for the gradient w.r.t w, output_grad.sum(0) for the gradient w.r.t b (the local derivative is 1, so you just accumulate the incoming gradient over the batch so that its shape matches b).
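Putting those derivatives together, here is a minimal sketch of how the custom Linear could be rewritten so that its backward matches them (reusing the Function import from your snippet; the MSE class from your question already returns sensible gradients, and note the bias gradient is reduced over the batch dimension so its shape matches b):

class Linear(Function):
    @staticmethod
    def forward(ctx, X, W, b):
        # broadcasting adds b to every row, no repeat needed
        ctx.save_for_backward(X, W)
        return X @ W + b

    @staticmethod
    def backward(ctx, grad_output):
        X, W = ctx.saved_tensors
        grad_X = grad_output @ W.t()   # dz/dx = dz/df @ W.T
        grad_W = X.t() @ grad_output   # dz/dw = X.T @ dz/df
        grad_b = grad_output.sum(0)    # dz/db, accumulated over the batch
        return grad_X, grad_W, grad_b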
https://stackoverflow.com/questions/68623720/
Running out of memory with pytorch
I am trying to train a model using huggingface's wav2vec for audio classification. I keep getting this error: The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForSpeechClassification.forward` and have been ignored: name, emotion, path. ***** Running training ***** Num examples = 2708 Num Epochs = 1 Instantaneous batch size per device = 4 Total train batch size (w. parallel, distributed & accumulation) = 64 Gradient Accumulation steps = 2 Total optimization steps = 42 [ 2/42 : < :, Epoch 0.02/1] Step Training Loss Validation Loss RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "<ipython-input-81-dd9fe3ea0f13>", line 77, in forward return_dict=return_dict, File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1073, in forward return_dict=return_dict, File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 732, in forward hidden_states, attention_mask=attention_mask, output_attentions=output_attentions File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 574, in forward hidden_states = hidden_states + self.feed_forward(self.final_layer_norm(hidden_states)) File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 510, in forward hidden_states = self.intermediate_act_fn(hidden_states) File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/functional.py", line 1555, in gelu return torch._C._nn.gelu(input) RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.17 GiB total capacity; 10.49 GiB already allocated; 11.44 MiB free; 10.68 GiB reserved in total by PyTorch) I'm on an AWS ubuntu deep learning AMI ec2. I've been researching this a lot. I've already tried: reducing the batch size (I want 4, but I've gone down to 1 with no change in error) adding: import gc gc.collect() torch.cuda.empty_cache() removing all wav files in my dataset that are longer than 6 seconds Is there anything else I can do? I'm on a p2.8xlarge dataset with 105 GiB mounted. 
Running torch.cuda.memory_summary(device=None, abbreviated=False) gives me: |===========================================================================|\n| PyTorch CUDA memory summary, device ID 0 |\n|---------------------------------------------------------------------------|\n| CUDA OOMs: 3 | cudaMalloc retries: 4 |\n|===========================================================================|\n| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |\n|---------------------------------------------------------------------------|\n| Allocated memory | 7550 MB | 10852 MB | 209624 MB | 202073 MB |\n| from large pool | 7544 MB | 10781 MB | 209325 MB | 201780 MB |\n| from small pool | 5 MB | 87 MB | 298 MB | 293 MB |\n|---------------------------------------------------------------------------|\n| Active memory | 7550 MB | 10852 MB | 209624 MB | 202073 MB |\n| from large pool | 7544 MB | 10781 MB | 209325 MB | 201780 MB |\n| from small pool | 5 MB | 87 MB | 298 MB | 293 MB |\n|---------------------------------------------------------------------------|\n| GPU reserved memory | 10936 MB | 10960 MB | 63236 MB | 52300 MB |\n| from large pool | 10928 MB | 10954 MB | 63124 MB | 52196 MB |\n| from small pool | 8 MB | 98 MB | 112 MB | 104 MB |\n|---------------------------------------------------------------------------|\n| Non-releasable memory | 443755 KB | 1309 MB | 155426 MB | 154992 MB |\n| from large pool | 443551 KB | 1306 MB | 155081 MB | 154648 MB |\n| from small pool | 204 KB | 12 MB | 344 MB | 344 MB |\n|---------------------------------------------------------------------------|\n| Allocations | 1940 | 2622 | 32288 | 30348 |\n| from large pool | 1036 | 1618 | 21855 | 20819 |\n| from small pool | 904 | 1203 | 10433 | 9529 |\n|---------------------------------------------------------------------------|\n| Active allocs | 1940 | 2622 | 32288 | 30348 |\n| from large pool | 1036 | 1618 | 21855 | 20819 |\n| from small pool | 904 | 1203 | 10433 | 9529 |\n|---------------------------------------------------------------------------|\n| GPU reserved segments | 495 | 495 | 2169 | 1674 |\n| from large pool | 491 | 491 | 2113 | 1622 |\n| from small pool | 4 | 49 | 56 | 52 |\n|---------------------------------------------------------------------------|\n| Non-releasable allocs | 179 | 335 | 15998 | 15819 |\n| from large pool | 165 | 272 | 12420 | 12255 |\n| from small pool | 14 | 63 | 3578 | 3564 |\n|===========================================================================|\n' After reducing data only to inputs that are less tahn 2 seconds in length, it trains a lot further but still errors with this: The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForSpeechClassification.forward` and have been ignored: path, emotion, name. ***** Running training ***** Num examples = 1411 Num Epochs = 1 Instantaneous batch size per device = 4 Total train batch size (w. parallel, distributed & accumulation) = 64 Gradient Accumulation steps = 2 Total optimization steps = 22 /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). 
(Triggered internally at /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.) return torch.floor_divide(self, other) /home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' [11/22 01:12 < 01:28, 0.12 it/s, Epoch 0.44/1] Step Training Loss Validation Loss Accuracy 10 2.428100 2.257138 0.300283 The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForSpeechClassification.forward` and have been ignored: path, emotion, name. ***** Running Evaluation ***** Num examples = 353 Batch size = 32 Saving model checkpoint to trainingArgs/checkpoint-10 Configuration saved in trainingArgs/checkpoint-10/config.json Model weights saved in trainingArgs/checkpoint-10/pytorch_model.bin Configuration saved in trainingArgs/checkpoint-10/preprocessor_config.json --------------------------------------------------------------------------- OSError Traceback (most recent call last) ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization) 378 with _open_zipfile_writer(opened_file) as opened_zipfile: --> 379 _save(obj, opened_zipfile, pickle_module, pickle_protocol) 380 return ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/serialization.py in _save(obj, zip_file, pickle_module, pickle_protocol) 498 num_bytes = storage.size() * storage.element_size() --> 499 zip_file.write_record(name, storage.data_ptr(), num_bytes) 500 OSError: [Errno 28] No space left on device During handling of the above exception, another exception occurred: RuntimeError Traceback (most recent call last) <ipython-input-25-3435b262f1ae> in <module> ----> 1 trainer.train() ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1334 self.control = self.callback_handler.on_step_end(args, self.state, self.control) 1335 -> 1336 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) 1337 else: 1338 self.control = self.callback_handler.on_substep_end(args, self.state, self.control) ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval) 1441 1442 if self.control.should_save: -> 1443 self._save_checkpoint(model, trial, metrics=metrics) 1444 self.control = self.callback_handler.on_save(self.args, self.state, self.control) 1445 ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/trainer.py in _save_checkpoint(self, model, trial, metrics) 1531 elif self.args.should_save and not self.deepspeed: 1532 # deepspeed.save_checkpoint above saves model/optim/sched -> 1533 torch.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt")) 1534 with warnings.catch_warnings(record=True) as caught_warnings: 1535 torch.save(self.lr_scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt")) ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization) 378 with _open_zipfile_writer(opened_file) as opened_zipfile: 379 _save(obj, opened_zipfile, pickle_module, 
pickle_protocol) --> 380 return 381 _legacy_save(obj, opened_file, pickle_module, pickle_protocol) 382 ~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/serialization.py in __exit__(self, *args) 257 258 def __exit__(self, *args) -> None: --> 259 self.file_like.write_end_of_file() 260 self.buffer.flush() 261 RuntimeError: [enforce fail at inline_container.cc:298] . unexpected pos 1849920000 vs 1849919888 When I run !free in the notebook, I get: The history saving thread hit an unexpected error (OperationalError('database or disk is full')).History will not be written to the database. total used free shared buff/cache available Mem: 503392908 6223452 478499292 346492 18670164 492641984 Swap: 0 0 0 For training code, I am essentially running this colab notebook as an example: https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb#scrollTo=6M8bNvLLJnG1 All that I am changing is the incoming data/labels, which I have intentionally fit into the same directory structure used in the tutorial notebook. The tutorial notebook runs fine for some reason, even though my data has comparable size/num classes.
You might use the DataParallel or DistributedDataParallel framework in Pytorch model = Model(input_size, output_size) if torch.cuda.device_count() > 1: print("Let's use", torch.cuda.device_count(), "GPUs!") # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs model = nn.DataParallel(model) model.to(device) In this approach the model get replicated on each device (gpu) and the data is distributed across devices DataParallel splits your data automatically and sends job orders to multiple models on several GPUs. After each model finishes their job, DataParallel collects and merges the results before returning it to you. Further examples here https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html. If the model does not fit in the memory of one gpu, then a model parallel approach should be resorted to. From your existing model you might tell which layer sits on which gpu with .to('cuda:0'), .to('cuda:1') etc. class ModelParallelResNet50(ResNet): def __init__(self, *args, **kwargs): super(ModelParallelResNet50, self).__init__( Bottleneck, [3, 4, 6, 3], num_classes=num_classes, *args, **kwargs) self.seq1 = nn.Sequential( self.conv1, self.bn1, self.relu, self.maxpool, self.layer1, self.layer2 ).to('cuda:0') self.seq2 = nn.Sequential( self.layer3, self.layer4, self.avgpool, ).to('cuda:1') self.fc.to('cuda:1') def forward(self, x): x = self.seq2(self.seq1(x).to('cuda:1')) return self.fc(x.view(x.size(0), -1)) Since you might lose performance, a pipelining approach might be of use, i.e. further chunking input data into batches which are run in parallel on the different devices.
https://stackoverflow.com/questions/68624392/
How I can give input to the 3d Convolutional Neural Network?
The 3D CNN works with videos, MRI, and scan datasets. Can you tell me, if I have to feed the input (video) to the proposed 3D CNN network and train its weights, how can I do that? A 3D CNN expects 5-dimensional inputs: [batch size, channels, depth, height, width], so how can I extract depth from the videos? If I have 10 videos of 10 different classes, the duration of each video is 6 seconds. I extract 2 frames for each second and that comes to 12 frames for each video. The size of the RGB videos is 112x112 --> Height = 112, Width = 112, and Channels = 3. If I keep the batch size equal to 2: 1 video --> 6 seconds --> 12 frames (1 sec == 2 frames) [each frame (3,112,112)] 10 videos (10 classes) --> 60 seconds --> 120 frames So the 5 dimensions will be something like this: [2, 3, 12, 112, 112] 2 --> two videos will be processed in each batch 3 --> RGB channels 12 --> each video contains 12 frames 112 --> height of each video 112 --> width of each video Am I right?
Yes, that seems to make sense if you're looking to use a 3D CNN. You're essentially adding a dimension to your input, the temporal one, and it is logical to use the depth dimension for it. This way you keep the channel axis as the feature channel (i.e. not a spatial-temporal dimension). Keep in mind 3D CNNs are really memory intensive. There exist other methods to work with temporally dependent input, and here you are not really dealing with a third 'spatial' dimension, so you're not strictly required to use a 3D CNN. Edit: If I give the input of the above dimension to the 3d CNN, will it learn both features (spatial and temporal)? [...] Can you make me understand, spatial and temporal features? If you use a 3D CNN then your filters will have a 3D kernel, and the convolution will be three dimensional: along the two spatial dimensions (width and height) as well as the depth dimension (here corresponding to the temporal dimension, since you're using the depth dimension for the sequence of video frames). A 3D CNN will allow you to capture local ('local' because the receptive field is limited by the sizes of the kernels and the overall number of layers in the CNN) spatial and temporal information.
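As a quick sanity check of that layout, a minimal sketch (the number of output channels and the kernel size below are arbitrary choices, not something your setup requires):

import torch
import torch.nn as nn

x = torch.randn(2, 3, 12, 112, 112)   # (batch, channels, frames/depth, height, width)
conv = nn.Conv3d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
print(conv(x).shape)                  # torch.Size([2, 16, 12, 112, 112])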
https://stackoverflow.com/questions/68625606/
How can I plot pytorch tensor?
I would like to plot pytorch gpu tensor: input= torch.randn(100).to(device) output = torch.where(input>=0, input, -input) input = input.('cpu').detach().numpy().copy() output = output.('cpu').detach().numpy().copy() plt.plot(input,out) However I try to convert those tensors into cpu, numpy, it does not work. How can I plot the tensors ?
Does this work? plt.plot(input.cpu().numpy(),output.cpu().numpy()) Alternatively you can try, plt.plot(input.to('cpu').numpy(),output.to('cpu').numpy())
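If the tensors ever carry gradients (requires_grad=True), .numpy() will refuse the conversion, so you would also need a detach, e.g.:

plt.plot(input.detach().cpu().numpy(), output.detach().cpu().numpy())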
https://stackoverflow.com/questions/68629652/
compress two consecutive dimension in PyTorch
I'm looking for a function like Tensor.compress(*dims), where dims=(int ...) is a consecutive sequence of integers: a = torch.rand(2,2,3) b = a.compress(0,1) b.size() >>> (4,3) I know view would work; however, in case I don't know the shape of a in advance, I have to do an extra operation to acquire its size and then call view, which is not what I want.
You do not have to explicitly "do the math", torch.view can do some of it for you if you use -1 as the shape of one of the dimensions: b = a.view(-1, *a.shape[2:]) b.shape >>> torch.Size([4, 3])
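An alternative that avoids spelling out any shape is torch.flatten with explicit start and end dimensions:

b = a.flatten(0, 1)   # same as torch.flatten(a, start_dim=0, end_dim=1)
b.shape
>>> torch.Size([4, 3])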
https://stackoverflow.com/questions/68630400/
Implementing dropout with pytorch
I wonder if I want to implement dropout by myself, is something like the following sufficient (taken from Implementing dropout from scratch): class MyDropout(nn.Module): def __init__(self, p: float = 0.5): super(MyDropout, self).__init__() if p < 0 or p > 1: raise ValueError("dropout probability has to be between 0 and 1, " "but got {}".format(p)) self.p = p def forward(self, X): if self.training: binomial = torch.distributions.binomial.Binomial(probs=1-self.p) return X * binomial.sample(X.size()) * (1.0/(1-self.p)) return X My concern is even if the unwanted weights are masked out (either through this way or by using a mask tensor), there can still be gradient flow through the 0 weights (https://discuss.pytorch.org/t/custom-connections-in-neural-network-layers/3027/9). Is my concern valid?
DropOut does not mask the weights - it masks the features. For linear layers implementing y = <w, x> the gradient w.r.t the parameters w is x. Therefore, if you set entries in x to zero - it will amount to no update for the corresponding weight in the adjacent linear layer.
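A quick way to convince yourself of this with a single linear layer (a minimal sketch, not your full model):

import torch
import torch.nn as nn

x = torch.tensor([[1.0, 0.0, 2.0]])   # the second feature has been "dropped" (zeroed)
lin = nn.Linear(3, 1, bias=False)
lin(x).sum().backward()
print(lin.weight.grad)                # the gradient entry for the dropped feature is exactly 0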
https://stackoverflow.com/questions/68631000/
How to test a trained model saved in .pth.tar files?
I am working with CORnet-Z and I am building a separate test file. The model seems to be saved as .pth.tar files if FLAGS.output_path is not None: records.append(results) if len(results) > 1: pickle.dump(records, open(os.path.join(FLAGS.output_path, 'results.pkl'), 'wb')) ckpt_data = {} ckpt_data['flags'] = FLAGS.__dict__.copy() ckpt_data['epoch'] = epoch ckpt_data['state_dict'] = model.state_dict() ckpt_data['optimizer'] = trainer.optimizer.state_dict() if save_model_secs is not None: if time.time() - recent_time > save_model_secs: torch.save(ckpt_data, os.path.join(FLAGS.output_path, 'latest_checkpoint.pth.tar')) recent_time = time.time() What would be the best approach to load this model and run evaluation and testing?
import os import torch def load_checkpoint(checkpoint, model, optimizer=None): if not os.path.exists(checkpoint): raise FileNotFoundError("File does not exist {}".format(checkpoint)) checkpoint = torch.load(checkpoint) model.load_state_dict(checkpoint['state_dict']) if optimizer: optimizer.load_state_dict(checkpoint['optimizer']) return checkpoint Note that the keys ('state_dict', 'optimizer') have to match the ones you used when building ckpt_data. To test a model you need to load the state dictionary of your trained model and optimizer (if applicable). But if you are resuming training from a checkpoint and you are using any sort of scheduler, you need to load the scheduler state too.
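A hedged usage sketch (how the model, optimizer and test loader are built below is hypothetical, it depends on how CORnet-Z is constructed in your code):

model = build_model()                        # hypothetical: however you instantiate CORnet-Z
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
load_checkpoint('latest_checkpoint.pth.tar', model, optimizer)

model.eval()
with torch.no_grad():
    for inputs, targets in test_loader:      # hypothetical test DataLoader
        outputs = model(inputs)
        # accumulate your evaluation metrics here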
https://stackoverflow.com/questions/68632076/
How to load two neural networks in pytorch
Let's say I have two jupyter notebooks, one for each neural network. They are both for binary classification. I want to combine the results of the two networks, for example I want the probability of x belonging to a class to be 0.2*model1(x) + 0.8*model2(x). I saved both models with torch.save(model.state_dict(), 'saved_networks/model1.pt') Now I saw that in order to load a model I first have to create an object of its class model = TheModelClass(*args, **kwargs) model.load_state_dict(torch.load(PATH)) But I have these classes in two different notebooks, so my question is how does one usually handle such situations?
I would suggest you write the two classes properly in .py files. This way you can import those classes anywhere you want (let it be a notebook or another python file). For instance, if you have Model1 and Model2 classes defined in models.py you can then import them, initialize separate models and load their respective state dictionaries: from models import Model1, Model2 model1 = Model1(*args, **kwargs) model2 = Model2(*args, **kwargs) model1.load_state_dict(torch.load(PATH1)) model2.load_state_dict(torch.load(PATH2))
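Once both are loaded you can combine them at inference time, for example (a sketch assuming both models accept the same input x and return comparable outputs, e.g. class probabilities):

model1.eval()
model2.eval()
with torch.no_grad():
    combined = 0.2 * model1(x) + 0.8 * model2(x)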
https://stackoverflow.com/questions/68641397/
Gradient Matrix (NxWxEPOCH) using Pytorch
I'm trying to create a matrix of gradients with the gradient of each observation by parameters and epoch. If my model has 100 obs, 1000 params and 10 epochs, my matrix should be (100,1000,10). The problem is that I'm not able to get those gradients. The parameters and the observations are set with requires_grad=True. I've tried to run this after each observation passes through the net: for p in net.parameters(): paramgradlist.append(p.grad) But the gradient of each param stays the same for all observations. Thank you
You are not copying the gradient data; you are only storing references to the gradient tensors. In the end, this means all your stored entries will hold the same values (i.e. the gradients' final value). Instead, you could clone the gradients before appending them to the list: for p in net.parameters(): paramgradlist.append(p.grad.clone())
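For the (params x epochs) part of the matrix you want, one possible sketch is to collect a fresh list of cloned gradients inside the training loop (net and num_epochs are the names from your own loop; per-observation gradients would additionally require one backward pass per observation, which this does not cover):

paramgradlist = []
for epoch in range(num_epochs):
    ...   # forward pass, loss.backward(), optimizer.step() as usual
    epoch_grads = [p.grad.detach().clone() for p in net.parameters()]
    paramgradlist.append(epoch_grads)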
https://stackoverflow.com/questions/68642300/
issue with arcface ( 0 accuracy)
Hello guys I've joined a university-level image recognition competition. In the test, they will give two images (people face) and my model need to detect pair of the image is the same person or not My model is resnet18 with IR block and SE block. and it will use Arcface loss. I can use only the MS1M dataset with a total of 86876 classes The problem is that loss is getting better, but accuracy is 0 and not changing. Here's part of code I'm working on. Train def train_model(model, net, criterion, optimizer, scheduler, num_epochs=25): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) for phase in ['train']: if phase == 'train': model.train() # Set model to training mode running_loss = 0.0 running_corrects = 0 # Iterate over data. for inputs, labels in notebook.tqdm(dataloader): inputs = inputs.to(device) labels = labels.to(device).long() # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): features = model(inputs) outputs = net(features, labels) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) if phase == 'train': scheduler.step() epoch_loss = running_loss / len(dataloader) epoch_acc = running_corrects.double() / len(dataloader) print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'train' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) torch.save({'epoch': epoch, 'mode_state_dict': model.state_dict(), 'fc_state_dict': net.state_dict(), 'optimizer_state_dict': optimizer.state_dict(), 'scheduler': scheduler.state_dict(), # HERE IS THE CHANGE }, f'/content/drive/MyDrive/inha_data/training_saver/training_stat{epoch}.pth') print(f'finished {epoch} and saved model_save_{epoch}.pt') print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best train Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) torch.save(model.state_dict(), 'model_save.pt') return model Parameters train_dataset = MS1MDataset('train') dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True,num_workers=4) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # 디바이스 설정 num_classes = 86876 # normal classifier # net = nn.Sequential(nn.Linear(512, num_classes)) # Feature extractor backbone, input is 112x112 image output is 512 feature vector model_ft = resnet18(True) #set metric metric_fc = metrics.ArcMarginProduct(512, num_classes, s = 30.0, m = 0.50, easy_margin = False) metric_fc.to(device) # net = net.to(device) model_ft = model_ft.to(device) criterion = nn.CrossEntropyLoss() # Observe that all parameters are being optimized optimizer_ft = torch.optim.Adam([{'params': model_ft.parameters()}, {'params': metric_fc.parameters()}], lr=0.1) # Decay LR by a factor of 0.1 every 7 epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=4, gamma=0.1) Arcface from __future__ import print_function from __future__ import division import torch import torch.nn as nn import torch.nn.functional as F from 
torch.nn import Parameter import math class ArcMarginProduct(nn.Module): r"""Implement of large margin arc distance: : Args: in_features: size of each input sample out_features: size of each output sample s: norm of input feature m: margin cos(theta + m) """ def __init__(self, in_features, out_features, s=30.0, m=0.50, easy_margin=False): super(ArcMarginProduct, self).__init__() self.in_features = in_features self.out_features = out_features self.s = s self.m = m self.weight = Parameter(torch.FloatTensor(out_features, in_features)) nn.init.xavier_uniform_(self.weight) self.easy_margin = easy_margin self.cos_m = math.cos(m) self.sin_m = math.sin(m) self.th = math.cos(math.pi - m) self.mm = math.sin(math.pi - m) * m def forward(self, input, label): # --------------------------- cos(theta) & phi(theta) --------------------------- cosine = F.linear(F.normalize(input), F.normalize(self.weight)) sine = torch.sqrt((1.0 - torch.pow(cosine, 2)).clamp(0, 1)) phi = cosine * self.cos_m - sine * self.sin_m if self.easy_margin: phi = torch.where(cosine > 0, phi, cosine) else: phi = torch.where(cosine > self.th, phi, cosine - self.mm) # --------------------------- convert label to one-hot --------------------------- # one_hot = torch.zeros(cosine.size(), requires_grad=True, device='cuda') one_hot = torch.zeros(cosine.size(), device='cuda') one_hot.scatter_(1, label.view(-1, 1).long(), 1) # -------------torch.where(out_i = {x_i if condition_i else y_i) ------------- output = (one_hot * phi) + ((1.0 - one_hot) * cosine) # you can use torch.where if your torch.__version__ is 0.4 output *= self.s # print(output) return output dataset data_transforms = { 'train': transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.ColorJitter(brightness=0.125, contrast=0.125, saturation=0.125), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } #train_ms1_data = torchvision.datasets.ImageFolder('/content/drive/MyDrive/inha_data/train', transform = data_transforms) class MS1MDataset(Dataset): def __init__(self,split): self.file_list = '/content/drive/MyDrive/inha_data/ID_List.txt' self.images = [] self.labels = [] self.transformer = data_transforms['train'] with open(self.file_list) as f: files = f.read().splitlines() for i, fi in enumerate(files): fi = fi.split() image = "/content/" + fi[1] label = int(fi[0]) self.images.append(image) self.labels.append(label) def __getitem__(self, index): img = Image.open(self.images[index]) img = self.transformer(img) label = self.labels[index] return img, label def __len__(self): return len(self.images)
You can try to use a smaller m in ArcFace, even a negative value.
https://stackoverflow.com/questions/68647266/
Difference between freezing layer with requires_grad and not passing params to optim in PyTorch
Let's say I train an autoencoder. I want to freeze the parameters of the encoder for the training, so only the decoder trains. I can do this using: # assuming it's a single layer called 'encoder' model.encoder.weights.data.requires_grad = False Or I can pass only the decoder's parameters to the optimizer. Is there a difference?
The most practical way is to iterate through all parameters of the module you want to freeze and set requires_grad to False. This gives you the flexibility to switch your modules on and off without having to initialize a new optimizer each time. You can do this using the parameters generator available on all nn.Modules: for param in module.parameters(): param.requires_grad = False This method is model agnostic since you don't have to worry whether your module contains multiple layers or sub-modules. Alternatively, you can call the function nn.Module.requires_grad_ once as: module.requires_grad_(False) As for the difference between the two options: with requires_grad=False no gradients are computed for those parameters during backward at all (saving memory and compute), whereas merely leaving them out of the optimizer still computes their gradients on every backward pass but never applies an update to them.
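If you combine both, a common pattern is to only hand the optimizer the parameters that are still trainable (model stands for your own network):

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3
)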
https://stackoverflow.com/questions/68650482/
How to check torch gpu compatibility without initializing CUDA?
Older GPUs don't seem to support torch in spite of recent cuda versions. In my case the crash has the following error: /home/maxs/dev/mdb/venv38/lib/python3.8/site-packages/torch/cuda/__init__.py:83: UserWarning: Found GPU%d %s which is of cuda capability %d.%d. PyTorch no longer supports this GPU because it is too old. The minimum cuda capability supported by this library is %d.%d. warnings.warn(old_gpu_warn.format(d, name, major, minor, min_arch // 10, min_arch % 10)) WARNING:lightwood-16979:Exception: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. when training model: <lightwood.model.neural.Neural object at 0x7f9c34df1e80> Process LearnProcess-1:13: Traceback (most recent call last): File "/home/maxs/dev/mdb/venv38/sources/lightwood/lightwood/model/helpers/default_net.py", line 59, in forward output = self.net(input) File "/home/maxs/dev/mdb/venv38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/maxs/dev/mdb/venv38/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward input = module(input) File "/home/maxs/dev/mdb/venv38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/maxs/dev/mdb/venv38/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 96, in forward return F.linear(input, self.weight, self.bias) File "/home/maxs/dev/mdb/venv38/lib/python3.8/site-packages/torch/nn/functional.py", line 1847, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. This happens in spite of: assert torch.cuda.is_available() == True torch.version.cuda == '10.2' How can I check for an older GPU that doesn't support torch without actually try/catching a tensor-to-gpu transfer? The transfer initializes cuda, which wastes like 2GB of memory, something I can't afford since I'd be running this check in dozens of processes, all of which would then waste 2GB of memory extra due to the initialization.
Based on the code in torch.cuda.__init__ that was actually throwing the error the following check seems to work: import torch from torch.cuda import device_count, get_device_capability def is_cuda_compatible(): compatible_device_count = 0 if torch.version.cuda is not None: for d in range(device_count()): capability = get_device_capability(d) major = capability[0] minor = capability[1] current_arch = major * 10 + minor min_arch = min((int(arch.split("_")[1]) for arch in torch.cuda.get_arch_list()), default=35) if (not current_arch < min_arch and not torch._C._cuda_getCompiledVersion() <= 9000): compatible_device_count += 1 if compatible_device_count > 0: return True return False Not sure if it's 100% correct but putting it out here for feedback and in case anybody else needs it.
https://stackoverflow.com/questions/68650792/
ValueError: expected sequence of length 0 at dim 2 (got 1)
I've recently started a tutorial about Neural Networks with Python. I am working on a cat/dog classification task with a CNN. However even though I thought I've done exactly what the tutorial told me to do, I somehow ended up with a dim error. This is the tutorial. I believe he uses Python 3.7, I'm using Python 3.9(64-bit). The Error: ValueError: expected sequence of length 0 at dim 2 (got 1) The line of code: y = torch.Tensor([i[1] for i in training_data]) It sounds like I might have made a mistake in preparing the training data, but I'm not sure. Here is the code for that: class DogsVSCats: IMG_SIZE = 50 CATS = '[Path]' DOGS = '[Path]' LABELS = {CATS: 0, DOGS: 1} training_data = [] catcount = 0 dogcount = 0 def make_training_data(self): for label in self.LABELS: print label for f in tqdm(os.listdir(label)): try: path = os.path.join(label, f) img = cv2.imread(path, cv2.IMREAD_GRAYSCALE) img = cv2.resize(img, (self.IMG_SIZE, self.IMG_SIZE)) self.training_data.append([np.array(img), np.eye(2, self.LABELS[label])]) if label == self.CATS: self.catcount += 1 elif label == self.DOGS: self.dogcount += 1 except Exception, e: pass np.random.shuffle(self.training_data) np.save('training_data.npy', self.training_data) print ('Cats: ', self.catcount) print ('Dogs: ', self.dogcount) if REBUILD_DATA: dogsvcats = DogsVSCats() dogsvcats.make_training_data() print 'Nothing found!!' That all seemed to work as it did in the tutorial, with no errors and showing the same amount of pictures for each category. Here is also the problematic line: class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 32, 5) self.conv2 = nn.Conv2d(32, 64, 5) self.conv3 = nn.Conv2d(64, 128, 5) x = torch.randn(50, 50).view(-1, 1, 50, 50) self._to_linear = None self.convs(x) self.fc1 = nn.Linear(self._to_linear, 512) self.fc2 = nn.Linear(512, 2) def convs(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2)) if self._to_linear is None: self._to_linear = x[0].shape[0] * x[0].shape[1] * x[0].shape[2] return x def forward(self, x): x = self.convs(x) x = x.view(-1, self._to_linear) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.softmax(x, dim = 1) net = Net() optimizer = optim.Adam(net.parameters(), lr = 1e-3) loss_function = nn.MSELoss() X = torch.Tensor([i[0] for i in training_data]).view(-1, 50, 50) X = X / 255.0 y = torch.Tensor([i[1] for i in training_data]) !!!Error Line!!! VAL_PCT = 0.1 val_size = int(len(X) * VAL_PCT) train_X = X[:-val_size] train_y = y[:-val_size] test_X = X[-val_size:] test_y = y[-val_size:] print(val_size)
You didn't define the labels properly, it shouldn't be np.eye(2, self.LABELS[label]) but instead: np.eye(2)[self.LABELS[label]]
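For illustration: np.eye's second positional argument is the number of columns, so np.eye(2, 0) has shape (2, 0) (which is where the "length 0 at dim 2" comes from), while indexing the identity matrix gives the one-hot vector you actually want:

import numpy as np

np.eye(2, 0).shape   # (2, 0)  <- what the original code builds for label 0
np.eye(2)[0]         # array([1., 0.])  one-hot for class 0
np.eye(2)[1]         # array([0., 1.])  one-hot for class 1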
https://stackoverflow.com/questions/68651332/
How to convert numpy array(float data) to torch tensor?
test = ['0.01171875', '0.01757812', '0.02929688'] test = np.array(test).astype(float) print(test) ->[0.01171875 0.01757812 0.02929688] test_torch = torch.from_numpy(test) test_torch ->tensor([0.0117, 0.0176, 0.0293], dtype=torch.float64) It looks like from_numpy() loses some precision there... If I want to convert this float data exactly the same, what kind of functions do I use?
The data precision is the same, it's just that the format used by PyTorch to print the values is different, it will round the floats down: >>> test_torch = torch.from_numpy(test) >>> test_torch tensor([0.0117, 0.0176, 0.0293], dtype=torch.float64) You can check that it matches your original input by converting to a list with tolist: >>> test_torch.tolist() [0.01171875, 0.01757812, 0.02929688]
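If you also want the printed representation to show the full precision, you can raise PyTorch's print precision:

torch.set_printoptions(precision=8)
print(test_torch)
# tensor([0.01171875, 0.01757812, 0.02929688], dtype=torch.float64)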
https://stackoverflow.com/questions/68653578/
what does reshape`(1, 1, 28, 28)` mean
I'm trying to reproduce the results of this code: https://github.com/vjayd/Image-Alignment-using-CNN The first problem I've faced is that, as far as I know, MNIST images are grayscale and not color images, so why did he convert them to grayscale using the rgb2gray function? for img_train in glob.glob(trdata): n = io.imread(img_train) n = rgb2gray(n) n = resize(n,(28,28)) train_x.append(n.reshape(1, 28, 28)) And what does (1, 1, 28, 28) mean in this line: test_x = test_x.reshape(1, 1, 28, 28)
A Pytorch model mostly requires the first dimension of the input to be the batch size. So the shape of the image is (1, 28, 28). If you want to feed only one image to the model you still have to specify the batch size, which is of course 1 for one image. Therefore he adds the batch size dimension to the image by "reshaping" it to (1, 1, 28, 28).
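In other words the four numbers are (batch size, channels, height, width). A tiny sketch of the same idea (the model here is just a placeholder for whatever CNN consumes the image):

import torch

img = torch.randn(28, 28)           # a single grayscale image
batch = img.reshape(1, 1, 28, 28)   # (N=1, C=1, H=28, W=28)
out = model(batch)                  # placeholder model expecting 4D N x C x H x W input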
https://stackoverflow.com/questions/68654038/
Modify training function of 3 nets with shared classifier
I have 3 VGG: VGGA, VGGB and VGG*, trained with the following training function: def train(nets, loaders, optimizer, criterion, epochs=20, dev=None, save_param=False, model_name="valerio"): # try: nets = [n.to(dev) for n in nets] model_a = module_unwrap(nets[0], True) model_b = module_unwrap(nets[1], True) model_c = module_unwrap(nets[2], True) reg_loss = nn.MSELoss() criterion.to(dev) reg_loss.to(dev) # Initialize history history_loss = {"train": [], "val": [], "test": []} history_accuracy = {"train": [], "val": [], "test": []} # Store the best val accuracy best_val_accuracy = 0 # Process each epoch for epoch in range(epochs): # Initialize epoch variables sum_loss = {"train": 0, "val": 0, "test": 0} sum_accuracy = {"train": [0,0,0], "val": [0,0,0], "test": [0,0,0]} progbar = None # Process each split for split in ["train", "val", "test"]: if split == "train": for n in nets: n.train() widgets = [ ' [', pb.Timer(), '] ', pb.Bar(), ' [', pb.ETA(), '] ', pb.Variable('ta','[Train Acc: {formatted_value}]') ] progbar = pb.ProgressBar(max_value=len(loaders[split][0]),widgets=widgets,redirect_stdout=True) else: for n in nets: n.eval() # Process each batch for j,((input_a, labels_a),(input_b, labels_b)) in enumerate(zip(loaders[split][0],loaders[split][1])): input_a = input_a.to(dev) input_b = input_b.to(dev) labels_a = labels_a.long().to(dev) labels_b = labels_b.long().to(dev) #print(labels_a.shape) #labels_a = labels_a.squeeze() #labels_b = labels_b.squeeze() #labels_a = labels_a.unsqueeze(1) #labels_b = labels_b.unsqueeze(1) #print(labels_a.shape) #labels_a = labels_a.argmax(-1) #labels_b = labels_b.argmax(-1) inputs = torch.cat([input_a,input_b],axis=0) labels = torch.cat([labels_a, labels_b]) #labels = labels.squeeze() #print(labels.shape) #labels = labels.argmax(-1) # Reset gradients optimizer.zero_grad() # Compute output features_a = nets[0](input_a) features_b = nets[1](input_b) features_c = nets[2](inputs) pred_a = torch.squeeze(nets[3](features_a)) pred_b = torch.squeeze(nets[3](features_b)) pred_c = torch.squeeze(nets[3](features_c)) loss = criterion(pred_a, labels_a) + criterion(pred_b, labels_b) + criterion(pred_c, labels) for n in model_a: layer_a = model_a[n] layer_b = model_b[n] layer_c = model_c[n] if (isinstance(layer_a,nn.Conv2d)): loss += lambda_reg * reg_loss(combo_fn(layer_a.weight,layer_b.weight),layer_c.weight) if (layer_a.bias is not None): loss += lambda_reg * reg_loss(combo_fn(layer_a.bias, layer_b.bias), layer_c.bias) # Update loss sum_loss[split] += loss.item() # Check parameter update if split == "train": # Compute gradients loss.backward() # Optimize optimizer.step() # Compute accuracy #https://discuss.pytorch.org/t/bcewithlogitsloss-and-model-accuracy-calculation/59293/ 2 #pred_labels_a = (pred_a >= 0.0).long() # Binarize predictions to 0 and 1 #pred_labels_b = (pred_b >= 0.0).long() # Binarize predictions to 0 and 1 #pred_labels_c = (pred_c >= 0.0).long() # Binarize predictions to 0 and 1 #print(pred_a.shape) _,pred_label_a = torch.max(pred_a, dim = 1) pred_labels_a = (pred_label_a == labels_a).float() _,pred_label_b = torch.max(pred_b, dim = 1) pred_labels_b = (pred_label_b == labels_b).float() _,pred_label_c = torch.max(pred_c, dim = 1) pred_labels_c = (pred_label_c == labels).float() batch_accuracy_a = pred_labels_a.sum().item() / len(labels_a) batch_accuracy_b = pred_labels_b.sum().item() / len(labels_b) batch_accuracy_c = pred_labels_c.sum().item() / len(labels) # Update accuracy sum_accuracy[split][0] += batch_accuracy_a sum_accuracy[split][1] += 
batch_accuracy_b sum_accuracy[split][2] += batch_accuracy_c if (split=='train'): progbar.update(j, ta=batch_accuracy_c) if (progbar is not None): progbar.finish() # Compute epoch loss/accuracy epoch_loss = {split: sum_loss[split] / len(loaders[split][0]) for split in ["train", "val", "test"]} epoch_accuracy = {split: [sum_accuracy[split][i] / len(loaders[split][0]) for i in range(len(sum_accuracy[split])) ] for split in ["train", "val", "test"]} # # Store params at the best validation accuracy # if save_param and epoch_accuracy["val"] > best_val_accuracy: # # torch.save(net.state_dict(), f"{net.__class__.__name__}_best_val.pth") # torch.save(net.state_dict(), f"{model_name}_best_val.pth") # best_val_accuracy = epoch_accuracy["val"] print(f"Epoch {epoch + 1}:") # Update history for split in ["train", "val", "test"]: history_loss[split].append(epoch_loss[split]) history_accuracy[split].append(epoch_accuracy[split]) # Print info print(f"\t{split}\tLoss: {epoch_loss[split]:0.5}\tVGG 1:{epoch_accuracy[split][0]:0.5}" f"\tVGG 2:{epoch_accuracy[split][1]:0.5}\tVGG *:{epoch_accuracy[split][2]:0.5}") if save_param: torch.save({'vgg_a':nets[0].state_dict(),'vgg_b':nets[1].state_dict(),'vgg_star':nets[2].state_dict(),'classifier':nets[3].state_dict()},f'{model_name}.pth') For each epoch of training the result is this: Then, I have a combined model which sums the weights of VGGA and VGGB: DO = 'TEST' if (DO=='TRAIN'): train(nets, loaders, optimizer, criterion, epochs=50, dev=dev,save_param=True) else: state_dicts = torch.load('valerio.pth') model1.load_state_dict(state_dicts['vgg_a']) #questi state_dict vengono dalla funzione di training model2.load_state_dict(state_dicts['vgg_b']) model3.load_state_dict(state_dicts['vgg_star']) classifier.load_state_dict(state_dicts['classifier']) test(model1,classifier,test_loader_all) test(model2, classifier, test_loader_all) test(model3, classifier, test_loader_all) summed_state_dict = OrderedDict() for key in state_dicts['vgg_star']: if key.find('conv') >=0: print(key) summed_state_dict[key] = combo_fn(state_dicts['vgg_a'][key],state_dicts['vgg_b'][key]) else: summed_state_dict[key] = state_dicts['vgg_star'][key] model3.load_state_dict(summed_state_dict) test(model3, classifier, test_loader_all) where the test function is this: def test(net,classifier, loader): net.to(dev) classifier.to(dev) net.eval() sum_accuracy = 0 # Process each batch for j, (input, labels) in enumerate(loader): input = input.to(dev) labels = labels.float().to(dev) features = net(input) pred = torch.squeeze(classifier(features)) # https://discuss.pytorch.org/t/bcewithlogitsloss-and-model-accuracy-calculation/59293/ 2 #pred_labels = (pred >= 0.0).long() # Binarize predictions to 0 and 1 _,pred_label = torch.max(pred, dim = 1) pred_labels = (pred_label == labels).float() batch_accuracy = pred_labels.sum().item() / len(labels) # Update accuracy sum_accuracy += batch_accuracy epoch_accuracy = sum_accuracy / len(loader) print(f"Accuracy after sum: {epoch_accuracy:0.5}") And the result of this aggregation is the following: I want to modify my training function in order to print the same things of the first image, plus the accuracy of the aggregated model (the highlighted part in red of the second picture). So basically, for each epoch, accuracies of VGGA, VGGB, VGG* and combined VGG, print these accuracies and continue with the training. I tried to add this model combo but I failed, because I did not able to insert into each epoch, but only at the end of the training. 
I was trying to add, in the training function, between print(f"Epoch {epoch + 1}:") and # Update history for split in ["train", "val", "test"]:, the code that works with the state_dict, but I am doing something wrong and I do not know what. Can I reuse the code of the test function, or do I have to write new code? Do you think I have to save the state_dict for each epoch, or can I do something else, like model_c.parameters() = model_a.parameters() + model_b.parameters() (which does not work; I already tried it)?
I solved, here is the solution of how I modified my training function: def train(nets, loaders, optimizer, criterion, epochs=20, dev=None, save_param=False, model_name="valerio"): # try: nets = [n.to(dev) for n in nets] model_a = module_unwrap(nets[0], True) model_b = module_unwrap(nets[1], True) model_c = module_unwrap(nets[2], True) reg_loss = nn.MSELoss() criterion.to(dev) reg_loss.to(dev) # Initialize history history_loss = {"train": [], "val": [], "test": []} history_accuracy = {"train": [], "val": [], "test": []} history_test = 0 # Store the best val accuracy best_val_accuracy = 0 # Process each epoch for epoch in range(epochs): # Initialize epoch variables sum_loss = {"train": 0, "val": 0, "test": 0} sum_accuracy = {"train": [0,0,0], "val": [0,0,0], "test": [0,0,0]} progbar = None # Process each split for split in ["train", "val", "test"]: if split == "train": for n in nets: n.train() widgets = [ ' [', pb.Timer(), '] ', pb.Bar(), ' [', pb.ETA(), '] ', pb.Variable('ta','[Train Acc: {formatted_value}]') ] progbar = pb.ProgressBar(max_value=len(loaders[split][0]),widgets=widgets,redirect_stdout=True) else: for n in nets: n.eval() # Process each batch for j,((input_a, labels_a),(input_b, labels_b)) in enumerate(zip(loaders[split][0],loaders[split][1])): input_a = input_a.to(dev) input_b = input_b.to(dev) labels_a = labels_a.long().to(dev) labels_b = labels_b.long().to(dev) #print(labels_a.shape) #labels_a = labels_a.squeeze() #labels_b = labels_b.squeeze() #labels_a = labels_a.unsqueeze(1) #labels_b = labels_b.unsqueeze(1) #print(labels_a.shape) #labels_a = labels_a.argmax(-1) #labels_b = labels_b.argmax(-1) inputs = torch.cat([input_a,input_b],axis=0) labels = torch.cat([labels_a, labels_b]) #labels = labels.squeeze() #print(labels.shape) #labels = labels.argmax(-1) # Reset gradients optimizer.zero_grad() # Compute output features_a = nets[0](input_a) features_b = nets[1](input_b) features_c = nets[2](inputs) pred_a = torch.squeeze(nets[3](features_a)) pred_b = torch.squeeze(nets[3](features_b)) pred_c = torch.squeeze(nets[3](features_c)) loss = criterion(pred_a, labels_a) + criterion(pred_b, labels_b) + criterion(pred_c, labels) for n in model_a: layer_a = model_a[n] layer_b = model_b[n] layer_c = model_c[n] if (isinstance(layer_a,nn.Conv2d)): loss += lambda_reg * reg_loss(combo_fn(layer_a.weight,layer_b.weight),layer_c.weight) if (layer_a.bias is not None): loss += lambda_reg * reg_loss(combo_fn(layer_a.bias, layer_b.bias), layer_c.bias) # Update loss sum_loss[split] += loss.item() # Check parameter update if split == "train": # Compute gradients loss.backward() # Optimize optimizer.step() # Compute accuracy #https://discuss.pytorch.org/t/bcewithlogitsloss-and-model-accuracy-calculation/59293/ 2 #pred_labels_a = (pred_a >= 0.0).long() # Binarize predictions to 0 and 1 #pred_labels_b = (pred_b >= 0.0).long() # Binarize predictions to 0 and 1 #pred_labels_c = (pred_c >= 0.0).long() # Binarize predictions to 0 and 1 #print(pred_a.shape) _,pred_label_a = torch.max(pred_a, dim = 1) pred_labels_a = (pred_label_a == labels_a).float() _,pred_label_b = torch.max(pred_b, dim = 1) pred_labels_b = (pred_label_b == labels_b).float() _,pred_label_c = torch.max(pred_c, dim = 1) pred_labels_c = (pred_label_c == labels).float() batch_accuracy_a = pred_labels_a.sum().item() / len(labels_a) batch_accuracy_b = pred_labels_b.sum().item() / len(labels_b) batch_accuracy_c = pred_labels_c.sum().item() / len(labels) # Update accuracy sum_accuracy[split][0] += batch_accuracy_a sum_accuracy[split][1] += 
batch_accuracy_b sum_accuracy[split][2] += batch_accuracy_c if (split=='train'): progbar.update(j, ta=batch_accuracy_c) if (progbar is not None): progbar.finish() # Compute epoch loss/accuracy epoch_loss = {split: sum_loss[split] / len(loaders[split][0]) for split in ["train", "val", "test"]} epoch_accuracy = {split: [sum_accuracy[split][i] / len(loaders[split][0]) for i in range(len(sum_accuracy[split])) ] for split in ["train", "val", "test"]} # # Store params at the best validation accuracy # if save_param and epoch_accuracy["val"] > best_val_accuracy: # # torch.save(net.state_dict(), f"{net.__class__.__name__}_best_val.pth") # torch.save(net.state_dict(), f"{model_name}_best_val.pth") # best_val_accuracy = epoch_accuracy["val"] print(f"Epoch {epoch + 1}:") # Update history for split in ["train", "val", "test"]: history_loss[split].append(epoch_loss[split]) history_accuracy[split].append(epoch_accuracy[split]) # Print info print(f"\t{split}\tLoss: {epoch_loss[split]:0.5}\tVGG 1:{epoch_accuracy[split][0]:0.5}" f"\tVGG 2:{epoch_accuracy[split][1]:0.5}\tVGG *:{epoch_accuracy[split][2]:0.5}") if save_param: torch.save({'vgg_a':nets[0].state_dict(),'vgg_b':nets[1].state_dict(),'vgg_star':nets[2].state_dict(),'classifier':nets[3].state_dict()},f'{model_name}.pth') test(nets[0], nets[3], test_loader_all) test(nets[1], nets[3], test_loader_all) test(nets[2], nets[3], test_loader_all) summed_state_dict = OrderedDict() for key in nets[2].state_dict(): if key.find('conv') >=0: #print(key) summed_state_dict[key] = combo_fn(nets[0].state_dict()[key],nets[1].state_dict()[key]) else: summed_state_dict[key] = nets[2].state_dict()[key] nets[2].load_state_dict(summed_state_dict) test(nets[2], nets[3], test_loader_all) The edited parts are the last rows.
https://stackoverflow.com/questions/68656284/
Pytorch Lightning limit_val_batches and val_check_interval behavior
I'm setting limit_val_batches=10 and val_check_interval=1000 so that I'm validating on 10 validation batches every 1000 training steps. Is it guaranteed that the Trainer will use the same 10 batches every time validation is called? I tried searching the source code for limit_val_batches but couldn't figure out how it was being used to obtain the validation batches.
The answer doesn't have much to do with PyTorch Lightning and its flags (--limit_val_batches and --val_check_interval). The exact batches of data provided by Lightning inside any of the def *_step(self, batch, ...): ... methods (* is training/validation/test) are determined by the underlying PyTorch DataLoaders returned by def *_dataloader(...): return DataLoader(dataset, shuffle=..., sampler=..., batch_sampler=...) If the dataloaders returned by these functions do not use shuffle=True or any randomized sampler, the batches will be the same. As far as --limit_val_batches=N is concerned, it fetches the first N batches from the underlying dataloader; Lightning doesn't do any data selection by itself. This is confirmed by a core developer here.
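As a minimal sketch of what a deterministic validation dataloader hook looks like inside your LightningModule or DataModule (the dataset attribute and batch size are placeholders, not from the question):
from torch.utils.data import DataLoader

def val_dataloader(self):
    # No shuffle and no random sampler: iteration order is fixed, so
    # limit_val_batches=N always evaluates the same first N batches.
    return DataLoader(self.val_dataset, batch_size=32, shuffle=False, num_workers=8)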
https://stackoverflow.com/questions/68658917/
pytorch data loader RuntimeError: stack expects each tensor to be equal size, but got [224, 224] at entry 0 and [224, 224, 3] at entry 1
My problem is that I have two tensors in one dataset, with headers image and label. When I iterate over the dataset with a simple loop, everything looks fine. Unfortunately, when I create a dataloader as below training_loader = torch.utils.data.DataLoader(training_dataset, batch_size=100, shuffle=True) and run for i in training_loader: print(i) I'm getting the error: RuntimeError: stack expects each tensor to be equal size, but got [224, 224] at entry 0 and [224, 224, 3] at entry 4 What can cause it, and how do I fix it? Thank you in advance
It seems like one (or more) of your images is not a color image, but a gray-scale image. Modify your loading code to force all images to be treated as color images: img = Image.open(filename).convert('RGB') See this answer for more details.
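A small sketch of where that conversion typically lives, assuming a custom PIL-based Dataset (the class name, transform, and label handling are illustrative, not taken from the question):
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as T

class ImageDataset(Dataset):
    def __init__(self, paths, labels):
        self.paths, self.labels = paths, labels
        self.tf = T.Compose([T.Resize((224, 224)), T.ToTensor()])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # convert('RGB') forces 3 channels, so every sample stacks to [3, 224, 224]
        img = Image.open(self.paths[idx]).convert('RGB')
        return self.tf(img), self.labels[idx]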
https://stackoverflow.com/questions/68664217/
Receiving Infinity Infinity in LineString
I am try get linestring so I can measure the distance and time. Here in this linestring I am getting nan distance and time. Also, pleased to hear any of your suggestion on my code or logic. Thanks data: [[29.87819, 121.54944999999998], [24.23111845, 119.02311485000001], [5.402576549999999, 106.87891215000002], [1.367889, 104.27658300000002], [4.65750565, 98.40456015000001], [5.93498595, 82.50298040000001], [6.895460999999999, 75.83849285000002], [11.087761, 55.21659015], [11.986111, 50.79761100000002], [12.57124165, 44.563427950000005], [15.262399899999998, 41.828814550000004], [27.339266099999996, 34.20131845], [29.927166, 32.566855000000004], [32.36497615, 28.787162800000004], [36.25582884999999, 14.171143199999989], [37.089583, 11.039139000000006], [36.98901405, 4.773231850000002], [36.139162799999994, -4.182775300000003], [36.86918755, -8.487389949999994], [42.41353785, -9.331828900000005], [47.68888635, -5.458406800000006], [50.7011586, 1.0547821000000113], [52.84235105, -6.168939849999987], [53.33306, -6.248889999999989]] Code: from shapely.geometry import LineString from shapely.ops import transform from functools import partial import pyproj path = LineString(path) print(path) # Geometry transform function based on pyproj.transform project = partial( pyproj.transform, pyproj.Proj('EPSG:4326'), pyproj.Proj('EPSG:32633')) path = transform(project, path) print(path) distance = path.length/1852 print(str(path.length) + " METERS") print(str(distance) + " NM") print(str(distance/13) + " Hr will take") Output:
The previous target EPSG was probably being interpreted with the opposite axis order, which produced the infinite coordinates. Changing the EPSG code as below fixed it: project = partial( pyproj.transform, pyproj.Proj('EPSG:4326'), pyproj.Proj('EPSG:3857'))
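For what it's worth, with pyproj 2+ the same projection can be written with a Transformer and always_xy=True, which makes the axis order explicit (a sketch under the assumption that data is the (lat, lon) list from the question; note that EPSG:3857 lengths are only rough at high latitudes):
from pyproj import Transformer
from shapely.geometry import LineString
from shapely.ops import transform

# always_xy=True means coordinates are read as (lon, lat), i.e. (x, y),
# so the (lat, lon) pairs from the question are swapped first.
lonlat = [(lon, lat) for lat, lon in data]
project = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True).transform
path = transform(project, LineString(lonlat))
print(path.length / 1852, "NM")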
https://stackoverflow.com/questions/68668657/
Using CUDA in Colab
Is it necessary to convert tensors and models to CUDA with tensor.to in Colab when I've chosen the runtime type as GPU? I want to use CUDA for training my model.
tensor.to(device) transfers data to the given device. Yes, you need to transfer the model, inputs, labels, etc. to whatever device you intend to use.
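A minimal sketch of the usual pattern (model and loader are placeholders):
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)                 # move the parameters once
for inputs, labels in loader:
    inputs = inputs.to(device)           # move every batch
    labels = labels.to(device)
    outputs = model(inputs)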
https://stackoverflow.com/questions/68670817/
CUDA out of memory error when reloading Pytorch model
Common pytorch error here, but I'm seeing it under a unique circumstance: when reloading a model, I get a CUDA: Out of Memory error, even though I haven't yet placed the model on the GPU. model = model.load_state_dict(torch.load(model_file_path)) optimizer = optimizer.load_state_dict(torch.load(optimizer_file_path)) # Error happens here ^, before I send the model to the device. model = model.to(device_id)
The issue is that I was trying to load to a new GPU (cuda:2) but originally saved the model and optimizer from a different GPU (cuda:0). So even though I didn't explicitly tell it to reload to the previous GPU, the default behavior is to reload to the original GPU (which happened to be occupied). Adding map_location=device_id to each torch.load call fixed the problem: model.to(device_id) model = model.load_state_dict(torch.load(model_file_path, map_location=device_id)) optimizer = optimizer.load_state_dict(torch.load(optimizer_file_path, map_location=device_id))
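If the target device is not known until load time, a common alternative (an assumption on my part, not part of the original fix) is to load onto the CPU first and move afterwards. Note also that nn.Module.load_state_dict works in place and returns a key-matching report rather than the model, so assigning its return value back is usually not what you want:
state = torch.load(model_file_path, map_location="cpu")
model.load_state_dict(state)
model.to(device_id)   # move to the desired GPU afterwards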
https://stackoverflow.com/questions/68670866/
How Conv2D works in Tensorflow/PyTorch when two layers are connected with different filter numbers?
After reviewing the LeNet-5 architecture description, when the Max Pooling layer (with 6 filters) is connected to the Conv2D layer (16 filters), there is a requirement for a special filter mapping, in the following way: Taking inputs from every contiguous subset of 3 feature maps Taking inputs from every contiguous subset of 4 feature maps Taking inputs from every contiguous subset of 4 feature maps Taking inputs from the discontinuous subset of 4 feature maps Taking all the feature maps A slightly annotated image from the LeNet paper, courtesy of TowardsAI Here, recall that this mapping is quite specialized to the 6 <-> 16 case, whereas TensorFlow/PyTorch is quite flexible; I just wonder how TensorFlow/PyTorch handles this exactly?
I don't know about the actual implementation of this network in those frameworks. However, here's one way you can implement such an operation in PyTorch. You can look at this operation as a change of basis: going from feature maps in the 'S2' space to feature maps in the 'C3' space using a transform matrix M. The whole objective is to construct that matrix, it is composed of ones and zeros, where the ones are positioned such that you construct vectors in C3 space using components of vectors in S2 space. For instance, let's look at the discontinuous subsets of 4 of the table: column #12 requires maps n°0, 1, 3, and 4. The corresponding row in M for vector #12 will therefore be [1,1,0,1,1,0]. Essentially, the 1s here correspond to the crosses shown in the figure. For this particular portion of the transition M will look like: tensor([[1., 0., 1.], [1., 1., 0.], [0., 1., 1.], [1., 0., 1.], [1., 1., 0.], [0., 1., 1.]]) To actually perform the matrix multiplication, you can use torch.einsum: torch.einsum('bchw,cd->bdhw', x, M) Here's an example: starting from a 6-channel 2x2 map and transitioning to a 3-channel 2x2 map (defined by columns #12, #13, and #14 of the Table I): >>> x = torch.rand(1,6,2,2) tensor([[[[0.3134, 0.2468], [0.2759, 0.4971]], [[0.4150, 0.8735], [0.6726, 0.0463]], [[0.9547, 0.5338], [0.0654, 0.7458]], [[0.4099, 0.1984], [0.0930, 0.8054]], [[0.1695, 0.1586], [0.7961, 0.3894]], [[0.5535, 0.0678], [0.1484, 0.7735]]]]) >>> torch.einsum('bchw,cd->bdhw', x, M) tensor([[[[1.3077, 1.4773], [1.8377, 1.7382]], [[2.0926, 1.6338], [1.6825, 1.9550]], [[2.2315, 1.0467], [0.5828, 2.8219]]]]) You can of course expand this operation to the whole of Table I, this would result in a matrix M of size 6x16.
https://stackoverflow.com/questions/68672762/
neural network trained with PyTorch outputs the mean value for every input
I am using PyTorch in order to get my neural network to recognize digits from the MNIST database. import torch import torchvision I'd like to implement a very simple design similar to what is shown in 3Blue1Brown's video series about neural networks. The following design in particular achieved an error rate of 1.6%. class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.layer1 = torch.nn.Linear(784, 800) self.layer2 = torch.nn.Linear(800, 10) def forward(self, x): x = torch.sigmoid(self.layer1(x)) x = torch.sigmoid(self.layer2(x)) return x The data is gathered using torchvision and organised in mini batches containing 32 images each. batch_size = 32 training_set = torchvision.datasets.MNIST("./", download=True, transform=torchvision.transforms.ToTensor()) training_loader = torch.utils.data.DataLoader(training_set, batch_size=32) I am using the mean squared error as a loss funtion and stochastic gradient descent with a learning rate of 0.001 as my optimization algorithm. net = Net() loss_function = torch.nn.MSELoss() optimizer = torch.optim.SGD(net.parameters(), lr=0.001) Finally the network gets trained and saved using the following code: for images, labels in training_loader: optimizer.zero_grad() for i in range(batch_size): output = net(torch.flatten(images[i])) desired_output = torch.tensor([float(j == labels[i]) for j in range(10)]) loss = loss_function(output, desired_output) loss.backward() optimizer.step() torch.save(net.state_dict(), "./trained_net.pth") However, here are the outputs of some test images: tensor([0.0978, 0.1225, 0.1018, 0.0961, 0.1022, 0.0885, 0.1007, 0.1077, 0.0994, 0.1081], grad_fn=<SigmoidBackward>) tensor([0.0978, 0.1180, 0.1001, 0.0929, 0.1006, 0.0893, 0.1010, 0.1051, 0.0978, 0.1067], grad_fn=<SigmoidBackward>) tensor([0.0981, 0.1227, 0.1018, 0.0970, 0.0979, 0.0908, 0.1001, 0.1092, 0.1011, 0.1088], grad_fn=<SigmoidBackward>) tensor([0.1061, 0.1149, 0.1037, 0.1001, 0.0957, 0.0919, 0.1044, 0.1022, 0.0997, 0.1052], grad_fn=<SigmoidBackward>) tensor([0.0996, 0.1137, 0.1005, 0.0947, 0.0977, 0.0916, 0.1048, 0.1109, 0.1013, 0.1085], grad_fn=<SigmoidBackward>) tensor([0.1008, 0.1154, 0.0986, 0.0996, 0.1031, 0.0952, 0.0995, 0.1063, 0.0982, 0.1094], grad_fn=<SigmoidBackward>) tensor([0.0972, 0.1235, 0.1013, 0.0984, 0.0974, 0.0907, 0.1032, 0.1075, 0.1001, 0.1080], grad_fn=<SigmoidBackward>) tensor([0.0929, 0.1258, 0.1016, 0.0978, 0.1006, 0.0889, 0.1001, 0.1068, 0.0986, 0.1024], grad_fn=<SigmoidBackward>) tensor([0.0982, 0.1207, 0.1040, 0.0990, 0.0999, 0.0910, 0.0980, 0.1051, 0.1039, 0.1078], grad_fn=<SigmoidBackward>) As you can see the network seems to approach a state where the answer for every input is: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1] This neural network is not better than just guessing. Where did I go wrong in my design or code?
Here are a few points that would be useful for you: At first glance your model is not learning since your prediction are as good as a random guess. The first initiative would be to monitor your loss, here you only have a single epoch. At least you could evaluate your model on unseen data: validation_set = torchvision.datasets.MNIST('./', download=True, train=False, transform=T.ToTensor()) validation_loader = DataLoader(validation_set, batch_size=32) You are using a MSE loss (the L2-norm) to train a classification task which is not the right tool for this kind of task. You could instead be using the negative log-likelihood. PyTorch offers nn.CrossEntropyLoss which includes a log-softmax and the negative log-likelihood loss in one module. This change can be implemented by adding in: loss_function = nn.CrossEntropyLoss() and using the right target shapes when applying loss_function (see below). Since the loss function will apply a log-softmax, you shouldn't have an activation function on your model's output. You are using sigmoid as an activation function, intermediate non-linearities will work better as ReLU (see related post). A sigmoid is more suited for a binary classification task. Again, since we are using nn.CrossEntropyLoss, we have to remove the activation after layer2. class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.flatten = nn.Flatten() self.layer1 = torch.nn.Linear(784, 800) self.layer2 = torch.nn.Linear(800, 10) def forward(self, x): x = self.flatten(x) x = torch.relu(self.layer1(x)) x = self.layer2(x) return x A less crucial point is the fact that you could infer estimations on a whole batch instead of looping through each batch one element at a time. A typical training loop for one epoch would look like: for images, labels in training_loader: optimizer.zero_grad() output = net(images) loss = loss_function(output, labels) loss.backward() optimizer.step() With these kinds of modifications, you can expect to have a validation of around 80% after a single epoch.
https://stackoverflow.com/questions/68672764/
PyTorch: vectorising looping addition of one value from a vector to a vector
I want to create a matrix that contains every combination of the sums of all elements in two large vectors using Torch, ultimately using CUDA within Torch. The best way to describe it is with this (inefficient) code: import numpy as np import torch x = torch.Tensor([1.1,2.2,3.3,4.4,5.5]) x_cent = torch.Tensor([10.2,20.2,100.1]) res_matrix = torch.zeros([int((x.shape)[0]), int((x_cent.shape)[0])]) res_col = torch.zeros([int((x.shape)[0])]) for i in x_cent: res_col = x.add(i) for i in range(0,int((x_cent.shape)[0])): res_col = x.add(x_cent[i]) res_matrix[:,i] = res_col print(res_matrix) The output of this is: > tensor([[ 11.3000, 21.3000, 101.2000], [ 12.4000, 22.4000, 102.3000], [ 13.5000, 23.5000, 103.4000], [ 14.6000, 24.6000, 104.5000], [ 15.7000, 25.7000, 105.6000]]) There may be a term for this operation, and if someone can point it out, I will edit this question and include the term. Can you suggest a more efficient (vectorised?) approach to this that I could implement using the CUDA device on very large vectors? I'm guessing that this is a very simple question, but I am a beginner with torch. Thanks!
Prompted by a comment above, and after looking around, I see that a similar blog post exists: every combination of addition of two vectors -- but not a matrix. It can be modified to produce a matrix: (x.unsqueeze(1) + x_cent.unsqueeze(0)) Any better approaches?
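For completeness, a small sketch of the broadcasted version and how to run it on the GPU (assuming a CUDA device is available):
import torch

dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.tensor([1.1, 2.2, 3.3, 4.4, 5.5], device=dev)
x_cent = torch.tensor([10.2, 20.2, 100.1], device=dev)
# (n, 1) + (1, m) broadcasts to (n, m): every pairwise sum in a single vectorised op
res_matrix = x.unsqueeze(1) + x_cent.unsqueeze(0)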
https://stackoverflow.com/questions/68674394/
Can someone explain the layers code in the following pytorch neural network
The neural network is a pytorch implementation of the NVIDIA model for self driving cars. Here I did not understand the first layer of the linear layers, the following is the line. 'nn.Linear(in_features=64 * 2 * 33, out_features=100)' I can understand that 64 is the output of previous layer and 2 is number of flattened layers (if im not wrong). Now my question is what's the purpose of '33'? class NetworkDense(nn.Module): def __init__(self): super(NetworkDense, self).__init__() self.conv_layers = nn.Sequential( nn.Conv2d(3, 24, 5, stride=2), nn.ELU(), nn.Conv2d(24, 36, 5, stride=2), nn.ELU(), nn.Conv2d(36, 48, 5, stride=2), nn.ELU(), nn.Conv2d(48, 64, 3), nn.ELU(), nn.Conv2d(64, 64, 3), nn.Dropout(0.25) ) self.linear_layers = nn.Sequential( nn.Linear(in_features=64 * 2 * 33, out_features=100), nn.ELU(), nn.Linear(in_features=100, out_features=50), nn.ELU(), nn.Linear(in_features=50, out_features=10), nn.Linear(in_features=10, out_features=1) ) def forward(self, input): input = input.view(input.size(0), 3, 70, 320) output = self.conv_layers(input) output = output.view(output.size(0), -1) output = self.linear_layers(output) return output
It comes from your first input shape. input = input.view(input.size(0), 3, 70, 320) Think about it as an image with a size of (70, 320). If we pass that image to the first conv2d, it will become (33, 158)..., and after it has passed through all the conv layers, its shape will be (2, 33). So, how do we calculate these numbers? The rule is: (length - (kernel_size - stride)) // stride Now let's just look at how the 70 changes. The first conv2d has a kernel size of 5 with stride 2, so 70 becomes: (70 - (5 - 2)) // 2 => 33, then for the rest: (33 - (5 - 2)) // 2 => 15 (15 - (5 - 2)) // 2 => 6 (6 - (3 - 1)) // 1 => 4 (4 - (3 - 1)) // 1 => 2 Applying the same process to the other dimension, 320 ends up as 33.
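The same arithmetic as a small helper, in case you want to check the (2, 33) result programmatically (a sketch; it assumes no padding and no dilation, matching the layers above, and uses the equivalent textbook form (length - kernel) // stride + 1):
def conv_out(length, kernel_size, stride=1):
    return (length - kernel_size) // stride + 1

h, w = 70, 320
for k, s in [(5, 2), (5, 2), (5, 2), (3, 1), (3, 1)]:
    h, w = conv_out(h, k, s), conv_out(w, k, s)
print(h, w)  # 2 33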
https://stackoverflow.com/questions/68688868/
Can I use PyTorch DataLoader to load raw data images which are saved in CSV files?
I have raw data images saved in separate CSV files (one image per file). I want to train a CNN on them using PyTorch. How should I load the data so it is suitable as the CNN's input? (Also, the images have 1 channel, while an ImageNet model's input is RGB by default.)
PyTorch's DataLoader, as the name suggests, is simply a utility class that helps you load your data in parallel, build your batch, shuffle and so on, what you need is instead a custom Dataset implementation. Ignoring the fact that images stored in CSV files is kind of weird, you simply need something of the sort: from torch.utils.data import Dataset, DataLoader class CustomDataset(Dataset): def __init__(self, path: Path, ...): # do some preliminary checks, e.g. your path exists, files are there... assert path.exists() ... # retrieve your files in some way, e.g. glob self.csv_files = list(glob.glob(str(path / "*.csv"))) def __len__(self) -> int: # this lets you know len(dataset) once you instantiate it return len(self.csv_files) def __getitem__(self, index: int) -> Any: # this method is called by the dataloader, each index refers to # a CSV file in the list you built in the constructor csv = self.csv_files[index] # now do whatever you need to do and return some tensors image, label = self.load_image(csv) return image, label And that's it, more or less. You can then create your dataset, pass it to a dataloader and iterate the dataloader, something like: dataset = CustomDataset(Path("path/to/csv/files")) train_loader = DataLoader(dataset, shuffle=True, num_workers=8,...) for batch in train_loader: ...
https://stackoverflow.com/questions/68692578/
PyTorch Lightning functools.partial error
I'm using a combination of PyTorch Forecasting and PyTorch Lightning, and running into an odd error. Some code below. batch_size = 128 train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=8) val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=8) . . . tft = TemporalFusionTransformer.from_dataset( training, learning_rate=0.05, hidden_size=16, # biggest influence network size attention_head_size=1, dropout=0.1, hidden_continuous_size=8, output_size=7, # QuantileLoss has 7 quantiles by default loss=QuantileLoss(), log_interval=10, # log example every 10 batches reduce_on_plateau_patience=4, # reduce learning automatically ) trainer.fit( tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader ) However, I then run into this error and I can't figure out why. Can anyone help me figure out what to do with the below error? I tried playing around with changing the syntax for the val_dataloader, but couldn't get anything to work. Traceback (most recent call last): File "/model.py", line 136, in <module> val_dataloaders=val_dataloader, File "C:\...\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 553, in fit self._run(model) File "C:\...\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 912, in _run self._pre_dispatch() File "C:\...\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 941, in _pre_dispatch self._log_hyperparams() File "C:\...\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 970, in _log_hyperparams self.logger.save() File "C:\...\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py", line 48, in wrapped_fn return fn(*args, **kwargs) File "C:\...\venv\lib\site-packages\pytorch_lightning\loggers\tensorboard.py", line 249, in save save_hparams_to_yaml(hparams_file, self.hparams) File "C:\...\venv\lib\site-packages\pytorch_lightning\core\saving.py", line 405, in save_hparams_to_yaml yaml.dump(v) File "C:\...\venv\lib\site-packages\yaml\__init__.py", line 290, in dump return dump_all([data], stream, Dumper=Dumper, **kwds) File "C:\...\venv\lib\site-packages\yaml\__init__.py", line 278, in dump_all dumper.represent(data) File "C:\...\lib\site-packages\yaml\representer.py", line 27, in represent node = self.represent_data(data) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 343, in represent_object 'tag:yaml.org,2002:python/object:'+function_name, state) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 343, in represent_object 'tag:yaml.org,2002:python/object:'+function_name, state) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 346, in represent_object return self.represent_sequence(tag+function_name, args) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 92, in 
represent_sequence node_item = self.represent_data(item) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 286, in represent_tuple return self.represent_sequence('tag:yaml.org,2002:python/tuple', data) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File "C:\...\venv\lib\site-packages\yaml\representer.py", line 331, in represent_object if function.__name__ == '__newobj__': AttributeError: 'functools.partial' object has no attribute '__name__' Process finished with exit code 1
This ended up being caused by an issue with a recent pandas upgrade. Rolling back to 1.2.5 resolved the issue. pip install --upgrade pandas==1.2.5 Details on the problem in the link below. https://github.com/pandas-dev/pandas/issues/42748
https://stackoverflow.com/questions/68694047/
How to train only RPN for torch vision Faster RCNN with pretrained backbone
As the title says, I already have a pretrained backbone, and I want to train only the RPN, not the classifier, using the Faster R-CNN from torchvision. Are there any parameters I can pass to the create_model function, or should I stop the classifier training in my train() function? I'm on mobile, so please excuse my editing. This is my create-model function: # Create your backbone from timm backbone = timm.create_model( "resnet50", pretrained=True, num_classes=0, # this is important to remove fc layers global_pool="" # this is important to remove fc layers ) backbone.out_channels = backbone.feature_info[-1]["num_chs"] anchor_generator = AnchorGenerator( sizes=((16, 32, 64, 128, 256),), aspect_ratios=((0.25, 0.5, 1.0, 2.0),) ) roi_pooler = torchvision.ops.MultiScaleRoIAlign( featmap_names=["0"], output_size=7, sampling_ratio=2 ) fastercnn_model = FasterRCNN( backbone=backbone, num_classes=1000, rpn_anchor_generator=anchor_generator, box_roi_pool=roi_pooler, )
You can do the following # First you can use model.children() method to see the idx of the backbone for idx, child in enumerate(fastercnn_model.children()): if idx == 1: # Now set requires_grad for that idx to False for param in child.parameters(): param.requires_grad = False break # =============== UPDATED ======================== # This will train only the box_predictor not even the RPN. You can try out # Different strategies and find the best for you. # setting everything to false for child in fastercnn_model.children(): for param in child.parameters(): param.requires_grad = False for idx, child in enumerate(fastercnn_model.children()): if idx == 3: for i, param in enumerate(child.parameters()): if i==1: param.requires_grad = True break
https://stackoverflow.com/questions/68694221/
How can I fix cuda runtime error on google colab?
I'm trying to execute the named entity recognition example using BERT and pytorch following the Hugging Face page: Token Classification with W-NUT Emerging Entities. There was a related question on stackoverflow, but the error message is different from my case. cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29 I have trouble with fixing the above cuda runtime error. How can I execute the sample code on google colab with the run time type, GPU? Error trainer.train() # Error Message /usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 147 Variable._execution_engine.run_backward( 148 tensors, grad_tensors_, retain_graph, create_graph, inputs, --> 149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag 150 151 RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29 Code I didn't change the original data and code introduced on the tutorial, Token Classification with W-NUT Emerging Entities. Access from the browser to Token Classification with W-NUT Emerging Entities code: custom_datasets.ipynb - Colaboratory from pathlib import Path import re def read_wnut(file_path): file_path = Path(file_path) raw_text = file_path.read_text().strip() raw_docs = re.split(r'\n\t?\n', raw_text) token_docs = [] tag_docs = [] for doc in raw_docs: tokens = [] tags = [] for line in doc.split('\n'): token, tag = line.split('\t') tokens.append(token) tags.append(tag) token_docs.append(tokens) tag_docs.append(tags) return token_docs, tag_docs texts, tags = read_wnut('wnut17train.conll') from sklearn.model_selection import train_test_split train_texts, val_texts, train_tags, val_tags = train_test_split(texts, tags, test_size=.2) unique_tags = set(tag for doc in tags for tag in doc) tag2id = {tag: id for id, tag in enumerate(unique_tags)} id2tag = {id: tag for tag, id in tag2id.items()} from transformers import DistilBertTokenizerFast tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-cased') train_encodings = tokenizer(train_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True) val_encodings = tokenizer(val_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True) import numpy as np def encode_tags(tags, encodings): labels = [[tag2id[tag] for tag in doc] for doc in tags] encoded_labels = [] for doc_labels, doc_offset in zip(labels, encodings.offset_mapping): # create an empty array of -100 doc_enc_labels = np.ones(len(doc_offset),dtype=int) * -100 arr_offset = np.array(doc_offset) # set labels whose first offset position is 0 and the second is not 0 doc_enc_labels[(arr_offset[:,0] == 0) & (arr_offset[:,1] != 0)] = doc_labels encoded_labels.append(doc_enc_labels.tolist()) return encoded_labels train_labels = encode_tags(train_tags, train_encodings) val_labels = encode_tags(val_tags, val_encodings) import torch import os #os.environ['CUDA_LAUNCH_BLOCKING'] = "1" torch.backends.cudnn.enabled = False # check if CUDA is available train_on_gpu = torch.cuda.is_available() # torch.backends.cudnn.enabled if not train_on_gpu: print('CUDA is not available. Training on CPU ...') else: print('CUDA is available! 
Training on GPU ...') class WNUTDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) train_encodings.pop("offset_mapping") # we don't want to pass this to the model val_encodings.pop("offset_mapping") train_dataset = WNUTDataset(train_encodings, train_labels) val_dataset = WNUTDataset(val_encodings, val_labels) from transformers import DistilBertForTokenClassification model = DistilBertForTokenClassification.from_pretrained('distilbert-base-cased', num_labels=len(unique_tags)) from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments, DistilBertForTokenClassification from sklearn.metrics import precision_recall_fscore_support import tensorflow as tf def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary') acc = accuracy_score(labels, preds) return { 'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall } training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) model = DistilBertForTokenClassification.from_pretrained("distilbert-base-uncased") trainer = Trainer( model=model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset, # evaluation dataset compute_metrics=compute_metrics ) trainer.train() What I did I checked cuda and GPU related settings. #os.environ['CUDA_LAUNCH_BLOCKING'] = "1" torch.backends.cudnn.enabled = False # check if CUDA is available train_on_gpu = torch.cuda.is_available() # torch.backends.cudnn.enabled if not train_on_gpu: print('CUDA is not available. Training on CPU ...') else: print('CUDA is available! Training on GPU ...') #output CUDA is available! Training on GPU ... training_args.device #output device(type='cuda', index=0) Responce to an answer When I comment out the part, #os.environ['CUDA_LAUNCH_BLOCKING'] = "1" #torch.backends.cudnn.enabled = False The error message changed to the below when I didn't reset runtime. /usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in _make_grads(outputs, grads) 49 if out.numel() != 1: 50 raise RuntimeError("grad can be implicitly created only for scalar outputs") ---> 51 new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format)) 52 else: 53 new_grads.append(None) RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. If I reset runtime, the message was the same. RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorMath.cu:29
Maybe the problem comes from this line: torch.backends.cudnn.enabled = False You might comment it out or remove it and try again.
https://stackoverflow.com/questions/68698065/
How to deploy PyTorch in Centos6?
Recently, I wanted to run some PyTorch code on CentOS 6. However, whether I run "pip install torch" or "conda install torch", importing it fails: >>> import torch Traceback (most recent call last): File "", line 1, in File "XXX/anaconda3/envs/XXX/lib/python3.6/site-packages/torch/init.py", line 56, in from torch._C import * ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by XXX/anaconda3/envs/XXX/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so) My environment: OS: CentOS release 6.7 How you installed PyTorch (conda, pip, source): pip & conda Python version: 3.6.5 I tried manually compiling glibc 2.19, but when I put the library path into LD_LIBRARY_PATH, I could no longer use commands like "ls", "make", etc. and got a segmentation fault. I also tried just copying the *.so files to the lib directory under the Python virtual environment; unsurprisingly, Python crashed with a segmentation fault. I understand plenty of components rely on a specific glibc library. I just wonder how I can run PyTorch on CentOS 6 without switching to a new Linux distribution. PS: I need to deploy these Python environments on the production environment, whose OS is CentOS 6.
It's a tough one. You can either downgrade to a very old version of torch (v0.3.1, as I remember, was running OK on CentOS 6.5), or upgrade to CentOS 7. Having two versions of glibc is hell. If you really need CentOS 6 to live with the latest version of torch, try installing glibc into a non-standard location and compiling both Python and PyTorch from source. Update: You can't replace the system's glibc, but you can install another one somewhere else, like /opt/myglibc. PyTorch stopped supporting CentOS 6 as of v0.4.1, so you will have to build it using gcc 5+ and link it against your glibc version. Here are the instructions. But since you don't invoke PyTorch directly, you need to build Python as well. Then you can run your program by setting the glibc path specifically for it: LD_LIBRARY_PATH=/opt/myglibc python my_program.py
https://stackoverflow.com/questions/68699341/
How to update the weights of a model only if loss for current batch is smaller than previous
I'm trying to update the weights of a model during training only for those batches in which the loss is smaller than the one obtained in the previous batch. So, in the batch loop, I store the loss obtained at each iteration, and then I have tried evaluating a condition: if the loss at time t-1 is smaller than that at time t, then I proceed as follows: if loss[t-1] <= loss[t]: loss.backward() optimizer.step() else: #do nothing or what ? Then, nothing should be done in the else part. Nonetheless, I get an error saying CUDA is running out of memory. Of course, before computing the loss, I call optimizer.zero_grad(). The for loop that runs over batches seems to be running fine, but memory usage blows up. I read that maybe setting the gradients to None would prevent the weight update process, but I have tried several statements (output.clone().detach() and also optimizer.zero_grad(set_to_none=True)) and I'm not sure they work. I think they did not. Nonetheless, the memory usage explosion still occurs. Is there a way to get this done?
This is a common problem when storing losses from consecutive steps. The out-of-memory error is caused because you are storing the losses in a list. The computational graphs will still remain and will stay in memory as long as you keep a reference to your losses. An easy fix is to detach the tensor when you append it to the list: # loss = loss_fn(...) losses.append(loss.detach()) Then you can work with if losses[t] <= losses[t-1]: # current loss is smaller losses[t].backward() optimizer.step() else: pass
https://stackoverflow.com/questions/68703623/
How to automatically judge whether the training process of the deep learning model is converged?
When training a deep learning model, I have to look at the loss curve and the performance curve to judge whether the training process of the deep learning model has converged. This has cost me a lot of time. Sometimes, the point of convergence judged by the naked eye is not accurate. Therefore, I'd like to know whether there exists an algorithm or a package that can automatically judge whether the training process of a deep learning model has converged. Can anyone help me? Thanks a lot.
At the risk of disappointing you, I believe there is no such universal algorithm. In my experience, it depends on what you want to achieve, which metrics are important to you, and how much time you are willing to let the training go on for. I have already seen validation losses dramatically go up (a sign of overfitting) while other metrics (mIoU in this case) were still improving on the validation set. In these cases, you need to know what your target is. It is possible (although very rare) that your loss goes up for a substantial amount of time before going down again and reaching better levels than before. There is no way to anticipate this. Finally, and this is arguably a common case if you have tons of training data, your validation loss may continually go down, but do so more and more slowly. In this case, the best strategy, if you had an infinite amount of time, would be to let the training go on indefinitely. In practice, this is impossible, and you would need to find the right balance between performance and training time. If you really need an algorithm, I would suggest this quite simple one: Compute a validation metric M(i) after each ith epoch on a fixed subset of your validation set or the whole validation set. Let's suppose that the higher M(i) is, the better. Fix k, an integer depending on the duration of one training epoch (k~3 should do the trick). If for some n you have M(n) > max(M(n+1), ..., M(n+k)), stop and keep the network you had at epoch n. It's far from perfect, but should be enough for simple tasks; a sketch of it in code follows below. [Edit] If you're not using it yet, I invite you to use TensorBoard to visualize the evolution of your metrics throughout the training. Once set up, it saves a huge amount of time.
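A sketch of that stopping rule in plain Python — train_one_epoch and evaluate are placeholders you would supply; it keeps the best checkpoint and stops once it has not been beaten for k consecutive epochs, which is the rule above in a slightly different form:
import copy

def train_until_plateau(model, train_one_epoch, evaluate, k=3, max_epochs=1000):
    history = []                          # M(1), M(2), ... where higher is better
    best_state, best_epoch = None, -1
    for epoch in range(max_epochs):
        train_one_epoch(model)
        history.append(evaluate(model))   # M(i) on a fixed validation subset
        if best_epoch < 0 or history[-1] > history[best_epoch]:
            best_state, best_epoch = copy.deepcopy(model.state_dict()), epoch
        if epoch - best_epoch >= k:       # no improvement over the last k epochs
            break
    model.load_state_dict(best_state)
    return model, history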
https://stackoverflow.com/questions/68707680/
Torch.gather from 1D array using 2D indices
I have an nx1 tensor and an nxm tensor. I want to gather values from the nx1 tensor using the nxm tensor. For example for input tensor([1, 2, 3, 4]) and index tensor([[0, 3], [2, 1],[1, 3], [2,3]]) output should be tensor([[1, 4], [3, 2], [2,4], [3,4]) The indices are in the 2D matrix and the values are to be gathered from the 1D list. How to use torch.gather/ or any torch tensor function for this purpose? My following code gives error t = torch.tensor([[1, 2, 3, 4]]) ind = torch.tensor([[0, 3], [2, 1],[1, 3], [2,3]]) torch.gather(t, 0, ind) RuntimeError: index 2 is out of bounds for dimension 0 with size 1 Edit: You can do simple indexing to achieve this output. t[ind] Is this the best way to do this. I assume this involves broadcasting the input array. Edit Using t[ind] in forward pass is resulting in the error /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [430,0,0], thread: [97,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. When I try to print tensors in the forward pass, no output is shown after the t[ind] operation. Which makes sense as getitem is not a differentiable operation for the loss to propagate. So there is a valid use case in using gather over getitem.
If you want to use torch.gather: torch.gather(t.expand(4, -1), 1, ind)
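A short check of what this does: expand views the (1, 4) tensor as (4, 4) rows without copying, and gather then picks one value per index along dim 1; for an arbitrary n x m index tensor, t.expand(ind.size(0), -1) generalizes the hard-coded 4:
t = torch.tensor([[1, 2, 3, 4]])
ind = torch.tensor([[0, 3], [2, 1], [1, 3], [2, 3]])
out = torch.gather(t.expand(ind.size(0), -1), 1, ind)
# tensor([[1, 4],
#         [3, 2],
#         [2, 4],
#         [3, 4]])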
https://stackoverflow.com/questions/68708238/
Building a non-image classifier with ground truths
I have a dataset that looks like this: The labels are basically a list of items (let's say, cars in a parking lot) I am given, where there are 10 of them in total, labeled from 0 to 10. I have 14 classes (let's say, 14 different car brands). Every float point value is just percentage of which class that particular item belongs to. For example, Item 2 is likely class 2 with a probability of 0.995275: print(set(list(df['label']))) > {0, 1, 2, 3, 4, 5, 6, 7, 9} My goal is to train a classifier to output an integer from 0 to 14 to predict what class label x belongs to. I am trying to build a feedforward NN with 3 hidden layers (+ input and output layers) and takes 15 inputs and outputs a prediction from 0 to 14. This is what I've designed so far: class NNO(nn.Module): def __init__(self): super(NNO, self).__init__() h= [2,1] self.hidden = nn.Linear(h[0], h[1]) self.hidden = nn.Linear(2,20) self.hidden = nn.Linear(20,20) self.output = nn.Linear(20,15) self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim = 1) def forward(self, y): x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) My question is this. How do I feed the dataset to my NN to start training the epochs? I couldn't find any resource that pertains to a dataset like this.
Here is the answer: # First I create some dummy data label = np.random.randint(0, 14, 1000) random = np.random.random((1000, 14)) total = pd.DataFrame(data=random, columns=[f'{i}_col' for i in range(14)]) total['label'] = label ''' From what I understood you need 1 class in output that has the highest probability and hence this is a multi-class classification problem. In my case, I will just use the highest value from `random` as the target class. ''' class TDataset(torch.utils.data.Dataset): def __init__(self, df): self.inputs = df[[f'{i}_col' for i in range(14)] + ['label']].values self.outputs = df[[f'{i}_col' for i in range(14)]].values def __len__(self): return len(self.inputs) def __getitem__(self, idx): x = torch.tensor(self.inputs[idx], dtype=torch.float) y = torch.tensor(np.argmax(self.outputs[idx])) return x, y ds = TDataset(total) dl = torch.utils.data.DataLoader(ds, batch_size=64) # After doing this I will create a model which takes 15 inputs and # Give 14 outputs in my case which represent the logits class NNO(nn.Module): def __init__(self): super(NNO, self).__init__() self.hidden = nn.Linear(15, 20) self.relu = nn.ReLU() self.output = nn.Linear(20, 14) def forward(self, x): x = self.hidden(x) x = self.relu(x) x = self.output(x) return x # Now we create the model object m = NNO() sample = None for i in dl: sample = i break print(m(sample[0]).shape) # shape = [64, 14] as desired. # Now we define the loss function and then the optimizer loss_fn = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(m.parameters()) # Now we define the training loop for i in range(500): # for 500 epochs epoch_loss = 0 for idx, data in enumerate(dl): inputs = data[0] targets = data[1] # change accordingly for your data preds = m(inputs) optimizer.zero_grad() loss = loss_fn(preds, targets) epoch_loss += loss loss.backward() optimizer.step() if (i%50 == 0): print('loss: ', epoch_loss.item() / len(dl)) ''' Now at the time of inference, you just need to apply softmax on the results of your model and select the most probable output. ''' preds = m(sample[0]) predicted_classes = torch.argmax(torch.nn.functional.softmax(preds), axis=1) # Here the predicted classes are the desired final output.
https://stackoverflow.com/questions/68709057/
Pytorch Assigning fixed parameter to the model
I am only interest in the feature map after 2 convolution layers with specific weights. class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(2, 2), stride=1, padding=1, bias=False), nn.AvgPool2d(kernel_size=(2, 2), stride=(2, 2)) ) with torch.no_grad(): weights1 = torch.tensor([[0.2390, 0.1593], [0.5377, 0]]) self.layer1.weight = nn.Parameter(weights1, requires_grad=False) self.layer2 = nn.Sequential( nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(2, 2), stride=1, padding=1, bias=False), nn.AvgPool2d(kernel_size=(2, 2), stride=(2, 2)) ) with torch.no_grad(): weights2 = torch.tensor([[-0.2390, -0.3585], [-0.5377, 0.2390]]) self.layer2.weight = nn.Parameter(weights2, requires_grad=False) def forward(self, x): x = self.layer1(x) x = self.layer2(x) return x Due to the limitation of real sensor implementation, I have to use fixed weights as above. Problem is that the output is not constant. >>>list(model.parameters()) [Parameter containing: tensor([[0.2390, 0.1593], [0.5377, 0.0000]]), Parameter containing: tensor([[[[-0.2701, 0.1602], [-0.0056, -0.0924]]]], requires_grad=True), Parameter containing: tensor([[-0.2390, -0.3585], [-0.5377, 0.2390]]), Parameter containing: tensor([[[[-0.0287, 0.2864], [ 0.3319, -0.3913]]]], requires_grad=True)] Above is the result of model's parameter and you can see there are other parameters. Do you know how to fix the paramteres?
You are accessing the weight property on the incorrect object: self.layer1 and self.layer2 are not nn.Conv2d instances, they are nn.Sequential layers. Doing so, you are essentially registering two new tensors (the fixed ones) on your module, in addition to the two weight parameters instantiated by the nn.Conv2d layers. You should assign the fixed parameters to self.layer1[0] (and self.layer2[0] respectively): self.layer1[0].weight = nn.Parameter(weights1, requires_grad=False) # and self.layer2[0].weight = nn.Parameter(weights2, requires_grad=False) Then .parameters() will generate two tensor parameters: >>> list(model.parameters()) [Parameter containing: tensor([[0.2390, 0.1593], [0.5377, 0.0000]]), Parameter containing: tensor([[-0.2390, -0.3585], [-0.5377, 0.2390]])]
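One additional caveat, beyond the scope of the question and hedged as an assumption about what the sensor weights represent: nn.Conv2d stores its weight as a 4-D tensor of shape (out_channels, in_channels, kH, kW), so a 2x2 kernel for this 1-in/1-out layer should be reshaped before assignment, e.g.:
weights1 = torch.tensor([[0.2390, 0.1593],
                         [0.5377, 0.0000]]).view(1, 1, 2, 2)
self.layer1[0].weight = nn.Parameter(weights1, requires_grad=False)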
https://stackoverflow.com/questions/68715352/
ChecksumMismatchError: Conda detected a mismatch between the expected content and downloaded content
I have installed many many packages including torch, gpytorch, ... in the past in Windows, Ubuntu and Mac following this scenario: conda create -n env_name conda activate env_name conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia However, this time on Ubuntu, I interfered the following error on downloading the package which apparently after downloading when checking the checksum, it sees a mismatch. I also tried removing those *.bz2 files just in case if there is a pre-downloaded file, it didn't work. ChecksumMismatchError: Conda detected a mismatch between the expected content and downloaded content for url 'https://conda.anaconda.org/pytorch/linux-64/torchaudio-0.9.0-py39.tar.bz2'. download saved to: /home/amin/anaconda3/pkgs/torchaudio-0.9.0-py39.tar.bz2 expected md5: 7224453f68125005e034cb6646f2f0a3 actual md5: 6bbb8056603453427bbe4cca4b033361 ChecksumMismatchError: Conda detected a mismatch between the expected content and downloaded content for url 'https://conda.anaconda.org/pytorch/linux-64/torchvision-0.10.0-py39_cu111.tar.bz2'. download saved to: /home/amin/anaconda3/pkgs/torchvision-0.10.0-py39_cu111.tar.bz2 expected md5: 78b4c927e54b06d7a6d18eec8b3f2d18 actual md5: 69dd8411c573903db293535017742bd9 My system information: Linux SPOT-Server 5.8.0-63-generic #71~20.04.1-Ubuntu SMP Thu Jul 15 17:46:08 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux My conda --version is also 4.8.2. I also should add that I have same issue on Windows having conda --version equal to 4.10.1.
The PyTorch channel maintainers had an issue when uploading some new package builds, which has since been resolved (see GitHub Issue). The technical cause was uploading new builds with identical versions and build numbers as before, without replacing the previous build. This caused the expected MD5 checksum to correspond to the new upload, but the tarball that was ultimately downloaded still corresponded to the previous upload, leading to a checksum mismatch.
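If the mismatching tarball is still sitting in the local package cache, it may also be necessary to clear it before retrying (a general conda tip, not specific to this incident):
conda clean --packages --tarballs
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia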
https://stackoverflow.com/questions/68719486/
Setting learning rate for Stochastic Weight Averaging in PyTorch
Following is a small working code for Stochastic Weight Averaging in Pytorch taken from here. loader, optimizer, model, loss_fn = ... swa_model = torch.optim.swa_utils.AveragedModel(model) scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300) swa_start = 160 swa_scheduler = SWALR(optimizer, swa_lr=0.05) for epoch in range(300): for input, target in loader: optimizer.zero_grad() loss_fn(model(input), target).backward() optimizer.step() if epoch > swa_start: swa_model.update_parameters(model) swa_scheduler.step() else: scheduler.step() # Update bn statistics for the swa_model at the end torch.optim.swa_utils.update_bn(loader, swa_model) # Use swa_model to make predictions on test data preds = swa_model(test_input) In this code after 160th epoch the swa_scheduler is used instead of the usual scheduler. What does swa_lr signify? The documentation says, Typically, in SWA the learning rate is set to a high constant value. SWALR is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. So what happens to the learning rate of the optimizer after 160th epoch? Does swa_lr affect the optimizer learning rate? Suppose that at the beginning of the code the optimizer was ADAM initialized with a learning rate of 1e-4. Then does the above code imply that for the first 160 epochs the learning rate for training will be 1e-4 and then for the remaining number of epochs it will be swa_lr=0.05? If yes, is it a good idea to define swa_lr also to 1e-4?
does the above code imply that for the first 160 epochs the learning rate for training will be 1e-4 No, it won't be equal to 1e-4: during the first 160 epochs the learning rate is managed by the first scheduler, scheduler. This one is initialized as a torch.optim.lr_scheduler.CosineAnnealingLR, so over those epochs the learning rate follows a cosine annealing curve starting from 1e-4. for the remaining number of epochs it will be swa_lr=0.05 This is partially true. During the second part - from epoch 160 - the optimizer's learning rate is handled by the second scheduler, swa_scheduler, which is initialized as a torch.optim.swa_utils.SWALR. You can read on the documentation page: SWALR is a learning rate scheduler that anneals the learning rate to a fixed value [swa_lr], and then keeps it constant. By default (cf. source code), the number of annealing epochs is equal to 10. Therefore the learning rate from epoch 170 to epoch 300 will be equal to swa_lr and will stay that way; together, the cosine-annealing phase and this constant phase make up the complete learning-rate profile. If yes, is it a good idea to define swa_lr also to 1e-4 It is mentioned in the docs: Typically, in SWA the learning rate is set to a high constant value. Setting swa_lr to 1e-4 would simply make that second, constant part of the profile equal to 1e-4 rather than a high value.
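To make the schedule concrete without the plots, here is a small numeric sketch (the model is a placeholder and the hyperparameters are the ones assumed in the question) that records the learning rate over the 300 epochs:

import torch
from torch import nn
from torch.optim.swa_utils import SWALR

model = nn.Linear(10, 1)                       # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
swa_scheduler = SWALR(optimizer, swa_lr=0.05)  # anneals to 0.05 over 10 epochs by default
swa_start = 160

lrs = []
for epoch in range(300):
    optimizer.step()                           # dummy step so the schedulers can advance
    if epoch > swa_start:
        swa_scheduler.step()
    else:
        scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])

# cosine decay from 1e-4 during the first part, then a constant 0.05 once SWALR has annealed
print(lrs[0], lrs[150], lrs[175], lrs[299])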
https://stackoverflow.com/questions/68726290/
cross entropy loss with weight manual calculation
Hi, just playing around with code, I got an unexpected result from the cross entropy loss weight implementation. pred=torch.tensor([[8,5,3,2,6,1,6,8,4],[2,5,1,3,4,6,2,2,6],[1,1,5,8,9,2,5,2,8],[2,2,6,4,1,1,7,8,3],[2,2,2,7,1,7,3,4,9]]).float() label=torch.tensor([[3],[7],[8],[2],[5]],dtype=torch.int64) weights=torch.tensor([1,1,1,10,1,6,1,1,1],dtype=torch.float32) With these sample variables, pytorch's cross entropy loss gives 4.7894 loss = F.cross_entropy(pred, label, weight=weights,reduction='mean') > 4.7894 I manually implemented the cross entropy loss code as below one_hot = torch.zeros_like(pred).scatter(1, label.view(-1, 1), 1) log_prb = F.log_softmax(pred, dim=1) loss = -(one_hot * log_prb).sum(dim=1).mean() This implementation gives the same result as pytorch's cross entropy function when no weight value is given. However, with a weight value one_hot = torch.zeros_like(pred).scatter(1, label.view(-1, 1), 1) log_prb = F.log_softmax(pred, dim=1) loss = -(one_hot * log_prb)*weights.sum(dim=1).sum()/weights.sum() > 3.9564 it gives a different loss value from the pytorch module (4.7894). I can roughly estimate that my understanding of the loss's weight has some problem here, but I can't find out the exact reason for this discrepancy. Can anybody help me handle this issue?
I found out the problem. It was quite simple... I shouldn't have divided by the whole sum of the weights. Dividing by wt.sum() instead (with wt = one_hot * weights) got me 4.7894. >>> wt = one_hot*weights >>> loss = -(one_hot * log_prb * weights).sum(dim=1).sum() / wt.sum() 4.7894 The denominator should only include the 'related' weight values (those of the true classes), not all of them.
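For completeness, a self-contained check (using the question's tensors, but with the labels flattened to 1-D class indices as F.cross_entropy expects) showing that the corrected manual formula matches the built-in weighted loss:

import torch
import torch.nn.functional as F

pred = torch.tensor([[8,5,3,2,6,1,6,8,4],[2,5,1,3,4,6,2,2,6],[1,1,5,8,9,2,5,2,8],
                     [2,2,6,4,1,1,7,8,3],[2,2,2,7,1,7,3,4,9]]).float()
label = torch.tensor([3, 7, 8, 2, 5])               # 1-D class indices
weights = torch.tensor([1,1,1,10,1,6,1,1,1], dtype=torch.float32)

reference = F.cross_entropy(pred, label, weight=weights, reduction='mean')

one_hot = torch.zeros_like(pred).scatter(1, label.view(-1, 1), 1)
log_prb = F.log_softmax(pred, dim=1)
wt = one_hot * weights                              # weight of each sample's true class
manual = -(one_hot * log_prb * weights).sum(dim=1).sum() / wt.sum()

print(reference.item(), manual.item())              # both should be ~4.7894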
https://stackoverflow.com/questions/68727252/
Pytorch model output - only keep scores above 0.3
I have a large list (actually only one element with 3 dictionaries) as shown below. It is the output of a pretrained pytorch model for one instance of the test set. There are three attributes in the list (boxes, labels, scores), all of type tensor. Each box has a corresponding score and label. In total there are 100 boxes. Is there any quick way to only keep the boxes, labels and scores where the score is greater than 0.3? So for this example, there should only be 5 boxes with their respective score and label. output = [{'boxes': tensor([[0.0000e+00, 2.9095e+01, 7.3249e+01, 1.1387e+02], [7.8610e+01, 1.9392e+01, 1.6580e+02, 1.0291e+02], [3.6086e-01, 2.9609e+01, 1.0292e+02, 2.0285e+02], [1.8569e+02, 2.3418e+01, 2.4397e+02, 1.4092e+02], [1.9678e-03, 0.0000e+00, 5.8328e+01, 1.7467e+02], [1.4161e+02, 1.5196e+02, 2.2797e+02, 2.3690e+02], [1.5630e+02, 5.4246e+01, 2.1178e+02, 1.7170e+02], [5.3407e+01, 6.4962e+01, 1.0892e+02, 1.8180e+02], [1.0011e+02, 1.5188e+02, 1.8732e+02, 2.3737e+02], [1.5080e+02, 3.9219e+01, 2.3776e+02, 1.2494e+02], [8.9806e+01, 1.3143e+02, 1.7610e+02, 2.1669e+02], [1.3518e+02, 1.2713e+02, 1.9257e+02, 2.4350e+02], [1.1423e+02, 1.4989e+01, 1.7153e+02, 1.3093e+02], [7.9036e+01, 1.1927e+00, 1.9153e+02, 1.7694e+02], [8.4356e+01, 2.3523e+01, 1.4035e+02, 1.4181e+02], [6.9645e+01, 1.5251e+02, 1.5582e+02, 2.3697e+02], [1.4163e+02, 1.2086e+02, 2.2753e+02, 2.0553e+02], [8.3618e+01, 1.0583e+02, 1.4110e+02, 2.2334e+02], [3.2450e-01, 7.1444e+01, 7.1565e+01, 1.5488e+02], [7.2167e+00, 4.9198e+01, 9.3541e+01, 1.3515e+02], [3.8690e+01, 3.7546e+01, 1.2640e+02, 1.2457e+02], [1.0393e+02, 8.4865e+01, 1.6160e+02, 2.0193e+02], [9.6637e+00, 1.2074e+02, 9.1465e+01, 2.0829e+02], [2.6140e+00, 8.6522e+01, 5.9267e+01, 2.0357e+02], [1.6260e+02, 6.0580e+01, 2.4744e+02, 1.4646e+02], [1.7624e+02, 7.5614e+01, 2.3260e+02, 1.9287e+02], [1.2096e+02, 2.8686e+01, 2.0757e+02, 1.1476e+02], [1.0993e+02, 1.1107e+02, 1.9594e+02, 1.9697e+02], [3.8821e+01, 1.6499e+00, 1.5277e+02, 1.7680e+02], [1.3592e+02, 1.7006e+00, 2.5177e+02, 1.7528e+02], [4.3270e+01, 8.5363e+01, 9.8090e+01, 2.0313e+02], [3.9082e+01, 1.6281e+02, 1.2582e+02, 2.4565e+02], [1.0941e+02, 9.5967e+00, 1.9760e+02, 9.2006e+01], [9.4279e+01, 5.4012e+01, 1.5129e+02, 1.7273e+02], [1.7610e+02, 1.1657e+02, 2.3257e+02, 2.3306e+02], [1.8356e+02, 1.2347e+02, 2.5438e+02, 2.0546e+02], [1.4145e+00, 7.8904e+01, 7.9495e+01, 2.5600e+02], [5.7602e+01, 9.8933e+01, 1.7596e+02, 2.5600e+02], [1.3184e+02, 9.0243e+01, 2.1412e+02, 1.7617e+02], [1.3507e+02, 4.4525e+01, 1.9094e+02, 1.6303e+02], [1.0465e+01, 1.5353e+02, 9.2553e+01, 2.3779e+02], [1.8336e+02, 1.5361e+02, 2.5428e+02, 2.3656e+02], [1.9591e+02, 7.6679e+01, 2.5259e+02, 1.9309e+02], [9.8446e+01, 7.9284e+01, 2.1062e+02, 2.5600e+02], [1.3960e+02, 9.7938e+00, 2.2880e+02, 9.1881e+01], [9.0553e+01, 0.0000e+00, 1.7578e+02, 7.1238e+01], [1.8702e+00, 1.2331e+02, 4.8407e+01, 2.4824e+02], [0.0000e+00, 4.9428e-01, 1.2664e+02, 1.4013e+02], [7.9054e+01, 1.5236e+00, 2.3079e+02, 1.0113e+02], [1.6006e+02, 6.4527e+01, 2.5467e+02, 2.5600e+02], [0.0000e+00, 1.7543e+02, 1.8282e+02, 2.5560e+02], [1.7264e+00, 1.7961e+02, 7.0644e+01, 2.5600e+02], [1.8063e+02, 9.7504e+00, 2.5516e+02, 9.3108e+01], [5.0636e+01, 1.3299e+02, 1.3221e+02, 2.1604e+02], [3.1850e+01, 5.4289e+01, 8.8847e+01, 1.7312e+02], [2.0640e+02, 3.2171e+01, 2.5485e+02, 1.5160e+02], [1.9062e+01, 1.2459e+00, 1.7152e+02, 1.0271e+02], [8.0108e+01, 1.8195e+02, 1.6510e+02, 2.5523e+02], [6.4087e+00, 1.3263e+00, 9.6138e+01, 7.0220e+01], [2.1170e+01, 2.3619e+00, 7.9999e+01, 1.0748e+02], 
[5.7921e+01, 6.5922e-01, 1.4736e+02, 8.1241e+01], [1.1025e+02, 1.8136e+02, 1.9421e+02, 2.5513e+02], [6.1567e+01, 1.7640e+02, 2.5535e+02, 2.5600e+02], [3.9355e+01, 3.4047e+00, 1.0319e+02, 8.7065e+01], [5.0878e+01, 1.0217e+02, 1.3312e+02, 1.8664e+02], [7.4605e+01, 5.4398e+01, 1.2882e+02, 1.7320e+02], [1.7292e+02, 1.8397e+02, 2.5249e+02, 2.5558e+02], [1.8037e+01, 9.5900e+01, 1.3505e+02, 2.5211e+02], [1.4013e+02, 1.9098e+02, 2.2696e+02, 2.5415e+02], [6.2275e+01, 8.4387e+01, 9.0523e+01, 1.4154e+02], [1.7307e+01, 1.9287e+02, 1.1007e+02, 2.5532e+02], [7.2651e+01, 6.8909e+01, 1.0101e+02, 1.2587e+02], [1.6461e+02, 7.4065e+01, 1.9301e+02, 1.3071e+02], [7.7585e+01, 5.3052e+01, 1.0652e+02, 1.0954e+02], [1.6948e+02, 6.2818e+01, 1.9857e+02, 1.2058e+02], [5.7015e+01, 1.0044e+02, 8.5367e+01, 1.5666e+02], [7.7270e+01, 8.4819e+01, 1.0621e+02, 1.4148e+02], [6.7998e-01, 5.9344e+01, 3.2433e+01, 1.0408e+02], [5.7399e+01, 5.8315e+01, 8.5744e+01, 1.1517e+02], [1.5450e+02, 5.2688e+01, 1.8301e+02, 1.1024e+02], [6.7396e+01, 5.3385e+01, 9.6198e+01, 1.0940e+02], [5.2431e+01, 1.1456e+02, 8.0534e+01, 1.7151e+02], [5.2424e+01, 7.2901e+01, 8.0602e+01, 1.3118e+02], [5.4905e+01, 7.4931e+01, 9.8882e+01, 1.1913e+02], [1.7986e+02, 6.2481e+01, 2.0913e+02, 1.2018e+02], [6.7338e+01, 1.0491e+02, 9.5506e+01, 1.6148e+02], [1.7451e+02, 8.3800e+01, 2.0362e+02, 1.4150e+02], [4.9071e+01, 4.8894e+01, 9.4031e+01, 9.3116e+01], [1.0840e+00, 1.1331e+01, 3.7614e+01, 1.2899e+02], [8.2344e+01, 6.8916e+01, 1.1157e+02, 1.2634e+02], [1.6138e+02, 5.9034e+01, 2.0742e+02, 1.0366e+02], [1.4473e+01, 2.2719e+01, 2.1446e+02, 1.3569e+02], [4.7128e+01, 8.9411e+01, 7.5240e+01, 1.4684e+02], [1.8501e+02, 1.1383e+02, 2.1348e+02, 1.7167e+02], [5.4657e+01, 1.2145e+02, 9.8843e+01, 1.6540e+02], [3.7407e+00, 2.7927e+01, 4.8678e+01, 7.2360e+01], [1.6647e+02, 8.0292e+01, 2.1177e+02, 1.2410e+02], [3.4396e+01, 6.4177e+01, 7.8244e+01, 1.0873e+02], [8.7888e+01, 5.2212e+01, 1.1652e+02, 1.1079e+02], [1.7443e+02, 1.1404e+02, 2.0353e+02, 1.7198e+02]], device='cuda:0'), 'labels': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cuda:0'), 'scores': tensor([0.3317, 0.3235, 0.3208, 0.3157, 0.3108, 0.2977, 0.2974, 0.2955, 0.2938, 0.2904, 0.2902, 0.2836, 0.2797, 0.2794, 0.2787, 0.2784, 0.2782, 0.2746, 0.2739, 0.2695, 0.2658, 0.2655, 0.2647, 0.2628, 0.2622, 0.2596, 0.2596, 0.2593, 0.2591, 0.2584, 0.2574, 0.2559, 0.2550, 0.2529, 0.2526, 0.2429, 0.2428, 0.2408, 0.2397, 0.2381, 0.2370, 0.2344, 0.2302, 0.2296, 0.2292, 0.2260, 0.2258, 0.2252, 0.2201, 0.2166, 0.2125, 0.2063, 0.2056, 0.2054, 0.2050, 0.2032, 0.2023, 0.2021, 0.1985, 0.1956, 0.1943, 0.1776, 0.1739, 0.1708, 0.1700, 0.1665, 0.1657, 0.1595, 0.1588, 0.1561, 0.1553, 0.1553, 0.1484, 0.1426, 0.1419, 0.1416, 0.1289, 0.1265, 0.1250, 0.1248, 0.1226, 0.1219, 0.1216, 0.1208, 0.1197, 0.1186, 0.1182, 0.1164, 0.1164, 0.1157, 0.1133, 0.1109, 0.1097, 0.1086, 0.1055, 0.1055, 0.1054, 0.1047, 0.1026, 0.1020], device='cuda:0')}]
So you're looking for something like this? mask = output[0]['scores'] > 0.3 for key,val in output[0].items(): output[0][key] = val[mask] output[0] {'boxes': tensor([[0.0000e+00, 2.9095e+01, 7.3249e+01, 1.1387e+02], [7.8610e+01, 1.9392e+01, 1.6580e+02, 1.0291e+02], [3.6086e-01, 2.9609e+01, 1.0292e+02, 2.0285e+02], [1.8569e+02, 2.3418e+01, 2.4397e+02, 1.4092e+02], [1.9678e-03, 0.0000e+00, 5.8328e+01, 1.7467e+02]]), 'labels': tensor([1, 1, 1, 1, 1]), 'scores': tensor([0.3317, 0.3235, 0.3208, 0.3157, 0.3108])}
https://stackoverflow.com/questions/68730980/
RuntimeError: CUDA error: device-side assert triggered - BART model
I am trying to run BART language model for a text generation task. My code was working fine when I used for another encoder-decoder model (T5), but with bart I am getting this error: File "train_bart.py", line 89, in train outputs = model(input_ids = ids, attention_mask = mask, decoder_input_ids=y_ids, labels=lm_labels) cs-lab-host1" 12:39 10-Aug-21 File ".../venv/tf_23/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File ".../venv/tf_23/lib/python3.6/site-packages/transformers/models/bart/modeling_bart.py", line 1308, in forward return_dict=return_dict, File ".../venv/tf_23/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File ".../venv/tf_23/lib/python3.6/site-packages/transformers/models/bart/modeling_bart.py", line 1196, in forward return_dict=return_dict, File ".../venv/tf_23/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File ".../venv/tf_23/lib/python3.6/site-packages/transformers/models/bart/modeling_bart.py", line 985, in forward attention_mask, input_shape, inputs_embeds, past_key_values_length File ".../venv/tf_23/lib/python3.6/site-packages/transformers/models/bart/modeling_bart.py", line 866, in _prepare_decoder_attent ion_mask ).to(self.device) RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. And this is where error happens: for _, data in tqdm(enumerate(loader, 0), total=len(loader), desc='Processing batches..'): y = data['target_ids'].to(device, dtype = torch.long) y_ids = y[:, :-1].contiguous() lm_labels = y[:, 1:].clone().detach() lm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100 ids = data['source_ids'].to(device, dtype = torch.long) mask = data['source_mask'].to(device, dtype = torch.long) outputs = model(input_ids = ids, attention_mask = mask, decoder_input_ids=y_ids, labels=lm_labels) loss = outputs[0] loader is the tokenized and processed data.
After fighting for many hours, I found that the error was due to adding new tokens to the Bart tokenizer. Thus I needed to resize the model input embeddings matrix: model.resize_token_embeddings(len(tokenizer)) The point that is still not clear to me is that, without resizing the embeddings matrix, I was able to fine-tune T5 model without any problem, but not Bart. Maybe this is because Bart is sharing weights between the input and the output layers (I am not sure of this either).
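For reference, the pattern looks roughly like this (a sketch with made-up special tokens, not the asker's actual code):

from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# hypothetical extra tokens -- whatever was added before fine-tuning
tokenizer.add_tokens(["<ent>", "</ent>"])

# grow the (tied) embedding matrix to the new vocabulary size; otherwise
# out-of-range token ids trigger the device-side assert on the GPU
model.resize_token_embeddings(len(tokenizer))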
https://stackoverflow.com/questions/68732271/
Mixed Precision(Pytorch Autocast) Slows Down the Code
I have an RTX 3070. Somehow using autocast slows down my code. torch.version.cuda prints 11.1, torch.backends.cudnn.version() prints 8005 and my PyTorch version is 1.9.0. I’m using Ubuntu 20.04 with Kernel 5.11.0-25-generic. That’s the code I’ve been using: torch.cuda.synchronize() start = torch.cuda.Event(enable_timing=True) end = torch.cuda.Event(enable_timing=True) start.record() for epoch in range(10): running_loss = 0.0 for i, data in enumerate(trainloader, 0): inputs, labels = data optimizer.zero_grad() with torch.cuda.amp.autocast(): outputs = net(inputs) loss = criterion(outputs, labels) scaler.scale(loss).backward() scaler.step(optimizer) scaler.update() end.record() torch.cuda.synchronize() print(start.elapsed_time(end)) Without torch.cuda.amp.autocast(), 1 epoch takes 22 seconds, whereas with autocast() 1 epoch takes 30 seconds.
It turns out, my model was not big enough to utilize mixed precision. When I increased the in/out channels of convolutional layer, it finally worked as expected.
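A rough way to check this yourself (layer sizes here are made up; this is only a sketch) is to time a single layer with and without autocast and confirm that half-precision kernels are actually being used:

import torch
import torch.nn as nn

conv = nn.Conv2d(64, 256, kernel_size=3, padding=1).cuda()
x = torch.randn(32, 64, 112, 112, device="cuda")

def timed(fn, iters=50):
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(5):                      # warm-up
        fn()
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end)

print("fp32:", timed(lambda: conv(x)))
with torch.cuda.amp.autocast():
    print("dtype under autocast:", conv(x).dtype)   # torch.float16
    print("amp :", timed(lambda: conv(x)))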
https://stackoverflow.com/questions/68734159/
Do I need to load the weights of another class I use in my NN class?
I have a model that needs to implement self-attention and this is how I wrote my code: class SelfAttention(nn.Module): def __init__(self, args): self.multihead_attn = torch.nn.MultiheadAttention(args) def forward(self, x): return self.multihead_attn.forward(x, x, x) class ActualModel(nn.Module): def __init__(self): self.inp_layer = nn.Linear(arg1, arg2) self.self_attention = SelfAttention(some_args) self.out_layer = nn.Linear(arg2, 1) def forward(self, x): x = self.inp_layer(x) x = self.self_attention(x) x = self.out_layer(x) return x After loading a checkpoint of ActualModel, when continuing training or at prediction time, should I load a saved checkpoint of the SelfAttention class in ActualModel.__init__? If I create an instance of class SelfAttention, would the trained weights corresponding to SelfAttention.multihead_attn be loaded if I do torch.load('actual_model.pth'), or would they be reinitialized? In other words, is this necessary? class ActualModel(nn.Module): def __init__(self): self.inp_layer = nn.Linear(arg1, arg2) self.self_attention = SelfAttention(some_args) self.out_layer = nn.Linear(arg2, 1) def pred_or_continue_train(self): self.self_attention = torch.load('self_attention.pth') actual_model = torch.load('actual_model.pth') actual_model.pred_or_continue_train() actual_model.eval()
In other words, is this necessary? In short, No. The SelfAttention module will be loaded automatically, because everything inside it is registered through the usual mechanisms (nn.Module attributes, nn.Parameter, or manually registered buffers). A quick example: import torch import torch.nn as nn class SelfAttention(nn.Module): def __init__(self, fin, n_h): super(SelfAttention, self).__init__() self.multihead_attn = torch.nn.MultiheadAttention(fin, n_h) def forward(self, x): return self.multihead_attn.forward(x, x, x) class ActualModel(nn.Module): def __init__(self): super(ActualModel, self).__init__() self.inp_layer = nn.Linear(10, 20) self.self_attention = SelfAttention(20, 1) self.out_layer = nn.Linear(20, 1) def forward(self, x): x = self.inp_layer(x) x = self.self_attention(x) x = self.out_layer(x) return x m = ActualModel() for k, v in m.named_parameters(): print(k) You will get the following output, where self_attention is successfully registered. inp_layer.weight inp_layer.bias self_attention.multihead_attn.in_proj_weight self_attention.multihead_attn.in_proj_bias self_attention.multihead_attn.out_proj.weight self_attention.multihead_attn.out_proj.bias out_layer.weight out_layer.bias
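To answer the original question directly: saving and re-loading restores the nested module's weights with no extra step. A quick sketch reusing the classes above (this uses the state_dict route rather than pickling the whole model as in the question; the file name is arbitrary):

m = ActualModel()
# ... training ...
torch.save(m.state_dict(), "actual_model.pth")

m2 = ActualModel()
m2.load_state_dict(torch.load("actual_model.pth"))  # SelfAttention weights come back too
m2.eval()
# no separate torch.load('self_attention.pth') is needed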
https://stackoverflow.com/questions/68740357/
How to avoid two variables refering to the same data? #Pytorch
During initialization, I tried to reduce the repetition in my code, so instead of: output= (torch.zeros(2, 3), torch.zeros(2, 3)) I wrote: z = torch.zeros(2, 3) output= (z,z) However, I found that the second method is wrong: if I assign the data to variables h,c, any change to h is also applied to c. h,c = output print(h,c) h +=torch.ones(2,3) print('-----------------') print(h,c) Results of the test above: tensor([[0., 0., 0.], [0., 0., 0.]]) tensor([[0., 0., 0.], [0., 0., 0.]]) ----------------- tensor([[1., 1., 1.], [1., 1., 1.]]) tensor([[1., 1., 1.], [1., 1., 1.]]) Is there a more elegant way to initialize two independent variables?
I agree that your initial line needs no modification, but if you do want an alternative, consider: z = torch.zeros(2, 3) output = (z, z.clone()) The reason the other one (output = (z, z)) doesn't work, as you've correctly discovered, is that no copy is made: each entry of the tuple holds the same reference to z.
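A quick check (illustrative only) that the cloned tensor really is independent:

import torch

z = torch.zeros(2, 3)
h, c = z, z.clone()

h += torch.ones(2, 3)
print(c)                                # still all zeros: c has its own storage
print(h.data_ptr() == c.data_ptr())     # False -- different underlying memory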
https://stackoverflow.com/questions/68741471/
Error while trying to fine-tune the ReformerModelWithLMHead (google/reformer-enwik8) for NER
I'm trying to fine-tune the ReformerModelWithLMHead (google/reformer-enwik8) for NER. I used the padding sequence length same as in the encode method (max_length = max([len(string) for string in list_of_strings])) along with attention_masks. And I got this error: ValueError: If training, make sure that config.axial_pos_shape factors: (128, 512) multiply to sequence length. Got prod((128, 512)) != sequence_length: 2248. You might want to consider padding your sequence length to 65536 or changing config.axial_pos_shape. When I changed the sequence length to 65536, my colab session crashed by getting all the inputs of 65536 lengths. According to the second option(changing config.axial_pos_shape), I cannot change it. I would like to know, Is there any chance to change config.axial_pos_shape while fine-tuning the model? Or I'm missing something in encoding the input strings for reformer-enwik8? Thanks! Question Update: I have tried the following methods: By giving paramteres at the time of model instantiation: model = transformers.ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8", num_labels=9, max_position_embeddings=1024, axial_pos_shape=[16,64], axial_pos_embds_dim=[32,96],hidden_size=128) It gives me the following error: RuntimeError: Error(s) in loading state_dict for ReformerModelWithLMHead: size mismatch for reformer.embeddings.word_embeddings.weight: copying a param with shape torch.Size([258, 1024]) from checkpoint, the shape in current model is torch.Size([258, 128]). size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([128, 1, 256]) from checkpoint, the shape in current model is torch.Size([16, 1, 32]). This is quite a long error. Then I tried this code to update the config: model1 = transformers.ReformerModelWithLMHead.from_pretrained('google/reformer-enwik8', num_labels = 9) Reshape Axial Position Embeddings layer to match desired max seq length model1.reformer.embeddings.position_embeddings.weights[1] = torch.nn.Parameter(model1.reformer.embeddings.position_embeddings.weights[1][0][:128]) Update the config file to match custom max seq length model1.config.axial_pos_shape = 16,128 model1.config.max_position_embeddings = 16*128 #2048 model1.config.axial_pos_embds_dim= 32,96 model1.config.hidden_size = 128 output_model_path = "model" model1.save_pretrained(output_model_path) By this implementation, I am getting this error: RuntimeError: The expanded size of the tensor (512) must match the existing size (128) at non-singleton dimension 2. Target sizes: [1, 128, 512, 768]. Tensor sizes: [128, 768] Because updated size/shape doesn't match with the original config parameters of pretrained model. The original parameters are: axial_pos_shape = 128,512 max_position_embeddings = 128*512 #65536 axial_pos_embds_dim= 256,768 hidden_size = 1024 Is it the right way I'm changing the config parameters or do I have to do something else? Is there any example where ReformerModelWithLMHead('google/reformer-enwik8') model fine-tuned. 
My main code implementation is as follow: class REFORMER(torch.nn.Module): def __init__(self): super(REFORMER, self).__init__() self.l1 = transformers.ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8", num_labels=9) def forward(self, input_ids, attention_masks, labels): output_1= self.l1(input_ids, attention_masks, labels = labels) return output_1 model = REFORMER() def train(epoch): model.train() for _, data in enumerate(training_loader,0): ids = data['input_ids'][0] # input_ids from encode method of the model https://huggingface.co/google/reformer-enwik8#:~:text=import%20torch%0A%0A%23%20Encoding-,def%20encode,-(list_of_strings%2C%20pad_token_id%3D0 input_shape = ids.size() targets = data['tags'] print("tags: ", targets, targets.size()) least_common_mult_chunk_length = 65536 padding_length = least_common_mult_chunk_length - input_shape[-1] % least_common_mult_chunk_length #pad input input_ids, inputs_embeds, attention_mask, position_ids, input_shape = _pad_to_mult_of_chunk_length(self=model.l1, input_ids=ids, inputs_embeds=None, attention_mask=None, position_ids=None, input_shape=input_shape, padding_length=padding_length, padded_seq_length=None, device=None, ) outputs = model(input_ids, attention_mask, labels=targets) # sending inputs to the forward method print(outputs) loss = outputs.loss logits = outputs.logits if _%500==0: print(f'Epoch: {epoch}, Loss: {loss}') for epoch in range(1): train(epoch)
First of all, you should note that google/reformer-enwik8 is not a properly trained language model and that you will probably not get decent results from fine-tuning it. enwik8 is a compression challenge and the reformer authors used this dataset for exactly that purpose: To verify that the Reformer can indeed fit large models on a single core and train fast on long sequences, we train up to 20-layer big Reformers on enwik8 and imagenet64... This is also the reason why they haven't trained a sub-word tokenizer and operate on character level. You should also note that the LMHead is usually used for predicting the next token of a sequence (CLM). You probably want to use a token classification head (i.e. use an encoder ReformerModel and add a linear layer with 9 classes on top+maybe a dropout layer). Anyway, in case you want to try it still, you can do the following to reduce the memory footprint of the google/reformer-enwik8 reformer: Reduce the number of hashes during training: from transformers import ReformerConfig, ReformerModel conf = ReformerConfig.from_pretrained('google/reformer-enwik8') conf.num_hashes = 2 # or maybe even to 1 model = transformers.ReformerModel.from_pretrained("google/reformer-enwik8", config =conf) After you have finetuned your model, you can increase the number of hashes again to increase the performance (compare Table 2 of the reformer paper). Replace axial-position embeddings: from transformers import ReformerConfig, ReformerModel conf = ReformerConfig.from_pretrained('google/reformer-enwik8') conf.axial_pos_embds = False model = transformers.ReformerModel.from_pretrained("google/reformer-enwik8", config =conf) This will replace the learned axial positional embeddings with learnable position embeddings like Bert's and do not require the full sequence length of 65536. They are untrained and randomly initialized (i.e. consider a longer training).
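A rough sketch of the suggested token-classification setup (this is an illustration, not code from the answer; note that the Reformer encoder's hidden states are 2 * config.hidden_size wide because the two reversible streams are concatenated):

import torch.nn as nn
from transformers import ReformerConfig, ReformerModel

conf = ReformerConfig.from_pretrained("google/reformer-enwik8")
conf.num_hashes = 2           # cheaper training, as suggested above
conf.axial_pos_embds = False  # plain learned position embeddings

class ReformerForNER(nn.Module):   # hypothetical class with 9 NER labels
    def __init__(self, num_labels=9):
        super().__init__()
        self.encoder = ReformerModel.from_pretrained("google/reformer-enwik8", config=conf)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(2 * conf.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.dropout(hidden))   # (batch, seq_len, num_labels)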
https://stackoverflow.com/questions/68742863/
Full C++ LibTorch API on Mobile [Android]
I'm trying to compile a program which uses LibTorch for Android, but the libraries built by scripts/build_android.sh in the pytorch repo contain only the torch.jit module. I need the torch.nn module. Is it possible to access the full C++ LibTorch API? If so, how can I build it?
It would seem that by setting NO_API to OFF and BUILD_MOBILE_AUTOGRAD to ON in the root CMake file, you can build the API and thus torch.nn.
https://stackoverflow.com/questions/68743609/
'Vocab' object has no attribute 'itos'
I am building ML models for NLP with pytorch, but when I define the vocabulary for the tokenized words in my text with "vocab" and try to use vocab.itos I get a 'Vocab' object has no attribute 'itos' error. This is my vocab: vocab = torchtext.vocab.vocab(counter, min_freq=1) How can I solve this problem?
You should access torchtext.vocab.Vocab.get_itos to get the indices->tokens mapping. >>> itos = vocab.get_itos()
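For illustration, a small end-to-end snippet (made-up tokens) with the newer torchtext API:

from collections import Counter
import torchtext

counter = Counter(["hello", "world", "hello", "pytorch"])
vocab = torchtext.vocab.vocab(counter, min_freq=1)

itos = vocab.get_itos()   # list mapping index -> token
stoi = vocab.get_stoi()   # dict mapping token -> index
print(itos[0], stoi["hello"])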
https://stackoverflow.com/questions/68743912/
Reproducibility in PyTorch with K-Fold Cross Validation
I have recently started a new project using PyTorch and I am still new in AI. In order to perform better on my dataset during training process I used cross-validation technique. Everyone seems to work fine but I am struggling with reproducibility. I even tried to set SEED number for each k-fold iteration but it does not seem to work at all. Changes in loss and accuracy are insignificant but they are. Before using cross-validation everything worked perfect. Thank you in advance. Here is a for loop for my k-fold. I used a solution from: k-fold cross validation using DataLoaders in PyTorch K_FOLD = 5 fraction = 1 / K_FOLD unit = int(dataset_length * fraction) for i in range(K_FOLD): torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) torch.cuda.manual_seed_all(SEED) # if you are using multi-GPU. np.random.seed(SEED) # Numpy module. random.seed(SEED) # Python random module. torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print("-----------K-FOLD {}------------".format(i+1)) tr_ll = 0 print("Train left begin:", tr_ll) tr_lr = i * unit print("Train left end:", tr_lr) val_l = tr_lr print("Validation begin:", val_l) val_r = i * unit + unit print("Validation end:", val_r) tr_rl = val_r print("Train right begin:", tr_rl) tr_rr = dataset_length print("Train right end:", tr_rr) # msg # print("train indices: [%d,%d),[%d,%d), test indices: [%d,%d)" # % (tr_ll,tr_lr,tr_rl,tr_rr,val_l,val_r)) train_left_indices = list(range(tr_ll, tr_lr)) train_right_indices = list(range(tr_rl, tr_rr)) train_indices = train_left_indices + train_right_indices val_indices = list(range(val_l, val_r)) # print("TRAIN Indices:", train_indices, "VAL Indices:", val_indices) train_set = torch.utils.data.dataset.Subset(DATASET, train_indices) val_set = torch.utils.data.dataset.Subset(DATASET, val_indices) # print("Length of train set:", len(train_set), "Length of val set:", len(val_set)) image_datasets = {"train": train_set, "val": val_set} loader = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=10, shuffle=True) for x in sets} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} # training trained_model = train_model(AlexNet, CRITERION, OPTIMIZER, dataloader=loader, dataset_sizes=dataset_sizes, num_epochs=EPOCHS, k_fold=i)
According to the latest docs, looks like you'll also need: torch.use_deterministic_algorithms(True)
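For completeness, a consolidated helper along those lines (a sketch, not the asker's code; the environment variable is required by some CUDA ops once deterministic algorithms are enforced):

import os, random
import numpy as np
import torch

def seed_everything(seed: int):
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # needed for some deterministic CUDA kernels
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    torch.use_deterministic_algorithms(True)

Since the DataLoaders above use shuffle=True, passing a seeded generator (DataLoader(..., generator=torch.Generator().manual_seed(SEED))) may also be needed to get identical batch orders across folds and runs.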
https://stackoverflow.com/questions/68744901/