Get input of fully connected layer of ResNet model during runtime
Found a Solution, left it as an answer to this question down below :) Info about the project: Classification task with 2 classes. I am trying to get the output of the fully connected layer of my model for each image I put into the model during runtime. I plan to use them after the model is done training or testing all images to visualize with UMAP. The model: #Load resnet def get_model(): model = torchvision.models.resnet50(pretrained=True) num_ftrs = model.fc.in_features model.fc = nn.Linear(num_ftrs, 2) return model The relevant part of pl module: class classifierModel(pl.LightningModule): def __init__(self, model): super().__init__() self.model = model self.learning_rate = 0.0001 def training_step(self, batch, batch_idx): x= batch['image'] y = batch['targets'] x_hat = self.model(x) output = nn.CrossEntropyLoss() loss= output(x_hat,y) return loss def test_step(self, batch, batch_idx): x= batch['image'] y = batch['targets'] x_hat = self.model(x) Is it possible to do this by adding a empty list to the init of the pl module and then add the output after x_hat = model(x) is executed? How would i know if after x_hat = model(x) is executed, the out_features aren't immediatly deleted/discarded ?
I was able to do this using a forward hook on the avgpool layer, saving the output on each test_step as described here: #Define Hook: def get_features(name): def hook(model, input, output): features[name] = output.detach() return hook Now when I load my model, I register the hook: #Load resnet model: def get_model(): model = models.resnet50(pretrained=True) num_ftrs = model.fc.in_features model.fc = nn.Linear(num_ftrs, 2) model.avgpool.register_forward_hook(get_features('feats')) #register the hook return model I did not need to change the init of the PyTorch Lightning module, only the test_step function: FEATS = [] # placeholder for batch features features = {} class classifierModel(pl.LightningModule): def __init__(self, model): super().__init__() self.model = model self.learning_rate = 0.0001 def test_step(self, batch, batch_idx): x = batch['image'] y = batch['targets'] x_hat = self.model(x) FEATS.append(features['feats'].cpu().numpy()) #added this line to save output Now we have the output: FEATS[0].shape --> (16, 2048, 1, 1), which is what I wanted to get (16 is the batch size I use).
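A possible next step for the UMAP visualization mentioned in the question (a sketch, assuming the umap-learn package is installed and FEATS has been filled as above; umap.UMAP and its n_components argument come from that package, not from the answer):

    import numpy as np
    import umap  # pip install umap-learn

    feats = np.concatenate(FEATS, axis=0)        # (N, 2048, 1, 1)
    feats = feats.reshape(feats.shape[0], -1)    # (N, 2048)
    embedding = umap.UMAP(n_components=2).fit_transform(feats)  # (N, 2) points to plot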
https://stackoverflow.com/questions/70245605/
variable out of scope but still works
I am following the pytorch tutorial here and got to the word embeddings tutorial but there is some code I do not understand. When constructing n-grams they use the following: ngrams = [ ( [test_sentence[i - j - 1] for j in range(CONTEXT_SIZE)], test_sentence[i] ) for i in range(CONTEXT_SIZE, len(test_sentence)) ] This to me does not seem syntactically correct as i is referenced before it is initialized, there is nothing inside the for loop, and the for loop is missing the : at the end. What is going on with this block of code? It does not seem like it should work but it does.
This is Python's list-comprehension syntax. You can define a list like this: a_list = [i*2 for i in range(100)] In your example a nested structure is being built, so there are two comprehensions involved, e.g.: b_list = [(a_list) for j in range(2)] If we combine them, we can write them like the example you posted. The i used inside the inner comprehension is supplied by the for i in range(...) clause at the end of the outer comprehension, so it is not referenced before being defined, and a comprehension needs neither a colon nor a loop body.
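For comparison, a rough sketch of the same n-gram construction written as explicit loops (CONTEXT_SIZE and test_sentence are assumed to be defined as in the tutorial):

    ngrams = []
    for i in range(CONTEXT_SIZE, len(test_sentence)):
        context = []
        for j in range(CONTEXT_SIZE):
            context.append(test_sentence[i - j - 1])  # the CONTEXT_SIZE words before position i
        ngrams.append((context, test_sentence[i]))    # (context words, target word)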
https://stackoverflow.com/questions/70247417/
`RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn` for linear regression with gradient descent using torch
I am trying to implement a simple gradient descent for linear regression with pytorch as shown in this example in the docs: import torch from torch.autograd import Variable learning_rate = 0.01 y = 5 x = torch.tensor([3., 0., 1.]) w = torch.tensor([2., 3., 9.], requires_grad=True) b = torch.tensor(1., requires_grad=True) for z in range(100): y_pred = b + torch.sum(w * x) loss = (y_pred - y).pow(2) loss = Variable(loss, requires_grad = True) # loss.requires_grad = True loss.backward() with torch.no_grad(): w = w - learning_rate * w.grad b = b - learning_rate * b.grad w.grad = None b.grad = None When I run the code I get the error RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn I have read here and here that it could be solved using loss = Variable(loss, requires_grad = True) results in TypeError: unsupported operand type(s) for *: 'float' and 'NoneType' or loss.requires_grad = True results in RuntimeError: you can only change requires_grad flags of leaf variables. How can I fix this?
This error was actually caused by mixing calculation functions from torch with Python built-ins (the same goes for numpy or other non-torch libraries): the autograd implementation in torch breaks because it cannot track those other functions. A good explanation can be read here. The following is more of a not-fully-appropriate hack, but calling .retain_grad() before backward solved the issue for me: learning_rate = 0.01 y = 5 x = torch.tensor([3., 0., 1.]) w = torch.tensor([2., 3., 9.], requires_grad=True) b = torch.tensor(1., requires_grad=True) for z in range(100): y_pred = b + torch.sum(w * x) loss = (y_pred - y).pow(2) w.retain_grad() b.retain_grad() loss.backward() w = w - learning_rate * w.grad b = b - learning_rate * b.grad
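For reference, a minimal sketch of the more conventional pattern under the same setup, which avoids re-wrapping the loss entirely: keep w and b as leaf tensors by updating them in place under torch.no_grad() (this is an alternative to the retain_grad() workaround above, not the asker's original code):

    import torch

    learning_rate = 0.01
    y = 5
    x = torch.tensor([3., 0., 1.])
    w = torch.tensor([2., 3., 9.], requires_grad=True)
    b = torch.tensor(1., requires_grad=True)

    for _ in range(100):
        y_pred = b + torch.sum(w * x)
        loss = (y_pred - y).pow(2)
        loss.backward()
        with torch.no_grad():
            w -= learning_rate * w.grad  # in-place update keeps w a leaf tensor
            b -= learning_rate * b.grad
            w.grad.zero_()
            b.grad.zero_()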
https://stackoverflow.com/questions/70249150/
How to produce 4-dimensional input for 4-dimensional weight?
I am very new to Deep learning. I am working on the CIFAR10 dataset and created a CNN model which is as below. class Net2(nn.Module): def __init__(self): super(Net2, self).__init__() self.conv1 = nn.Conv2d(3, 32, 5, 1) self.fc1 = nn.Linear(32 * 5 * 5, 512) self.fc2 = nn.Linear(512,10) def forward(self, x): x = x.view(x.size(0), -1) x = F.max_pool2d(F.relu(self.conv1(x)),(2,2)) x = F.relu(self.fc1(x)) x = self.fc2(x) return x net2 = Net2().to(device) My assignment requirements are to create a model with: Convolutional layer with 32 filters, kernel size of 5x5 and stride of 1. Max Pooling layer with kernel size of 2x2 and default stride. ReLU Activation Layers. Linear layer with output of 512. ReLU Activation Layers. A linear layer with output of 10. Which I guess I wrote. But I am assuming that I am going to the wrong path. Please help me to write the correct model and also the reason behind those arguments in Conv2d and Linear layers. The error which I am getting from my code is as below: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 3, 5, 5], but got 2-dimensional input of size [1024, 3072] instead Please help me!
There are two problems with the code: Flattening of input x = x.view(x.size(0), -1) The convolutional layer expects a four dimensional input of dimensions (N, C, H, W), where N is the batch size, C = 3 is the number of channels, and (H, W) is the dimension of the image. By using the above statement, you are flattening your (1024, 3, 32, 32) input to (1024, 3072). Number of input features in the first linear layer self.fc1 = nn.Linear(32 * 5 * 5, 512) The output dimensions of the convolutional layer for a (1024, 3, 32, 32) input will be (1024, 32, 28, 28), and after applying the 2 x 2 maxpooling, it is (1024, 32, 14, 14). So the number of input features for the linear layer should be 32 x 14 x 14 = 6272.
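For illustration, a sketch of the model with both fixes applied (one possible way to meet the assignment; the sizes follow from the shape arithmetic above, assuming (N, 3, 32, 32) CIFAR-10 batches):

    import torch.nn as nn
    import torch.nn.functional as F

    class Net2(nn.Module):
        def __init__(self):
            super(Net2, self).__init__()
            self.conv1 = nn.Conv2d(3, 32, 5, 1)      # (N, 3, 32, 32) -> (N, 32, 28, 28)
            self.fc1 = nn.Linear(32 * 14 * 14, 512)  # matches the flattened pooled output
            self.fc2 = nn.Linear(512, 10)

        def forward(self, x):
            x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))  # -> (N, 32, 14, 14)
            x = x.view(x.size(0), -1)                        # flatten only after conv + pool
            x = F.relu(self.fc1(x))
            return self.fc2(x)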
https://stackoverflow.com/questions/70252965/
How can I only train the classifier and freeze rest of the parameters in Pytorch?
I have taken the pretrained MoviNet model and changed its last layer. These are the last parameters of the pretrained model; classifier.0.conv_1.conv2d.weight : torch.Size([2048, 640, 1, 1]) classifier.0.conv_1.conv2d.bias : torch.Size([2048]) classifier.3.conv_1.conv2d.weight : torch.Size([600, 2048, 1, 1]) classifier.3.conv_1.conv2d.bias : torch.Size([600]) The following are the parameters I changed at the last layer; clfr.0.multi_head.0.head2.0.conv_1.conv2d.weight : torch.Size([2048, 640, 1, 1]) clfr.0.multi_head.0.head2.0.conv_1.conv2d.bias : torch.Size([2048]) clfr.0.multi_head.0.head1.weight : torch.Size([600, 2048, 1, 1]) clfr.0.multi_head.0.head1.bias : torch.Size([600]) I want to train only the classifier (clfr) on top of the previous layers' weights and freeze all previous layers in PyTorch. Can anyone tell me how I can do this?
When creating your optimizer, only pass the parameters that you want to update during training. In your example, it could look something like: optimizer = torch.optim.Adam(clfr.parameters())
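If you also want the frozen layers to skip gradient computation, a common pattern is to switch off requires_grad for everything except the new head (a sketch; model and its clfr attribute are illustrative names based on the question):

    for param in model.parameters():
        param.requires_grad = False      # freeze the whole backbone
    for param in model.clfr.parameters():
        param.requires_grad = True       # train only the new classifier

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )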
https://stackoverflow.com/questions/70256003/
How to concatenate two tensors with different dimensions in PyTorch
I have two tensors in PyTorch with these shapes: torch.Size([64, 100]) and torch.Size([64, 100, 256]). I want to concatenate them with torch.cat, but they should have the same number of dimensions, so I get this error: RuntimeError: Tensors must have same number of dimensions: got 2 and 3 What should I do to fix this problem? How can I convert a 2D PyTorch tensor into a 3D tensor, or how can I convert a 3D PyTorch tensor to a 2D tensor without losing any data? Or any other idea?
Depending on what you are looking to do with those two tensors, you could consider concatenating on the last axis such that the resulting tensor is shaped (64, 100, 257). This requires you to first unsqueeze a singleton dimension on the first tensor: >>> x, y = torch.rand(64, 100), torch.rand(64, 100, 256) >>> z = torch.cat((x[..., None], y), -1) >>> z.shape torch.Size([64, 100, 257])
https://stackoverflow.com/questions/70259289/
Install PyTorch 1.3 via Anaconda
My current PyTorch version is 1.10. I want to downgrade it to 1.3 but conda suggests the CPU-only version. My GPU is a GTX 1080 TI. My current setup is cudatoolkit 10.2.89 hfd86e86_1 cudnn 7.6.5 cuda10.2_0 pytorch 1.10.0 py3.7_cuda10.2_cudnn7.6.5_0 pytorch pytorch-mutex 1.0 cuda pytorch When I execute: conda install pytorch==1.3.0 torchvision -c pytorchCollecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: /home/lorenzp/.conda/envs/detection added / updated specs: - pytorch==1.3.0 - torchvision The following packages will be downloaded: package | build ---------------------------|----------------- cpuonly-1.0 | 0 2 KB pytorch pytorch-1.3.0 | py3.7_cpu_0 36.1 MB pytorch torchvision-0.4.1 | py37_cpu 14.6 MB pytorch ------------------------------------------------------------ Total: 50.7 MB The following NEW packages will be INSTALLED: cpuonly pytorch/noarch::cpuonly-1.0-0 The following packages will be DOWNGRADED: pytorch 1.10.0-py3.7_cuda10.2_cudnn7.6.5_0 --> 1.3.0-py3.7_cpu_0 torchvision 0.11.1-py37_cu102 --> 0.4.1-py37_cpu Proceed ([y]/n)? n CondaSystemExit: Exiting. It just suggests the CPU-only version. How to find the GPU version?
Well, I needed to find the right cudatoolkit. For that: conda search cudatoolkit This showed I could choose a version somewhere between CUDA 9.2 and 10.0. I took a look at https://pytorch.org/get-started/previous-versions and made a guess at the CUDA version. In my case: conda install cudatoolkit==10.0.130 This automatically installed PyTorch==1.3.1.
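For completeness, the previous-versions page also lists explicit install commands; for PyTorch 1.3.1 it should be something along these lines (the exact torchvision/cudatoolkit versions are an assumption and worth checking against the page):

    conda install pytorch==1.3.1 torchvision==0.4.2 cudatoolkit=10.0 -c pytorch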
https://stackoverflow.com/questions/70260948/
How do I optimize the weights of the input layer using backward() for this simple neural network in PyTorch when .grad is None
I defined the following simple neural network: import torch import torch.nn as nn X = torch.tensor(([1, 2]), dtype=torch.float) y = torch.tensor([1.]) learning_rate = 0.001 class Neural_Network(nn.Module): def __init__(self, ): super(Neural_Network, self).__init__() self.W1 = torch.nn.Parameter(torch.tensor(([1, 0], [2, 3]), dtype=torch.float, requires_grad=True)) self.W2 = torch.nn.Parameter(torch.tensor(([2], [1]), dtype=torch.float, requires_grad=True)) def forward(self, X): self.xW1 = torch.matmul(X, self.W1) self.h = torch.tensor([torch.tanh(self.xW1[0]), torch.tanh(self.xW1[1])]) return torch.sigmoid(torch.matmul(self.h, self.W2)) net = Neural_Network() for z in range(60): loss = (y - net(X))**2 optim = torch.optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9) loss = criterion(net(X), y) loss.backward() optim.step() I can run it and print(net.W1) print(net.W2) prints Parameter containing: tensor([[1., 0.], [2., 3.]], requires_grad=True) Parameter containing: tensor([[2.0078], [1.0078]], requires_grad=True) So my problem is that it seems like W1 is not being updated. When I call print(net.W1.grad) I get None for every iteration which confuses me a lot. I tried to define the function as one line like so: loss = (y - torch.sigmoid(math.tanh(x[0] * W_1[0][0] + x[1] * W_1[1][0]) * W_2[0] + math.tanh(x[0] * W_1[0][1] + x[1] * W_1[1][1]) * W_2[1])) ** 2, but it did not help anything. For sure I could hardcode the derivate and everything but it seems painful and I though .backward() can be used in this case. How can I optmize W1 with using backward()?
I suspect that the following line: self.h = torch.tensor([torch.tanh(self.xW1[0]), torch.tanh(self.xW1[1])]) is the culprit. Building a new tensor with torch.tensor() detaches it from the computation graph: self.h does not inherit the requires_grad attribute from self.xW1 and, by default, it is set to False. You can instead call self.h = torch.tanh(self.xW1), and the operation will then be applied point-wise to all the elements of self.xW1 while staying inside the autograd graph. In addition, I suggest you inspect your gradients by using PyTorch hooks.
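A sketch of the fixed forward pass, keeping every operation inside the autograd graph:

    def forward(self, X):
        self.xW1 = torch.matmul(X, self.W1)
        self.h = torch.tanh(self.xW1)      # element-wise tanh, gradients flow through
        return torch.sigmoid(torch.matmul(self.h, self.W2))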
https://stackoverflow.com/questions/70261912/
Error: Expected all tensors to be on the same device, while all are on the same device
I don't know why I am getting this error, while my all tensors are at the same device "cuda". Here is code def train(self): self.model.train() self.optimizer.zero_grad() # Clear gradients. print("X is",self.XT.get_device()) print("edge_index_ is", self.edge_index_.get_device()) print("R_train_mask is", self.R_train_mask.get_device()) print("datas.y is", self.datas.y.get_device()) print("edge_index_ is", self.edge_index_.get_device()) out = self.model(self.XT, self.edge_index_) # Perform a single forward pass. loss = self.criterion(out[self.R_train_mask], self.datas.y[self.R_train_mask]) Here is error X is 0 edge_index_ is 0 R_train_mask is 0 datas.y is 0 edge_index_ is 0 Traceback (most recent call last): File "/home/adnan/GNNPaperCodes/PyGCL/examples/GCAPaperAd.py", line 126, in <module> main() File "/home/adnan/GNNPaperCodes/PyGCL/examples/GCAPaperAd.py", line 91, in main topfeatures=tf.TopFeaturesFind() File "/home/adnan/GNNPaperCodes/PyGCL/examples/PatchFinder.py", line 64, in TopFeaturesFind loss = self.train() File "/home/adnan/GNNPaperCodes/PyGCL/examples/PatchFinder.py", line 154, in train out = self.model(self.XT, self.edge_index_) # Perform a single forward pass. File "/home/adnan/.conda/envs/forGit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/adnan/GNNPaperCodes/PyGCL/examples/PatchFinder.py", line 211, in forward x = self.conv1(x, edge_index_) File "/home/adnan/.conda/envs/forGit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/adnan/.conda/envs/forGit/lib/python3.9/site-packages/torch_geometric/nn/conv/gcn_conv.py", line 181, in forward x = self.lin(x) File "/home/adnan/.conda/envs/forGit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/adnan/.conda/envs/forGit/lib/python3.9/site-packages/torch_geometric/nn/dense/linear.py", line 103, in forward return F.linear(x, self.weight, self.bias) File "/home/adnan/.conda/envs/forGit/lib/python3.9/site-packages/torch/nn/functional.py", line 1848, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm) Obvioulsy, Everything is ok if i change device to 'cpu'
You forgot to transfer self.model to the device. Indeed, the model has weights that must be transferred to the correct device, since they interact with your input. You can do it with self.model.to("cuda").
https://stackoverflow.com/questions/70262173/
How to achieve fast "vector clustering" sort?
I have n c-dimensional vectors that formed into a matrix A with the shape of (n, c), how can I perform a quick sort such that the vectors with low Euclidean distances are as close as possible, and the vectors with high distances are as far as possible? For example, I have A = [[0, 3], [0, 0], [0, 1]], and the solution can be A_sorted = [[0, 3], [0, 1], [0, 0]]. Explain: Because the original A has a total weighted distance sum of 3x1+1x1+2x2 = 7, and A_sorted has 2x1+1x1+3x2 = 8. Mathematically, the goal is to maximize the total weighted distance sum. For the 1-dimensional case, this can be achieved by some APIs like sort() in Numpy or PyTorch, and my main concern is if there exists a fast implementation when c ≥ 2 with a time complexity of O(nlog(n))? After a long struggle, I failed. Could you please do me a favor?
I believe this is what you want: def d(l): return math.sqrt(sum([x**2 for x in l])) A = [[0, 3], [0, 0], [0, 1]] dists = [d(el) for el in A] sorted_vecs = sorted(zip(A, dists), key=lambda x: -x[1]) [x[0] for x in sorted_vecs] Returns [[0, 3], [0, 1], [0, 0]] It's O(nlog(n))
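If A is already a PyTorch tensor, the same norm-based ordering can be done in a vectorized way (a sketch that, like the answer above, sorts by Euclidean norm in descending order):

    import torch

    A = torch.tensor([[0., 3.], [0., 0.], [0., 1.]])
    order = torch.argsort(torch.linalg.norm(A, dim=1), descending=True)
    A_sorted = A[order]  # tensor([[0., 3.], [0., 1.], [0., 0.]])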
https://stackoverflow.com/questions/70274033/
What's the difference between torch.mean and torch.nn.avg_pool?
Taking a tensor with shape [4,8,12] as an example, what's the difference between the two lines: torch.mean(x, dim=2) torch.nn.functional.avg_pool1d(x, kernel_size=12)
With the very example you provided the result is the same, but only because you specified dim=2 and a kernel_size equal to the size of the third (index 2) dimension. In principle, though, you are applying two different functions that sometimes happen to coincide for specific choices of hyperparameters. torch.mean is effectively a dimensionality-reduction function: when you average all values across one dimension, you get rid of that dimension. Average 1-dimensional pooling, on the other hand, is more flexible, as it lets you choose kernel size, padding and stride just like you would for a convolutional layer, and it keeps the pooled dimension (with size 1 in this case). You can see the first function as a special case of 1-d pooling.
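A quick check of both the agreement in this case and the shape difference (avg_pool1d keeps the pooled dimension with size 1, torch.mean removes it):

    import torch
    import torch.nn.functional as F

    x = torch.rand(4, 8, 12)
    a = torch.mean(x, dim=2)             # shape (4, 8)
    b = F.avg_pool1d(x, kernel_size=12)  # shape (4, 8, 1)
    print(torch.allclose(a, b.squeeze(-1)))  # True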
https://stackoverflow.com/questions/70274440/
Split torch dataset without shuffling
I'm using PyTorch to run a Transformer model. When I want to split the (tokenized) data, I'm using this code: train_dataset, test_dataset = torch.utils.data.random_split( tokenized_datasets, [train_size, test_size]) torch.utils.data.random_split uses a shuffling method, but I don't want to shuffle. I want to split it sequentially. Any advice? Thanks.
The random_split method has no parameter that can help you create a non-random sequential split. The easiest way to achieve a sequential split is by directly passing the indices for the subset you want to create: # Created using indices from 0 to train_size. train_dataset = torch.utils.data.Subset(tokenized_datasets, range(train_size)) # Created using indices from train_size to train_size + test_size. test_dataset = torch.utils.data.Subset(tokenized_datasets, range(train_size, train_size + test_size)) Refer: PyTorch docs.
https://stackoverflow.com/questions/70275016/
Match pytorch scatter output in tensorflow
How can I do the same operation in tensorflow? tensor = np.random.RandomState(42).uniform(size=(2, 4, 2)).astype(np.float32) tensor = torch.from_numpy(tensor) index = tensor.max(-1, keepdim=True)[1] output = torch.zeros_like(tensor).scatter_(-1, index, 1.0) expected output: tensor([[[0., 1.], [1., 0.], [1., 0.], [0., 1.]], [[0., 1.], [0., 1.], [1., 0.], [0., 1.]]])
As always, everything is a bit more complicated with Tensorflow: import tensorflow as tf import numpy as np tensor = np.random.RandomState(42).uniform(size=(2, 4, 2)).astype(np.float32) tensor = tf.constant(tensor) _, indices = tf.math.top_k(tensor) zeros = tf.zeros_like(tensor) ij = tf.stack(tf.meshgrid( tf.range(zeros.shape[0], dtype=tf.int32), tf.range(zeros.shape[1], dtype=tf.int32), indexing='ij'), axis=-1) gathered_indices = tf.concat([ij, indices], axis=-1) indices_shape = tf.shape(indices) values = tf.ones((indices_shape[0], indices_shape[1])) output = tf.tensor_scatter_nd_update(zeros, gathered_indices, values) print(output) tf.Tensor( [[[0. 1.] [1. 0.] [1. 0.] [0. 1.]] [[0. 1.] [0. 1.] [1. 0.] [0. 1.]]], shape=(2, 4, 2), dtype=float32)
https://stackoverflow.com/questions/70276246/
BertModel and BertForMaskedLM weights count
I want to understand the BertForMaskedLM model. In the Hugging Face GitHub code, BertForMaskedLM is the BERT model with 2 additional linear layers of shape (input 768, output 768) and (input 768, output 30522). The total weight count should therefore be the weights of BertModel + 768 * 768 + 768 * 30522, but when I check, the numbers don't match. from transformers import BertModel, BertForMaskedLM import torch bertmodel = BertModel.from_pretrained('bert-base-uncased') bertForMaskedLM = BertForMaskedLM.from_pretrained('bert-base-uncased') def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) count_parameters(bertmodel) #output 109482240 count_parameters(bertForMaskedLM) #output 109514298 109482240 + 768 * 768 + 768 * 30522 != 109514298 What am I doing wrong?
Using numel() along with model.parameters() is not a reliable method for counting the total number of parameters and may fail for recursive configurations of layers. This is exactly what is happening in your case. Instead, try the following: from torchinfo import summary print(summary(bertmodel)) (summary table omitted) print(summary(bertForMaskedLM)) (summary table omitted) From the above outputs we can see that the total numbers of trainable params for the two models are: bertmodel: 109,482,240 bertForMaskedLM: 132,955,194 In order to understand the difference, let's have a look at the last module of both models (the rest of the base model is exactly the same): bertmodel: (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh()) bertForMaskedLM: (cls): BertOnlyMLMHead((predictions): BertLMPredictionHead( (transform): BertPredictionHeadTransform( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) (decoder): Linear(in_features=768, out_features=30522, bias=True))) The only additions are the LayerNorm layer (2 * 768 params for the gammas and betas) and the decoder layer (769 * 30522 params, since y = A·x + b with A of size n×m and b of size n×1 gives n×(m+1) params in total). Params for bertForMaskedLM = 109482240 + 2 * 768 + 769 * 30522 = 132955194
https://stackoverflow.com/questions/70276298/
RuntimeError: expected scalar type Long but found Float (Pytorch)
I've tried many times to fix, also I've used the example codes from functional.py then I got my same "loss" value. How can I fix this? My libraries import matplotlib.pyplot as plt import torch import torch.nn as nn import numpy as np import matplotlib import pandas as pd from torch.autograd import Variable from torch.utils.data import DataLoader,TensorDataset from sklearn.model_selection import train_test_split import warnings import os import torchvision import torchvision.datasets as dsets import torchvision.transforms as transforms Data set of Mnis train=pd.read_csv("train.csv",dtype=np.float32) targets_numpy = train.label.values features_numpy = train.loc[:,train.columns != "label"].values/255 # normalization features_train, features_test, targets_train, targets_test = train_test_split(features_numpy, targets_numpy,test_size = 0.2, random_state = 42) featuresTrain=torch.from_numpy(features_train) targetsTrain=torch.from_numpy(targets_train) featuresTest=torch.from_numpy(features_test) targetsTest=torch.from_numpy(targets_test) batch_size=100 n_iterations=10000 num_epochs=n_iterations/(len(features_train)/batch_size) num_epochs=int(num_epochs) train=torch.utils.data.TensorDataset(featuresTrain,targetsTrain) test=torch.utils.data.TensorDataset(featuresTest,targetsTest) print(type(train)) train_loader=DataLoader(train,batch_size=batch_size,shuffle=False) test_loader=DataLoader(test,batch_size=batch_size,shuffle=False) print(type(train_loader)) plt.imshow(features_numpy[226].reshape(28,28)) plt.axis("off") plt.title(str(targets_numpy[226])) plt.show() Here is my model class ANNModel(nn.Module): def __init__(self,input_dim,hidden_dim,output_dim): super(ANNModel,self).__init__() self.fc1=nn.Linear(input_dim,hidden_dim) self.relu1=nn.ReLU() self.fc2=nn.Linear(hidden_dim,hidden_dim) self.tanh2=nn.Tanh() self.fc4=nn.Linear(hidden_dim,output_dim) def forward (self,x): #forward ile elde edilen layer lar bağlanır out=self.fc1(x) out=self.relu1(out) out=self.fc2(out) out=self.tanh2(out) out=self.fc4(out) return out input_dim=28*28 hidden_dim=150 output_dim=10 model=ANNModel(input_dim,hidden_dim,output_dim) error=nn.CrossEntropyLoss() learning_rate=0.02 optimizer=torch.optim.SGD(model.parameters(),lr=learning_rate) where the problem is count=0 loss_list=[] iteration_list=[] accuracy_list = [] for epoch in range(num_epochs): for i,(images,labels) in enumerate(train_loader): train=Variable(images.view(-1,28*28)) labels=Variable(labels) #print(labels) #print(outputs) optimizer.zero_grad() #forward propagation outputs=model(train) #outputs=torch.randn(784,10,requires_grad=True) ##labels=torch.randn(784,10).softmax(dim=1) loss=error(outputs,labels) loss.backward() optimizer.step() count+=1 if count %50 ==0: correct=0 total=0 for images,labels in test_loader: test=Variable(images.view(-1,28*28)) outputs=model(test) predicted=torch.max(outputs.data,1)[1] #mantık??? 
total+= len(labels) correct+=(predicted==labels).sum() accuracy=100 *correct/float(total) loss_list.append(loss.data) iteration_list.append(count) accuracy_list.append(accuracy) if count %500 ==0 : print('Iteration: {} Loss: {} Accuracy: {} %'.format(count, loss.data, accuracy)) Which gives --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-9-9e53988ad250> in <module>() 26 #outputs=torch.randn(784,10,requires_grad=True) 27 ##labels=torch.randn(784,10).softmax(dim=1) ---> 28 loss=error(outputs,labels) 29 30 2 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing) 2844 if size_average is not None or reduce is not None: 2845 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2846 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) 2847 2848 RuntimeError: expected scalar type Long but found Float
It seems that the dtype of the tensor "labels" is FloatTensor. However, nn.CrossEntropyLoss expects a target of type LongTensor. This means you should check the type of "labels"; if that is the case, then you should use the following code to convert the dtype of "labels" from FloatTensor to LongTensor: loss=error(outputs,labels.long())
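Alternatively, the cast can be done once when the tensors are built, so every batch already carries int64 class indices (a sketch based on the variables in the question):

    targetsTrain = torch.from_numpy(targets_train).long()
    targetsTest = torch.from_numpy(targets_test).long()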
https://stackoverflow.com/questions/70279287/
How to Run Pytorch Bert with AMD
github code: https://github.com/bellowman/Deep-Learning-Practice/blob/main/BioBert%20for%20Multi%20Label%20AMD.ipynb Hello everyone, I am a beginner with pytorch, tensorflow, and BERT. I have a machine at home with an AMD Ryzen 7 1800x and a Radeon RX 6600 video card. I am trying to run a bioBERT model at home. I have trouble leveraging my model to use my AMD card. I posted my github notebook. I have troubles in cell 3 and 9. First Question: In cell 3,I am trying to convert the bioBERT weight to PyTorch with transformmer-cli. I get the warning of "Could not load dynamic library 'cudart64_110.dll'". Does this affect performance later? Second Question: In cell 9, My model load is really slow because it is using just the CPU. How can I get the model to run on my AMD GPU
Thanks to @chrispresso: AMD ROCm seems to be the way to go, but it requires running under Linux.
https://stackoverflow.com/questions/70279801/
Issue with loading the Roberta-base model
I am trying to use the Roberta-base model using AutoTokenizer.from_pretrained('roberta-base') but I get the following error: RuntimeError: Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback): No module named 'tensorflow.python.keras.engine.keras_tensor' I have tried to install tensorflow but still the same error. Any idea what is going on?
You need to install TensorFlow with GPU support; try this: pip install --ignore-installed --upgrade tensorflow-gpu See here for more details.
https://stackoverflow.com/questions/70281385/
How to get the output gradient w.r.t input
I have some problem with getting the output gradient of input. It is simple mnist model. for num,(sample_img, sample_label) in enumerate(mnist_test): if num == 1: break sample_img = sample_img.to(device) sample_img.requires_grad = True prediction = model(sample_img.unsqueeze(dim=0)) cost = criterion(prediction, torch.tensor([sample_label]).to(device)) optimizer.zero_grad() cost.backward() print(sample_label) print(sample_img.shape) plt.imshow(sample_img.detach().cpu().squeeze(),cmap='gray') plt.show() print(sample_img.grad) sample_img.grad is None
If you need to compute the gradient with respect to the input you can do so by calling sample_img.requires_grad_(), or by setting sample_img.requires_grad = True, as suggested in your comments. Here is a small example: import torch import torch.nn as nn import torch.optim as optim import matplotlib.pyplot as plt model = nn.Sequential( # a dummy model nn.Conv2d(1, 1, 3), nn.ReLU(), nn.MaxPool2d(2), nn.Flatten() ) sample_img = torch.rand(1, 5, 5) # a dummy input sample_label = 0 criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=1e-3) device = "cpu" sample_img = sample_img.to(device) sample_img.requires_grad = True prediction = model(sample_img.unsqueeze(dim=0)) cost = criterion(prediction, torch.tensor([sample_label]).to(device)) optimizer.zero_grad() cost.backward() print(sample_label) print(sample_img.shape) plt.imshow(sample_img.detach().cpu().squeeze(), cmap='gray') plt.show() print(sample_img.grad.shape) print(sample_img.grad) Additionally, if you don't need the gradients of the model, you can set their gradient requirements off: for param in model.parameters(): param.requires_grad = False
https://stackoverflow.com/questions/70290615/
How to disable automatic checkpoint loading
Im trying to run a loop over a set of parameters and I wan't to make a new network for each parameter and let it learn a few epochs. Currently my code looks like this: def optimize_scale(self, epochs=5, comp_scale=100, scale_list=[1, 100]): trainer = pyli.Trainer(gpus=1, max_epochs=epochs) for scale in scale_list: test_model = CustomNN(num_layers=1, scale=scale, lr=1, pad=True, batch_size=1) trainer.fit(test_model) trainer.test(verbose=True) del test_model Everything works fine for the first element of scale_list, the network learns 5 epochs and completes the test. All this can be seen in the console. However for all following elements of scale_list it doesn't work as the old network is not overwritten, but instead an old checkpoint is loaded automatically when trainer.fit(model) is called. In the console this is indicated through: C:\Users\XXXX\AppData\Roaming\Python\Python39\site-packages\pytorch_lightning\callbacks\model_checkpoint.py:623: UserWarning: Checkpoint directory D:\XXXX\src\lightning_logs\version_0\checkpoints exists and is not empty. rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.") train_size = 8 val_size = 1 test_size = 1 Restoring states from the checkpoint path at D:\XXXX\src\lightning_logs\version_0\checkpoints\epoch=4-step=39.ckpt LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] Loaded model weights from checkpoint at D:\XXXX\src\lightning_logs\version_0\checkpoints\epoch=4-step=39.ckpt The consequence is that the second test outputs the same result, as the the checkpoint from the old network was loaded which already finished all 5 epochs. I though that adding the del test_model might help in dropping the model completely, but that did not work. On my search I found a few Issues closely related, for example: https://github.com/PyTorchLightning/pytorch-lightning/issues/368. However I did not manage to fix my problem. I assume it has something to with the fact that the new network which should overwrite the old one has the same name/version and therefore looks for the same checkpoints. If anyone has an idea or knows how to circumvent this I would be very grateful.
I think, in your settings, you want to disable automatic checkpointing: trainer = pyli.Trainer(gpus=1, max_epochs=epochs,enable_checkpointing=False) You may need to explicitly save a checkpoint (with a different name) for each training session you are running. You can manually save a checkpoint via: trainer.save_checkpoint(f'checkpoint_for_scale_{scale}.pth')
https://stackoverflow.com/questions/70291523/
How does torch.flatten() order the elements from the flattened dimensions
I have a 4D tensor of shape [32,64,64,3] which corresponds to [batch, timeframes, frequency_bins, features] and I do tensor.flatten(start_dim=2) (in PyTorch). I understand the shape will then transform to [32,64,64*3] --> [batch,timeframes,frequency_bins*features] - but in terms of the actual ordering of the elements within that new flattened dimension of 64*3 are the first 64 indexes relating to what would have been [:,:,:,0] the second 64 [:,:,:,1] and the final 64 [:,:,:,2]?
For the sake of understanding, let us first take the simplest case where we have a tensor of rank 2, i.e., a regular matrix. PyTorch performs flattening in what is called row-major order, traversing from the "innermost" axis to the "outermost" axis. Taking a simple 3x3 array of rank 2, let us call it A[3, 3]: [[a, b, c], [d, e, f], [g, h, i]] Flattening this from innermost to outermost axes would give you [a, b, c, d, e, f, g, h, i]. Let us call this flattened array B[9]. The relation between corresponding elements in A (at index [i, j]) and B (at index k) can easily be derived as: k = A.size[1] * i + j This is because to reach the element at [i, j], we first move i rows down, counting A.size[1] (i.e., the width of the array) elements for each row. Once we reach row i, we need to get to column j, thus we add j to obtain the index in the flattened array. For example, element e is at index [1, 1] in A. In B, it would occupy the index 3 * 1 + 1 = 4, as expected. Let us extend that same idea to a tensor of rank 4, as in your case, where we are flattening only the last two axes. Again, taking a simple rank-4 tensor A of shape (2, 2, 2, 2) as below: A = [[[[ 0, 1], [ 2, 3]], [[ 4, 5], [ 6, 7]]], [[[ 8, 9], [10, 11]], [[12, 13], [14, 15]]]] Let us find a relation between the indices of A and torch.flatten(A, start_dim=2) (let's call the flattened version B). B = [[[ 0, 1, 2, 3], [ 4, 5, 6, 7]], [[ 8, 9, 10, 11], [12, 13, 14, 15]]] Element 12 is at index [1, 1, 0, 0] in A and index [1, 1, 0] in B. Note that the indices at axes 0 and 1, i.e., [1, 1], remain intact even after partial flattening. This is because those axes are not flattened and thus not impacted. This is fantastic! Thus, we can represent the transformation from A to B as B[i, j, _] = A[i, j, _, _] Our task now reduces to finding a relation between the last axis of B and the last 2 axes of A. But A[i, j, _, _] is a 2x2 array, for which we have already derived the relation k = A.size[1] * i + j; A.size[1] would now change to A.size[3], as 3 is now the last axis, but the general relation remains. Filling in the blanks, we get the relation between corresponding elements in A and B as: B[i, j, k] = A[i, j, m, n] where k = A.size[3] * m + n. We can verify that this is correct. Element 14 is at [1, 1, 1, 0] in A and moves to [1, 1, 2 * 1 + 0] = [1, 1, 2] in B. EDIT: Added example Taking @Molem7b5's example of array A with shape (1, 4, 4, 3), from the comments: Iterating from inner (dim=3) to outer axes (dim=2) of A gives consecutive elements of B. What I mean by this is: // Using relation: A[:, :, i, j] == B[:, :, 3 * i + j] // i = 0, all j A[:, :, 0, 0] == B[:, :, 0] A[:, :, 0, 1] == B[:, :, 1] A[:, :, 0, 2] == B[:, :, 2] // (Note the consecutive order in B.) // i = 1, all j A[:, :, 1, 0] == B[:, :, 3] A[:, :, 1, 1] == B[:, :, 4] // and so on until A[:, :, 3, 2] == B[:, :, 11] This should give you a better picture as to how the flattening occurs. When in doubt, extrapolate from the relation.
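A quick empirical check of the relation B[i, j, k] == A[i, j, m, n] with k = A.size(3) * m + n:

    import torch

    A = torch.arange(16).reshape(2, 2, 2, 2)
    B = torch.flatten(A, start_dim=2)

    i, j, m, n = 1, 1, 1, 0
    k = A.size(3) * m + n
    print(A[i, j, m, n].item(), B[i, j, k].item())  # 14 14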
https://stackoverflow.com/questions/70295481/
How to get a probability distribution over tokens in a huggingface model?
I'm following this tutorial on getting predictions over masked words. The reason I'm using this one is because it seems to be working with several masked word simultaneously while other approaches I tried could only take 1 masked word at a time. The code: from transformers import RobertaTokenizer, RobertaForMaskedLM import torch tokenizer = RobertaTokenizer.from_pretrained('roberta-base') model = RobertaForMaskedLM.from_pretrained('roberta-base') sentence = "Tom has fully ___ ___ ___ illness." def get_prediction (sent): token_ids = tokenizer.encode(sent, return_tensors='pt') masked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero() masked_pos = [mask.item() for mask in masked_position ] with torch.no_grad(): output = model(token_ids) last_hidden_state = output[0].squeeze() list_of_list =[] for index,mask_index in enumerate(masked_pos): mask_hidden_state = last_hidden_state[mask_index] idx = torch.topk(mask_hidden_state, k=5, dim=0)[1] words = [tokenizer.decode(i.item()).strip() for i in idx] list_of_list.append(words) print ("Mask ",index+1,"Guesses : ",words) best_guess = "" for j in list_of_list: best_guess = best_guess+" "+j[0] return best_guess print ("Original Sentence: ",sentence) sentence = sentence.replace("___","<mask>") print ("Original Sentence replaced with mask: ",sentence) print ("\n") predicted_blanks = get_prediction(sentence) print ("\nBest guess for fill in the blank :::",predicted_blanks) How can I get the probability distribution over the 5 tokens instead of the indices of them? That is, similarly to how this approach (that I used before but once I change to multiple masked tokens I get an error) gets the score as an output: from transformers import pipeline # Initialize MLM pipeline mlm = pipeline('fill-mask') # Get mask token mask = mlm.tokenizer.mask_token # Get result for particular masked phrase phrase = f'Read the rest of this {mask} to understand things in more detail' result = mlm(phrase) # Print result print(result) [{ 'sequence': 'Read the rest of this article to understand things in more detail', 'score': 0.35419148206710815, 'token': 1566, 'token_str': ' article' },...
The variable last_hidden_state[mask_index] holds the logits for the prediction of the masked token, so to get token probabilities you can apply a softmax over it, i.e. probs = torch.nn.functional.softmax(last_hidden_state[mask_index], dim=-1) You can then get the probabilities of the top-k tokens using word_probs = [probs[i] for i in idx] PS: I assume you're aware that you should use <mask> rather than ___, i.e. sent = "Tom has fully <mask> <mask> <mask> illness." I get the following: Mask 1 Guesses : ['recovered', 'returned', 'cleared', 'recover', 'healed'] [tensor(0.9970), tensor(0.0007), tensor(0.0003), tensor(0.0003), tensor(0.0002)] Mask 2 Guesses : ['from', 'his', 'with', 'to', 'the'] [tensor(0.5066), tensor(0.2048), tensor(0.0684), tensor(0.0513), tensor(0.0399)] Mask 3 Guesses : ['his', 'the', 'mental', 'serious', 'this'] [tensor(0.5152), tensor(0.2371), tensor(0.0407), tensor(0.0257), tensor(0.0199)]
https://stackoverflow.com/questions/70299442/
How to access loss at each epoch from PyTorch Lightning?
I'm using PyTorch Lightning and TensorBoard, as the PyTorch Forecasting library is built on them. I want to create my own loss curves via matplotlib and don't want to use TensorBoard. Is it possible to access the metrics at each epoch via a method? Validation loss, training loss, etc.? My code is below: logger = TensorBoardLogger("logs", name = "model") trainer = pl.Trainer(#Some params) Does logger or trainer have any method to access this information? The PL documentation isn't clear, and there are many methods associated with logger and trainer.
My recommendation is that you: Create a CSV logger: from pytorch_lightning.loggers import CSVLogger csv_logger = CSVLogger( save_dir='./', name='csv_file' ) Pass it to your trainer: # Initialize a trainer trainer = Trainer( accelerator="auto", max_epochs=1, log_every_n_steps=10, logger=[csv_logger], ) Have your model log your epoch results. This will trigger a write into a CSV file by the CSVLogger: class MNISTModel(LightningModule): def __init__(self): super().__init__() self.l1 = torch.nn.Linear(28 * 28, 10) def forward(self, x): return torch.relu(self.l1(x.view(x.size(0), -1))) def training_step(self, batch, batch_nb): x, y = batch loss = F.cross_entropy(self(x), y) self.log('loss_epoch', loss, on_step=False, on_epoch=True) return loss def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=0.02) Use the values logged to the CSV file for plotting your results. This way, if you are unhappy with your plot, you can just re-run your plotting script with modifications without having to wait for training to finish again.
https://stackoverflow.com/questions/70300576/
How to create torch.tensor with shape (1,1,32) with default value?
I want to create torch.tensor variable with shape (1,1,32) with default value (None). How can I do it ?
I don't believe you can assign None to a torch.Tensor. What is more appropriate however is to instead assign NaN. You can do so using the builtin torch.full: >>> torch.full((1, 1, 32), torch.nan) tensor([[[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]]])
https://stackoverflow.com/questions/70312495/
How to implement a diagonal data for a linear layer in pytorch
I would like to have a network in PyTorch that only scales the data. The mathematical notation for my request is y_i = w_i * x_i (i.e. y = diag(w) * x), which means that if my input is [1, 2] and my output is [2, 6], then the linear layer will look like this: [ [ 2, 0], [ 0, 3] ]. I have this network written in PyTorch: class ScalingNetwork(nn.Module): def __init__(self, input_shape, output_shape): super().__init__() self.linear_layer = nn.Linear(in_features=input_shape, out_features=output_shape) self.mask = torch.diag(torch.ones(input_shape)) self.linear_layer.weight.data = self.linear_layer.weight * self.mask self.linear_layer.weight.requires_grad = True def get_tranformation_matrix(self): return self.linear_layer.weight def forward(self, X): X = self.linear_layer(X) return X But at the end of training, my self.linear_layer is not diagonal. What am I doing wrong?
It seems like an apparent constraint here is the fact that self.linear_layer needs to be a squared matrix. You can use the diagonal matrix self.mask to zero out all non-diagonal elements in the forward pass: class ScalingNetwork(nn.Module): def __init__(self, in_features): super().__init__() self.linear = nn.Linear(in_features, in_features, bias=False) self.mask = torch.eye(in_features, dtype=bool) def forward(self, x): self.linear.weight.data *= self.mask print(self.linear.weight) x = self.linear(x) return x For instance: >>> m = ScalingNetwork(5) >>> m(torch.rand(1,5)) Parameter containing: tensor([[-0.2987, -0.0000, -0.0000, -0.0000, -0.0000], [ 0.0000, -0.1042, -0.0000, -0.0000, -0.0000], [-0.0000, 0.0000, -0.4267, 0.0000, -0.0000], [ 0.0000, -0.0000, -0.0000, 0.1758, 0.0000], [ 0.0000, 0.0000, 0.0000, -0.0000, -0.3208]], requires_grad=True) tensor([[-0.1032, -0.0087, -0.1709, 0.0035, -0.1496]], grad_fn=<MmBackward0>)
https://stackoverflow.com/questions/70314058/
How to resolve the error that says "shapes cannot be multiplied"
I tried the code mentioned in the article in google colab. https://theaisummer.com/spiking-neural-networks/ I got the error that looks like this... Test loss: 8.86368179321289 Test loss: 5.338221073150635 --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-9-646cb112ccb7> in <module>() 15 # forward pass 16 net.train() ---> 17 spk_rec, mem_rec = net(data.view(batch_size, -1)) 18 19 # initialize the loss & sum over time 4 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1846 if has_torch_function_variadic(input, weight, bias): 1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias) -> 1848 return torch._C._nn.linear(input, weight, bias) 1849 1850 RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x588 and 784x1000) I am not sure how to fix it.
I just ran their Colab notebook and ran into the same error. It occurs because the final iteration does not have 128 samples of data, as the total dataset size (60000 and 10000 for training and test set) is not evenly divisible by 128. So there is some left over, and reshaping it to 128 x ... leads to a mismatch of dimensions between input data and the number of neurons in the input layer. There are two possible fixes. Just drop the final batch: train_loader = DataLoader(mnist_train, batch_size=batch_size, shuffle=True, drop_last=True) test_loader = DataLoader(mnist_test, batch_size=batch_size, shuffle=True, drop_last=True) Don't drop the final batch. But flatten the tensor in a way that preserves the original batch_size, instead of forcing it to 128: spk_rec, mem_rec = net(data.flatten(1))
https://stackoverflow.com/questions/70314154/
Question about neural network training: the gradient of a module that is used multiple times in one iteration
When training a neural network, if the same module is used multiple times in one iteration, does the gradient of the module need special processing during backpropagation? for example: One Deformable Compensation is used three times in this model, which means they share the same weights. What will happen when I use loss.backward()? Will loss.backward() work correctly?
The nice thing about autograd and backward passes is that the underlying framework is not "algorithmic", but rather mathematical: it implements the chain rule of derivatives. Therefore, there are no "algorithmic" considerations of "shared weights" or "weighting different layers"; it's pure math. The backward pass provides the derivative of your loss function w.r.t. the weights in a purely mathematical way. Sharing weights can be done globally (e.g., when training Siamese networks), at the "layer level" (as in your example), but also within a layer. When you think about it, convolutional layers and recurrent layers are a fancy way of locally sharing weights. Naturally, PyTorch (as well as all other DL frameworks) handles these cases trivially. As long as your "deformable compensation" layer is correctly implemented, PyTorch will take care of the gradients for you, in a mathematically correct manner, thanks to the chain rule.
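A tiny demonstration that repeated uses of one module simply accumulate gradients into its shared weights during loss.backward() (a sketch with a dummy linear layer, not the deformable-compensation module from the question):

    import torch
    import torch.nn as nn

    shared = nn.Linear(4, 4, bias=False)
    x = torch.rand(1, 4)

    out = shared(shared(shared(x)))  # same module used three times in one forward pass
    loss = out.sum()
    loss.backward()                  # the chain rule sums the three contributions
    print(shared.weight.grad.shape)  # torch.Size([4, 4])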
https://stackoverflow.com/questions/70315592/
How to make conda use its own gcc version?
I am trying to run the training of stylegan2-pytorch on a remote system. The remote system has gcc (9.3.0) installed on it. I'm using conda env that has the following installed (cudatoolkit=10.2, torch=1.5.0+, and ninja=1.8.2, gcc_linux-64=7.5.0). I encounter the following error: RuntimeError: Error building extension 'fused': [1/2] /home/envs/segmentation_base/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/envs/segmentation_base/lib/python3.6/site-packages/torch/include -isystem /home/envs/segmentation_base/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/envs/segmentation_base/lib/python3.6/site-packages/torch/include/TH -isystem /home/envs/segmentation_base/lib/python3.6/site-packages/torch/include/THC -isystem /home/envs/segmentation_base/include -isystem /home/envs/segmentation_base/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=sm_70 --compiler-options '-fPIC' -std=c++14 -c /home/code/semanticGAN_code/models/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o FAILED: fused_bias_act_kernel.cuda.o /home/envs/segmentation_base/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/envs/segmentation_base/lib/python3.6/site-packages/torch/include -isystem /home/envs/segmentation_base/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /home/envs/segmentation_base/lib/python3.6/site-packages/torch/include/TH -isystem /home/envs/segmentation_base/lib/python3.6/site-packages/torch/include/THC -isystem /home/envs/segmentation_base/include -isystem /home/envs/segmentation_base/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_70,code=sm_70 --compiler-options '-fPIC' -std=c++14 -c /home/code/semanticGAN_code/models/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o In file included from /home/envs/segmentation_base/include/cuda_runtime.h:83, from <command-line>: /home/envs/segmentation_base/include/crt/host_config.h:138:2: error: #error -- unsupported GNU version! gcc versions later than 8 are not supported! 138 | #error -- unsupported GNU version! gcc versions later than 8 are not supported! | ^~~~~ ninja: build stopped: subcommand failed. I would like to use the gcc of my conda env (gcc_linux-64=7.5.0) to build cuda. When I run gcc --version in my conda env, I get the system's gcc: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 which gcc when my conda env is active returns: usr/bin/gcc I'd expect it to return gcc version 7.5.0 (the one installed in the environment). I understand that conda has different names for gcc, but the environment variables should point to the installed gcc. Running echo $CC returns /home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-cc. 
Following suggested solution here, I get the following upon activating my environment, but the same issue stand: INFO: activate-binutils_linux-64.sh made the following environmental changes: +ADDR2LINE=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-addr2line +AR=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-ar +AS=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-as +CXXFILT=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-c++filt +ELFEDIT=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-elfedit +GPROF=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-gprof +LD_GOLD=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-ld.gold +LD=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-ld +NM=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-nm +OBJCOPY=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-objcopy +OBJDUMP=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-objdump +RANLIB=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-ranlib +READELF=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-readelf +SIZE=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-size +STRINGS=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-strings +STRIP=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-strip INFO: activate-gcc_linux-64.sh made the following environmental changes: +build_alias=x86_64-conda-linux-gnu +BUILD=x86_64-conda-linux-gnu +CC_FOR_BUILD=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-cc +CC=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-cc +CFLAGS=-march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /include -fdebug-prefix-map==/usr/local/src/conda/- -fdebug-prefix-map==/usr/local/src/conda-prefix +CMAKE_ARGS=-DCMAKE_LINKER=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-ld -DCMAKE_STRIP=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-strip -DCMAKE_FIND_ROOT_PATH_MODE_PROGRAM=NEVER -DCMAKE_FIND_ROOT_PATH_MODE_LIBRARY=ONLY -DCMAKE_FIND_ROOT_PATH_MODE_INCLUDE=ONLY -DCMAKE_FIND_ROOT_PATH=;/x86_64-conda-linux-gnu/sysroot -DCMAKE_INSTALL_PREFIX= -DCMAKE_INSTALL_LIBDIR=lib +CMAKE_PREFIX_PATH=:/home/envs/segmentation_base/x86_64-conda-linux-gnu/sysroot/usr +CONDA_BUILD_SYSROOT=/home/envs/segmentation_base/x86_64-conda-linux-gnu/sysroot +_CONDA_PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_x86_64_conda_linux_gnu +CPPFLAGS=-DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /include +CPP=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-cpp +DEBUG_CFLAGS=-march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-all -fno-plt -Og -g -Wall -Wextra -fvar-tracking-assignments -ffunction-sections -pipe -isystem /include -fdebug-prefix-map==/usr/local/src/conda/- -fdebug-prefix-map==/usr/local/src/conda-prefix +DEBUG_CPPFLAGS=-D_DEBUG -D_FORTIFY_SOURCE=2 -Og -isystem /include +GCC_AR=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-gcc-ar +GCC_NM=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-gcc-nm +GCC=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-gcc +GCC_RANLIB=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-gcc-ranlib +host_alias=x86_64-conda-linux-gnu +HOST=x86_64-conda-linux-gnu +LDFLAGS=-Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,-rpath,/lib -Wl,-rpath-link,/lib -L/lib INFO: activate-gxx_linux-64.sh made the following environmental changes: +CXXFLAGS=-fvisibility-inlines-hidden -std=c++17 
-fmessage-length=0 -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /include -fdebug-prefix-map==/usr/local/src/conda/- -fdebug-prefix-map==/usr/local/src/conda-prefix +CXX_FOR_BUILD=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-c++ +CXX=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-c++ +DEBUG_CXXFLAGS=-fvisibility-inlines-hidden -std=c++17 -fmessage-length=0 -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-all -fno-plt -Og -g -Wall -Wextra -fvar-tracking-assignments -ffunction-sections -pipe -isystem /include -fdebug-prefix-map==/usr/local/src/conda/- -fdebug-prefix-map==/usr/local/src/conda-prefix +GXX=/home/envs/segmentation_base/bin/x86_64-conda-linux-gnu-g++ How could one set gcc to conda gcc instead of system gcc? I understand that should be done automatically, when activating the environment through bash scripts in activate.d. Most of the open issues (regarding unsupported GNU version!) either require sudo permission to adjust gcc version (which I don't have) or aren't accepted in the case of conda environments. I have yet to find a clear solution to this :/ TLDR: How to force conda to use own installed gcc version instead of host system gcc? Edit 1: Added conda list output # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 4.5 1_gnu _sysroot_linux-64_curr_repodata_hack 3 haa98f57_10 absl-py 1.0.0 pypi_0 pypi albumentations 0.5.2 pypi_0 pypi binutils_impl_linux-64 2.35.1 h27ae35d_9 binutils_linux-64 2.35.1 h454624a_30 blas 1.0 mkl ca-certificates 2021.10.26 h06a4308_2 cachetools 4.2.4 pypi_0 pypi certifi 2021.5.30 py36h06a4308_0 charset-normalizer 2.0.9 pypi_0 pypi cudatoolkit 10.2.89 3 hcc cycler 0.11.0 pypi_0 pypi decorator 4.4.2 pypi_0 pypi freetype 2.11.0 h70c0345_0 gcc_impl_linux-64 7.5.0 h7105cf2_17 gcc_linux-64 7.5.0 h8f34230_30 google-auth 2.3.3 pypi_0 pypi google-auth-oauthlib 0.4.6 pypi_0 pypi grpcio 1.42.0 pypi_0 pypi gxx_impl_linux-64 7.5.0 h0a5bf11_17 gxx_linux-64 7.5.0 hffc177d_30 idna 3.3 pypi_0 pypi imageio 2.8.0 pypi_0 pypi imageio-ffmpeg 0.4.2 pypi_0 pypi imgaug 0.4.0 pypi_0 pypi importlib-metadata 4.8.2 pypi_0 pypi intel-openmp 2021.4.0 h06a4308_3561 jpeg 9d h7f8727e_0 kernel-headers_linux-64 3.10.0 h57e8cba_10 kiwisolver 1.3.1 pypi_0 pypi lcms2 2.12 h3be6417_0 ld_impl_linux-64 2.35.1 h7274673_9 libffi 3.3 he6710b0_2 libgcc-devel_linux-64 7.5.0 hbbeae57_17 libgcc-ng 9.3.0 h5101ec6_17 libgomp 9.3.0 h5101ec6_17 libpng 1.6.37 hbc83047_0 libstdcxx-devel_linux-64 7.5.0 hf0c5c8d_17 libstdcxx-ng 9.3.0 hd4cf53a_17 libtiff 4.2.0 h85742a9_0 libwebp-base 1.2.0 h27cfd23_0 lmdb 0.98 pypi_0 pypi lz4-c 1.9.3 h295c915_1 markdown 3.3.6 pypi_0 pypi matplotlib 3.3.4 pypi_0 pypi mkl 2020.2 256 mkl-service 2.3.0 py36he8ac12f_0 mkl_fft 1.3.0 py36h54f3939_0 mkl_random 1.1.1 py36h0573a6f_0 ncurses 6.3 h7f8727e_2 networkx 2.5.1 pypi_0 pypi ninja 1.8.2 pypi_0 pypi numpy 1.19.5 pypi_0 pypi numpy-base 1.19.2 py36hfa32c7d_0 oauthlib 3.1.1 pypi_0 pypi olefile 0.46 py36_0 opencv-python 4.5.4.60 pypi_0 pypi opencv-python-headless 4.5.4.60 pypi_0 pypi openjpeg 2.4.0 h3ad879b_0 openssl 1.1.1l h7f8727e_0 pillow 8.4.0 pypi_0 pypi pip 21.2.2 py36h06a4308_0 protobuf 3.19.1 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pyparsing 3.0.6 pypi_0 pypi python 3.6.13 h12debd9_1 python-dateutil 2.8.2 pypi_0 pypi pytorch 1.5.0 py3.6_cuda10.2.89_cudnn7.6.5_0 pytorch pywavelets 1.1.1 pypi_0 pypi pyyaml 6.0 pypi_0 pypi readline 8.1 h27cfd23_0 requests 2.26.0 
pypi_0 pypi requests-oauthlib 1.3.0 pypi_0 pypi rsa 4.8 pypi_0 pypi scikit-image 0.17.2 pypi_0 pypi scipy 1.5.0 pypi_0 pypi setuptools 58.0.4 py36h06a4308_0 shapely 1.8.0 pypi_0 pypi six 1.16.0 pyhd3eb1b0_0 sqlite 3.36.0 hc218d9a_0 sysroot_linux-64 2.17 h57e8cba_10 tensorboard 2.7.0 pypi_0 pypi tensorboard-data-server 0.6.1 pypi_0 pypi tensorboard-plugin-wit 1.8.0 pypi_0 pypi tifffile 2020.9.3 pypi_0 pypi tk 8.6.11 h1ccaba5_0 torchvision 0.6.0 py36_cu102 pytorch typing-extensions 4.0.1 pypi_0 pypi urllib3 1.26.7 pypi_0 pypi werkzeug 2.0.2 pypi_0 pypi wheel 0.37.0 pyhd3eb1b0_1 xz 5.2.5 h7b6447c_0 zipp 3.6.0 pypi_0 pypi zlib 1.2.11 h7b6447c_3 zstd 1.4.9 haebb681_0
In addition to the solution posted in this issue, I added symbolic links that point to the conda-installed gcc, which I was missing. ln -s /home/envs/segmentation_base/bin/x86_64-conda_cos6-linux-gnu-cc gcc ln -s /home/envs/segmentation_base/bin/x86_64-conda_cos6-linux-gnu-cpp g++
https://stackoverflow.com/questions/70316504/
ToPILImage : TypeError: Input type int64 is not supported
I'm trying to develop a GAN using FastAi. When converting the Tensor to an Image I get this error. Traceback (most recent call last): File "/Users/DevDog/Documents/vsc/pokemon/dementad.py", line 44, in <module> im =transforms.ToPILImage()(img[0]).convert('RGBA') File "/Users/DevDog/miniforge3/envs/python386/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 179, in __call__ return F.to_pil_image(pic, self.mode) File "/Users/DevDog/miniforge3/envs/python386/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 290, in to_pil_image raise TypeError('Input type {} is not supported'.format(npimg.dtype)) TypeError: Input type int64 is not supported Here's the full code import fastai from fastai.data import transforms from fastai.data.block import DataBlock, TransformBlock from fastai.data.transforms import get_image_files from fastai.optimizer import RMSProp from fastai.vision.data import ImageBlock, ImageDataLoaders from fastcore.imports import noop from numpy import negative import torch import cv2 import PIL from torchvision import transforms from PIL import Image from torch import nn from fastai.vision import * from fastai.vision.augment import * from fastai.imports import * from fastai.vision.gan import * from fastai.data.block import * from fastai.data.transforms import * from fastai.callback.all import * path = Path('pokeman') bs=100 size=64 dblock = DataBlock(blocks = (TransformBlock, ImageBlock), get_x = generate_noise, get_items = get_image_files, splitter = IndexSplitter([]), item_tfms=Resize(size, method=ResizeMethod.Crop), batch_tfms = Normalize.from_stats(torch.tensor([0.5,0.5,0.5]), torch.tensor([0.5,0.5,0.5]))) dls = dblock.dataloaders(path,path=path,bs=bs) generator = basic_generator(64,3,n_extra_layers=1) critic = basic_critic(64, 3, n_extra_layers=1,act_cls=partial(nn.LeakyReLU)) student = GANLearner.wgan(dls,generator,critic,opt_func = RMSProp) student.recorder.train_metrics=True student.recorder.valid_metrics=False student.fit(1,2e-4,wd=0.) #cv2.waitKey(0) student.show_results(max_n=9,ds_idx=0) student.gan_trainer.switch(gen_mode=True) img = student.predict(generate_noise('pocheman',size=100)) print(img[0].size()) im =transforms.ToPILImage()(img[0]).convert('RGB') The point of the Code is to generate pokemon images. But whenever I predict and convert it to a PIL Image the code fails with the aforementioned error.
I suggest using this code to convert the output of your model from a tensor to a PIL image (it assumes the prediction tensor img[0] holds float values in [0, 1], and it needs import numpy as np and from PIL import Image): img = Image.fromarray((255 * img[0]).numpy().astype(np.uint8).transpose(1, 2, 0))
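If you would rather keep using ToPILImage, another option (a sketch, assuming the prediction tensor holds float values in [0, 1]) is to cast the tensor to uint8 first, since ToPILImage supports uint8 but not int64:

```python
import torch
from torchvision import transforms

img0 = torch.rand(3, 64, 64)                       # stand-in for img[0] from the question
img_uint8 = (img0 * 255).clamp(0, 255).to(torch.uint8)  # ToPILImage accepts uint8 CHW tensors
im = transforms.ToPILImage()(img_uint8).convert('RGB')
print(im.size)
```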
https://stackoverflow.com/questions/70316929/
loss function with pytorch
I'm new to pytorch. In tutorials with the MNIST dataset, the target is a scalar (a digit from 0 to 9) while the output of the model's last layer is a vector (the code of the last layer is nn.Linear(32,10)), and they calculate the loss with (loss=nn.CrossEntropyLoss() loss = loss(output,target)). Are they comparing a digit with a vector?
According to the PyTorch documentation for torch.nn.functional.cross_entropy(), the output is, like you mentioned, a tensor with shape (N, C) (N is the batch size and C is the number of classes), while the target has shape (N) when it contains only class indices and shape (N, C) when it contains class probabilities. Details about how the actual cross entropy is computed are given in the PyTorch docs. So yes, they are comparing a digit to a tensor, in the sense that the digit indicates which index of the output vector should be 1 and therefore which other ones should be 0.
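A small self-contained example (shapes chosen for illustration) showing that nn.CrossEntropyLoss takes an (N, C) score tensor and an (N,) tensor of class indices:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.randn(4, 10)           # batch of 4, scores for 10 digit classes
targets = torch.tensor([3, 0, 9, 1])  # one class index (a "digit") per sample
loss = loss_fn(logits, targets)
print(loss.item())
```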
https://stackoverflow.com/questions/70318490/
Pytorch ValueError: either size or scale_factor should be defined
Background Information I am trying to create a model (a beginner so please excuse my ignorance). The architecture I am trying to convert is given below as a link as well. ![The Architecture to convert to a model][0] This is the code I came up with. I am using Binder to run the code. import os import torch import torchvision import tarfile from torchvision.datasets.utils import download_url from torch.utils.data import random_split from torchsummary import summary # Implementation of CNN/ConvNet Model class build_unet(torch.nn.Module): def __init__(self): super(build_unet, self).__init__() keep_prob = 0.5 self.layer1 = torch.nn.Sequential( torch.nn.Conv2d(3, 32, kernel_size=3), torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size=2, padding=1)) self.layer2 = torch.nn.Sequential( torch.nn.Conv2d(32, 64, kernel_size=3), torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size=2, padding=1)) self.layer3 = torch.nn.Sequential( torch.nn.Conv2d(64, 128, kernel_size=3), torch.nn.ReLU(), torch.nn.MaxPool2d(kernel_size=2, padding=1)) self.dense = torch.nn.Linear(64, 128, bias=True) torch.nn.init.xavier_uniform_(self.dense.weight) self.layer4 = torch.nn.Sequential( self.dense, torch.nn.ReLU(), torch.nn.Upsample() ) self.layer5 = torch.nn.Sequential( torch.nn.Conv2d(128, 128, kernel_size=3), torch.nn.Sigmoid(), torch.nn.Upsample() ) self.layer6 = torch.nn.Sequential( torch.nn.Conv2d(128, 64, kernel_size=3), torch.nn.Sigmoid(), torch.nn.Upsample() ) self.layer7 = torch.nn.Sequential( torch.nn.Conv2d(64, 1, kernel_size=3), torch.nn.Sigmoid() ) def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = self.layer3(out) out = self.layer4(out) out = self.layer5(out) out = self.layer6(out) out = self.layer7(out) return out if __name__ == "__main__": x = torch.randn((2, 3, 512, 512)) f = build_unet() y = f(x) print(y.shape) How would I resolve this error? 
ERROR MESSAGE --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_36/1438699785.py in <module> 87 x = torch.randn((2, 3, 512, 512)) 88 f = build_unet() ---> 89 y = f(x) 90 print(y.shape) /opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /tmp/ipykernel_36/1438699785.py in forward(self, x) 72 out = self.layer3(out) 73 ---> 74 out = self.layer4(out) 75 out = self.layer5(out) 76 out = self.layer6(out) /opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /opt/conda/lib/python3.9/site-packages/torch/nn/modules/container.py in forward(self, input) 137 def forward(self, input): 138 for module in self: --> 139 input = module(input) 140 return input 141 /opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /opt/conda/lib/python3.9/site-packages/torch/nn/modules/upsampling.py in forward(self, input) 139 140 def forward(self, input: Tensor) -> Tensor: --> 141 return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners) 142 143 def extra_repr(self) -> str: /opt/conda/lib/python3.9/site-packages/torch/nn/functional.py in interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor) 3647 scale_factors = [scale_factor for _ in range(dim)] 3648 else: -> 3649 raise ValueError("either size or scale_factor should be defined") 3650 3651 if recompute_scale_factor is None: ValueError: either size or scale_factor should be defined
nn.Upsample() has following parameters: size, scale_factor, mode, align_corners. By default size=None, mode=nearest and align_corners=None. torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None) When you set scale_factor=2 you will get following result: import torch import torch.nn as nn class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() keep_prob = 0.5 self.layer1 = nn.Sequential( nn.Conv2d(3, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(kernel_size=2, padding=1)) self.layer2 = nn.Sequential( nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(kernel_size=2, padding=1)) self.layer3 = nn.Sequential( nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(), nn.MaxPool2d(kernel_size=2, padding=1)) self.dense = nn.Linear(64, 128, bias=True) nn.init.xavier_uniform_(self.dense.weight) self.layer4 = nn.Sequential( self.dense, nn.ReLU(), nn.Upsample(scale_factor=2) ) self.layer5 = nn.Sequential( nn.Conv2d(128, 128, kernel_size=3), nn.Sigmoid(), nn.Upsample(scale_factor=2) ) self.layer6 = nn.Sequential( nn.Conv2d(128, 64, kernel_size=3), nn.Sigmoid(), nn.Upsample(scale_factor=2) ) self.layer7 = nn.Sequential( nn.Conv2d(64, 1, kernel_size=3), nn.Sigmoid() ) def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = self.layer3(out) out = self.layer4(out) out = self.layer5(out) out = self.layer6(out) out = self.layer7(out) return out if __name__ == "__main__": x = torch.randn((2, 3, 512, 512)) f = Net() y = f(x) print(y.shape) Result: torch.Size([2, 1, 498, 1010])
https://stackoverflow.com/questions/70324346/
Change shape of pytorch tensor
I have a pytorch tensor with a shape: torch.size([6000, 30, 30, 9]) and I want to convert it into the shape: torch.size([6000, 8100]) such that I go from 6000 elements that contain 30 elements that in turn contain 30 elements that in turn contain 9 elements TO 6000 elements that contain 8100 elements. How do I achieve it?
Let's say you have a tensor x with shape torch.Size([6000, 30, 30, 9]). In PyTorch, to change its shape to torch.Size([6000, 8100]), you can use view or reshape to keep the first dimension of the tensor (6000) and flatten the rest of the dimensions (30, 30, 9) as follows: import torch x= torch.rand(6000, 30, 30, 9) print(x.shape) #torch.Size([6000, 30, 30, 9]) x=x.view(6000,-1) # or x= x.view(x.size(0),-1) print(x.shape) #torch.Size([6000, 8100]) x= torch.rand(6000, 30, 30, 9) print(x.shape) #torch.Size([6000, 30, 30, 9]) x=x.reshape(6000,-1) # or x= x.reshape(x.size(0),-1) print(x.shape) #torch.Size([6000, 8100])
https://stackoverflow.com/questions/70328737/
Can I install pytorch cpu + any specified version of cudatoolkit?
My remote has cuda==11.0 and I want to install pytorch on it. I use the command conda install pytorch cudatoolkit=11.0 -c pytorch -c conda-forge but in the installation list: cudatoolkit conda-forge/linux-64::cudatoolkit-11.0.3-h15472ef_8 pytorch pytorch/linux-64::pytorch-1.10.0-py3.8_cpu_0 I found that pytorch is a cpu one. Alternatively, I substitute 11.0 with 11.1 and the installation list appears to be: cudatoolkit conda-forge/linux-64::cudatoolkit-11.1.1-h6406543_8 pytorch pytorch/linux-64::pytorch-1.10.0-py3.8_cuda11.1_cudnn8.0.5_0 where pytorch is a gpu one. My question is: are the above two installation essentially same? If not, how can I install pytorch=1.10.0 with cuda==11.0? I'd also like to know how does the cuda compatibility work? Is a cudatoolkit==11.1 compatible with programs compiled with cudatoolkit==11.0?
It all depends on whether the pytorch channel has built a version against the particular cudatoolkit version. I don't know a specific way to search this, but one can browse what builds are available on the pytorch channel. For PyTorch 1.10 on the linux-64 platform it appears only CUDA versions 10.2, 11.1, and 11.3 are available. As mentioned in the comments, one can try forcing a CUDA build of PyTorch with conda create -n foo -c pytorch -c conda-forge cudatoolkit=11.0 'pytorch=*=*cuda*' which would fail in this combination. As for compatibility, no, the pytorch package builds lock in the minor version of cudatoolkit. For example, a pytorch build made against cudatoolkit 11.1 declares a dependency along the lines of cudatoolkit >=11.1,<11.2, so it cannot be installed against cudatoolkit 11.0.
https://stackoverflow.com/questions/70330604/
separately save the model weight in pytorch
I am using PyTorch to train a deep learning model. I wonder if it is possible for me to separately save the model weight. For example: class my_model(nn.Module): def __init__(self): super(my_model, self).__init__() self.bert = transformers.AutoModel.from_pretrained(BERT_PATH) self.out = nn.Linear(768,1) def forward(self, ids, mask, token_type): x = self.bert(ids, mask, token_type)[1] x = self.out(x) return x I have the BERT model as the base model and an additional linear layer on the top. After I train this model, can I save the weight for the BERT model and this linear layer separately?
Alternatively to the previous answer, you can create two separate classes of nn.Module, one for the BERT model and another one for the linear layer: class bert_model(nn.Module): def __init__(self): super(bert_model, self).__init__() self.bert = transformers.AutoModel.from_pretrained(BERT_PATH) def forward(self, ids, mask, token_type): x = self.bert(ids, mask, token_type)[1] return x class linear_layer(nn.Module): def __init__(self): super(linear_layer, self).__init__() self.out = nn.Linear(768,1) def forward(self, x): x = self.out(x) return x Then you can save the two parts of the model separately (using two different paths so the second save does not overwrite the first): bert = bert_model() head = linear_layer() #train torch.save(bert.state_dict(), PATH_BERT) torch.save(head.state_dict(), PATH_LINEAR)
https://stackoverflow.com/questions/70333321/
Learnable LeakyReLU activation function with Pytorch
I'm trying to write a class for Invertible trainable LeakyReLu in which the model modifies the negative_slope in each iteration, class InvertibleLeakyReLU(nn.Module): def __init__(self, negative_slope): super(InvertibleLeakyReLU, self).__init__() self.negative_slope = torch.tensor(negative_slope, requires_grad=True) def forward(self, input, logdet = 0, reverse = False): if reverse == True: input = torch.where(input>=0.0, input, input *(1/self.negative_slope)) log = - torch.where(input >= 0.0, torch.zeros_like(input), torch.ones_like(input) * math.log(self.negative_slope)) logdet = (sum(log, dim=[1, 2, 3]) +logdet).mean() return input, logdet else: input = torch.where(input>=0.0, input, input *(self.negative_slope)) log = torch.where(input >= 0.0, torch.zeros_like(input), torch.ones_like(input) * math.log(self.negative_slope)) logdet = (sum(log, dim=[1, 2, 3]) +logdet).mean() return input, logdet However I set requires_grad=True, the negative slope wouldn't update. Are there any other points that I must modify?
Does your optimizer know it should update InvertibleLeakyReLU.negative_slope? My guess is - no: self.negative_slope is not defined as nn.Parameter, and therefore, by default, when you initialize your optimizer with model.parameters() negative_slope is not one of the optimization parameters. You can either define negative_slope as a nn.Parameter: self.negative_slope = nn.Parameter(data=torch.tensor(negative_slope), requires_grad=True) Or, explicitly pass negative_slope from all InvertibleLeakyReLU in your model to the optimizer.
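A minimal sketch of the first option (only the parameter registration is shown; the forward/reverse logic from the question is omitted):

```python
import torch
import torch.nn as nn

class InvertibleLeakyReLU(nn.Module):
    def __init__(self, negative_slope):
        super().__init__()
        # nn.Parameter registers the slope, so it shows up in .parameters()
        self.negative_slope = nn.Parameter(torch.tensor(float(negative_slope)))

act = InvertibleLeakyReLU(0.2)
print(dict(act.named_parameters()))                       # {'negative_slope': ...}
optimizer = torch.optim.Adam(act.parameters(), lr=1e-3)   # the slope is now optimized
```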
https://stackoverflow.com/questions/70335181/
Missing keys when loading the model weight in pytorch
I plan to load weights from a pth file, e.g., model = my_model() model.load_state_dict(torch.load("../input/checkpoint/checkpoint.pth")) However, this raises an error, saying: RuntimeError: Error(s) in loading state_dict for my_model: Missing key(s) in state_dict: "att.in_proj_weight", "att.in_proj_bias", "att.out_proj.weight", "att.out_proj.bias". Unexpected key(s) in state_dict: "in_proj_weight", "in_proj_bias", "out_proj.weight", "out_proj.bias". It seems that the parameter names of my model are different from the ones stored in the state_dict. In this case, how am I supposed to make them consistent?
You can create a new dictionary whose keys match your model's parameter names and load that into your model. The "missing keys" are the names your model expects (with the att. prefix) and the "unexpected keys" are the names stored in the checkpoint (without it), so you need to add the att. prefix to each checkpoint key as follows: state_dict = torch.load('path/to/checkpoint.pth') from collections import OrderedDict new_state_dict = OrderedDict() for key, value in state_dict.items(): key = 'att.' + key # add the `att.` prefix expected by the model new_state_dict[key] = value # load params model = my_model() model.load_state_dict(new_state_dict)
https://stackoverflow.com/questions/70336220/
Pytorch, retrieving values from a tensor using several indices. Most computationally efficient solution
If I have an example 3d tensor a = [[4, 2, 1, 6],[1, 2, 3, 8], [92, 4, 23, 54]] tensor_a = torch.tensor(a) I can get 2 of the 1D tensors along the first dimension using tensor_a[[0, 1]] tensor([[4, 2, 1, 6], [1, 2, 3, 8]]) But how about using several indices? So I have something like this list_indices = [[0, 0], [0,2], [1, 2]] I could do something like combos = [] for indi in list_indices: combos.append(tensor_a[indi]) But I'm wondering if since there's a for loop, if there's a more computationally way to do this, perhaps also using pytorch
It is more computationally efficient to use the predefined Pytorch function "torch.index_select" to select tensor elements using a list of indices: a = [[4, 2, 1, 6],[1, 2, 3, 8], [92, 4, 23, 54]] tensor_a = torch.tensor(a) list_indices = [[0, 0], [0,2], [1, 2]] #convert list_indices to Tensor indices = torch.tensor(list_indices) #get elements from tensor_a using indices. tensor_a=torch.index_select(tensor_a, 0, indices.view(-1)) print(tensor_a) if you want the result to be a list and not a tensor, you can convert tensor_a to a list: tensor_a_list = tensor_a.tolist() To test the computational efficiency I created 1000000 indices and compared the execution time. Using the loop takes more time than using my suggested pytorch approach: import time import torch start_time = time.time() a = [[4, 2, 1, 6],[1, 2, 3, 8], [92, 4, 23, 54]] tensor_a = torch.tensor(a) indices = torch.randint(0, 2, (1000000,)).tolist() combos = [] for indi in indices: combos.append(tensor_a[indi]) print("--- %s seconds ---" % (time.time() - start_time)) --- 3.3966853618621826 seconds --- start_time = time.time() indices = torch.tensor(indices) tensor_a=torch.index_select(tensor_a, 0, indices) print("--- %s seconds ---" % (time.time() - start_time)) --- 0.10641193389892578 seconds ---
https://stackoverflow.com/questions/70342431/
Create CNN model for video resolution recognition from split frames
I'm working on a student project involved with resolution recognition from videos My job is to prepare a training dataset from videos (I'm downloading these movies from YT) and does it in the following steps Downloading pre-selected videos in every quality (2160p, 1440p, 1080p,720p...) Extracting frames from every downloaded video (something about 20-30 frames) Upscaling every frame to the same resolution (in my case I upscale all frames to 4K) Extracted frames have different dimensions so I need to expand them all to the same resolution Splitting these upscaled frames to 100x100 blocks After completing this process, he gets a great deal of sorted data Below is a picture of what it looks like On the left, you can see sorted directories by resolutions In the middle directories for randomly downloaded videos On the right mentioned in the fourth point 100x100 blocks from each video for each quality The result I would like to achieve is that model from the same prepared dataset as for training would be able to properly recognize the quality (e.g. for a full had video output would be 1080p) Now I'm wondering about the choice of a ready-made model using CNN. My questions: What solution do you think I should use here? With the current set of data, how should I label it or do a different set of data? Thank you very much in advance for your answers
It seems like you are actually trying to solve an easier problem than the discriminator of KernelGAN: Sefi Bell-Kligler, Assaf Shocher, Michal Irani Blind Super-Resolution Kernel Estimation using an Internal-GAN (NeurIPS 2019). In their work, they tried to estimate an arbitrary downsampling kernel relating HR and LR images. Your work is much simpler: you only try to select between several known upsampling kernels. Since your upscaling method is known, you only need to recover the amount of upscaling. I suggest you start with a CNN that has an architecture similar to the discriminator of KernelGAN. However, I would consider increasing significantly the receptive field so it can reason about upscaling from 144p to 4K. Side notes: Do not change the aspect ratio of the frames when you upscale them! this will make your problem much more difficult: you will need to estimate two upscaling parameters (horizontal/vertical) instead of only one. Do not crop 100x100 regions in advance - let your Dataset's transformations do it for you as random augmentations.
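As a rough starting point (purely illustrative: the layer sizes, the 100x100 block size and the number of resolution classes are assumptions, not a reproduction of the KernelGAN discriminator), a small patch classifier could look like this:

```python
import torch
import torch.nn as nn

class ResolutionClassifier(nn.Module):
    def __init__(self, num_classes=8):  # e.g. 144p ... 2160p buckets
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                      # x: (N, 3, 100, 100) blocks
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = ResolutionClassifier()
logits = model(torch.randn(4, 3, 100, 100))    # the label of each block is its source resolution
print(logits.shape)                            # torch.Size([4, 8])
```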
https://stackoverflow.com/questions/70342773/
How does pytorch L1-norm pruning works?
Let's see the result that I got first. This is one of the convolution layers of my model, and I'm only showing 11 of its filter weights (11 3x3 filters with channel=1). The left side is the original weight, the right side is the pruned weight. So I was wondering how "TORCH.NN.UTILS.PRUNE.L1_UNSTRUCTURED" works, because according to the pytorch website it prunes the units with the lowest L1-norm, but as far as I know, L1-norm pruning is a filter pruning method which prunes whole filters, using this equation to find the filter with the lowest value, instead of pruning single weights. So I'm a bit curious about how this function actually works. The following is my pruning code parameters_to_prune = ( (model.input_layer[0], 'weight'), (model.hidden_layer1[0], 'weight'), (model.hidden_layer2[0], 'weight'), (model.output_layer[0], 'weight') ) prune.global_unstructured( parameters_to_prune, pruning_method=prune.L1Unstructured, amount = (pruned_percentage/100), )
The nn.utils.prune.l1_unstructured utility does not prune the whole filter, it prunes individual parameter components as you observed in your sheet. That is components with the lower norm get masked. Here is a minimal example as discussed in the comments below: >>> m = nn.Linear(10,1,bias=False) >>> m.weight = nn.Parameter(torch.arange(10).float()) >>> prune.l1_unstructured(m, 'weight', .3) >>> m.weight tensor([0., 0., 0., 3., 4., 5., 6., 7., 8., 9.], grad_fn=<MulBackward0>)
https://stackoverflow.com/questions/70346398/
Tensor Operation and gradient
I was going through some tutorials on youtube where the code sample below was used to explain derivatives. Create tensors. x = torch.tensor(3.) w = torch.tensor(4., requires_grad=True) b = torch.tensor(5., requires_grad=True) x, w, b Arithmetic operations y = w * x + b y Compute derivatives y.backward() Display gradients print('dy/dx:', x.grad) print('dy/dw:', w.grad) print('dy/db:', b.grad) OUTPUT dy/dx: None dy/dw: tensor(3.) dy/db: tensor(1.) Could anyone please explain how we get tensor(3.) and tensor(1.) as the gradient outputs? I need to understand how pytorch performs this operation behind the scenes. Any help would be appreciated.
You have y = w*x + b, then dy/dx = w dy/dw = x dy/db = 1 Since you've not set requires_grad=True for x, PyTorch won't calculate derivative w.r.t. it. Hence, dy/dx = None Rest are the values of corresponding tensors. Thus, the final output is dy/dx: None dy/dw: tensor(3.) dy/db: tensor(1.)
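For completeness, if you also set requires_grad=True on x, PyTorch returns dy/dx = w as well — a quick check:

```python
import torch

x = torch.tensor(3., requires_grad=True)
w = torch.tensor(4., requires_grad=True)
b = torch.tensor(5., requires_grad=True)

y = w * x + b
y.backward()

print(x.grad)  # tensor(4.)  -> dy/dx = w
print(w.grad)  # tensor(3.)  -> dy/dw = x
print(b.grad)  # tensor(1.)  -> dy/db = 1
```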
https://stackoverflow.com/questions/70348160/
What is the difference between x.view(x.size(0), -1) and torch.nn.Flatten() layer and torch.flatten(x)? pytorch question
I'm quite curious on what's the difference between using view(,-1) and flatten like the simple code here: Since I found that the size and data all flatten to one dimension. import torch from torch import nn from torch.utils.data import DataLoader from torchvision import datasets from torchvision.transforms import ToTensor, Lambda, Compose import matplotlib.pyplot as plt x = torch.rand(3,256,256) x.size() a = x.view(x.size(0), -1) print('after view:',a.size()) m = nn.Sequential(nn.Flatten()) y = m(x) print('after nn flatten:',y.size()) z = torch.flatten(x) print('after torch flatten:',y.size()) And also, it seems there is no difference between = and =.contiguous which said :Returns a contiguous in memory tensor containing the same data as self tensor. but for me, it seems just return the self tensor but not copy or new tensor with same data. c = y print(c) b = y.contiguous() print(b) # change original data y[0][0]=1 print(b) print(c) print(y)
A view is a way to modify the way you look at your data without modifying the data itself: torch.view returns a view on the data: the data is not copied, only the "window" through which you look at the data changes. torch.flatten returns a one-dimensional output from a multi-dimensional input; it may not copy the data if [the] input can be viewed as the flattened shape (source). torch.nn.Flatten is just a convenience wrapper around torch.flatten. Contiguous data just means that the data is linearly addressable in memory, e.g. for two-dimensional data this means that element [i][j] is at position i * num_columns + j. If this is already the case then .contiguous() will not change your data or copy anything.
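A quick illustration of the "window on the same data" point (a minimal sketch):

```python
import torch

x = torch.rand(3, 256, 256)

a = x.view(x.size(0), -1)     # shape (3, 65536), no copy
f = torch.flatten(x)          # shape (196608,), flattens all dims by default
m = torch.nn.Flatten()(x)     # shape (3, 65536): nn.Flatten keeps dim 0 by default

# a views the same memory as x, so modifying x is visible through a
x[0, 0, 0] = 42.0
print(a[0, 0])                # tensor(42.)

# x is already contiguous, so .contiguous() returns the same storage
print(x.contiguous().data_ptr() == x.data_ptr())  # True
```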
https://stackoverflow.com/questions/70348437/
1. Weighted Loss in CrossEntropyLoss() 2. Combination of WeightedRandomSampler and subsampler
I wanted to implement class weights to my 3 class classification problem. Tried by just directly adding the weights, which gives me an error when passing my model output and the labels to my loss criterion = nn.CrossEntropyLoss(weight=torch.tensor([1,2,2])) The error: loss = criterion(out, labels) expected scalar type Float but found Long So I print dtypes and change them to float but it still gives me the same error labels = labels.float() print("Labels Training", labels, labels.dtype) print("Out Training ", out, out.dtype) loss = criterion(out, labels) >>Labels Training tensor([2.]) torch.float32 >>Out Training tensor([[ 0.0540, -0.1439, -0.0070]], grad_fn=<AddmmBackward0>) torch.float32 >>expected scalar type Float but found Long I also tried to change it to float64(), but it tells me that tensor Object has no attribute float64 Problem: I Havent tried this one out but I have seen that the more used approach would be the RandomWeightedSampler. My problem is that I use CV with K-Fold and use a SubSampler for that. Is it possible to use both? Havent foudn anything related to that.
For the first problem, nn.CrossEntropyLoss requires the output to be of type float, the labels of type long, and the weights of type float. Therefore, you should change the optional weight parameter of nn.CrossEntropyLoss to float: criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0,2.0,2.0])) loss = criterion(out, labels.long())
https://stackoverflow.com/questions/70349766/
Using BatchNorm1d layer with Embedding and Linear layers for NLP text-classification problem throws RuntimeError
I am trying to create a neural network and train my own Embeddings. The network has the following structure (PyTorch): import torch.nn as nn class MultiClassClassifer(nn.Module): #define all the layers used in model def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim): #Constructor super(MultiClassClassifer, self).__init__() #embedding layer self.embedding = nn.Embedding(vocab_size, embedding_dim) #dense layer self.hiddenLayer = nn.Linear(embedding_dim, hidden_dim) #Batch normalization layer self.batchnorm = nn.BatchNorm1d(hidden_dim) #output layer self.output = nn.Linear(hidden_dim, output_dim) #activation layer self.act = nn.Softmax(dim=1) #2d-tensor #initialize weights of embedding layer self.init_weights() def init_weights(self): initrange = 1.0 self.embedding.weight.data.uniform_(-initrange, initrange) def forward(self, text): embedded = self.embedding(text) hidden_1 = self.batchnorm(self.hiddenLayer(embedded)) return self.act(self.output(hidden_1)) My training_iterator object looks like: batch = next(iter(train_iterator)) batch.text_normalized_tweet[0] tensor([[ 240, 538, 305, 73, 9, 780, 2038, 13, 48, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [ 853, 57, 2, 70, 1875, 176, 466, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], ...]) with shape: torch.Size([32, 25]). 32= batch_size I used to create the training iterator with data.BucketIterator and 25 = the sequences in the batch. When I create a model instance: INPUT_DIM = len(TEXT.vocab) #~5,000 tokens EMBEDDING_DIM = 100 HIDDEN_DIM = 64 OUTPUT_DIM = 3 #target has 3 classes model = MultiClassClassifer(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM) and execute model(batch.text_normalized_tweet[0]).squeeze(1) I get back the following RuntimeError RuntimeError: running_mean should contain 15 elements not 64 You may also find my Golab Notebook here.
I found a workaround based on the example given by @jhso (above). INPUT_DIM = len(TEXT.vocab) #~5,000 tokens EMBEDDING_DIM = 100 HIDDEN_DIM = 64 e = nn.Embedding(INPUT_DIM, EMBEDDING_DIM) l = nn.Linear(EMBEDDING_DIM, HIDDEN_DIM) b = nn.BatchNorm1d(HIDDEN_DIM) soft = nn.Softmax(dim=1) out = nn.Linear(HIDDEN_DIM, 3) text, text_lengths = batch.text_normalized_tweet y = e(text) packed = nn.utils.rnn.pack_padded_sequence(y, text_lengths, batch_first=True) #added rnn.pack_padded_sequence tensor, batch_size = packed[0], packed[1] y = b(l(tensor)) I added the pack_padded_sequence() method from the utils.rnn package, which takes the embeddings as input. I also had to unpack both text and text_lengths, since the way I created the training_iterator it returns 2 outputs (text, text_lengths).
https://stackoverflow.com/questions/70356000/
Why do we need to pre-process image datasets?
Refer to this Complete guide on How to use Autoencoders in Python. Notice the author adds: x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) after loading the MNIST data. Why do they divide the image data by 255? And why 255? After that, why do they reshape a 2d matrix into 1d? Thank you so much!
Why divide by 255: pixel (RGB) values go up to 255, and dividing by 255 standardizes the colors to the range 0 to 1. As for the transformation to a 1D vector, it is done so the whole image can easily be fed into a simple dense input layer as one flat vector. If you keep the 2D shape, you have to use other forms of input layers or different kinds of models built especially for that structure, such as convolutional layers that operate directly on 2D image inputs.
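A minimal numpy sketch of both steps (the array shape is a stand-in matching MNIST):

```python
import numpy as np

x = np.random.randint(0, 256, size=(60000, 28, 28), dtype=np.uint8)  # stand-in for x_train

x = x.astype('float32') / 255.                  # pixel values now in [0, 1]
x = x.reshape((len(x), np.prod(x.shape[1:])))   # (60000, 28, 28) -> (60000, 784)

print(x.min(), x.max(), x.shape)
```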
https://stackoverflow.com/questions/70356646/
Pytorch, retrieving values from a 3D tensor using several indices. Most computationally efficient solution
Related: Pytorch, retrieving values from a tensor using several indices. Most computationally efficient solution This is another question about retrieving values from a 3D tensor, using a list of indices. In this case, I have a 3d tensor, for example b = [[[4, 20], [1, -1]], [[1, 2], [8, -1]], [[92, 4], [23, -1]]] tensor_b = torch.tensor(b) tensor_b tensor([[[ 4, 20], [ 1, -1]], [[ 1, 2], [ 8, -1]], [[92, 4], [23, -1]]]) In this case, I have a list of 3D indices. So indices = [ [[1, 0, 1], [2, 0, 1]], [[1, 1, 1], [0, 0, 0]], [[2, 1, 0], [0, 1, 0]] ] Each triple is an index for tensor-b. The desired result is [[2, 4], [-1, 4], [23, 1]] Potential Approach Like in the last question, the first solution that comes to mind is a nested for loop, but there is probably a more computationally efficient solution using pytorch function. And like in the last question, perhaps reshape would be needed to get the desired shape for the last solution. So a desired solution could be [2, 4, -1, 4, 23, 1], which can come from a flattened list of indices [ [1, 0, 1], [2, 0, 1], [1, 1, 1], [0, 0, 0], [2, 1, 0], [0, 1, 0] ] But I am not aware of any pytorch functions so far which allow for a list of 3D indices. I have been looking at gather and index_select.
You can use advanced indexing specifically integer array indexing tensor_b = torch.tensor([[[4, 20], [1, -1]], [[1, 2], [8, -1]], [[92, 4], [23, -1]]]) indices = torch.tensor([ [[1, 0, 1], [2, 0, 1]], [[1, 1, 1], [0, 0, 0]], [[2, 1, 0], [0, 1, 0]] ]) result = tensor_b[indices[:, :, 0], indices[:, :, 1], indices[:, :, 2]] results in tensor([[ 2, 4], [-1, 4], [23, 1]])
https://stackoverflow.com/questions/70356988/
PyTorch multi-class: ValueError: Expected input batch_size (416) to match target batch_size (32)
I have created a mutli-class classification neural network. Training, and validation iterators where created with BigBucketIterator method with fields {'text_normalized_tweet':TEXT, 'label': LABEL} TEXT = a tweet LABEL = a float number (with 3 values: 0,1,2) Below I execute a dummy example of my neural network: import torch.nn as nn class MultiClassClassifer(nn.Module): #define all the layers used in model def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim): #Constructor super(MultiClassClassifer, self).__init__() #embedding layer self.embedding = nn.Embedding(vocab_size, embedding_dim) #dense layer self.hiddenLayer = nn.Linear(embedding_dim, hidden_dim) #Batch normalization layer self.batchnorm = nn.BatchNorm1d(hidden_dim) #output layer self.output = nn.Linear(hidden_dim, output_dim) #activation layer self.act = nn.Softmax(dim=1) #2d-tensor #initialize weights of embedding layer self.init_weights() def init_weights(self): initrange = 1.0 self.embedding.weight.data.uniform_(-initrange, initrange) def forward(self, text, text_lengths): embedded = self.embedding(text) #packed sequence packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths, batch_first=True) tensor, batch_size = packed_embedded[0], packed_embedded[1] hidden_1 = self.batchnorm(self.hiddenLayer(tensor)) return self.act(self.output(hidden_1)) Instantiate the model INPUT_DIM = len(TEXT.vocab) EMBEDDING_DIM = 100 HIDDEN_DIM = 64 OUTPUT_DIM = 3 model = MultiClassClassifer(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM) When I call text, text_lengths = batch.text_normalized_tweet predictions = model(text, text_lengths).squeeze() loss = criterion(predictions, batch.label) it returns, ValueError: Expected input batch_size (416) to match target batch_size (32). model(text, text_lengths).squeeze() = torch.Size([416, 3]) batch.label = torch.Size([32]) I can see that the two objects have different sizes, but I have no clue how to fix this? You may find the Google Colab notebook here Shapes of each in, out tensor of my forward() method: torch.Size([32, 10, 100]) #self.embedding(text) torch.Size([320, 100]) #nn.utils.rnn.pack_padded_sequence(embedded, text_lengths, batch_first=True) torch.Size([320, 64]) #self.batchnorm(self.hiddenLayer(tensor)) torch.Size([320, 3]) #self.act(self.output(hidden_1))
You shouldn't be using the squeeze function after the forward pass, that doesn't make sense. After removing the squeeze function, as you see, the shape of your final output is [320,3] whereas it is expecting [32,3]. One way to fix this is to average out the embeddings you obtain for each word after the self.Embedding function like shown below: def forward(self, text, text_lengths): embedded = self.embedding(text) embedded = torch.mean(embedded, dim=1, keepdim=True) packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths, batch_first=True) tensor, batch_size = packed_embedded[0], packed_embedded[1] hidden_1 = self.batchnorm(self.hiddenLayer(tensor)) return self.act(self.output(hidden_1))
https://stackoverflow.com/questions/70364824/
Can a PyTorch DataLoader start with an empty dataset?
I have a dataset which is in a deque buffer, and I want to load random batches from this with a DataLoader. The buffer starts empty. Data will be added to the buffer before the buffer is sampled from. self.buffer = deque([], maxlen=capacity) self.batch_size = batch_size self.loader = DataLoader(self.buffer, batch_size=batch_size, shuffle=True, drop_last=True) However, this causes the following error: File "env/lib/python3.8/site-packages/torch_geometric/loader/dataloader.py", line 78, in __init__ super().__init__(dataset, batch_size, shuffle, File "env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 268, in __init__ sampler = RandomSampler(dataset, generator=generator) File "env/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 102, in __init__ raise ValueError("num_samples should be a positive integer " ValueError: num_samples should be a positive integer value, but got num_samples=0 Turns out that the RandomSampler class checks that num_samples is positive when it is initialised, which causes the error. if not isinstance(self.num_samples, int) or self.num_samples <= 0: raise ValueError("num_samples should be a positive integer " "value, but got num_samples={}".format(self.num_samples)) Why does it check for this here, even though RandomSampler does support datasets which change in size at runtime? One workaround is to use an IterableDataset, but I want to use the shuffle functionality of DataLoader. Can you think of a nice way to use a DataLoader with a deque? Much appreciated!
The problem here is neither the usage of deque nor the fact that the dataset is dynamically growable. The problem is that you are starting with a Dataset of size zero - which is invalid. The easiest solution would be to just start with any arbitrary object in the deque and dynamically remove it afterwards.
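A minimal sketch of that workaround (the placeholder sample here is an assumption about your data format):

```python
import torch
from collections import deque
from torch.utils.data import DataLoader

capacity, batch_size = 1000, 20

buffer = deque([], maxlen=capacity)
buffer.append((torch.zeros(4), torch.zeros(1)))   # dummy sample so len(buffer) > 0

loader = DataLoader(buffer, batch_size=batch_size, shuffle=True, drop_last=True)

# ... later, once real experience has been appended, drop the placeholder
# (unless the deque has already evicted it by reaching capacity)
buffer.popleft()
```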
https://stackoverflow.com/questions/70369070/
Training of multi-headed neural network with labels only for certain heads at a time
I am trying to train NN with 3 heads sharing some initial layers. However each of my training targets has only output for 2 of them. I would like to create separate batches with samples that contains output only for the same heads and use them to update only respective heads. Is there any way how to achieve this in any DL framework?
As your question is somewhat general, I will answer assuming you are using PyTorchLightning. I suggest you use a model that looks like this: class MyModel(LightningModule): def training_step(self, batch: MyMultiTaskBatch): backbone_output = self.backbone(batch.x) head = self.heads[batch.task_name] head_output = head(backbone_output) loss = self.losses[batch.task_name] return loss(head_output, batch.y) Where your batch tells the model which head it should run, and which loss it should use out of dictionaries that map task names to heads and losses. You will also need to implement a dataloader that returns a MyMultiTaskBatch as its batches.
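To make the batch structure concrete, here is a hedged sketch of what MyMultiTaskBatch could look like (the names and fields are illustrative, not part of any Lightning API):

```python
from dataclasses import dataclass
import torch

@dataclass
class MyMultiTaskBatch:
    task_name: str     # which head/loss to use, e.g. "head_a"
    x: torch.Tensor    # inputs for the shared backbone
    y: torch.Tensor    # targets for that head only

# a dataloader/collate_fn would then yield batches in which all samples
# share the same task_name, so each training step updates a single head
batch = MyMultiTaskBatch(task_name="head_a", x=torch.randn(8, 16), y=torch.randn(8, 1))
```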
https://stackoverflow.com/questions/70369843/
Using Pytorch Dataloader with Probability Distribution
TL;DR: I want to use DataLoader to take a weighted random sample of the available rows. How do? I've put together some python code that fits a certain kind of input-driven dynamical system to data using batched gradient descent over the parameters that define the model. I have the following snippet of Python code that gets the job done using Pytorch. k_trn = self.linear.k_gen(in_trn,t) u_trn = torch.tensor(in_trn.T) x_trn = torch.tensor(out_trn.T, dtype = torch.float) data = TensorDataset(u_trn[:-1,:],k_trn[:-1,:],x_trn[1:,:]) loader = DataLoader(data, batch_size = 20, shuffle = True) Data types: u_trn: N x 1 tensor (pytorch's array) k_trn: N x K tensor x_trn: N x n tensor The rows of u_trn,k_trn,x_trn correspond to three trajectories (with u corresponding to the "input"). Each time I iterate over the loader (which can be done, e.g. with a loop for u,k,x in loader:), I get a batch of 20 rows from u_trn, 20 rows of k_trn, and 20 rows of x_trn. These rows are selected with a uniform probability, without replacement. The catch is that I would like to sample these rows with a non-uniform probability. In particular: denote S = (1/1 + 1/2 + ... + 1/N). I would like for the loader to select the jth row with probability 1/(S*j). After looking at the relevant doumentation, I suspect that this can be done by messing with either the sampler or batch_sampler keyword arguments when initializing the DataLoader object, but I'm having trouble parsing the documentation well enough to implement the behavior that I'm looking for. I'd appreciate any help with this. I've tried to keep my question brief; please let me know if I've left out any relevant information. Followup: with the help of Shai's answer, I've gotten things to work properly. Here's a quick script that I used to test this out and make sure that everything was working as expected. import numpy as np import torch from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler import matplotlib.pyplot as plt import numpy as np import torch from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler import matplotlib.pyplot as plt N = 100 x = np.zeros((N,2)) x[:,0] = 1 + np.arange(N) data = TensorDataset(torch.Tensor(x)) weights = [1/j for j in range(1, N+1)] # my weights sampler = WeightedRandomSampler(weights, 10000, replacement=True) loader = DataLoader(data, batch_size=20, sampler=sampler) sums = [] for y, in loader: for k in range(len(y)): sums.append(np.sum(y[k].numpy())) h = plt.hist(sums, bins = N) a = h[0][0] plt.plot([a/(n+1) for n in range(N)], lw = 3) And the resulting plot: Note that weights are automatically normalized, so there is no need to divide by the sum S. Note also that there is no need for shuffle=True in the loader; the sampler takes care of the randomization on its own.
Why don't you simply use WeightedRandomSampler? weights = [1./(S*j) for j in range(1, N+1)] # your weights sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True) loader = DataLoader(data, batch_size=20, sampler=sampler)
https://stackoverflow.com/questions/70371053/
How to register a forward hook for PyTorch matmul?
torch.matmul doesn't seem to have an nn.Module wrapper to allow the standard forward hook registration by name. In this case, the matrix multiply happens in the middle of a forward() function. I suppose the intermediate result can be returned by forward() in addition to the final result, such as return x, mm_res. But what's a good way to collect these additional outputs? What are the options for offloading torch.matmul outputs? TIA.
If your primary complaint is the fact that torch.matmul doesn't have a Module wrapper, how about just making one class Matmul(nn.Module): def forward(self, *args): return torch.matmul(*args) Now you can register a forward hook on a Matmul instance class Network(nn.Module): def __init__(self, ...): self.matmul = Matmul() self.matmul.register_forward_hook(...) def forward(self, x): y = ... z = self.matmul(x, y) ... That being said, don't overlook the note in the docs that forward hooks of this kind are mainly intended for debugging purposes.
https://stackoverflow.com/questions/70372116/
only one element tensors can be converted to Python scalars error with pca.fit_transform
I am trying to perform dimensionality reduction using PCA, where outputs is a list of tensors where each tensor has a shape of (1, 3, 32,32). Here is the code: from sklearn.decomposition import PCA pca = PCA(10) pca_result = pca.fit_transform(output) But I keep getting this error, regardless of whatever I tried: ValueError: only one element tensors can be converted to Python scalars I know that the tensors with size(1,3, 32,32) is making the issue, since its looking for 1 element as the error puts it, but do not know how to solve it. I have tried flattening each tensor with looping over output (don't know if its the right way of solving this issue), using the following code but it leads to error in pca: new_outputs = [] for i in outputs: for j in i: j = j.cpu() j = j.detach().numpy() j = j.flatten() new_outputs.append(j) pca_result = pca.fit_transform(new_output) I would appreciate if anybody can help with this error whether the flattening approach I took, is correct. PS:I have read the existing posts (post1,post2) discussing this error but none of them could solve my problem.
Assuming your Tensors are stored in a matrix with shape like (10, 3, 32, 32) where 10 corresponds to number of Tensors, you should flatten each like that: import torch from sklearn.decomposition import PCA data= torch.rand((10, 3, 32, 32)) pca = PCA(10) pca_result = pca.fit_transform(data.flatten(start_dim=1)) data.flatten(start_dim=1) makes your data to be in shape (10, 3*32*32) The error you posted is actually related to one of the post you linked. The PCA estimator expects array-like object with fit() method and you provided a list of Tensors.
https://stackoverflow.com/questions/70375174/
How to extract patches from an image in pytorch/tensorflow into 4 equal parts?
I am using a 16x16 colour image; I wrote a small piece of code for it but could not get it to work precisely. import numpy as np import cv2 from patchify import patchify image = cv2.imread('subbu_i.jpg') print(image.shape) patches = patchify(image, (4,4), step=1) print(patches.shape) Any ideas?
For Tensorflow, try tf.image.extract_patches import tensorflow as tf from PIL import Image import matplotlib.pyplot as plt import numpy as np image = Image.open('/content/image.png') plt.imshow(image) image = tf.expand_dims(np.array(image), 0) image = tf.expand_dims(np.array(image), -1) patches = tf.image.extract_patches(images=image, sizes=[1, 4, 4, 1], strides=[1, 4, 4, 1], rates=[1, 1, 1, 1], padding='VALID') axes=[] fig=plt.figure() for i in range(4): axes.append( fig.add_subplot(2, 2, i + 1) ) subplot_title=("Patch "+str(i + 1)) axes[-1].set_title(subplot_title) patch = tf.reshape(patches[0, i, i], (4, 4)) plt.imshow(patch) fig.tight_layout() plt.show()
https://stackoverflow.com/questions/70377590/
How to define pytorch fullyconnect model more simple and convenient?
i am a beginner of pytorch, and i want to build a fully connect model using Pytorch; the model is very simple like: def forward(self, x): x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) return self.fc3(x) but when i want to add some layers or adjust the hidden layers, i found i have to write lots of Redundant code like: def forward(self, x): x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) x = self.relu(self.fc3(x)) x = self.relu(self.fc4(x)) x = self.relu(self.fc5(x)) ... return self.fcn(x) besides, if i want to change some layer's feature nums, i have to change the layer adjacent so i want to know a way which is more grace(maybe more pythonic and more easy to adjust hyper parameter). i tried to write code like: def __init__(self): super().__init__() self.hidden_num = [2881, 5500, 2048, 20] # i just want to change here! to try some new structure self.fc = [nn.Linear(self.hidden_num[i], self.hidden_num[i + 1]).to(DEVICE) for i in range(len(self.hidden_num) - 1)] self.relu = nn.ReLU() def forward(self, x): for i in range(len(self.fc)): x = self.fc[i](x) if i != (len(self.fc) - 1): x = self.relu(x) return x but i found this way doesn't work, the model can't be built so could any bro tell me, how to define a fullyconnect model like above?? (so i can adjust the model layers only by adjust the list named hidden_num )
If you want to keep the same approach then you can use nn.ModuleList to properly register all linear layers inside the module's __init__: class Model(nn.Module): def __init__(self, hidden_num=[2881, 5500, 2048, 20]): super().__init__() self.fc = nn.ModuleList([ nn.Linear(hidden_num[i], hidden_num[i+1]) for i in range(len(hidden_num) - 1)]) def forward(self, x): for i, m in enumerate(self.fc): x = m(x) if i != len(self.fc) - 1: x = torch.relu(x) return x However, you may want to handle the logic inside the __init__ function once. One alternative is to use nn.Sequential. class Model(nn.Module): def __init__(self, hidden_num=[2881, 5500, 2048, 20]): super().__init__() fc = [] for i in range(len(hidden_num) - 1): fc.append(nn.Linear(hidden_num[i], hidden_num[i+1])) if i != len(hidden_num) - 2: fc.append(nn.ReLU()) self.fc = nn.Sequential(*fc) def forward(self, x): x = self.fc(x) return x Ideally, you would inherit from nn.Sequential directly to avoid re-writing the forward function, which is unnecessary in this case: class Model(nn.Sequential): def __init__(self, hidden_num=[2881, 5500, 2048, 20]): fc = [] for i in range(len(hidden_num) - 1): fc.append(nn.Linear(hidden_num[i], hidden_num[i+1])) if i != len(hidden_num) - 2: fc.append(nn.ReLU()) super().__init__(*fc)
https://stackoverflow.com/questions/70381926/
DQN predicts same action value for every state (cart pole)
I'm trying to implement a DQN. As a warm up I want to solve CartPole-v0 with a MLP consisting of two hidden layers along with input and output layers. The input is a 4 element array [cart position, cart velocity, pole angle, pole angular velocity] and output is an action value for each action (left or right). I am not exactly implementing a DQN from the "Playing Atari with DRL" paper (no frame stacking for inputs etc). I also made a few non standard choices like putting done and the target network prediction of action value in the experience replay, but those choices shouldn't affect learning. In any case I'm having a lot of trouble getting the thing to work. No matter how long I train the agent it keeps predicting a higher value for one action over another, for example Q(s, Right)> Q(s, Left) for all states s. Below is my learning code, my network definition, and some results I get from training class DQN: def __init__(self, env, steps_per_episode=200): self.env = env self.agent_network = MlpPolicy(self.env) self.target_network = MlpPolicy(self.env) self.target_network.load_state_dict(self.agent_network.state_dict()) self.target_network.eval() self.optimizer = torch.optim.RMSprop( self.agent_network.parameters(), lr=0.005, momentum=0.95 ) self.replay_memory = ReplayMemory() self.gamma = 0.99 self.steps_per_episode = steps_per_episode self.random_policy_stop = 1000 self.start_learning_time = 1000 self.batch_size = 32 def learn(self, episodes): time = 0 for episode in tqdm(range(episodes)): state = self.env.reset() for step in range(self.steps_per_episode): if time < self.random_policy_stop: action = self.env.action_space.sample() else: action = select_action(self.env, time, state, self.agent_network) new_state, reward, done, _ = self.env.step(action) target_value_pred = predict_target_value( new_state, reward, done, self.target_network, self.gamma ) experience = Experience( state, action, reward, new_state, done, target_value_pred ) self.replay_memory.append(experience) if time > self.start_learning_time: # learning step experience_batch = self.replay_memory.sample(self.batch_size) target_preds = extract_value_predictions(experience_batch) agent_preds = agent_batch_preds( experience_batch, self.agent_network ) loss = torch.square(agent_preds - target_preds).sum() self.optimizer.zero_grad() loss.backward() self.optimizer.step() if time % 1_000 == 0: # how frequently to update target net self.target_network.load_state_dict(self.agent_network.state_dict()) self.target_network.eval() state = new_state time += 1 if done: break def agent_batch_preds(experience_batch: list, agent_network: MlpPolicy): """ Calculate the agent action value estimates using the old states and the actual actions that the agent took at that step. """ old_states = extract_old_states(experience_batch) actions = extract_actions(experience_batch) agent_preds = agent_network(old_states) experienced_action_values = agent_preds.index_select(1, actions).diag() return experienced_action_values def extract_actions(experience_batch: list) -> list: """ Extract the list of actions from experience replay batch and torchify """ actions = [exp.action for exp in experience_batch] actions = torch.tensor(actions) return actions class MlpPolicy(nn.Module): """ This class implements the MLP which will be used as the Q network. I only intend to solve classic control problems with this. 
""" def __init__(self, env): super(MlpPolicy, self).__init__() self.env = env self.input_dim = self.env.observation_space.shape[0] self.output_dim = self.env.action_space.n self.fc1 = nn.Linear(self.input_dim, 32) self.fc2 = nn.Linear(32, 128) self.fc3 = nn.Linear(128, 32) self.fc4 = nn.Linear(32, self.output_dim) def forward(self, x): if type(x) != torch.Tensor: x = torch.tensor(x).float() x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = self.fc4(x) return x Learning results: Here I'm seeing one action always valued over the others (Q(right, s) > Q(left, s)). It's also clear that the network is predicting the same action values for every state. Does anyone have an idea about what's going on? I've done a lot of debugging and careful reading of the original papers (also thought about "normalizing" the observation space even though the velocities can be infinite) and could be missing something obvious at this point. I can include more code for the helper functions if that would be useful.
There was nothing wrong with the network definition. It turns out the learning rate was too high and reducing it 0.00025 (as in the original Nature paper introducing the DQN) led to an agent which can solve CartPole-v0. That said, the learning algorithm was incorrect. In particular I was using the wrong target action-value predictions. Note the algorithm laid out above does not use the most recent version of the target network to make predictions. This leads to poor results as training progresses because the agent is learning based on stale target data. The way to fix this is to just put (s, a, r, s', done) into the replay memory and then make target predictions using the most up to date version of the target network when sampling a mini batch. See the code below for an updated learning loop. def learn(self, episodes): time = 0 for episode in tqdm(range(episodes)): state = self.env.reset() for step in range(self.steps_per_episode): if time < self.random_policy_stop: action = self.env.action_space.sample() else: action = select_action(self.env, time, state, self.agent_network) new_state, reward, done, _ = self.env.step(action) experience = Experience(state, action, reward, new_state, done) self.replay_memory.append(experience) if time > self.start_learning_time: # learning step. experience_batch = self.replay_memory.sample(self.batch_size) target_preds = target_batch_preds( experience_batch, self.target_network, self.gamma ) agent_preds = agent_batch_preds( experience_batch, self.agent_network ) loss = torch.square(agent_preds - target_preds).sum() self.optimizer.zero_grad() loss.backward() self.optimizer.step() if time % 1_000 == 0: # how frequently to update target net self.target_network.load_state_dict(self.agent_network.state_dict()) self.target_network.eval() state = new_state time += 1 if done: break
https://stackoverflow.com/questions/70382999/
Modify Python Class to filter files
New to python. How can I modify the class to filter files in the folder with a string. Right now it returns all files in folder_containing_the_content_folder which could be millions of items. The following works however I would like to isolate files that contain a specific string, for example, isolate all files that contain 'v_1234_frame': # Image loader transform = transforms.Compose([ transforms.ToTensor(), transforms.Lambda(lambda x: x.mul(255)) ]) image_dataset = utils.ImageFolderWithPaths(folder_containing_the_content_folder, transform=transform) image_loader = torch.utils.data.DataLoader(image_dataset, batch_size=batch_size) The class that works requires a modification to filter file names that contain 'v_1234_frame': class ImageFolderWithPaths(datasets.ImageFolder): """Custom dataset that includes image file paths. Extends torchvision.datasets.ImageFolder() Reference: https://discuss.pytorch.org/t/dataloader-filenames-in-each-batch/4212/2 """ # override the __getitem__ method. this is the method dataloader calls def __getitem__(self, index): # this is what ImageFolder normally returns original_tuple = super(ImageFolderWithPaths, self).__getitem__(index) # the image file path path = self.imgs[index][0] # make a new tuple that includes original and the path tuple_with_path = (*original_tuple, path) return tuple_with_path I am learning python and just can't seem to come up with the solution. Hope you can help/suggest a change to the class or calling method.
Built my own data loader to isolate files via a wildcard pattern in glob and then loop through those to create a tensor for each image, passing that to my model which required it to be converted to a float. Extract the base name (image name) from the path (ex. img_frame1.jpg). Save result to my style folder. This method gives me total control over the files via the wildcard. I have included the other functions used in the solution. Note: I am not using a gpu to process these so I can run it on a standard python web server. Hopefully this helps someone in the future. Simple is sometimes better :) # Load image file # def load_image(path): # # Images loaded as BGR # img = cv2.imread(path) # return img # def itot(img, max_size=None): # # Rescale the image # if (max_size == None): # itot_t = transforms.Compose([ # # transforms.ToPILImage(), # transforms.ToTensor(), # transforms.Lambda(lambda x: x.mul(255)) # ]) # else: # H, W, C = img.shape # image_size = tuple([int((float(max_size) / max([H, W])) * x) for x in [H, W]]) # itot_t = transforms.Compose([ # transforms.ToPILImage(), # transforms.Resize(image_size), # transforms.ToTensor(), # transforms.Lambda(lambda x: x.mul(255)) # ]) # # # Convert image to tensor # tensor = itot_t(img) # # # Add the batch_size dimension # tensor = tensor.unsqueeze(dim=0) # return tensor folder_data = glob.glob(folder_containing_the_content_folder + "content_folder/" + video_token + "_frame*.jpg") # image_dataset = utils.ImageFolderWithPaths(folder_containing_the_content_folder, transform) # image_loader = torch.utils.data.DataLoader(image_dataset, batch_size=batch_size) # Load Transformer Network net = transformer.TransformerNetwork() net.load_state_dict(torch.load(style_path)) net = net.to(device) with torch.no_grad(): for image_name in folder_data: img = utils.load_image(image_name) img = img / 255.0 # standardize the data/transform img_tensor = utils.itot(img) # style image tensor generated_tensor = net(img_tensor.float()) # convert image the model modified tensor back to an image generated_image = utils.ttoi(generated_tensor) image_name = os.path.basename(image_name) # save generated image to folder utils.saveimg(generated_image, save_folder + image_name)
https://stackoverflow.com/questions/70386249/
Pytorch RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x246016 and 3136x1000)
I'm building a CNN on Pytorch. I'm a little confused about the inputs. I'm getting the following error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x246016 and 3136x1000) Images are 250 x 250 in grayscale. Can anyone take a look at my constructor and tell me where I'm going wrong? More points if you can explain to me why I was wrong and the why your answer is right! ;) class CNN(nn.Module): # Contructor def __init__(self): super(CNN, self).__init__() self.cnn1 = nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2) self.conv1_bn = nn.BatchNorm2d(32) self.maxpool1=nn.MaxPool2d(kernel_size=2, stride=2) self.cnn2 = nn.Conv2d(32, 64, kernel_size=5,stride=1, padding=2) self.conv2_bn = nn.BatchNorm2d(64) self.maxpool2=nn.MaxPool2d(kernel_size=2, stride=2) self.drop_out1 = nn.Dropout() self.fc1 = nn.Linear(7 * 7 * 64, 1000) self.bn_fc1 = nn.BatchNorm2d(1000) self.fc2 = nn.Linear(1000, 1) # Prediction def forward(self, x): x = self.cnn1(x) x = self.conv1_bn(x) x = torch.relu(x) x = self.maxpool1(x) x = self.cnn2(x) x = self.conv2_bn(x) x = torch.relu(x) x = self.maxpool2(x) x = x.view(x.size(0), -1) x = self.drop_out1(x) x = self.fc1(x) x = self.bn_fc1(x) x = torch.relu(x) x = self.fc2(x) x = torch.sigmoid(x) return x
Your fc1 layer is expecting a tensor of shape (-1, 7*7*64), but you are passing it a tensor of shape [-1, 246016] (-1 being the batch size). To calculate the output size of conv nets, see this post or any neural networks textbook.
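As a quick sanity check (a sketch added here, not part of the original answer), you can push a dummy 250 x 250 input through the convolutional part of the network from the question and read off the flattened size, which shows where 246016 comes from and what fc1 should expect:
import torch
import torch.nn as nn

# conv/pool stack copied from the question (250x250 grayscale input assumed)
convs = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2), nn.MaxPool2d(2, 2),
    nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2), nn.MaxPool2d(2, 2),
)
with torch.no_grad():
    out = convs(torch.zeros(1, 1, 250, 250))
print(out.shape)             # torch.Size([1, 64, 62, 62])
print(out.flatten(1).shape)  # torch.Size([1, 246016])
# so for 250x250 inputs fc1 should be nn.Linear(62 * 62 * 64, 1000)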
https://stackoverflow.com/questions/70386696/
Why can't I import torch?
I downloaded and installed CUDA10.0 pyhton3.6 pytorch1.2. , also loaded cuda/cuda-10.0-x86_6 envirnment. Commands like below module load cuda/cuda-10.0-x86_64 # CUDA 10.0 conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch Because it needs to be installed on a NO-NETWOEK centOs server! Therefore, all pakages need be downloaded in advance and run command like below. conda install --offline pytorch-1.2.0-cuda100py36h938c94c_0.tar.bz2 ...... The error occured below. Python 3.6.0 |Anaconda 4.3.0 (64-bit)| (default, Dec 23 2016, 12:22:00) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/data/users/CHDHPC/2020224009/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 81, in <module> from torch._C import * ImportError: libmkl_gnu_thread.so: cannot open shared object file: No such file or directory I have no idea to solve it! Anyone can help me! Thank you!!
Finally I fixed this issue by changing the Python version to 3.6.2 and re-installing CUDA and cuDNN. Perfect!
https://stackoverflow.com/questions/70394993/
Finding several means in a large tensor
I have a large tensor (~10k). Here's a sampler with 200 values: sample_tensor = tensor([ 0.6676, 0.0917, 0.6083, 0.4536, 1.1882, 0.6672, 0.6058, -0.1615, 0.5254, 1.1642, 0.1994, -0.2274, 0.0511, 0.3707, 0.3675, -0.1629, -0.0638, -0.0118, 0.2668, 0.8586, 0.7027, 0.3018, -0.2930, 1.2613, 0.9374, 0.3154, 1.0396, -0.0263, 0.2012, 1.5710, -0.4640, -0.1657, -0.2670, 0.5783, 0.7420, 0.1886, -1.1255, 0.3682, 0.2597, 0.3697, 0.1404, -0.0289, 0.5903, 0.0461, 0.2288, -0.0414, 0.9736, 0.4891, -0.0593, 0.1694, 0.2426, -0.0339, 0.1683, 0.2374, 0.1349, 0.1672, 0.4174, 0.8038, 1.4121, -0.1046, 0.1169, 0.6447, -0.1168, 0.7392, 0.0578, -0.1398, 0.8974, 1.0977, 0.7102, 1.4012, 0.8541, 0.3314, -0.2045, 0.1540, 0.2779, -0.3912, 0.4068, -0.1868, 0.1796, 0.0318, 0.1354, -0.9689, 0.3460, 0.3762, 0.8637, -0.4735, 0.8413, 0.5261, 0.8362, -0.2226, -0.2772, -0.2757, 0.2079, 0.0895, 0.4352, 0.8868, 0.3707, 0.8412, 0.3026, 0.1568, 0.4442, 0.0789, 0.5050, 0.0102, 0.6944, 0.1852, 0.5215, -0.7028, -0.7591, 0.2139, 0.7411, 0.3830, 0.8048, -0.7532, 0.7710, 0.8526, 1.1322, 0.0939, -0.3318, 1.1003, 0.3066, 1.6501, 1.1300, 0.0062, 0.2600, 0.2605, -0.2236, 0.2516, 0.4460, 0.6813, 0.1876, -0.4710, -0.5939, 0.4144, 0.0783, 0.4282, 0.1744, 0.0569, 0.1043, 0.3329, 0.3561, 0.1618, -0.1184, 0.4183, 0.5722, -0.4459, 0.3354, 0.3373, 0.2290, 1.0164, -0.5191, 0.0992, 0.9188, -0.3634, 1.2128, 0.0457, 0.1028, -0.2206, 0.9355, 0.6074, 0.3834, 0.0802, 0.7016, 0.8777, 0.2769, -0.7512, 0.8667, -0.1056, 0.5435, 1.4568, -0.3943, 0.5740, 0.6328, 0.4063, -0.7712, 0.5113, 0.1578, 0.4571, 1.0314, 0.2863, -0.1470, 1.0763, -0.0019, 0.9103, 1.0114, -0.1229, -0.3118, 0.5383, 0.5566, 0.2280, 0.9320, 0.6770, 0.0908, 0.5056, 0.0445, -0.0810, 0.2611, 0.1223, -0.0108, 0.0611]) I also have an input value that corresponds to how many means I need to get from this tensor: sampler_number_of_means = 10 What is an efficient way to get a tensor of 10 means from this tensor, where each mean is a different set of values with size of len(sample_tensor)/sampler_number_of_means. That is, in this example the first mean will be the first 20 values, the second mean the next 20 values, etc. I'm currently iterating through the tensor and breaking it into equal size lists, then iterating through each list to get the mean. But it's quite slow with large tensors.
You can reshape the tensor, then take the mean. import torch sample_tensor = torch.tensor([ 0.6676, 0.0917, 0.6083, 0.4536, 1.1882, 0.6672, 0.6058, -0.1615, 0.5254, 1.1642, 0.1994, -0.2274, 0.0511, 0.3707, 0.3675, -0.1629, -0.0638, -0.0118, 0.2668, 0.8586, 0.7027, 0.3018, -0.2930, 1.2613, 0.9374, 0.3154, 1.0396, -0.0263, 0.2012, 1.5710, -0.4640, -0.1657, -0.2670, 0.5783, 0.7420, 0.1886, -1.1255, 0.3682, 0.2597, 0.3697, 0.1404, -0.0289, 0.5903, 0.0461, 0.2288, -0.0414, 0.9736, 0.4891, -0.0593, 0.1694, 0.2426, -0.0339, 0.1683, 0.2374, 0.1349, 0.1672, 0.4174, 0.8038, 1.4121, -0.1046, 0.1169, 0.6447, -0.1168, 0.7392, 0.0578, -0.1398, 0.8974, 1.0977, 0.7102, 1.4012, 0.8541, 0.3314, -0.2045, 0.1540, 0.2779, -0.3912, 0.4068, -0.1868, 0.1796, 0.0318, 0.1354, -0.9689, 0.3460, 0.3762, 0.8637, -0.4735, 0.8413, 0.5261, 0.8362, -0.2226, -0.2772, -0.2757, 0.2079, 0.0895, 0.4352, 0.8868, 0.3707, 0.8412, 0.3026, 0.1568, 0.4442, 0.0789, 0.5050, 0.0102, 0.6944, 0.1852, 0.5215, -0.7028, -0.7591, 0.2139, 0.7411, 0.3830, 0.8048, -0.7532, 0.7710, 0.8526, 1.1322, 0.0939, -0.3318, 1.1003, 0.3066, 1.6501, 1.1300, 0.0062, 0.2600, 0.2605, -0.2236, 0.2516, 0.4460, 0.6813, 0.1876, -0.4710, -0.5939, 0.4144, 0.0783, 0.4282, 0.1744, 0.0569, 0.1043, 0.3329, 0.3561, 0.1618, -0.1184, 0.4183, 0.5722, -0.4459, 0.3354, 0.3373, 0.2290, 1.0164, -0.5191, 0.0992, 0.9188, -0.3634, 1.2128, 0.0457, 0.1028, -0.2206, 0.9355, 0.6074, 0.3834, 0.0802, 0.7016, 0.8777, 0.2769, -0.7512, 0.8667, -0.1056, 0.5435, 1.4568, -0.3943, 0.5740, 0.6328, 0.4063, -0.7712, 0.5113, 0.1578, 0.4571, 1.0314, 0.2863, -0.1470, 1.0763, -0.0019, 0.9103, 1.0114, -0.1229, -0.3118, 0.5383, 0.5566, 0.2280, 0.9320, 0.6770, 0.0908, 0.5056, 0.0445, -0.0810, 0.2611, 0.1223, -0.0108, 0.0611]) sampler_number_of_means = 10 sample_tensor.reshape((sampler_number_of_means,int(sample_tensor.shape[0]/sampler_number_of_means))).mean(1) Output tensor([0.3729, 0.3248, 0.2977, 0.3431, 0.2499, 0.2993, 0.2740, 0.2841, 0.3611, 0.3170])
https://stackoverflow.com/questions/70396205/
Pytorch: set top 10% values of the tensor to zero
I have a PyTorch 2d tensor with normally distributed values. Is there a fast way to nullify the top 10% max values of this tensor using Python? I see two possible ways here: flatten the tensor to 1d and just sort it, or a non-vectorized way using some native Python operators (for-if). But neither of these looks fast enough. So, what is the fastest way to set the X max values of a tensor to zero?
Well, it seems that Pytorch has a useful operator torch.quantile() that helps here a lot. The solution (for 1d tensor): import torch x = torch.randn(100) y = torch.tensor(0.) #new value to assign split_val = torch.quantile(x, 0.9) x = torch.where(x < split_val, x, y)
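The same approach also works directly on the 2d tensor from the question, since torch.quantile without a dim argument flattens the input and torch.where is elementwise (a small sketch with an arbitrary 100x100 shape):
import torch
x = torch.randn(100, 100)
split_val = torch.quantile(x, 0.9)                    # 90th percentile over all elements
x = torch.where(x < split_val, x, torch.tensor(0.))   # zero out the top 10%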
https://stackoverflow.com/questions/70396859/
How to convert a matplotlib spectrogram image into a torch tensor
import numpy as np from numpy import asarray from matplotlib import pyplot as plt import torch # generate a signal fs = 50 # sampling freq ts = np.arange(0, 10, 1/fs) # times at which signal is sampled s1 = np.sin(2 * np.pi * 2 * ts) # 2 hz s2 = np.sin(2 * np.pi * 3 * ts) # 3 hz s3 = np.sin(2 * np.pi * 6 * ts) # 6 hz s = s1 + s2 + s3 # aggregate signal # generate specgram spectrum, freqs, t, im = plt.specgram(s, Fs=fs, xextent=((0, len(s)/fs))) # convert matplotlib image to torch tensor # bypassing the numpy part would be even better! torch_tensor = torch.from_numpy(asarray(im, np.float32)) print(torch_tensor) >>> TypeError: float() argument must be a string or a number, not 'AxesImage' I should add that the 'spectrum' variable is kind of what I am looking for, except that I am a little confused by it since it has only two columns for time, and I think the specgram image has many more than two timesteps. If there is a way to use the spectrum variable to represent the whole image as a torch tensor, then that would also work for me.
plt.specgram returns the spectrogram in the spectrum variable. This means that you need to pass that variable to the torch.from_numpy function. Additionally, according to this, specgram shows the 10*log10(spectrum) which means that you might want to do that operation ot compare the results shown by specgram with the plot of your tensor. See code below: import numpy as np from numpy import asarray import numpy as np from matplotlib import pyplot as plt import torch # generate a signal fs = 50 # sampling freq ts = np.arange(0, 10, 1/fs) # times at which signal is sampled s1 = np.sin(2 * np.pi * 2 * ts) # 2 hz s2 = np.sin(2 * np.pi * 3 * ts) # 3 hz s3 = np.sin(2 * np.pi * 6 * ts) # 6 hz s = s1 + s2 + s3 # aggregate signal # generate specgram ax1=plt.subplot(121) ax1.set_title('Specgram image') spectrum, freqs, t, im = ax1.specgram(s, Fs=fs, xextent=((0, len(s)/fs))) ax1.axis('tight') torch_tensor = torch.from_numpy(spectrum) #Plot torch tensor variable ax2=plt.subplot(122) ax2.set_title('Torch tensor image') ax2.imshow(10*np.log10(torch_tensor),origin='lower left',extent=[0,10,0,25]) ax2.axis('tight') plt.show() And the output gives:
https://stackoverflow.com/questions/70397729/
Distance to object python
I have an image of a car and a corresponding bounding box. For example: (xmin, ymin, xmax, ymax) (504.8863220214844, 410.2454833984375, 937.6451416015625, 723.9139404296875) That's how I draw boxes: def plot_results(pil_img, prob, boxes): plt.figure(figsize=(16,10)) plt.imshow(pil_img) ax = plt.gca() for p, (xmin, ymin, xmax, ymax), c in zip(prob, boxes.tolist(), COLORS * 100): ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, fill=False, color=c, linewidth=3)) cl = p.argmax() text = f'{CLASSES[cl]}: {p[cl]:0.2f}' ax.text(xmin, ymin, text, fontsize=15, bbox=dict(facecolor='yellow', alpha=0.5)) plt.axis('off') plt.show() I want to measure the distance from car to camera. If the car is nearby, the distance value should be around 0.2-0.4 If the car is far from the camera, the distance value should be around 0.6-0.8. I also found a solution for my problem: https://pythonprogramming.net/detecting-distances-self-driving-car/ But here author uses an old model. This model doesn't work well.
In comments you requested code that works similarly to the link you provided. I want to make it clear your source example isn't measuring distance. It is only measuring the width of the bounding boxes on the vehicles. The logic is based on the concept that larger widths are closer to the camera, and a smaller widths are further from the camera. This approach has many flaws due to optical illusions and lack of size and scale context. At any rate: def plot_results(pil_img, prob, boxes): granularity = 3 # fiddle with this to scale img_width_inches = 16 img_height_inches = 10 fig = plt.figure(figsize=(img_width_inches, img_height_inches)) img_width_pixels = img_width_inches * fig.dpi img_height_pixels = img_height_inches * fig.dpi plt.imshow(pil_img) ax = plt.gca() for p, (xmin, ymin, xmax, ymax), c in zip(prob, boxes.tolist(), COLORS * 100): ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, fill=False, color=c, linewidth=3)) cl = p.argmax() text = f'{CLASSES[cl]}: {p[cl]:0.2f}' ax.text(xmin, ymin, text, fontsize=15, bbox=dict(facecolor='yellow', alpha=0.5)) # get width of bounding box box_width_pixels = xmax - xmin # normalize the box width with image width normalized_width = box_width_pixels / img_width_pixels # invert with 1 - apply power of granularity and round to 1 place apx_distance = round(((1 - (normalized_width))**granularity), 1) # get middle of box in pixels mid_x = (xmin + xmax) / 2 mid_y = (ymin + ymax) / 2 # draw value ax.text(mid_x, mid_y, apx_distance, fontsize=15, color="white") # normalize the middle x position with image width mid_x_normalized = mid_x / img_width_pixels # create arbitrary ranges and logic to consider actionable if apx_distance <= 0.5: if mid_x_normalized > 0.3 and mid_x_normalized < 0.7: ax.text(50, 50, "WARNING!!!", fontsize=26, color="red") plt.axis('off') plt.show() Output: The main difference between this code and the example you provided is that the bounding box values you've given (504.8863220214844, 410.2454833984375, 937.6451416015625, 723.9139404296875) represent pixels. However, the code in the example has bounding box values that are already normalized between 0 and 1 in relation to the image size. This is why I verbosely defined the image width and height in inches and pixels (also for self explaining code). They are needed to normalize the pixel based widths and positions so they are between 0 and 1 to match the logic in your example, and which you requested. These values can also be helpful when trying to actually measure sizes and distances. If you are interested in taking this further. I recommend reading about the laws of perspective. Here is an interesting place to start: https://www.handprint.com/HP/WCL/perspect2.html#distance
https://stackoverflow.com/questions/70402043/
How to get a sub 2d tensor with a specific value condition in PyTorch?
I want to copy a 2-d torch tensor to a destination tensor that keeps only the values up to and including the first occurrence of the value 202 in each row, and zero for the rest of the items, like this:
source_t = torch.tensor([[101,2001,2034,1045,202,3454,3453,1234,202]
                        ,[101,1999,2808,202,17658,3454,202,0,0]
                        ,[101,2012,3832,4027,3454,202,3454,9987,202]])
destination_t = torch.tensor([[101,2001,2034,1045,202,0,0,0,0]
                             ,[101,1999,2808,202,0,0,0,0,0]
                             ,[101,2012,3832,4027,3454,202,0,0,0]])
how can I do it?
I make working and pretty efficient solution. I have made a little bit more complex source tensor with additional rows with 202 in different places: import copy import torch source_t = torch.tensor([[101, 2001, 2034, 1045, 202, 3454, 3453, 1234, 202], [101, 1999, 2808, 202, 17658, 3454, 202, 0, 0], [101, 2012, 3832, 4027, 3454, 2020, 3454, 9987, 2020], [101, 2012, 3832, 4027, 3454, 202, 3454, 9987, 202], [101, 2012, 3832, 4027, 3454, 2020, 3454, 9987, 202] ]) At the start, we should find occurrences of the first 202. We can find all occurrences and then choose the first one: index_202 = (source_t == 202).nonzero(as_tuple=False).numpy() rows_for_replace = list() columns_to_replace = list() elements = source_t.shape[1] current_ind = 0 while current_ind < len(index_202)-1: current = index_202[current_ind] element_ind = current[1] + 1 rows_for_replace.extend([current[0]]*(elements-element_ind)) while element_ind < elements: columns_to_replace.append(element_ind) element_ind += 1 if current[0] == index_202[current_ind+1][0]: current_ind += 1 current_ind += 1 After this operation, we have all our indexes which we should replace with zeros. 4 elements in the first row, 5 in the second, 3 in the fourth row and nothing in the third and fifth. rows_for_replace, columns_to_replace ([0, 0, 0, 0, 1, 1, 1, 1, 1, 3, 3, 3], [5, 5, 5, 5, 4, 4, 4, 4, 4, 6, 6, 6]) Then we just copy our source tensor and set new values in place: destination_t = copy.deepcopy(source_t) destination_t[rows_for_replace, columns_to_replace] = 0 Summary: source_t tensor([[ 101, 2001, 2034, 1045, 202, 3454, 3453, 1234, 202], [ 101, 1999, 2808, 202, 17658, 3454, 202, 0, 0], [ 101, 2012, 3832, 4027, 3454, 2020, 3454, 9987, 2020], [ 101, 2012, 3832, 4027, 3454, 202, 3454, 9987, 202], [ 101, 2012, 3832, 4027, 3454, 2020, 3454, 9987, 202]]) destination_t tensor([[ 101, 2001, 2034, 1045, 202, 0, 0, 0, 0], [ 101, 1999, 2808, 202, 0, 0, 0, 0, 0], [ 101, 2012, 3832, 4027, 3454, 2020, 3454, 9987, 2020], [ 101, 2012, 3832, 4027, 3454, 202, 0, 0, 0], [ 101, 2012, 3832, 4027, 3454, 2020, 3454, 9987, 202]])
https://stackoverflow.com/questions/70402412/
Pytorch Binary Classification RNN Model not Learning
I'm working on a binary classification task with Pytorch and my model is failing to learn, I can't figure out if it is a problem with the model or with the data. Here is my model: from torch import nn class RNN(nn.Module): def __init__(self, input_dim): super(RNN, self).__init__() self.rnn = nn.RNN(input_size=input_dim, hidden_size=64, num_layers=2, batch_first=True, bidirectional=True) self.norm = nn.BatchNorm1d(128) self.rnn2 = nn.RNN(input_size=128, hidden_size=64, num_layers=2, batch_first=True, bidirectional=False) self.drop = nn.Dropout(0.5) self.fc7 = nn.Linear(64, 2) self.sigmoid2 = nn.Softmax(dim=2) def forward(self, x): out, h_n = self.rnn(x) out = out.permute(0, 2, 1) out = self.norm(out) out = out.permute(0, 2, 1) out, h_n = self.rnn2(out) out = self.drop(out) out = self.fc7(out) out = self.sigmoid2(out) return out.squeeze() The model consists in two RNN layers, with a BatchNorm in between, then a Dropout and the last layer, I use Softmax function with two classes instead of Sigmoid for evaluation purposes. Then I create and train the model: model = RNN(2476) optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) loss_function = nn.CrossEntropyLoss() lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1) model.train() EPOCHS = 25 BATCH_SIZE = 64 epoch_loss = [] for ii in range(EPOCHS): for i in range(1, X_train.size()[0]//BATCH_SIZE+1): x_train = X_train[(i-1)*BATCH_SIZE:i*BATCH_SIZE] labels = y_train[(i-1)*BATCH_SIZE:i*BATCH_SIZE] optimizer.zero_grad() y_pred = model(x_train) y_pred = y_pred.round() single_loss = loss_function(y_pred, labels.long().squeeze()) single_loss.backward() optimizer.step() lr_scheduler.step() print(f"\rBatch {i}/{X_train.size()[0]//BATCH_SIZE+1} Trained: {i*BATCH_SIZE}/{X_train.size()[0]} Loss: {single_loss.item():10.8f} Step: {lr_scheduler.get_lr()}", end="") epoch_loss.append(single_loss.item()) print(f'\nepoch: {ii:3} loss: {single_loss.item():10.8f}') This is the output when training the model: Batch 353/354 Trained: 22592/22644 Loss: 0.86013699 Step: [1.0000000000000007e-21] epoch: 0 loss: 0.86013699 Batch 353/354 Trained: 22592/22644 Loss: 0.81326193 Step: [1.0000000000000014e-33] epoch: 1 loss: 0.81326193 Batch 353/354 Trained: 22592/22644 Loss: 0.87576205 Step: [1.0000000000000022e-45] epoch: 2 loss: 0.87576205 Batch 353/354 Trained: 22592/22644 Loss: 0.92263710 Step: [1.0000000000000026e-57] epoch: 3 loss: 0.92263710 Batch 353/354 Trained: 22592/22644 Loss: 0.90701210 Step: [1.0000000000000034e-68] epoch: 4 loss: 0.90701210 Batch 353/354 Trained: 22592/22644 Loss: 0.92263699 Step: [1.0000000000000039e-80] epoch: 5 loss: 0.92263699 Batch 353/354 Trained: 22592/22644 Loss: 0.82888693 Step: [1.0000000000000044e-92] epoch: 6 loss: 0.82888693 Batch 353/354 Trained: 22592/22644 Loss: 0.81326193 Step: [1.000000000000005e-104]] epoch: 7 loss: 0.81326193 Batch 353/354 Trained: 22592/22644 Loss: 0.87576205 Step: [1.0000000000000055e-115] epoch: 8 loss: 0.87576205 Batch 353/354 Trained: 22592/22644 Loss: 0.82888693 Step: [1.0000000000000062e-127] epoch: 9 loss: 0.82888693 Batch 353/354 Trained: 22592/22644 Loss: 0.81326199 Step: [1.0000000000000067e-139] epoch: 10 loss: 0.81326199 Batch 353/354 Trained: 22592/22644 Loss: 0.82888693 Step: [1.0000000000000072e-151] epoch: 11 loss: 0.82888693 Batch 353/354 Trained: 22592/22644 Loss: 0.89138699 Step: [1.0000000000000076e-162] epoch: 12 loss: 0.89138699 Batch 353/354 Trained: 22592/22644 Loss: 0.82888699 Step: [1.000000000000008e-174]] epoch: 13 loss: 0.82888699 Batch 353/354 
Trained: 22592/22644 Loss: 0.82888687 Step: [1.0000000000000089e-186] epoch: 14 loss: 0.82888687 Batch 353/354 Trained: 22592/22644 Loss: 0.82888693 Step: [1.0000000000000096e-198] epoch: 15 loss: 0.82888693 Batch 353/354 Trained: 22592/22644 Loss: 0.84451199 Step: [1.0000000000000103e-210] epoch: 16 loss: 0.84451199 Batch 353/354 Trained: 22592/22644 Loss: 0.96951205 Step: [1.0000000000000111e-221] epoch: 17 loss: 0.96951205 Batch 353/354 Trained: 22592/22644 Loss: 0.87576205 Step: [1.0000000000000117e-233] epoch: 18 loss: 0.87576205 Batch 353/354 Trained: 22592/22644 Loss: 0.89138705 Step: [1.0000000000000125e-245] epoch: 19 loss: 0.89138705 Batch 353/354 Trained: 22592/22644 Loss: 0.79763699 Step: [1.0000000000000133e-257] epoch: 20 loss: 0.79763699 Batch 353/354 Trained: 22592/22644 Loss: 0.84451199 Step: [1.0000000000000138e-268] epoch: 21 loss: 0.84451199 Batch 353/354 Trained: 22592/22644 Loss: 0.84451205 Step: [1.0000000000000146e-280] epoch: 22 loss: 0.84451205 Batch 353/354 Trained: 22592/22644 Loss: 0.79763693 Step: [1.0000000000000153e-292] epoch: 23 loss: 0.79763693 Batch 353/354 Trained: 22592/22644 Loss: 0.87576205 Step: [1.000000000000016e-304]] epoch: 24 loss: 0.87576205 And this is the loss per epoch: For the data, each of the features in the input data have a dimension of (2474,), and the targets have 1 dimension (either [1] or [0]) , then I add the sequence length dimension (1) to the input data for the RNN layers : X_train.size(), X_test.size(), y_train.size(), y_test.size() (torch.Size([22644, 1, 2474]), torch.Size([5661, 1, 2474]), torch.Size([22644, 1]), torch.Size([5661, 1])) Distribution of the target classes: I can't figure out why my model is not learning, the classes are balanced and I haven't notified anything wrong with the data. Any suggestions?
This is not a direct solution to your problem, but what was the process that led to this architecture? I've found it helpful to build up complexity iteratively if only to make identifying issues more trivial (what did I add just before the issue arose?). To save time on constructing your RNN iteratively, you can try single-batch training by which you construct a network that can overfit a single training batch. If your network can overfit a single training batch, it should be complex enough to learn the features in the training data. Once you have an architecture that can easily overfit a single training batch, you can then train with the entire training set and explore additional strategies to account for overfitting through regularization. Your model doesn't seem overly complex but this may mean starting with a single rnn layer and a single linear layer to see if your loss will budge on a single batch.
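For concreteness, a minimal single-batch overfitting check might look like the sketch below; it reuses the RNN, X_train and y_train names from the question as assumptions about your setup and is not part of the original answer:
model = RNN(X_train.shape[-1])   # input feature size
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# take one small fixed batch and train on it repeatedly
x_small = X_train[:64]
y_small = y_train[:64].long().squeeze()

model.train()
for step in range(500):
    optimizer.zero_grad()
    out = model(x_small)            # pass the raw model outputs straight to the loss
    loss = criterion(out, y_small)
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(step, loss.item())    # should drop towards ~0 if the model can overfit one batch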
https://stackoverflow.com/questions/70405429/
How can I prepare and split data set for ImageNet with vgg16
''' I am trying to classify image using PyTorch but I did manage to stipulate my our data set to use it with vgg16 architecture ''' # ADD YOUR CODE HERE def evaluate(): running_loss = 0.0 # counter = 0 # Tell torch not to calculate gradients with torch.no_grad(): for i, data in enumerate(testloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # Move to device inputs = inputs.to(device = device) labels = labels.to(device = device) # Forward pass outputs = model(inputs) # Calculate Loss loss = criterion(outputs, labels) # Add loss to the validation set's running loss running_loss += loss.item() # Since our model find the real percentages by the following val_loss = running_loss / len(testloader) print('val loss: %.3f' % (val_loss)) # Get the top class of the output return val_loss ## 1. Dataset Load the dataset you were given. Images should be stored in an X variable and your labels in a Y variable. Split your dataset into train, validation and test set and pre-process your data for training. def eval_acc(train=False): correct = 0 total = 0 # since we're not training, we don't need to calculate the #gradients #for our outputs with torch.no_grad(): loader = trainloader if train else testloader for data in loader: images, labels = data images = images.to(device = device) labels = labels.to(device = device) # calculate outputs by running images through the network outputs = model(images) # the class with the highest energy is what we choose as #prediction _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() # Print out the information print('Accuracy of the network on the 10000 %s images: %d %%' % ('train' if train else 'test', 100 * correct / total))
You are missing a return statement in your forward() method. def forward(self,x): x = F.relu(self.fc1(x)) x = self.fc2(x) return x # <--- THIS
https://stackoverflow.com/questions/70405787/
How to batch a nested list of graphs in pytorch geometric
I am currently training a model which is a mix of graph neural networks and an LSTM. However, that means for each of my training samples, I need to pass in a list of graphs. The current batch class in torch_geometric supports batching with torch_geometric.data.Batch.from_data_list(), but this only allows one graph for each data point. How else can I go about batching the graphs?
Use diagonal batching: https://pytorch-geometric.readthedocs.io/en/latest/notes/batching.html Simply, you will put all the graphs as subgraphs into one big graph. All the subgraphs will be isolated. See the example from TUDataset: https://colab.research.google.com/drive/1I8a0DfQ3fI7Njc62__mVXUlcAleUclnb?usp=sharing
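As an illustration (a sketch only; the variable names here are made up), you can flatten the per-sample lists of graphs into one list, batch them into a single big disconnected graph, and keep track of which graphs belong to which sample:
from torch_geometric.data import Batch

# each sample is a list of torch_geometric.data.Data graphs (hypothetical names)
samples = [sample_a_graphs, sample_b_graphs]
flat = [g for sample in samples for g in sample]        # all graphs, kept as isolated subgraphs
num_graphs_per_sample = [len(sample) for sample in samples]

big_batch = Batch.from_data_list(flat)                  # one big graph of isolated subgraphs
# big_batch.batch maps every node to its graph in `flat`; after the GNN + global pooling you
# get one embedding per graph, which you can regroup per sample (e.g. with torch.split and
# num_graphs_per_sample) before feeding the resulting sequences to the LSTM.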
https://stackoverflow.com/questions/70419992/
Linear regression using Pytorch
I have classification problem. I am using Pytorch, My input is sequence of length 341 and output one of three classes {0,1,2}, I want to train linear regression model using pytorch, I created the following class but during the training, the loss values start to have numbers then inf then NAN. I do not know how to fix that . Also I tried to initialize the weights for linear model but it is the same thing. Any suggestions. class regression(nn.Module): def __init__(self, input_dim): super().__init__() self.input_dim = input_dim # One layer self.linear = nn.Linear(input_dim, 1) def forward(self, x): y_pred = self.linear(x) return y_pred criterion = torch.nn.MSELoss() def fit(model, data_loader, optim, epochs): for epoch in range(epochs): for i, (X, y) in enumerate(data_loader): X = X.float() y = y.unsqueeze(1).float() X = Variable(X, requires_grad=True) y = Variable(y, requires_grad=True) # Make a prediction for the input X pred = model(X) #loss = (y-pred).pow(2).mean() loss = criterion(y, pred) optim.zero_grad() loss.backward() optim.step() print(loss) print(type(loss)) # Give some feedback after each 5th pass through the data if epoch % 5 == 0: print("Epoch", epoch, f"loss: {loss}") return None regnet = regression(input_dim=341) optim = SGD(regnet.parameters(), lr=0.01) fit(regnet, data_loader, optim=optim, epochs=5) pred = regnet(torch.Tensor(test_set.data_info).float()) pred = pred.detach().numpy()
I would additionally suggest to replace MSE with CrossEntropy Loss as it is better suited for multi-class classificiation problems. import random import torch from torch import nn, optim from matplotlib import pyplot as plt # Generate random dataset with your shape to test # Replace this with your own dataset data = [] for label in [0, 1, 2]: for i in range(1000): data.append((torch.rand(341), label)) # train test split random.shuffle(data) train, val = data[:1500], data[1500:] def run_gradient_descent(model, data_train, data_val, batch_size=64, learning_rate=0.01, weight_decay=0, num_epochs=10): criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=learning_rate, weight_decay=weight_decay) iters, losses = [], [] iters_sub, train_acc, val_acc = [], [] ,[] train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size, shuffle=True) # training n = 0 # the number of iterations for epoch in range(num_epochs): for xs, ts in iter(train_loader): if len(ts) != batch_size: continue zs = model(xs) loss = criterion(zs, ts) # compute the total loss loss.backward() # compute updates for each parameter optimizer.step() # make the updates for each parameter optimizer.zero_grad() # a clean up step for PyTorch # save the current training information iters.append(n) losses.append(float(loss)/batch_size) # compute *average* loss if n % 10 == 0: iters_sub.append(n) train_acc.append(get_accuracy(model, data_train)) val_acc.append(get_accuracy(model, data_val)) # increment the iteration number n += 1 # plotting plt.title("Training Curve (batch_size={}, lr={})".format(batch_size, learning_rate)) plt.plot(iters, losses, label="Train") plt.xlabel("Iterations") plt.ylabel("Loss") plt.show() plt.title("Training Curve (batch_size={}, lr={})".format(batch_size, learning_rate)) plt.plot(iters_sub, train_acc, label="Train") plt.plot(iters_sub, val_acc, label="Validation") plt.xlabel("Iterations") plt.ylabel("Accuracy") plt.legend(loc='best') plt.show() return model def get_accuracy(model, data): loader = torch.utils.data.DataLoader(data, batch_size=500) correct, total = 0, 0 for xs, ts in loader: zs = model(xs) pred = zs.max(1, keepdim=True)[1] # get the index of the max logit correct += pred.eq(ts.view_as(pred)).sum().item() total += int(ts.shape[0]) return correct / total class MyRegression(nn.Module): def __init__(self, input_dim, output_dim): super(MyRegression, self).__init__() # One layer self.linear = nn.Linear(input_dim, output_dim) def forward(self, x): return self.linear(x) model = MyRegression(341, 3) run_gradient_descent(model, train, val, batch_size=64, learning_rate=0.01, num_epochs=10)
https://stackoverflow.com/questions/70420222/
Convert boundingPoly to YOLO format
Yolov5 doesn't support segmentation labels and I need to convert it into the correct format. How would you convert this to yolo format? "boundingPoly": { "normalizedVertices": [{ "x": 0.026169369 }, { "x": 0.99525446 }, { "x": 0.99525446, "y": 0.688811 }, { "x": 0.026169369, "y": 0.688811 }] } The yolo format looks like this 0 0.588196 0.474138 0.823607 0.441645 <object-class> <x> <y> <width> <height>
After our back and forth in the comments I have enough info to answer your question. This is output from the Google Vision API. The normalizedVertices are similar to the YOLO format, because they are "normalized" meaning the coordinates are scaled between 0 and 1 as opposed to being pixels from 1 to n. Still, you need to do some transformation to put into the YOLO format. In the YOLO format, the X and Y values in the 2nd and 3rd columns refer to the center of the bounding box, as opposed to one of the corners. Here is a code snipped that will sample at https://ghostbin.com/hOoaz/raw into the follow string in YOLO format '0 0.5080664305 0.5624289849999999 0.9786587390000001 0.56914843' #Sample annotation output json_annotation = """ [ { "mid": "/m/01bjv", "name": "Bus", "score": 0.9459266, "boundingPoly": { "normalizedVertices": [ { "x": 0.018737061, "y": 0.27785477 }, { "x": 0.9973958, "y": 0.27785477 }, { "x": 0.9973958, "y": 0.8470032 }, { "x": 0.018737061, "y": 0.8470032 } ] } } ] """ import json json_object = json.loads(json_annotation, strict=False) #Map all class names to class id class_dict = {"Bus": 0} #Get class id for this record class_id = class_dict[json_object[0]["name"]] #Get the max and min values from segmented polygon points normalizedVertices = json_object[0]["boundingPoly"]["normalizedVertices"] max_x = max([v['x'] for v in normalizedVertices]) max_y = max([v['y'] for v in normalizedVertices]) min_x = min([v['x'] for v in normalizedVertices]) min_y = min([v['y'] for v in normalizedVertices]) width = max_x - min_x height = max_y - min_y center_x = min_x + (width/2) center_y = min_y + (height/2) yolo_row = str(f"{class_id} {center_x} {center_y} {width} {height}") print(yolo_row) If you are trying to train a YOLO model there are a few more steps you will need to do: You need to setup the images and annotations in a particular folder structure. But this should help you convert your annotations.
https://stackoverflow.com/questions/70420807/
Concat two tensors with different dimensions
I have two tensors a and b which are of different dimensions. a is of shape [100,100] and b is of the shape [100,3,10]. I want to concatenate these two tensors. For example: a = torch.randn(100,100) tensor([[ 1.3236, 2.4250, 1.1547, ..., -0.7024, 1.0758, 0.2841], [ 1.6699, -1.2751, -0.0120, ..., -0.2290, 0.9522, -0.4066], [-0.3429, -0.5260, -0.7748, ..., -0.5235, -1.8952, 1.2944], ..., [-1.3465, 1.2641, 1.6785, ..., 0.5144, 1.7024, -1.0046], [-0.7652, -1.2940, -0.6964, ..., 0.4661, -0.3998, -1.2428], [-0.4720, -1.0981, -2.3715, ..., 1.6423, 0.0560, 1.0676]]) The tensor b is as follows: tensor([[[ 0.4747, -1.9529, -0.0448, ..., -0.9694, 0.8009, -0.0610], [ 0.5160, 0.0810, 0.1037, ..., -1.7519, -0.3439, 1.2651], [-0.5975, -0.2000, -1.6451, ..., 1.3082, -0.4023, -0.3105]], ..., [[ 0.4747, -1.9529, -0.0448, ..., -0.9694, 0.8009, -0.0610], [ 0.1939, 1.0365, -0.0927, ..., -2.4948, -0.2278, -0.2390], [-0.5975, -0.2000, -1.6451, ..., 1.3082, -0.4023, -0.3105]]], dtype=torch.float64, grad_fn=<CopyBackwards>) I want to concatenate such that the first row in tensor a of size [100] is concatenated with the first row in tensor b which is of size [3,10]. This should be applicable to all rows in both tensors. That is, in simple words, considering just the first row in a and b, I want to get an output with size [100,130] as follows: [ 1.3236, 2.4250, 1.1547, ..., -0.7024, 1.0758, 0.2841, 0.4747, -1.9529, -0.0448, ..., -0.9694, 0.8009, -0.0610, 0.5160, 0.0810, 0.1037, ..., -1.7519, -0.3439, 1.2651, -0.5975, -0.2000, -1.6451, ..., 1.3082, -0.4023, -0.3105] In order to do this, I performed unsqueezed to tensor a to get the two tensors in the same dimensions as follows. a = a.unsqueeze(1) When I perform torch.cat([a,b], I still get an error. Can somebody help me in solving this? Thanks in advance.
Reshape the b tensor accordingly and then merge it with a using torch.cat along dim 1:
torch.cat((a, b.reshape(100, -1)), dim=1)
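A quick check of the resulting shape (a sketch using random tensors with the shapes from the question):
import torch
a = torch.randn(100, 100)
b = torch.randn(100, 3, 10)
c = torch.cat((a, b.reshape(100, -1)), dim=1)   # b is flattened to [100, 30] first
print(c.shape)                                  # torch.Size([100, 130])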
https://stackoverflow.com/questions/70420882/
Numpy arrays comparison
I have 2 PyTorch tensors (single column) of 40 elements. To compare them element by element I converted them to numpy arrays with a single column of 40 elements. I want to compare both arrays element by element and, if the value is greater than 0.5 in one array, make it 1 else 0, and then convert the result back to a PyTorch tensor. How do I do that?
Maybe this helps: import numpy as np a = np.array([1, 2, 3, 4, 5]) b = np.array([1.1, 2.6, 3.3, 4.6, 5.5]) (np.abs(a-b)>0.5).astype(int) >>> array([0, 1, 0, 1, 0])
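Since the question also asks for a torch tensor at the end, one way (just a sketch) is to convert the numpy result back with torch.from_numpy, or to skip numpy entirely and stay in torch:
import numpy as np
import torch

a = np.array([1, 2, 3, 4, 5])
b = np.array([1.1, 2.6, 3.3, 4.6, 5.5])
result = torch.from_numpy((np.abs(a - b) > 0.5).astype(int))

# or, directly on the original tensors (t_a and t_b are placeholder names):
# result = (torch.abs(t_a - t_b) > 0.5).int()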
https://stackoverflow.com/questions/70421593/
How to transform output of NN, while still being able to train?
I have a neural network which outputs output. I want to transform output before the loss and backpropogation happen. Here is my general code: with torch.set_grad_enabled(training): outputs = net(x_batch[:, 0], x_batch[:, 1]) # the prediction of the NN # My issue is here: outputs = transform_torch(outputs) loss = my_loss(outputs, y_batch) if training: scheduler.step() loss.backward() optimizer.step() Following the advice in How to transform output of neural network and still train? , I have a transformation function which I put my output through: def transform_torch(predictions): new_tensor = [] for i in range(int(len(predictions))): arr = predictions[i] a = arr.clone().detach() # My transformation, which results in a positive first element, and the other elements represent decrements of the first positive element. b = torch.negative(a) b[0] = abs(b[0]) new_tensor.append(torch.cumsum(b, dim = 0)) # new_tensor[i].requires_grad = True new_tensor = torch.stack(new_tensor, 0) return new_tensor Note: In addition to clone().detach(), I also tried the methods described in Pytorch preferred way to copy a tensor, to similar result. My problem is that no training actually happens with this tensor that is tranformed. If I try to modify the tensor in-place (e.g. directly modify arr), then Torch complains that I can't modify a tensor in-place with a gradient attached to it. Any suggestions?
Calling detach on your predictions stops gradient propagation to your model. Nothing you do after that can change your parameters. How about modifying your code to avoid this: def transform_torch(predictions): b = torch.cat([predictions[:, :1, ...].abs(), -predictions[:, 1:, ...]], dim=1) new_tensor = torch.cumsum(b, dim=1) return new_tensor A little test you can run, to verify that gradients do propagate through this transformation is: # start with some random tensor representing the input predictions # make sure it requires_grad pred = torch.rand((4, 5, 2, 3)).requires_grad_(True) # transform it tpred = transform_torch(pred) # make up some "default" loss function and back-prop tpred.mean().backward() # check to see all gradients of the original prediction: pred.grad # as you can see, all gradients are non-zero Out[]: tensor([[[[ 0.0417, 0.0417, 0.0417], [ 0.0417, 0.0417, 0.0417]], [[-0.0333, -0.0333, -0.0333], [-0.0333, -0.0333, -0.0333]], [[-0.0250, -0.0250, -0.0250], [-0.0250, -0.0250, -0.0250]], [[-0.0167, -0.0167, -0.0167], [-0.0167, -0.0167, -0.0167]], [[-0.0083, -0.0083, -0.0083], [-0.0083, -0.0083, -0.0083]]], [[[ 0.0417, 0.0417, 0.0417], [ 0.0417, 0.0417, 0.0417]], [[-0.0333, -0.0333, -0.0333], [-0.0333, -0.0333, -0.0333]], [[-0.0250, -0.0250, -0.0250], [-0.0250, -0.0250, -0.0250]], [[-0.0167, -0.0167, -0.0167], [-0.0167, -0.0167, -0.0167]], [[-0.0083, -0.0083, -0.0083], [-0.0083, -0.0083, -0.0083]]], [[[ 0.0417, 0.0417, 0.0417], [ 0.0417, 0.0417, 0.0417]], [[-0.0333, -0.0333, -0.0333], [-0.0333, -0.0333, -0.0333]], [[-0.0250, -0.0250, -0.0250], [-0.0250, -0.0250, -0.0250]], [[-0.0167, -0.0167, -0.0167], [-0.0167, -0.0167, -0.0167]], [[-0.0083, -0.0083, -0.0083], [-0.0083, -0.0083, -0.0083]]], [[[ 0.0417, 0.0417, 0.0417], [ 0.0417, 0.0417, 0.0417]], [[-0.0333, -0.0333, -0.0333], [-0.0333, -0.0333, -0.0333]], [[-0.0250, -0.0250, -0.0250], [-0.0250, -0.0250, -0.0250]], [[-0.0167, -0.0167, -0.0167], [-0.0167, -0.0167, -0.0167]], [[-0.0083, -0.0083, -0.0083], [-0.0083, -0.0083, -0.0083]]]]) If you'll try this little test with your original code you'll either get an error that you are trying to propagate through tensors that do not require_grad, or you'll get no grads for the input pred.
https://stackoverflow.com/questions/70426391/
Why does the loss decrease while the accuracy doesn't increase? PyTorch
I'm creating a CNN in Pytorch and I'm having some problem with the training function, I believe. For each epoch, the loss decreases. But the accuracy remains the same, it doesn't change. The output of the training function is this: Epoch: 1 correct: 234, N_test: 468 ------> loss: 58.2041027, accuracy_val: %50.0 Epoch: 2 correct: 234, N_test: 468 ------> loss: 51.47981386, accuracy_val: %50.0 Epoch: 3 correct: 234, N_test: 468 ------> loss: 51.57150275, accuracy_val: %50.0 Epoch: 4 correct: 234, N_test: 468 ------> loss: 39.14232715, accuracy_val: %50.0 Epoch: 5 correct: 234, N_test: 468 ------> loss: 32.23730827, accuracy_val: %50.0 I know that although they are correlated, loss and accuracy have their complications, but I believe there may be a problem with the code and I am not able to determine what. Here's the neural network: class CNN(nn.Module): # Contructor def __init__(self): super(CNN, self).__init__() # Conv1 self.cnn1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=5, stride=1, padding=0) self.conv1_bn = nn.BatchNorm2d(64) self.maxpool1=nn.MaxPool2d(kernel_size=2, stride=2) # Conv2 self.cnn2 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=5,stride=1, padding=0) self.conv2_bn = nn.BatchNorm2d(64) self.maxpool2=nn.MaxPool2d(kernel_size=2, stride=2) # Conv3 self.cnn3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=5,stride=1, padding=0) self.conv3_bn = nn.BatchNorm2d(128) self.maxpool3=nn.MaxPool2d(kernel_size=2, stride=2) # FCL 1 self.fc1 = nn.Linear(in_features=128 * 27 * 27, out_features=500) self.bn_fc1 = nn.BatchNorm1d(500) # FCL 2 self.fc2 = nn.Linear(in_features=500, out_features=500) self.bn_fc2 = nn.BatchNorm1d(500) # FCL3 self.fc3 = nn.Linear(in_features=500, out_features=1) # Prediction def forward(self, x): # conv1 x = self.cnn1(x) x = self.conv1_bn(x) x = torch.relu(x) x = self.maxpool1(x) # conv2 x = self.cnn2(x) x = self.conv2_bn(x) x = torch.relu(x) x = self.maxpool2(x) # conv3 x = self.cnn3(x) x = self.conv3_bn(x) x = torch.relu(x) x = self.maxpool3(x) # Fcl1 x = x.view(x.size(0), -1) x = self.fc1(x) x = self.bn_fc1(x) x = torch.relu(x) # Fcl2 x = self.fc2(x) x = self.bn_fc2(x) x = torch.relu(x) # final fcl x = self.fc3(x) x = torch.sigmoid(x) return x The training function: def train_model(model,train_loader,test_loader,optimizer,n_epochs=5): #global variable N_test=len(dataset_val) accuracy_list=[] loss_list=[] for epoch in range(n_epochs): cost = 0 model.train() print(f"Epoch: {epoch + 1}") for x, y in train_loader: x, y = x.to(device), y.to(device) optimizer.zero_grad() z = model(x) y = y.unsqueeze(-1) y = y.float() loss = criterion(z, y) loss.backward() optimizer.step() cost+=loss.item() correct=0 model.eval() #perform a prediction on the validation data for x_test, y_test in test_loader: x_test, y_test = x_test.to(device), y_test.to(device) z = model(x_test) _, yhat = torch.max(z.data, 1) correct += (yhat == y_test).sum().item() accuracy = correct / N_test accuracy_list.append(accuracy) loss_list.append(cost) print(f"------> loss: {round(cost, 8)}, accuracy_val: %{accuracy * 100}") return accuracy_list, loss_lis The plot is this: Plot with accuracy and loss
Your predictions are all going to be the same class, since the model has a single output and you're taking the max over the 2nd dimension, which has size 1 (so the argmax is always index 0):
_, yhat = torch.max(z.data, 1)
correct += (yhat == y_test).sum().item()
To do binary classification you need to pick a threshold and then threshold your data into two classes, or have 2 outputs (probably easier in this case).
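For illustration, a minimal sketch of the thresholding option, given the single sigmoid output used in the question (not part of the original answer):
z = model(x_test)                     # shape (batch, 1), values in (0, 1) after sigmoid
yhat = (z > 0.5).float().squeeze(1)   # threshold at 0.5 -> predicted class 0 or 1
correct += (yhat == y_test).sum().item()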
https://stackoverflow.com/questions/70428592/
Pytorch DataLoader doesn't return batched data
My dataset is composed of image patches obtained from the original image (face patches and random outside of face patches). Patches are stored in a folder with a name of an original image from which patches originate. I created my own DataSet and DataLoader but when I iterate over the dataset data is not returned in batches. A batch of size 1 should include an array of tuples of patches and a label, so with the increased batch size, we should get an array of arrays of tuples with labels. But DataLoader returns only one array of tuples no matter the batch size. My dataset: import os import cv2 as cv import PIL.Image as Image import torchvision.transforms as Transforms from torch.utils.data import dataset class PatchDataset(dataset.Dataset): def __init__(self, img_folder, n_patches): self.img_folder = img_folder self.n_patches = n_patches self.img_names = sorted(os.listdir(img_folder)) self.transform = Transforms.Compose([ Transforms.Resize((50, 50)), Transforms.ToTensor() ]) def __len__(self): return len(self.img_names) def __getitem__(self, idx): img_name = self.img_names[idx] patch_dir = os.path.join(self.img_folder, img_name) patches = [] for i in range(self.n_patches): face_patch = cv.imread(os.path.join(patch_dir, f'{str(i)}_face.png')) face_patch = cv.cvtColor(face_patch, cv.COLOR_BGR2RGB) face_patch = Image.fromarray(face_patch) face_patch = self.transform(face_patch) patch = cv.imread(os.path.join(patch_dir, f'{str(i)}_patch.png')) patch = cv.cvtColor(patch, cv.COLOR_BGR2RGB) patch = Image.fromarray(patch) patch = self.transform(patch) patches.append((face_patch, patch)) return patches, int(img_name.split('-')[0]) Then I use it as such: X = PatchDataset(PATCHES_DIR, 9) train_dl = dataloader.DataLoader( X, batch_size=10, drop_last=True ) for batch_X, batch_Y in train_dl: print(len(batch_X)) print(len(batch_Y)) In this provided case the batch size is 10, so printing of the batch_Y returns the correct number (10). But the printing of the batch_X returns 9 which is number of patch pairs - returns only one sample from dataset instead of batch of 10 samples where each of them is length of 9.
You should return a tensor with one more dimension instead of a list of tensors from the __getitem__ call. You can use torch.stack(patches); note that torch.stack only accepts tensors, so each (face_patch, patch) pair also has to be stacked into a single tensor before it is appended.
def __getitem__(self, idx):
    img_name = self.img_names[idx]
    patch_dir = os.path.join(self.img_folder, img_name)

    patches = []
    for i in range(self.n_patches):
        face_patch = cv.imread(os.path.join(patch_dir, f'{str(i)}_face.png'))
        face_patch = cv.cvtColor(face_patch, cv.COLOR_BGR2RGB)
        face_patch = Image.fromarray(face_patch)
        face_patch = self.transform(face_patch)

        patch = cv.imread(os.path.join(patch_dir, f'{str(i)}_patch.png'))
        patch = cv.cvtColor(patch, cv.COLOR_BGR2RGB)
        patch = Image.fromarray(patch)
        patch = self.transform(patch)

        # stack the pair into one tensor so the outer torch.stack gets tensors, not tuples
        patches.append(torch.stack((face_patch, patch)))

    return torch.stack(patches), int(img_name.split('-')[0])
https://stackoverflow.com/questions/70442533/
How to use conv2d in this case
I want to create an NN layer such that: for the input of size 100 assume every 5 samples create "block" the layer should compute let's say 3 values for every block so the input/output sizes of this layer should be: 100 -> 20*3 every block of size 5 (and only this block) is fully connected to the result block of size 3 If I understand it correctly I can use Conv2d for this problem. But I'm not sure how to correctly choose conv2d parameters. Is Conv2d suitable for this task? If so, what are the correct parameters? Is that input channels = 100 output channels = 20*3 kernel = (5,1) ?
You can use either Conv2D or Conv1D. With the data shaped like batch x 100 x n_features you can use Conv1D with this setup: Input channels: n_features Output channels: 3 * output_features kernel: 5 strides: 5 Thereby, the kernel is applied to 5 samples and generates 3 outputs. The values for n_features and output_features can be anything you like and might as well be 1. Setting the strides to 5 results in a non-overlapping convolution so that each block uniquely contributes to one output.
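A minimal sketch of that setup (n_features and output_features are placeholders; note that nn.Conv1d expects the channel dimension before the length dimension, so the input is permuted first):
import torch
import torch.nn as nn

n_features, output_features, batch_size = 1, 1, 8
layer = nn.Conv1d(in_channels=n_features, out_channels=3 * output_features,
                  kernel_size=5, stride=5)

x = torch.randn(batch_size, 100, n_features)  # batch x 100 x n_features
x = x.permute(0, 2, 1)                        # -> (batch, channels, length) for Conv1d
y = layer(x)
print(y.shape)                                # torch.Size([8, 3, 20]): 3 values for each of the 20 blocks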
https://stackoverflow.com/questions/70443657/
RuntimeError: expected scalar type Float but found Long neural network
I know there are some questions that are like this question, but when I follow them it seems to lead me down a rabbit hole. As if The problem I just fixed causes another problem. Here are 2 of the rabbit hole solutions I have kept because they have seemed to fix their problems. I doubt they would be of any help but here they are just in case. one: batch_X = batch_X.to(device=device, dtype=torch.int64) batch_y = batch_y.to(device=device, dtype=torch.int64) two: x = x.view(x.size(0), -1) This is the error I'm getting. Traceback (most recent call last): File "c:/Users/14055/Desktop/Research/new 1.0.py", line 93, in <module> training() File "c:/Users/14055/Desktop/Research/new 1.0.py", line 63, in training output = model(batch_X) # passed input data here!!!!!!!!!!!!!!!!!!!!!!!!!! File "C:\Users\14055\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "c:/Users/14055/Desktop/Research/new 1.0.py", line 31, in forward x = F.relu(self.fc1(x)) File "C:\Users\14055\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\14055\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\linear.py", line 96, in forward return F.linear(input, self.weight, self.bias) File "C:\Users\14055\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py", line 1847, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: expected scalar type Float but found Long My code is below import torch.cuda import torch import numpy as np import sys import torch.nn as nn import torch.nn.functional as F from torchsummary import summary #------------------------------------------------------------------------- torch.cuda.set_device(0) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') np.set_printoptions(threshold=sys.maxsize) #------------------------------------------------------------------------- input_data = torch.Tensor(np.load("inputData.npy", allow_pickle=True)) predict_data = torch.Tensor(np.load("predict.npy", allow_pickle=True)) input_data = input_data.type(torch.FloatTensor) predict_data = predict_data.type(torch.FloatTensor) print(type(input_data)) class NeuralNet(nn.Module): def __init__(self, gpu = True): super(NeuralNet, self ).__init__() self.fc1 = nn.Linear(248, 750).to(device) self.fc2 = nn.Linear(750, 10).to(device) def forward(self, x): x = x.view(x.size(0), -1) x = F.relu(self.fc1(x)) x = self.fc2(x).to(device) return x.to(device) def training(): model.to(device) training.criterion = nn.CrossEntropyLoss() optimizer= torch.optim.SGD(model.parameters(), lr=0.003, weight_decay= 0.00005, momentum = .9, nesterov = True) n_epochs = 20000 a = np.float64([9,9,9,9,9]) #antioverfit testing_loss = 0.0 BATCH_SIZE = 10 EPOCHS = 3 for epoch in range(EPOCHS): if(testing_loss <= a[4]): # part of anti overfit train_loss = 0.0 testing_loss = 0.0 model.train() for i in (range(0, len(input_data), BATCH_SIZE)): batch_X = input_data[i:i+BATCH_SIZE] batch_y = predict_data[i:i+BATCH_SIZE] optimizer.zero_grad() batch_X = batch_X.to(device=device, dtype=torch.int64) #gpu # input data here!!!!!!!!!!!!!!!!!!!!!!!!!! batch_y = batch_y.to(device=device, dtype=torch.int64) #gpu # larget data here!!!!!!!!!!!!!!!!!!!!!!!!!! output = model(batch_X) # passed input data here!!!!!!!!!!!!!!!!!!!!!!!!!! 
loss = training.criterion(output, batch_y) loss.backward() optimizer.step() train_loss += loss.item()*batch_X.size(0) train_loss = train_loss/len(predict_data.train_loader.dataset) print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch+1, train_loss)) model.eval() # Gets Validation loss train_loss = 0.0 with torch.no_grad(): for i in (range(0, len(input_data), BATCH_SIZE)): batch_X = input_data[i:i+BATCH_SIZE] batch_y = predict_data[i:i+BATCH_SIZE] batch_X = batch_X.to(device=device, dtype=torch.int64) batch_y = batch_y.to(device=device, dtype=torch.int64) output = model(batch_X) loss = training.criterion(output, batch_y).to(device=device, dtype=torch.int64) testing_loss += loss.item()*batch_X.size(0) testing_loss = testing_loss / len(predict_data.test_loader.dataset) print('Validation loss = ' , testing_loss) a = np.insert(a,0,testing_loss) # part of anti overfit a = np.delete(a,5) print('Validation loss = ' , testing_loss) model = NeuralNet().to(device=device) #summary(model, input_size=(1, 248, 248)) training()
Why do you cast X and Y to int64? Mainly, this is the problem. batch_X = batch_X.to(device=device, dtype=torch.int64) #gpu # input data here!!!!!!!!!!!!!!!!!!!!!!!!!! batch_y = batch_y.to(device=device, dtype=torch.int64) #gpu You cast batch_X and batch_Y to int64 a.k.a long, but float was expected, hence the error. Replace this with batch_X = batch_X.to(device=device) batch_y = batch_y.to(device=device, dtype=torch.int64) or batch_X = batch_X.to(device=device, dtype=torch.float) batch_y = batch_y.to(device=device, dtype=torch.int64) And this should solve your problem. EDIT: You only need to keep y as int. Since you are using CrossEntropyLoss which expects target labels (expected to be an int or long). Overall, you need to keep the data type of x to be float, and y should be long or int.
https://stackoverflow.com/questions/70444040/
Locally save a dataframe from a remote server in VSCode
I'm running a python script in VSCode on a remote server and I want to save a dataframe that is generated in that script locally. Is this somehow possible? Thanks!
You can save the dataframe to a file (e.g. as a .csv) on the remote server and then download it from the Explorer panel in VSCode by right-clicking on that file.
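For example (assuming a pandas DataFrame named df; the name and path are just for illustration):
df.to_csv("results.csv", index=False)  # written on the remote server, then download it via the VSCode Explorer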
https://stackoverflow.com/questions/70447849/
Change last layer on pretrained huggingface model
I want to re-finetuned a transformer model but I get an unknown error when I tried to train the model. I can't change the "num_labels" on loading the model. So, I tried to change it manually model_name = "mrm8488/flaubert-small-finetuned-movie-review-sentiment-analysis" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name).to('cuda') num_labels = 3 model.sequence_summary.summary = torch.nn.Linear(in_features=model.sequence_summary.summary.in_features, out_features=num_labels, bias=True) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_train['train'], eval_dataset=tokenized_test['train'], tokenizer=tokenizer, compute_metrics=compute_metrics, #data_collator=data_collator, ) trainer.train() The error --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-93-8139f38c5ec6> in <module>() 20 ) 21 ---> 22 trainer.train() 7 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing) 2844 if size_average is not None or reduce is not None: 2845 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2846 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) 2847 2848 ValueError: Expected input batch_size (24) to match target batch_size (16).
There is a solution for this: just add ignore_mismatched_sizes=True when loading the model, as in:
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3, ignore_mismatched_sizes=True).to('cuda')
https://stackoverflow.com/questions/70449122/
AttributeError: 'DataParallel' object has no attribute 'copy'
I am trying to resume training monkAI pytorch retinanet. I have loaded with .pt file instead of actual model. The changes are made in Monk_Object_Detection/5_pytorch_retinanet/lib/train_detector.py, check for '# change' in the places where its modified. def Model(self, model_name="resnet18",gpu_devices=[0]): ''' User function: Set Model parameters Available Models resnet18 resnet34 resnet50 resnet101 resnet152 Args: model_name (str): Select model from available models gpu_devices (list): List of GPU Device IDs to be used in training Returns: None ''' num_classes = self.system_dict["local"]["dataset_train"].num_classes(); if model_name == "resnet18": retinanet = model.resnet18(num_classes=num_classes, pretrained=True) elif model_name == "resnet34": retinanet = model.resnet34(num_classes=num_classes, pretrained=True) elif model_name == "resnet50": # retinanet = model.resnet50(num_classes=num_classes, pretrained=True) # change retinanet = torch.load('/content/drive/MyDrive/Object_detection_retinanet/trained_retinanet_40.pt') elif model_name == "resnet101": retinanet = model.resnet101(num_classes=num_classes, pretrained=True) elif model_name == "resnet152": retinanet = model.resnet152(num_classes=num_classes, pretrained=True) if self.system_dict["params"]["use_gpu"]: self.system_dict["params"]["gpu_devices"] = gpu_devices if len(self.system_dict["params"]["gpu_devices"])==1: os.environ["CUDA_VISIBLE_DEVICES"] = str(self.system_dict["params"]["gpu_devices"][0]) else: os.environ["CUDA_VISIBLE_DEVICES"] = ','.join([str(id) for id in self.system_dict["params"]["gpu_devices"]]) self.system_dict["local"]["device"] = 'cuda' if torch.cuda.is_available() else 'cpu' # change - added 3 lines below if isinstance(retinanet,torch.nn.DataParallel): retinanet = retinanet.module retinanet.load_state_dict(torch.load('/content/drive/MyDrive/Object_detection_retinanet/trained_retinanet_40.pt')) retinanet = retinanet.to(self.system_dict["local"]["device"]) retinanet = torch.nn.DataParallel(retinanet).to(self.system_dict["local"]["device"]) retinanet.training = True retinanet.train() retinanet.module.freeze_bn() self.system_dict["local"]["model"] = retinanet; I am getting attribute error, when I call the Model() from main function as shown below: from train_detector import Detector gtf = Detector() #Loading the dataset root_dir = './' coco_dir = 'coco_dir' img_dir = 'images' set_dir ='train' gtf.Train_Dataset(root_dir, coco_dir, img_dir, set_dir, batch_size=8, use_gpu=True) gtf.Model(model_name="resnet50", gpu_devices=[0, 1, 2, 3]) error: AttributeError Traceback (most recent call last) <ipython-input-22-1a0c8d446904> in <module>() 3 if PRE_TRAINED: 4 #Initialising Model ----> 5 gtf.Model(model_name="resnet50", gpu_devices=[0, 1, 2, 3]) 6 #Setting up hyperparameters 7 gtf.Set_Hyperparams(lr=0.001, val_interval=1, print_interval=20) 2 frames /content/Monk_Object_Detection/5_pytorch_retinanet/lib/train_detector.py in Model(self, model_name, gpu_devices) 245 if isinstance(retinanet,torch.nn.DataParallel): 246 retinanet = retinanet.module --> 247 retinanet.load_state_dict(torch.load('/content/drive/MyDrive/Object_detection_retinanet/trained_retinanet_40.pt')) 248 249 retinanet = retinanet.to(self.system_dict["local"]["device"]) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict) 1453 # copy state_dict so _load_from_state_dict can modify it 1454 metadata = getattr(state_dict, '_metadata', None) -> 1455 state_dict = state_dict.copy() 1456 if metadata is not None: 
1457 # mypy isn't aware that "_metadata" exists in state_dict /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 1176 return modules[name] 1177 raise AttributeError("'{}' object has no attribute '{}'".format( -> 1178 type(self).__name__, name)) 1179 1180 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None: AttributeError: 'DataParallel' object has no attribute 'copy' kindly, help me with the solution!
I found this by simply googling your problem: retinanet.load_state_dict(torch.load('filename').module.state_dict()) The link to the discussion is here.
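For reference, a minimal sketch of how that fix slots into the Model() method above (this assumes retinanet is first built with model.resnet50(num_classes=num_classes, pretrained=True) as in the original code, and that the .pt file stores the whole, possibly DataParallel-wrapped, model rather than a plain state_dict):
checkpoint = torch.load('/content/drive/MyDrive/Object_detection_retinanet/trained_retinanet_40.pt')
# unwrap DataParallel if needed, then copy its weights into the freshly built retinanet
state_dict = checkpoint.module.state_dict() if isinstance(checkpoint, torch.nn.DataParallel) else checkpoint.state_dict()
retinanet.load_state_dict(state_dict)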
https://stackoverflow.com/questions/70451021/
How to load in graph from networkx into PyTorch geometric and set node features and labels?
Goal: I am trying to import a graph FROM networkx into PyTorch geometric and set labels and node features. (This is in Python) Question(s): How do I do this [the conversion from networkx to PyTorch geometric]? (presumably by using the from_networkx function) How do I transfer over node features and labels? (more important question) I have seen some other/previous posts with this question but they weren't answered (correct me if I am wrong). Attempt: (I have just used an unrealistic example below, as I cannot post anything real on here) Let us imagine we are trying to do a graph learning task (e.g. node classification) on a group of cars (not very realistic as I said). That is, we have a group of cars, an adjacency matrix, and some features (e.g. price at the end of the year). We want to predict the node label (i.e. brand of the car). I will be using the following adjacency matrix: (apologies, cannot use latex to format this) A = [(0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0)] Here is the code (for Google Colab environment): import pandas as pd import numpy as np import matplotlib.pyplot as plt import networkx as nx from torch_geometric.utils.convert import to_networkx, from_networkx import torch !pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cpu.html # Make the networkx graph G = nx.Graph() # Add some cars (just do 4 for now) G.add_nodes_from([ (1, {'Brand': 'Ford'}), (2, {'Brand': 'Audi'}), (3, {'Brand': 'BMW'}), (4, {'Brand': 'Peugot'}), (5, {'Brand': 'Lexus'}), ]) # Add some edges G.add_edges_from([ (1, 2), (1, 4), (1, 5), (2, 3), (2, 4), (3, 2), (3, 5), (4, 1), (4, 2), (5, 1), (5, 3) ]) # Convert the graph into PyTorch geometric pyg_graph = from_networkx(G) So this correctly converts the networkx graph to PyTorch Geometric. However, I still don't know how to properly set the labels. The brand values for each node have been converted and are stored within: pyg_graph.Brand Below, I have just made some random numpy arrays of length 5 for each node (just pretend that these are realistic). ford_prices = np.random.randint(100, size = 5) lexus_prices = np.random.randint(100, size = 5) audi_prices = np.random.randint(100, size = 5) bmw_prices = np.random.randint(100, size = 5) peugot_prices = np.random.randint(100, size = 5) This brings me to the main question: How do I set the prices to be the node features of this graph? How do I set the labels of the nodes? (and will I need to remove the labels from pyg_graph.Brand when training the network?) Thanks in advance and happy holidays.
The easiest way is to add all information to the networkx graph and directly create it in the way you need it. I guess you want to use some Graph Neural Networks. Then you want to have something like below. Instead of text as labels, you probably want to have a categorial representation, e.g. 1 stands for Ford. If you want to match the "usual convention". Then you name your input features x and your labels/ground truth y. The splitting of the data into train and test is done via mask. So the graph still contains all information, but only part of it is used for training. Check the PyTorch Geometric introduction for an example, which uses the Cora dataset. import networkx as nx import numpy as np import torch from torch_geometric.utils.convert import from_networkx # Make the networkx graph G = nx.Graph() # Add some cars (just do 4 for now) G.add_nodes_from([ (1, {'y': 1, 'x': 0.5}), (2, {'y': 2, 'x': 0.2}), (3, {'y': 3, 'x': 0.3}), (4, {'y': 4, 'x': 0.1}), (5, {'y': 5, 'x': 0.2}), ]) # Add some edges G.add_edges_from([ (1, 2), (1, 4), (1, 5), (2, 3), (2, 4), (3, 2), (3, 5), (4, 1), (4, 2), (5, 1), (5, 3) ]) # Convert the graph into PyTorch geometric pyg_graph = from_networkx(G) print(pyg_graph) # Data(edge_index=[2, 12], x=[5], y=[5]) print(pyg_graph.x) # tensor([0.5000, 0.2000, 0.3000, 0.1000, 0.2000]) print(pyg_graph.y) # tensor([1, 2, 3, 4, 5]) print(pyg_graph.edge_index) # tensor([[0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4], # [1, 3, 4, 0, 2, 3, 1, 4, 0, 1, 0, 2]]) # Split the data train_ratio = 0.2 num_nodes = pyg_graph.x.shape[0] num_train = int(num_nodes * train_ratio) idx = [i for i in range(num_nodes)] np.random.shuffle(idx) train_mask = torch.full_like(pyg_graph.y, False, dtype=bool) train_mask[idx[:num_train]] = True test_mask = torch.full_like(pyg_graph.y, False, dtype=bool) test_mask[idx[num_train:]] = True print(train_mask) # tensor([ True, False, False, False, False]) print(test_mask) # tensor([False, True, True, True, True])
https://stackoverflow.com/questions/70452465/
How to get generated tokens in T5 training_step for using user-defined metrics?
I am fine-tuning T5 for question answering generation and want to add additional measures (e.g., BLEU, ROUGE) for the generated answers, in addition to the loss function. For that, I believe it would be necessary to obtain the generated tokens (answers) at each training_step. However, after reading the source code, I still have no clue how to add that. Below I leave an excerpt of my code. I can extract the output.loss and output.logits, but I didn't find a way to get the generated tokens to use additional evaluation metrics. Thanks in advance. class MyQAModel(pl.LightningModule): def __init__(self): super().__init__() self.model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True) def forward(self, input_ids, attention_mask, labels=None): output = self.model( input_ids, attention_mask=attention_mask, labels=labels) return output.loss, output.logits def training_step(self, batch, batch_idx): input_ids = batch['input_ids'] attention_mask=batch['attention_mask'] labels = batch['labels'] loss, outputs = self(input_ids, attention_mask, labels) self.log("train_loss", loss, prog_bar=True, logger=True) return {"loss": loss, "predictions":outputs, "labels": labels} ... (code continues...) ....
You can obtain predicted tokens from output.logits [batch, seq_len, vocab_size] using torch.argmax(output.logits, dim=-1) [batch, seq_len]. Then, to decode the generated sentence from a batch of token ids, run generated_sentences = [] for predicted_token_ids in torch.argmax(output.logits, dim=-1): generated_sentences.append(tokenizer.decode(predicted_token_ids)) # For getting original sentences original_sentences = [] for sent_ids in input_ids: original_sentences.append(tokenizer.decode(sent_ids))
https://stackoverflow.com/questions/70452777/
Is there a difference between Keras Dense layer and Pytorch's nn.linear layer?
I noticed the definition of the Keras Dense layer says: Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x). So if we have code like: model.add(Dense(10, activation = None)) Is it basically the same as: nn.linear(128, 10) ? Thank you so much!
Yes, they are equivalent. model.add(Dense(10, activation=None)) and nn.Linear(128, 10) both apply a plain affine transformation with no activation: in Keras, no activation is applied when you leave the argument unset, and nn.Linear never applies one by itself.
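A quick sketch to see the shapes line up (assuming 128-dimensional inputs, as in the question):
import torch
import torch.nn as nn

layer = nn.Linear(128, 10)   # plain affine map y = x @ W.T + b, no activation
x = torch.randn(32, 128)     # a batch of 32 inputs
y = layer(x)                 # shape (32, 10), same as Dense(10, activation=None) on 128-dim inputs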
https://stackoverflow.com/questions/70455403/
What is the purpose of [np.arange(0, self.batch_size), action] after the neural network?
I followed a PyTorch tutorial to learn reinforcement learning(TRAIN A MARIO-PLAYING RL AGENT) but I am confused about the following code: current_Q = self.net(state, model="online")[np.arange(0, self.batch_size), action] # Q_online(s,a) What's the purpose of [np.arange(0, self.batch_size), action] after the neural network?(I know that TD_estimate takes in state and action, just confused about this on the programming side) What is this usage(put a list after self.net)? More related code referenced from the tutorial: class MarioNet(nn.Module): def __init__(self, input_dim, output_dim): super().__init__() c, h, w = input_dim if h != 84: raise ValueError(f"Expecting input height: 84, got: {h}") if w != 84: raise ValueError(f"Expecting input width: 84, got: {w}") self.online = nn.Sequential( nn.Conv2d(in_channels=c, out_channels=32, kernel_size=8, stride=4), nn.ReLU(), nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2), nn.ReLU(), nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1), nn.ReLU(), nn.Flatten(), nn.Linear(3136, 512), nn.ReLU(), nn.Linear(512, output_dim), ) self.target = copy.deepcopy(self.online) # Q_target parameters are frozen. for p in self.target.parameters(): p.requires_grad = False def forward(self, input, model): if model == "online": return self.online(input) elif model == "target": return self.target(input) self.net: self.net = MarioNet(self.state_dim, self.action_dim).float() Thanks for any help!
Essentially, what happens here is that the output of the net is being sliced to get the desired part of the Q table. The (somewhat confusing) index of [np.arange(0, self.batch_size), action] indexes each axis. So, for axis with index 1, we pick the item indicated by action. For index 0, we pick all items between 0 and self.batch_size. If self.batch_size is the same as the length of dimension 0 of this array, then this slice can be simplified to [:, action] which is probably more familiar to most users.
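A tiny illustration of this indexing pattern with made-up numbers:
import numpy as np
import torch

batch_size = 3
q_values = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])  # shape (batch_size, num_actions)
action = torch.tensor([2, 0, 1])                                     # the action taken in each sample
picked = q_values[np.arange(0, batch_size), action]                  # tensor([3., 4., 8.])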
https://stackoverflow.com/questions/70458347/
Clarification about Gradient Accumulation
I'm trying to get a better understanding of how Gradient Accumulation works and why it is useful. To this end, I wanted to ask what is the difference (if any) between these two possible PyTorch-like implementations of a custom training loop with gradient accumulation: gradient_accumulation_steps = 5 for batch_idx, batch in enumerate(dataset): x_batch, y_true_batch = batch y_pred_batch = model(x_batch) loss = loss_fn(y_true_batch, y_pred_batch) loss.backward() if (batch_idx + 1) % gradient_accumulation_steps == 0: # (assumption: the number of batches is a multiple of gradient_accumulation_steps) optimizer.step() optimizer.zero_grad() y_true_batches, y_pred_batches = [], [] gradient_accumulation_steps = 5 for batch_idx, batch in enumerate(dataset): x_batch, y_true_batch = batch y_pred_batch = model(x_batch) y_true_batches.append(y_true_batch) y_pred_batches.append(y_pred_batch) if (batch_idx + 1) % gradient_accumulation_steps == 0: # (assumption: the number of batches is a multiple of gradient_accumulation_steps) y_true = stack_vertically(y_true_batches) y_pred = stack_vertically(y_pred_batches) loss = loss_fn(y_true, y_pred) loss.backward() optimizer.step() optimizer.zero_grad() y_true_batches.clear() y_pred_batches.clear() Also, kind of as an unrelated question: Since the purpose of gradient accumulation is to mimic a larger batch size in cases where you have memory constraints, does it mean that I should also increase the learning rate proportionally?
1. The difference between the two programs: Conceptually, your two implementations are the same: you forward gradient_accumulation_steps batches for each weight update. As you already observed, the second method requires more memory resources than the first one. There is, however, a slight difference: usually, loss function implementations use mean to reduce the loss over the batch. When you use gradient accumulation (first implementation) you reduce using mean over each mini-batch, but using sum over the accumulated gradient_accumulation_steps mini-batches. To make sure the accumulated-gradient implementation is identical to the large-batch implementation you need to be very careful in the way the loss function is reduced. In many cases you will need to divide the accumulated loss by gradient_accumulation_steps. See this answer for a detailed implementation. 2. Batch size and learning rate: Learning rate and batch size are indeed related. When increasing the batch size one usually reduces the learning rate. See, e.g.: Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le, Don't Decay the Learning Rate, Increase the Batch Size (ICLR 2018).
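To make the mean-reduction point concrete, here is a sketch of your first variant with the extra division (assuming loss_fn uses the default mean reduction and all mini-batches have the same size):
gradient_accumulation_steps = 5
for batch_idx, batch in enumerate(dataset):
    x_batch, y_true_batch = batch
    y_pred_batch = model(x_batch)
    # divide so that the summed gradients match one large, mean-reduced batch
    loss = loss_fn(y_true_batch, y_pred_batch) / gradient_accumulation_steps
    loss.backward()
    if (batch_idx + 1) % gradient_accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()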
https://stackoverflow.com/questions/70461130/
How to calculate perplexity of a sentence using huggingface masked language models?
I have several masked language models (mainly Bert, Roberta, Albert, Electra). I also have a dataset of sentences. How can I get the perplexity of each sentence? From the huggingface documentation here they mentioned that perplexity "is not well defined for masked language models like BERT", though I still see people somehow calculate it. For example in this SO question they calculated it using the function def score(model, tokenizer, sentence, mask_token_id=103): tensor_input = tokenizer.encode(sentence, return_tensors='pt') repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1) mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2] masked_input = repeat_input.masked_fill(mask == 1, 103) labels = repeat_input.masked_fill( masked_input != 103, -100) loss,_ = model(masked_input, masked_lm_labels=labels) result = np.exp(loss.item()) return result score(model, tokenizer, '我爱你') # returns 45.63794545581973 However, when I try to use the code I get TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'. I tried it with a couple of my models: from transformers import pipeline, BertForMaskedLM, BertForMaskedLM, AutoTokenizer, RobertaForMaskedLM, AlbertForMaskedLM, ElectraForMaskedLM import torch 1) tokenizer = AutoTokenizer.from_pretrained("bioformers/bioformer-cased-v1.0") model = BertForMaskedLM.from_pretrained("bioformers/bioformer-cased-v1.0") 2) tokenizer = AutoTokenizer.from_pretrained("sultan/BioM-ELECTRA-Large-Generator") model = ElectraForMaskedLM.from_pretrained("sultan/BioM-ELECTRA-Large-Generator") This SO question also used the masked_lm_labels as an input and it seemed to work somehow.
There is a paper Masked Language Model Scoring that explores pseudo-perplexity from masked language models and shows that pseudo-perplexity, while not being theoretically well justified, still performs well for comparing "naturalness" of texts. As for the code, your snippet is perfectly correct but for one detail: in recent implementations of Huggingface BERT, masked_lm_labels are renamed to simply labels, to make interfaces of various models more compatible. I have also replaced the hard-coded 103 with the generic tokenizer.mask_token_id. So the snippet below should work: from transformers import AutoModelForMaskedLM, AutoTokenizer import torch import numpy as np model_name = 'cointegrated/rubert-tiny' model = AutoModelForMaskedLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) def score(model, tokenizer, sentence): tensor_input = tokenizer.encode(sentence, return_tensors='pt') repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1) mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2] masked_input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id) labels = repeat_input.masked_fill( masked_input != tokenizer.mask_token_id, -100) with torch.inference_mode(): loss = model(masked_input, labels=labels).loss return np.exp(loss.item()) print(score(sentence='London is the capital of Great Britain.', model=model, tokenizer=tokenizer)) # 4.541251105675365 print(score(sentence='London is the capital of South America.', model=model, tokenizer=tokenizer)) # 6.162017238332462 You can try this code in Google Colab by running this gist.
https://stackoverflow.com/questions/70464428/
Evaluating my object detection model using COCO metric shows 0 and -1 values
I'm currently trying to solve an object detection problem and decided to use faster RCNN on for this. I followed this Youtube video and their Code. The loss decreases but the big problem is it won't evaluate correctly no matter how I try to. I've tried looking into the inputs, if there is any sort of size mismatch or missing information but it still doesn't work. It's always showing -1 and 0 values for all of its metrics like this. creating index... index created! Test: [0/1] eta: 0:00:08 model_time: 0.4803 (0.4803) evaluator_time: 0.0304 (0.0304) time: 8.4784 data: 7.9563 max mem: 7653 Test: Total time: 0:00:08 (8.6452 s / it) Averaged stats: model_time: 0.4803 (0.4803) evaluator_time: 0.0304 (0.0304) Accumulating evaluation results... DONE (t=0.01s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 <coco_eval.CocoEvaluator at 0x7ff9989fea10> Here is my current code: Colab notebook
My labels were wrong. I figured this out by plotting a dataset image together with its labels: the boxes either weren't shown at all or weren't drawn accurately. This evaluation function is based on the COCO metric. It evaluates labels of all sizes, so it shows -1.000 for area=large. My current guess is that this happens because my dataset doesn't have labels of varying sizes; they are all roughly equal in size and medium/small. I might be wrong.
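In case it helps someone, a rough sketch of the sanity check I mean (assuming a torchvision-style detection dataset that returns an (image, target) pair with target['boxes'] in [xmin, ymin, xmax, ymax] format):
import matplotlib.pyplot as plt
import matplotlib.patches as patches

img, target = dataset[0]
fig, ax = plt.subplots()
ax.imshow(img.permute(1, 2, 0))   # CHW tensor -> HWC for matplotlib
for xmin, ymin, xmax, ymax in target['boxes']:
    ax.add_patch(patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, fill=False, edgecolor='red'))
plt.show()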
https://stackoverflow.com/questions/70470353/
Pytorch Tensors using all RAM
I have a list of tensors which is too heavy for my RAM. I would like to save them to the filesystem and load them when needed torch.save(single_tensor, 'tensor_<idx>.pt') If I want to use batches while training, is there an automatic way to load the tensors when needed? I was thinking about using TensorDataset and DataLoader, but since I now have the tensors not in a list but in the filesystem, how should I build them?
Firstly save the tensors one by one to file with torch.save() torch.save(tensor, 'path/to/file.pt') Then this Dataset class allows to load the tensors only when they are really needed: class EmbedDataset(torch.utils.data.Dataset): def __init__(self, first_embed_path, second_embed_path, labels): self.first_embed_path = first_embed_path self.second_embed_path = second_embed_path self.labels = labels def __len__(self): return len(self.labels) def __getitem__(self, i): label = self.labels[i] embed = torch.load(os.path.join(self.first_embed_path, str(i) + '.pt')) pos = torch.load(os.path.join(self.second_embed_path, str(i) + '.pt')) tensor = torch.cat((embed, pos)) return tensor, label Here the tensors are named with numbers, eg 1.pt or 1816.pt
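To get batching back, the dataset can then be wrapped in a regular DataLoader (sketch; the two directory paths and the labels list are placeholders):
dataset = EmbedDataset('first_embed_dir', 'second_embed_dir', labels)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
for tensors, batch_labels in loader:
    ...   # training step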
https://stackoverflow.com/questions/70471106/
Does pytorch use the cudatoolkit in the conda environment or the system?
I installed pytorch and torchvision in my conda environment with pip install torch==1.5.1+cu101 torchvision==0.6.1+cu101, to my understanding this means that the pytorch library is compiled with cuda10.1. And upon running nvcc --version , I get nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2017 NVIDIA Corporation Built on Fri_Nov__3_21:07:56_CDT_2017 Cuda compilation tools, release 9.1, V9.1.85 And I assume this means that the cudatoolkit in my system is cuda9.1, but if I then go on to install a different version of cudatoolkit in my conda environment with conda install -c anaconda cudatoolkit=10.1. Which cudatoolkit will pytorch use? I used pip install for pytorch because this was the instruction given in the original repo I am planning to use.
Yes, the pip wheels and conda binaries ship with their own CUDA runtime (as well as cuDNN, NCCL, etc.), so you would only need to install the NVIDIA driver. If you want to build PyTorch from source or a custom CUDA extension, the local CUDA toolkit will be used. As answered in the link here.
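A quick way to confirm which CUDA runtime the installed binaries actually bundle:
import torch
print(torch.__version__)               # e.g. 1.5.1+cu101
print(torch.version.cuda)              # CUDA runtime shipped with the wheel, e.g. 10.1
print(torch.backends.cudnn.version())  # bundled cuDNN version
print(torch.cuda.is_available())       # True if the driver works with this runtime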
https://stackoverflow.com/questions/70479396/
Concatenate layers with different sizes in PyTorch
In Keras, it is possible to concatenate two layers of different sizes: # Keras — this works, conceptually layer_1 = Embedding(50, 5)(inputs) layer_2 = Embedding(300, 20)(inputs) concat = Concatenate()([layer_1, layer_2]) # -> `concat` now has shape `(*, 25)`, as desired But PyTorch keeps complaining that the two layers have different sizes: # PyTorch — this does not work class MyModel(torch.nn.Module): def __init__(self): self.layer1 = Embedding(50, 5) self.layer2 = Embedding(300, 20) def forward(self, inputs): layer_1 = self.layer1(inputs) layer_2 = self.layer2(inputs) concat = torch.cat([layer_1, layer_2]) The code just above results in this error: RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 5 but got size 20 for tensor number 1 in the list. The final concat layer I want is a layer of size 25 made of the concatenation of the two source layers. As the two source layers are Embedding layers, I do not see as optimal that they would share the same dimension. In this example, using an embedding dimension of 5 for a vocabulary of 50 items, and an embedding dimension of size 20 for a vocabulary of 200 items. How should this problem be solved in PyTorch?
Indeed torch.cat will apply the concatenation on the first axis. Since you are looking to concatenate on the second axis, you should provide the dim argument as: >>> concat = torch.cat([layer_1, layer_2], dim=1)
https://stackoverflow.com/questions/70487666/
DeepSORT's Feature extractor cannot be used for Person ReIdentification
I am using this repo for DeepSORT - https://github.com/nwojke/deep_sort I am trying to build a Multi Camera Person Tracking system. I want to save and utilize the features extracted by one camera in footage from other cameras. The Feature extractor which is trained on Mars dataset, doesn't seem to help in differentiating between two different people. I wrote the below snippet to check the Cosine Distance between images from same person and different person. extr = Extractor("./deep_sort_pytorch/deep_sort/deep/checkpoint/ckpt.t7") list = glob.glob("mars/*.jpg") features_list = [] for i in list: im = cv2.imread(i) im_crops = [im] features = extr(im_crops) features_list.append(features) for f in features_list: print(_cosine_distance(f, features_list[0]),"<<") def _cosine_distance(a, b, data_is_normalized=False): cos = nn.CosineSimilarity(dim=1, eps=1e-6) return (1. - cos(torch.from_numpy(a), torch.from_numpy(b))) As expected, the cosine distance between images of same person is very low. But unexpectedly the cosine distance between crops of two different people is also similarly low. I thought the Feature extractor will help me in differentiating. Shall I increase the latent space dimensions from 512 to a bigger size? Or maybe I am mistaking the role of Feature extractor.
A slightly larger feature space may help. But your main issue is the architecture of the feature extractor. In order to match people and distinguish them from impostors, features corresponding small local regions (e.g. shoes, glasses) and global whole body regions are equally important. This is not captured by the simple feature extractor provided by https://github.com/nwojke/deep_sort. For more information on this check: https://arxiv.org/pdf/1905.00953.pdf. I recommend you to try any of the OSNet models provided here: https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO I can also recommend you to check out my repository: https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch. It seems to provide all you need: multi-camera multi-object tracking and OSNet models
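As a rough sketch of how feature extraction with one of those models could look (this assumes the FeatureExtractor helper exposed by the deep-person-reid/torchreid package, so check its docs for the exact signature; the weights path is just a placeholder):
from torchreid.utils import FeatureExtractor

extractor = FeatureExtractor(
    model_name='osnet_x1_0',
    model_path='path/to/osnet_x1_0_weights.pth',   # hypothetical path to downloaded weights
    device='cuda'
)
features = extractor(im_crops)   # list of image crops -> one feature vector per crop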
https://stackoverflow.com/questions/70494402/
How to generate indices like [0,2,4,1,3,5] without using explicit loop for reorganizing rows of a tensors in Pytorch?
Suppose I have a Tensor like a = torch.tensor([[3, 1, 5, 0, 4, 2], [2, 1, 3, 4, 5, 0], [0, 4, 5, 1, 2, 3], [3, 1, 4, 5, 0, 2], [3, 5, 4, 2, 0, 1], [5, 3, 0, 4, 1, 2]]) and I want to reorganize the rows of the tensor by applying the transformation a[c] where c = torch.tensor([0,2,4,1,3,5]) to get b = torch.tensor([[3, 1, 5, 0, 4, 2], [0, 4, 5, 1, 2, 3], [3, 5, 4, 2, 0, 1], [2, 1, 3, 4, 5, 0], [3, 1, 4, 5, 0, 2], [5, 3, 0, 4, 1, 2]]) For doing it, I want to generate the tensor c so that I can do this transformation irrespective of the size of tensor a and the stepping size (which I have taken to be equal to 2 in this example for simplicity). Can anyone let me know how do I generate such a tensor for the general case without using an explicit for loop in PyTorch?
You can use torch.index_select, so: b = torch.index_select(a, 0, c) The explanation in the official docs is pretty clear.
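If you also need to build c itself without a Python loop, one option (assuming the number of rows is divisible by the step size) is:
n, step = a.size(0), 2
c = torch.arange(n).reshape(-1, step).t().flatten()   # n=6, step=2 -> tensor([0, 2, 4, 1, 3, 5])
b = torch.index_select(a, 0, c)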
https://stackoverflow.com/questions/70497773/
name '_get_ade20k_pairs' is not defined
So I'm trying to make function for preprocessing dataaset in semantic segmentation. but it tells me that my function is not define. Whereas is actually define on there. my code is like this import os import random import numpy as np import torch import torch.utils.data as data from PIL import Image, ImageOps, ImageFilter __all__ = ['ADE20KSegmentation'] class ADE20KSegmentation(data.Dataset): BASE_DIR = 'ADEChallengeData2016' NUM_CLASS = 150 CLASSES = ("wall", "building, edifice", "sky", "floor, flooring", "tree", "ceiling", "road, route", "bed", "windowpane, window", "grass", "cabinet", "sidewalk, pavement", "person, individual, someone, somebody, mortal, soul", "earth, ground", "door, double door", "table", "mountain, mount", "plant, flora, plant life", "curtain, drape, drapery, mantle, pall", "chair", "car, auto, automobile, machine, motorcar", "water", "painting, picture", "sofa, couch, lounge", "shelf", "house", "sea", "mirror", "rug, carpet, carpeting", "field", "armchair", "seat", "fence, fencing", "desk", "rock, stone", "wardrobe, closet, press", "lamp", "bathtub, bathing tub, bath, tub", "railing, rail", "cushion", "base, pedestal, stand", "box", "column, pillar", "signboard, sign", "chest of drawers, chest, bureau, dresser", "counter", "sand", "sink", "skyscraper", "fireplace, hearth, open fireplace", "refrigerator, icebox", "grandstand, covered stand", "path", "stairs, steps", "runway", "case, display case, showcase, vitrine", "pool table, billiard table, snooker table", "pillow", "screen door, screen", "stairway, staircase", "river", "bridge, span", "bookcase", "blind, screen", "coffee table, cocktail table", "toilet, can, commode, crapper, pot, potty, stool, throne", "flower", "book", "hill", "bench", "countertop", "stove, kitchen stove, range, kitchen range, cooking stove", "palm, palm tree", "kitchen island", "computer, computing machine, computing device, data processor, " "electronic computer, information processing system", "swivel chair", "boat", "bar", "arcade machine", "hovel, hut, hutch, shack, shanty", "bus, autobus, coach, charabanc, double-decker, jitney, motorbus, " "motorcoach, omnibus, passenger vehicle", "towel", "light, light source", "truck, motortruck", "tower", "chandelier, pendant, pendent", "awning, sunshade, sunblind", "streetlight, street lamp", "booth, cubicle, stall, kiosk", "television receiver, television, television set, tv, tv set, idiot " "box, boob tube, telly, goggle box", "airplane, aeroplane, plane", "dirt track", "apparel, wearing apparel, dress, clothes", "pole", "land, ground, soil", "bannister, banister, balustrade, balusters, handrail", "escalator, moving staircase, moving stairway", "ottoman, pouf, pouffe, puff, hassock", "bottle", "buffet, counter, sideboard", "poster, posting, placard, notice, bill, card", "stage", "van", "ship", "fountain", "conveyer belt, conveyor belt, conveyer, conveyor, transporter", "canopy", "washer, automatic washer, washing machine", "plaything, toy", "swimming pool, swimming bath, natatorium", "stool", "barrel, cask", "basket, handbasket", "waterfall, falls", "tent, collapsible shelter", "bag", "minibike, motorbike", "cradle", "oven", "ball", "food, solid food", "step, stair", "tank, storage tank", "trade name, brand name, brand, marque", "microwave, microwave oven", "pot, flowerpot", "animal, animate being, beast, brute, creature, fauna", "bicycle, bike, wheel, cycle", "lake", "dishwasher, dish washer, dishwashing machine", "screen, silver screen, projection screen", "blanket, cover", "sculpture", "hood, 
exhaust hood", "sconce", "vase", "traffic light, traffic signal, stoplight", "tray", "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, " "dustbin, trash barrel, trash bin", "fan", "pier, wharf, wharfage, dock", "crt screen", "plate", "monitor, monitoring device", "bulletin board, notice board", "shower", "radiator", "glass, drinking glass", "clock", "flag") def __init__(self, root='/content/dataset', split='training', mode=None, transform=None, base_size=520, crop_size=480, **kwargs): super(ADE20KSegmentation, self).__init__() self.root = root self.split = split self.mode = mode if mode is not None else split self.transform = transform self.base_size = base_size self.crop_size = crop_size self.images, self.mask_paths = _get_ade20k_pairs(self.root, self.split) assert (len(self.images) == len(self.mask_paths)) if len(self.images) == 0: raise RuntimeError("Found 0 images in subfolders of: " + self.root + "\n") self.valid_classes = [7, 8, 11, 12, 13, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 31, 32, 33] self._key = np.array([-1, -1, -1, -1, -1, -1, -1, -1, 0, 1, -1, -1, 2, 3, 4, -1, -1, -1, 5, -1, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, -1, -1, 16, 17, 18]) self._mapping = np.array(range(-1, len(self._key) - 1)).astype('int32') def _class_to_index(self, mask): values = np.unique(mask) for value in values: assert (value in self._mapping) index = np.digitize(mask.ravel(), self._mapping, right=True) return self._key[index].reshape(mask.shape) def __getitem__(self, index): img = PIL.Image.open(self.images[index]).convert('RGB') if self.mode == 'test': if self.transform is not None: img = self.transform(img) return img, os.path.basename(self.images[index]) mask = PIL.Image.open(self.mask_paths[index]) # synchrosized transform if self.mode == 'training': img, mask = self._sync_transform(img, mask) elif self.mode == 'valdation': img, mask = self._val_sync_transform(img, mask) else: assert self.mode == 'testval' img, mask = self._img_transform(img), self._mask_transform(mask) # general resize, normalize and toTensor if self.transform is not None: img = self.transform(img) return img, mask def _val_sync_transform(self, img, mask): outsize = self.crop_size short_size = outsize w, h = img.size if w > h: oh = short_size ow = int(1.0 * w * oh / h) else: ow = short_size oh = int(1.0 * h * ow / w) img = img.resize((ow, oh), Image.BILINEAR) mask = mask.resize((ow, oh), Image.NEAREST) # center crop w, h = img.size x1 = int(round((w - outsize) / 2.)) y1 = int(round((h - outsize) / 2.)) img = img.crop((x1, y1, x1 + outsize, y1 + outsize)) mask = mask.crop((x1, y1, x1 + outsize, y1 + outsize)) # final transform img, mask = self._img_transform(img), self._mask_transform(mask) return img, mask def _sync_transform(self, img, mask): # random mirror if random.random() < 0.5: img = img.transpose(Image.FLIP_LEFT_RIGHT) mask = mask.transpose(Image.FLIP_LEFT_RIGHT) crop_size = self.crop_size # random scale (short edge) short_size = random.randint(int(self.base_size * 0.5), int(self.base_size * 2.0)) w, h = img.size if h > w: ow = short_size oh = int(1.0 * h * ow / w) else: oh = short_size ow = int(1.0 * w * oh / h) img = img.resize((ow, oh), Image.BILINEAR) mask = mask.resize((ow, oh), Image.NEAREST) # pad crop if short_size < crop_size: padh = crop_size - oh if oh < crop_size else 0 padw = crop_size - ow if ow < crop_size else 0 img = ImageOps.expand(img, border=(0, 0, padw, padh), fill=0) mask = ImageOps.expand(mask, border=(0, 0, padw, padh), fill=0) # random crop crop_size w, h = img.size x1 = 
random.randint(0, w - crop_size) y1 = random.randint(0, h - crop_size) img = img.crop((x1, y1, x1 + crop_size, y1 + crop_size)) mask = mask.crop((x1, y1, x1 + crop_size, y1 + crop_size)) # gaussian blur as in PSP if random.random() < 0.5: img = img.filter(ImageFilter.GaussianBlur( radius=random.random())) # final transform img, mask = self._img_transform(img), self._mask_transform(mask) return img, mask def _img_transform(self, img): return np.array(img) def _mask_transform(self, mask): target = self._class_to_index(np.array(mask).astype('int32')) return torch.LongTensor(np.array(target).astype('int32')) def __len__(self): return len(self.images) @property def classes(self): """Category names.""" return type(self).CLASSES @property def pred_offset(self): return 1 def _get_ade20k_pairs(folder, mode='training'): def get_path_pairs(img_folder, mask_folder): img_paths = [] mask_paths = [] for root, _, files in os.walk(img_folder): for filename in files: if filename.endswith(".jpg"): imgpath = os.path.join(root, filename) foldername = os.path.basename(os.path.dirname(imgpath)) maskname = filename.replace('images', 'annotations') maskpath = os.path.join(mask_folder, foldername, maskname) if os.path.isfile(imgpath) and os.path.isfile(maskpath): img_paths.append(imgpath) mask_paths.append(maskpath) else: print('cannot find the mask or image:', imgpath, maskpath) print('Found {} images in the folder {}'.format(len(img_paths), img_folder)) return img_paths, mask_paths if split in ('training', 'validation'): img_folder = os.path.join(folder, 'images/' + split) mask_folder = os.path.join(folder, 'annotations/' + split) img_paths, mask_paths = get_path_pairs(img_folder, mask_folder) return img_paths, mask_paths else: assert split == 'trainval' print('trainval set') train_img_folder = os.path.join(folder, 'images/training') train_mask_folder = os.path.join(folder, 'annotations/training') val_img_folder = os.path.join(folder, 'images/validation') val_mask_folder = os.path.join(folder, 'annotations/validation') train_img_paths, train_mask_paths = get_path_pairs(train_img_folder, train_mask_folder) val_img_paths, val_mask_paths = get_path_pairs(val_img_folder, val_mask_folder) img_paths = train_img_paths + val_img_paths mask_paths = train_mask_paths + val_mask_paths return img_paths, mask_paths if __name__ == '__main__': dataset = ADE20KSegmentation() img, label = dataset[0] when I run that it always showing NameError: name '_get_ade20k_pairs' is not defined. whats wrong with my definition? because sometime it can get run and sometime it can't
I suppose you were copying the code from here and you failed to copy _get_ade20k_pairs correctly. You need it indented with 0 tabs.
https://stackoverflow.com/questions/70499881/
Is it possible to execute from the point where the neural network model is interrupted?
Assume that I am training a neural network model. I am storing the tensor file of the neural network model every 15 epochs in .pth format. I need to run 1000 epochs in total. Suppose I stopped my program during the 501st epoch; then I have the following files 15.pth, 30.pth, 45.pth, 60.pth, 75.pth,.... 420.pth, 435.pth, 450.pth, 465.pth, 480.pth, 495.pth My doubt is: is it possible to use the last stored model, 495.pth, and continue execution as would generally happen if done without any interruption? In short, I am asking for something similar to "resumption" of the training phase with a few modifications to the existing code. I am just asking whether such a possibility exists. I am asking about general practice, not about any particular code. If such a method exists, I will be free to stop any program under execution and resume it later. Currently, I cannot use resources for shorter programs while longer programs are executing, and hence I am asking this question.
In order to resume training from a checkpoint, you need to save the entire state of your training process. This includes: Current weights of the model. State of the optimizer: most optimizers keep track of different statistics of the updates, e.g., momentum, variance, etc. State of the learning rate scheduler. Additional "state" variables unique to your code. If you saved all this information, you should be able to fully restore the "state" of your training process and resume from that point.
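A typical checkpointing pattern looks roughly like this (sketch; it assumes the model, optimizer and scheduler objects are constructed the same way as in the original run):
# saving, e.g. every 15 epochs
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'scheduler_state_dict': scheduler.state_dict(),
}, f'checkpoint_{epoch}.pth')

# resuming later
checkpoint = torch.load('checkpoint_495.pth')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
scheduler.load_state_dict(checkpoint['scheduler_state_dict'])
start_epoch = checkpoint['epoch'] + 1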
https://stackoverflow.com/questions/70501155/
Why torch does train using mini-batches?
I'm currently trying to understand how to train models via pytorch. And while doing this I saw a pretty interesting feature: what gets passed in for training --- is a mini-batch. For ex. there is a code fragment from the official pytorch web-site ... data_dir = 'data/hymenoptera_data' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']} ... ... for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) optimizer.zero_grad() with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) ... According to this code, the inputs passed to the model are a mini-batch. I've tried to find some info about this, but unsuccessfully. But I'm really curious: is it some kind of boost (parallel run, etc.) or a necessary thing? So, would you mind helping me figure it out and telling me why a mini-batch is passed to the train function? N.B. Will not refuse a link to a paper [smiley :)].
Imagine it this way: say you want to learn the difference between a dog and a cat, and you have never seen them before. Batching would mean that we show you, say, 10 images of dogs and cats at a time. You can learn some of the differences between cats and dogs rather fast, after say 4x10 images (4 batches), but you'll of course be biased if, e.g., all the dogs you've been shown so far have been large dogs; you might then classify all small dogs as cats. After enough batches you'll learn, and then unlearn, different features since you don't see them all at once, but the important thing is that you start learning something fast. On the other hand, say we show you 100 images instead of 10. It will take much longer for you to look through all the images and compare them to each other, but you'll learn the differences in "one go", so to speak. Either way, once you have processed those images (either as batches or as the entire dataset), I can then show you one image, and you can tell me whether it is a dog or a cat, even though you have learned from multiple images.
https://stackoverflow.com/questions/70501627/
Get the mean of each column for a 2D PyTorch tensor object
I want to know the simplest way to get the mean of a matrix along each column; my tensor only has two dimensions, with shape (m x n). For example, if I have a tensor object T = torch.FloatTensor([[2.6, 5], [1.7, 6], [3.2, 7], [2.1, 8]]) I want some function that returns a tensor object as follows ([2.4, 6.5]) I know I could do it with a for loop, but I wonder if there is a PyTorch built-in method that could do this for me, or some more elegant way for this purpose. for i in range(T.size(1)): mean[i] = torch.mean(T[:, i]) Thanks for any help.
Please read the doc on torch.mean. It has an optional parameter dim: dim (int or tuple of ints) – the dimension or dimensions to reduce. In your case: T.mean(dim=0) Will give you the mean along the first dim. Note that many other "reduction" operations in pytorch (and in numpy) also have this option to reduce along a specific dimension only (e.g., std, sum etc.).
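Applied to the example tensor T from the question:
T.mean(dim=0)   # tensor([2.4000, 6.5000])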
https://stackoverflow.com/questions/70503356/
Pytorch Model Optimization: Automatic Mixed Precision vs Quantization?
I'm trying to optimize my pytorch model. I understand the basics of quantization (changing 32 bit floats to other data types in 16 bit or 8bit), but I'm lost on how the two methods differ or what to choose. I see AMP (Automatic Mixed Precision) https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html and regular Quantization https://pytorch.org/tutorials/recipes/quantization.html. Could someone please explain the difference and applications? Thank you.
Automatic Mixed Precision (AMP)'s main goal is to reduce training time. On the other hand, quantization's goal is to increase inference speed. AMP: Not all layers and operations require the precision of fp32, hence it's better to use lower precision. AMP takes care of what precision to use for what operation. It eventually helps speed up the training. Mixed precision tries to match each op to its appropriate datatype, which can reduce your network’s runtime and memory footprint. Also, note that the max performance gain is observed on Tensor Core-enabled GPU architectures. Quantization converts the 32-bit floating-point numbers in your model parameters to 8-bit integers. This will significantly decrease the model size and increase the inference speed. However, it could severely impact the model's accuracy. That's why you can utilize techniques like Quantization Aware Training (QAT). The rest you can read in the tutorials you shared.
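For orientation, minimal sketches of each (the AMP part assumes a CUDA GPU and an existing model, optimizer, loss_fn and loader; the quantization call here only quantizes nn.Linear layers):
# AMP during training
scaler = torch.cuda.amp.GradScaler()
for x, y in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()

# post-training dynamic quantization for inference
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)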
https://stackoverflow.com/questions/70503585/
How to invert the permutations in each row of a 2D Pytorch tensor without using for loop?
Suppose I have a 2D tensor that contains a permutation in each row, for example a = torch.tensor([[4, 3, 5, 0, 2, 1], [5, 0, 3, 4, 2, 1], [3, 1, 0, 2, 4, 5], [5, 0, 4, 3, 2, 1], [2, 4, 0, 1, 5, 3]]) I want to invert all the permutations in tensor a to get a tensor b. For example, after doing this operation on the above tensor a my desired output should be >>> b tensor([[3, 5, 4, 1, 0, 2], [1, 5, 4, 2, 3, 0], [2, 1, 3, 0, 4, 5], [1, 5, 4, 3, 2, 0], [2, 3, 0, 5, 1, 4]]) I tried to search online and found out this answer. How should I generalize the approach mentioned in that answer to invert all the permutations without using a for loop?
It can be done using torch.Tensor.scatter_ b = torch.zeros_like(a) b.scatter_(1, a, torch.arange(a.size(1)).expand(a.size(0),-1))
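An equivalent one-liner, since taking the argsort of a permutation gives its inverse:
b = torch.argsort(a, dim=1)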
https://stackoverflow.com/questions/70506236/
How to index tensor by using all but given indices in pytorch?
Suppose I have a Tensor a >>> a tensor([[2, 8], [3, 0], [4, 2], [2, 2], [6, 8]]) I also have a tensor idx consisting of indices that I don't want to select >>> idx tensor([0, 3, 4]) How should I go ahead and select all rows except the ones whose indices are present in the tensor idx? For the above tensors a and idx, after selection, I should get the output Tensor b as >>> b tensor([[3, 0], [4, 2]])
a[[i for i in range(len(a)) if i not in idx]]
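An alternative that stays vectorized in torch is to build a boolean mask:
mask = torch.ones(a.size(0), dtype=torch.bool)
mask[idx] = False
b = a[mask]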
https://stackoverflow.com/questions/70508567/
How to dynamically slice array without eval?
I am in a situation where I need to dynamically slice an array of variable shape/ dimension. So I assemble a string to access it. This works fine when accessing the data. some_data = eval("arr[:,[ci]]") #or some_data = eval("arr[:,[ci],:]") #or some_data = eval("arr[:,:,:,[ci]]") #etc. But the limitations of eval() prevent me from assigning using this approach. eval("arr[:,[ci]]") = some_data Is there a way I can slice without eval()?
In Python, the indexing operation for a type is implemented by the __getitem__/__setitem__/__delitem__ magic methods. To understand how indexing syntax is interpreted, and thus how these methods are called, we can simply define our own class: class test: def __getitem__(self, idx): return idx test()[:, [1,2,3], ...] # result: (slice(None, None, None), [1, 2, 3], Ellipsis) This slice type is a built-in type mentioned earlier in the same documentation, with a link to more specific information. Ellipsis is simply the name of the object created by the literal syntax .... It's what Numpy uses to represent the complete extent of any number of axes - so this lets us avoid worrying about the dimensionality of the array we're working with. The important thing to realize is that we can use that same index tuple directly, to get the same result. Thus, given a dimension along which to slice with [ci], we can construct the necessary tuple, and index with it: all = slice(None) def make_index(ci, dimension): return (all,) * dimension + ([ci], Ellipsis) # Now we can do things like: arr[make_index(ci, dimension)] = some_data
https://stackoverflow.com/questions/70510989/
Does using an image transform significantly slow down training?
I see image transforms used quite often by many deep learning researchers. They seem to be treated as if they are free GPU or CPU cycles. Example: transformations = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) train_set = datasets.ImageFolder(data_dir + "/train", transform = transformations) In this specific case would it not be infinitely better to process the images upfront and save them out for future use in some other format? I see this sometimes but extremely rarely. Or am I wrong, and transformers on a GPU are just so fast it's not worth the extra code or hassle?
It really depends on how you set up the dataloader. Generally, the transforms are performed on the CPU, and then the transformed data is moved to the GPU. With num_workers > 0, a PyTorch DataLoader applies the transforms in background worker processes, and its 'prefetch_factor' argument controls how many batches each worker pre-computes ahead of time, so the data preparation runs in parallel with the GPU computing the model. That being said, with fixed transforms like you have here, pre-computing the entire dataset and saving it prior to training could also be a valid strategy.
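For example, a DataLoader set up to overlap the CPU-side transforms with GPU work might look like this (sketch; batch size and worker count are placeholders):
train_loader = torch.utils.data.DataLoader(
    train_set,
    batch_size=64,
    shuffle=True,
    num_workers=4,        # run the transforms in 4 background worker processes
    prefetch_factor=2,    # each worker keeps 2 batches prepared ahead of time
    pin_memory=True,      # speeds up host-to-GPU copies
)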
https://stackoverflow.com/questions/70511065/
Cython_bbox and lap installation error, #include "Python.h" not found
I have encountered these strange errors upon trying to install these 2 libraries (Cython_bbox and lap), which are part of other libraries that I need when running pip install -r requirements.txt, which contains the following yacs opencv-python PyYAML cython-bbox scipy progress motmetrics matplotlib lap openpyxl Pillow tensorboardX fvcore This is found from the following link at :https://github.com/ifzhang/FairMOT This is the big bunch of errors that I got. rc/cython_bbox.c:31:10: fatal error: Python.h: No such file or directory 31 | #include "Python.h" | ^~~~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 ---------------------------------------- ERROR: Failed building wheel for cython-bbox Running setup.py clean for cython-bbox Building wheel for lap (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/aevas/Desktop/pyenvs/fairmotpy39/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-dweujmo4/lap_14d06a4a011f45a69d78f1a214bd5715/setup.py'"'"'; __file__='"'"'/tmp/pip-install-dweujmo4/lap_14d06a4a011f45a69d78f1a214bd5715/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-pzh48d16 cwd: /tmp/pip-install-dweujmo4/lap_14d06a4a011f45a69d78f1a214bd5715/ Complete output (34 lines): Partial import of lap during the build process. Generating cython files running bdist_wheel running build running config_cc running config_fc running build_src SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build_py creating build creating build/lib.linux-x86_64-3.9 creating build/lib.linux-x86_64-3.9/lap copying lap/lapmod.py -> build/lib.linux-x86_64-3.9/lap copying lap/__init__.py -> build/lib.linux-x86_64-3.9/lap running build_ext creating /tmp/tmpijpimqcg/home creating /tmp/tmpijpimqcg/home/aevas creating /tmp/tmpijpimqcg/home/aevas/Desktop creating /tmp/tmpijpimqcg/home/aevas/Desktop/pyenvs creating /tmp/tmpijpimqcg/home/aevas/Desktop/pyenvs/fairmotpy39 creating /tmp/tmpijpimqcg/home/aevas/Desktop/pyenvs/fairmotpy39/lib creating /tmp/tmpijpimqcg/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9 creating /tmp/tmpijpimqcg/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages creating /tmp/tmpijpimqcg/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy creating /tmp/tmpijpimqcg/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy/distutils creating /tmp/tmpijpimqcg/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy/distutils/checks CCompilerOpt.generate_dispatch_header[2281] : dispatch header dir build/src.linux-x86_64-3.9/numpy/distutils/include does not exist, creating it creating build/temp.linux-x86_64-3.9/lap lap/_lapjv.cpp:4:10: fatal error: Python.h: No such file or directory 4 | #include "Python.h" | ^~~~~~~~~~ compilation terminated. error: Command "x86_64-linux-gnu-g++ -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -ffile-prefix-map=/build/python3.9-FZ7wim/python3.9-3.9.5=. 
-fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy/core/include -Ilap -I/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy/core/include -Ibuild/src.linux-x86_64-3.9/numpy/distutils/include -I/home/aevas/Desktop/pyenvs/fairmotpy39/include -I/usr/include/python3.9 -c lap/_lapjv.cpp -o build/temp.linux-x86_64-3.9/lap/_lapjv.o -MMD -MF build/temp.linux-x86_64-3.9/lap/_lapjv.o.d -msse -msse2 -msse3" failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for lap Running setup.py install for lap ... error ERROR: Command errored out with exit status 1: warnings.warn( running build running config_cc running config_fc running build_src running build_py creating build creating build/lib.linux-x86_64-3.9 creating build/lib.linux-x86_64-3.9/lap copying lap/lapmod.py -> build/lib.linux-x86_64-3.9/lap copying lap/__init__.py -> build/lib.linux-x86_64-3.9/lap running build_ext creating /tmp/tmplmbzif7h/home creating /tmp/tmplmbzif7h/home/aevas creating /tmp/tmplmbzif7h/home/aevas/Desktop creating /tmp/tmplmbzif7h/home/aevas/Desktop/pyenvs creating /tmp/tmplmbzif7h/home/aevas/Desktop/pyenvs/fairmotpy39 creating /tmp/tmplmbzif7h/home/aevas/Desktop/pyenvs/fairmotpy39/lib creating /tmp/tmplmbzif7h/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9 creating /tmp/tmplmbzif7h/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages creating /tmp/tmplmbzif7h/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy creating /tmp/tmplmbzif7h/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy/distutils creating /tmp/tmplmbzif7h/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy/distutils/checks CCompilerOpt.generate_dispatch_header[2281] : dispatch header dir build/src.linux-x86_64-3.9/numpy/distutils/include does not exist, creating it creating build/temp.linux-x86_64-3.9/lap lap/_lapjv.cpp:4:10: fatal error: Python.h: No such file or directory 4 | #include "Python.h" | ^~~~~~~~~~ compilation terminated. error: Command "x86_64-linux-gnu-g++ -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -ffile-prefix-map=/build/python3.9-FZ7wim/python3.9-3.9.5=. -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy/core/include -Ilap -I/home/aevas/Desktop/pyenvs/fairmotpy39/lib/python3.9/site-packages/numpy/core/include -Ibuild/src.linux-x86_64-3.9/numpy/distutils/include -I/home/aevas/Desktop/pyenvs/fairmotpy39/include -I/usr/include/python3.9 -c lap/_lapjv.cpp -o build/temp.linux-x86_64-3.9/lap/_lapjv.o -MMD -MF build/temp.linux-x86_64-3.9/lap/_lapjv.o.d -msse -msse2 -msse3" failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: I was installing it in a brand new virtual environment running python3.9. I am not using any conda environment. Here are my system specs, if required 11th Gen Intel® Core™ i7-11850H @ 2.50GHz × 16 llvmpipe (LLVM 12.0.0, 256 bits) / Mesa Intel® UHD Graphics (TGL GT1) Ubuntu 21.04 64-bit Things I've tried looking through stackoverflow.com/search?q=%23include+%22Python.h%22+not+found
Try this : sudo apt install libpython3.9-dev which will install header files like Python.h
https://stackoverflow.com/questions/70516037/
PyTorch training loop within a sklearn pipeline
What I am playing around with right now is to work with PyTorch within a pipeline, where all of the preprocessing will be handled. I am able to make it work. However, the results I am getting are a bit off. The loss function seems to be not decreasing and gets stuck (presumably in local optima?) as the training loop progresses. I follow the standard PyTorch training loop and wrap it inside the fit method as this is what sklearn wants: import torch from sklearn.base import BaseEstimator, TransformerMixin import torch.nn.functional as F from IPython.core.debugger import set_trace # + import pandas as pd import seaborn as sns import numpy as np from tqdm import tqdm import random # - df = sns.load_dataset("tips") df.head() # + class LinearRegressionModel(torch.nn.Module, BaseEstimator, TransformerMixin): def __init__(self, loss_func = torch.nn.MSELoss()): super(LinearRegressionModel, self).__init__() self.linear = torch.nn.Linear(3, 1) # One in and one out self.loss_func = loss_func self.optimizer = torch.optim.SGD(self.parameters(), lr = 0.01) def forward(self, x): y_pred = F.relu(self.linear(x)) return y_pred def fit(self, X, y): # set_trace() X = torch.from_numpy(X.astype(np.float32)) y = torch.from_numpy(y.values.astype(np.float32)) for epoch in tqdm(range(0, 12)): pred_y = self.forward(X) # Compute and print loss loss = self.loss_func(pred_y, X) # Zero gradients, perform a backward pass, # and update the weights. self.optimizer.zero_grad() loss.backward() self.optimizer.step() print('epoch {}, loss {}'.format(epoch, loss.item())) # + from sklearn.pipeline import Pipeline from sklego.preprocessing import PatsyTransformer # - my_model = LinearRegressionModel() pipe = Pipeline([ ("patsy", PatsyTransformer("tip + size")), ("model", my_model) ]) pipe.fit(df, df['total_bill']) It is not only due to the model being to simple. If I use sklearn linear regression estimated via stochastic gradient descent (SGDRegressor) the results seem nice. Therefore, I am concluding that problem is within my PyTorch class # + from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error pipe2 = Pipeline([ ("patsy", PatsyTransformer("tip + C(size) + C(time)")), ("model", LinearRegression()) ]) pipe2.fit(df, df['total_bill']) # - mean_squared_error(df['total_bill'], pipe2.predict(df))
The problem in this implementation is in the fit method: the loss is being computed between the prediction and the design matrix # Compute and print loss loss = self.loss_func(pred_y, X) It should be computed between the prediction and the real value y: loss = self.loss_func(pred_y, y)
https://stackoverflow.com/questions/70518288/
PyTorch - save only positive pictures where pedestrian are detected
I'm a bit stuck on some code. I want to save ONLY POSITIVE IMAGE when one or more pedestrian are detected. When nothing is detected, do nothing. I started by reading : https://github.com/ultralytics/yolov5/issues/36 I wrote this : import torch import os f = [] for dirpath, subdirs, files in os.walk('MyFolderWithPictures'): for x in files: if x.endswith(".jpg"): f.append(os.path.join(dirpath, x)) model = torch.hub.load('ultralytics/yolov5', 'yolov5s') model.conf = 0.25 # NMS confidence threshold model.iou = 0.45 # NMS IoU threshold model.classes = 0 # Only pedestrian model.multi_label = False # NMS multiple labels per box model.max_det = 1000 # maximum number of detections per image img = f # list of pictures results = model(img) results.print() results.save() But this print and save ALL images (positive and negative). I want to save only images with pedestrian. Can you help me ? Thanks in advance. ps : the output give : image 1/13: 1080x1920 1 person image 2/13: 1080x1920 (no detections) image 3/13: 1080x1920 (no detections) image 4/13: 1080x1920 (no detections) image 5/13: 1080x1920 (no detections) image 6/13: 1080x1920 (no detections) image 7/13: 1080x1920 (no detections) image 8/13: 1080x1920 (no detections) image 9/13: 1080x1920 (no detections) image 10/13: 1080x1920 (no detections) image 11/13: 1080x1920 (no detections) image 12/13: 1080x1920 1 person image 13/13: 1080x1920 (no detections) Speed: 18.6ms pre-process, 119.8ms inference, 1.9ms NMS per image at shape (13, 3, 384, 640) Saved 13 images to runs\detect\exp ADD SOLUTION : for item in f: # Images img = item # or file, Path, PIL, OpenCV, numpy, list # Inference results = model(img) # Results results.print() # or .show(), .save(), .crop(), .pandas(), etc. if 0 in results.pandas().xyxy[0]['class']: results.save()
model(img) will always return some kinds of results, even if there are no objects detected. What you need to do is inspect the results and see if it includes the class that you are interested in. The results can easily be converted to a Pandas Dataframe so that you can query them. Here is an example to check if the results contain an instance of class 0 and then save the results if it does. if 0 in results.pandas().xyxy[0]['class']: results.save()
https://stackoverflow.com/questions/70531482/