Background removal with U2Net is too strong
I am successfully using U2Net to remove the background of images from the terminal, and I am also using the nice interface of this repo to do the same thing in an easier way and to validate the similarity of the results. However, my issue is that the background removal is too strong for images like this one, where I get the following result (i.e. the packaging is also removed). If I upload the image to the Foco clipping website and select Type=='Graphic', I get exactly the same result. That means the website is using the same algorithm to remove the background for Graphic-type images. Nevertheless, if I select Type=='Product', the result is the following and is exactly what I want. Does anyone have any idea what to do to obtain the same result?
1. You should sharpen the image first (use a strong sharpen):

from PIL import Image
from PIL import ImageFilter
from PIL.ImageFilter import UnsharpMask

simg = Image.open('data/srcimg07.jpg')
dimg = simg.filter(UnsharpMask(radius=4.5, percent=200, threshold=0))
dimg.save("result/ImageFilter_UnsharpMask_2_150_3.jpg")

2. Use U2Net to remove the background of the sharpened image.
3. Use the result from step (2) as a mask.
4. Using the mask from step (3), extract the wanted result from the original picture.

Note: this is a very quick example, you could refine it more.
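A rough sketch of steps (2)-(4) is below. Here run_u2net() is only a placeholder for however you already invoke U2Net to get the alpha matte; it is not a real API:

from PIL import Image
from PIL.ImageFilter import UnsharpMask

original = Image.open('data/srcimg07.jpg').convert('RGBA')
sharpened = original.convert('RGB').filter(UnsharpMask(radius=4.5, percent=200, threshold=0))

mask = run_u2net(sharpened)                  # step 2: placeholder call, U2Net on the sharpened image
mask = mask.convert('L').resize(original.size)

result = Image.new('RGBA', original.size, (0, 0, 0, 0))
result.paste(original, mask=mask)            # steps 3-4: apply the mask to the original image
result.save('result/masked.png')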
https://stackoverflow.com/questions/69180832/
How can I concatenate an additional input data in Alexnet with the output of the last dropout layer using Pytorch implementation?
Here is the implementation architecture class AlexNet(nn.Module): def __init__(self, num_classes=10): super(AlexNet, self).__init__() #1 self.features= nn.Sequential( nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2), #2 nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2), #3 nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True), #4 nn.Conv2d(384, 384, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True), #5 nn.Conv2d(384, 256, kernel_size=5, stride=1, padding=2), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2), ) self.avgpool= nn.AvgPool2d(6) self.classifier= nn.Sequential( nn.Dropout(), nn.Linear(256*6*6, 4096), #128*2*2, 1024 nn.ReLU(inplace=True), nn.Dropout(),torch.cat((nn.Dropout(),PIs_features)), nn.Linear(4096, num_classes)) def forward(self, x): x= self.features(x) x=x.view(x.size(0), 256*6*6) x= self.classifier(x) return x So I wanna implement say 'y' input data with the output of the last dropout layer 'nn.dropout()' in the self.classifier. Thanks in advance.
You can do so in the forward definition by simply calling torch.cat((x, y), 1) to concatenate the two feature vectors together.

class AlexNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # 1
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # 2
            nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # 3
            nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            # 4
            nn.Conv2d(384, 384, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            # 5
            nn.Conv2d(384, 256, kernel_size=5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2))

        self.avgpool = nn.AvgPool2d(6)
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.LazyLinear(4096),
            nn.ReLU(inplace=True),
            nn.Dropout())
        self.fc = nn.LazyLinear(num_classes)

    def forward(self, x, y):
        x = self.features(x)
        x = self.avgpool(x)
        x = x.flatten(1)
        x = torch.cat((x, y), 1)
        x = self.classifier(x)
        x = self.fc(x)
        return x

Additionally, I have replaced the fully connected nn.Linear layers with nn.LazyLinear, so the input feature sizes are inferred automatically. You can replace them with fixed-size layers if you prefer.
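As a quick sanity check, a forward pass with dummy tensors might look like this. The shapes are only illustrative: 227x227 is an input size that works with these convolution hyper-parameters, and y stands in for whatever extra per-sample feature vector you want to concatenate:

import torch

model = AlexNet(num_classes=5)        # the class defined above
x = torch.randn(2, 3, 227, 227)       # batch of 2 RGB images
y = torch.randn(2, 10)                # hypothetical 10-dim extra features per sample

out = model(x, y)                     # concatenation happens after avgpool/flatten
print(out.shape)                      # torch.Size([2, 5])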
https://stackoverflow.com/questions/69181623/
How to get continuous value from output layer
I have a code (from here) to classify the MINST digits. The code is working fine. Here they used CrossEntropyLoss and Adam optimizer. The model code is given below class CNN(nn.Module):     def __init__(self):         super(CNN, self).__init__()         self.conv1 = nn.Sequential(                     nn.Conv2d(                 in_channels=1,                               out_channels=16,                             kernel_size=5,                               stride=1,                                   padding=2,                               ),                                           nn.ReLU(),                                   nn.MaxPool2d(kernel_size=2),             )         self.conv2 = nn.Sequential(                     nn.Conv2d(16, 32, 5, 1, 2),                 nn.ReLU(),                                   nn.MaxPool2d(2),                         )         # fully connected layer, output 10 classes         self.out = nn.Linear(32 * 7 * 7, 10) # self.softmax = torch.nn.Softmax(dim=1)     def forward(self, x):         x = self.conv1(x)         x = self.conv2(x)         # flatten the output of conv2 to (batch_size, 32 * 7 * 7)         x = x.view(x.size(0), -1)               output = self.out(x) # output = self.softmax(output)         return output, x    # return x for visualization The shape of the `b_x` and `b_y` is torch.Size([100, 1, 28, 28]) torch.Size([100]) Now, I wanted to get continuous value from the output layer. Say, I want the output as alike i.e, 1.0, 0.9, 8.6, 7.0, etc. If the value of the output layer is 1.0 and the label is 1 that means the prediction is perfect. Otherwise, not perfect. More simply, I want to think the MNIST digits as a regression problem. So, I changed the loss function to MSELoss and optimizer to SGD (the rest of the code remains as the same as the website). But now, I am getting an error /home/Opps_0/.local/lib/python3.8/site-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([100])) that is different to the input size (torch.Size([100, 10])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.   return F.mse_loss(input, target, reduction=self.reduction) Traceback (most recent call last):   File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main     return _run_code(code, main_globals, None,   File "/usr/lib/python3.8/runpy.py", line 87, in _run_code     exec(code, run_globals)   File "/home/Opps_0/Desktop/MNIST/src/train.py", line 60, in <module>     train(NB_EPOCS, model, loaders)   File "/home/Opps_0/Desktop/MNIST/src/train.py", line 45, in train     loss = criterion(output, b_y)   File "/home/Opps_0/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl     result = self.forward(*input, **kwargs)   File "/home/Opps_0/.local/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 528, in forward     return F.mse_loss(input, target, reduction=self.reduction)   File "/home/Opps_0/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 2925, in mse_loss     expanded_input, expanded_target = torch.broadcast_tensors(input, target)   File "/home/Opps_0/.local/lib/python3.8/site-packages/torch/functional.py", line 74, in broadcast_tensors     return _VF.broadcast_tensors(tensors)  # type: ignore RuntimeError: The size of tensor a (10) must match the size of tensor b (100) at non-singleton dimension 1 Could you tell me what I have to change to get the continuous value for the output layer?
Assuming your targets are shaped as (batch_size,), something along the lines of:

>>> model = CNN()
>>> criterion = nn.MSELoss()
>>> output, _ = model(torch.rand(5, 1, 28, 28))
>>> b_y = torch.randint(0, 10, (5,))
tensor([1, 2, 6, 5, 7])

Loss computation with MSELoss will result in:

>>> loss = criterion(output, b_y)
RuntimeError: The size of tensor a (10) must match the size of tensor b (5) at non-singleton dimension 1

This means the shape of your target b_y is incorrect: it needs to match output's shape, i.e. it needs to be a two-dimensional tensor. Since you are optimizing this task with a regression loss, you can encode your target as a one-hot vector. You can do so with ease using the builtin torch.nn.functional.one_hot:

>>> ohe_target = torch.nn.functional.one_hot(b_y, num_classes=10)
tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])

Now you can compute the loss properly:

>>> criterion(output, ohe_target.float())
tensor(0.1169, grad_fn=<MseLossBackward>)
https://stackoverflow.com/questions/69184746/
PyTorch: Inconsistent behavior in torch.arange
I'm using Google Colab and when I run the following code: from torch import tensor, arange print( arange(0.0, 1.2, 0.2) ) print( arange(tensor(0.0), tensor(1.2), tensor(0.2)) ) I get the output: tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000]) tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000, 1.2000]) This discrepancy also occurs with arange(0.0,1.1,0.1) but there is no discrepancy with arange(0.0,1.5,0.5). Why does seemingly similar code give different results, and how can I anticipate when this will occur?
This is a numerical precision issue :(

By default, Python stores floating point numbers in double precision (aka float64), while PyTorch uses float32 by default. If you try:

tensor(1.2).item()
tensor(1.2).dtype   # torch.float32

you'll get 1.2000000476837158, and arange with that number will be different from the one with 1.2. In this specific case, if you try:

import torch
from torch import tensor, arange
print(arange(tensor(0.0), tensor(1.2, dtype=torch.float64), tensor(0.2)))

you'll get what you expect; however even float64 will have some precision issue eventually. Anyway, as Python also uses double precision, you won't have this problem converting a scalar back and forth in this case. I'm not sure if you can anticipate it.
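If the goal is simply a predictable number of points, a small sketch using torch.linspace sidesteps the issue entirely, since it takes an explicit element count instead of a step:

import torch

# The result length does not depend on how 1.2 or 0.2 round in float32 vs float64.
print(torch.linspace(0.0, 1.0, steps=6))
# tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000])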
https://stackoverflow.com/questions/69185671/
Why does torch.nn.Upsample return a junk image?
When I execute the code segment below, nn.Upsample seems to be completely destroying my image. Am I applying it in the wrong way? import torch import imageio import torch.nn as nn from matplotlib import pyplot as plt small = imageio.imread('small.png') # shape 200, 390, 4 small_reshaped = small.reshape(4, 200, 390) # shape 4, 200, 390 batch = torch.as_tensor(small_reshaped).unsqueeze(0) # shape 1, 4, 200, 390 ups = nn.Upsample((500, 970)) upsampled_batch = ups(batch) # shape 1, 4, 500, 970 upsampled_small = upsampled_batch[0].reshape(500, 970, 4) # shape 500, 970, 4 plt.imshow(small) plt.imshow(upsampled_small) plt.show() Before upsampling: After upsampling: Original image (small.png):
Resolved it. Reshaping destroys the image. I should have transposed instead. See https://discuss.pytorch.org/t/for-beginners-do-not-use-view-or-reshape-to-swap-dimensions-of-tensors/75524 for more details. A working solution: ... small_reshaped = small.transpose(2, 0, 1) # shape 4, 200, 390 ... upsampled_small = upsampled_batch[0].transpose(0,1).transpose(1,2) # shape 500, 970, 4 ...
https://stackoverflow.com/questions/69190337/
Logistic regression with dropout regularization in PyTorch
I want to implement a logistic regression with dropout regularization but so far the only working example is the following: class logit(nn.Module): def __init__(self, input_dim = 69, output_dim = 1): super(logit, self).__init__() # Input Layer (69) -> 1 self.fc1 = nn.Linear(input_dim, input_dim) self.fc2 = nn.Linear(input_dim, 1) self.dp = nn.Dropout(p = 0.2) # Feed Forward Function def forward(self, x): x = self.fc1(x) x = self.dp(x) x = torch.sigmoid(self.fc2(x)) return x Now the problem of setting dropout in between layers is that at the end I do not have a logistic regression anymore (correct me if I'm wrong). What I would like to do is drop out at the input level.
Actually, you still have a logistic regression with the dropout as it is. The dropout between fc1 and fc2 will drop some (with p=0.2) of the input_dim features produced by fc1, requiring fc2 to be robust to their absence. This fact doesn't change the logit at the output of your model. Moreover, remember that at test time, (usually) the dropout will be disabled. Note that you could also apply dropout at the input level: def forward(self, x): x = self.dp(x) x = self.fc1(x) x = self.dp(x) x = torch.sigmoid(self.fc2(x)) In this case, fc1 would have to be robust to the absence of some of the input features.
https://stackoverflow.com/questions/69192850/
Using Hugging-face transformer with arguments in pipeline
I am working on using a transformers pipeline to get BERT embeddings for my input. Without a pipeline I am able to get constant-size outputs, but not with the pipeline, since I was not able to pass arguments to it. How can I pass transformer-related arguments to my pipeline?

# These are the BERT and tokenizer definitions
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

inputs = ['hello world']

# Normally I would do something like this to initialize the tokenizer and get a result with constant output size
tokens = tokenizer(inputs, padding='max_length', truncation=True, max_length=500, return_tensors="pt")
model(**tokens)[0].detach().numpy().shape

# Using the pipeline
pipeline("feature-extraction", model=model, tokenizer=tokenizer, device=0)

# Or another option
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT", padding='max_length', truncation=True, max_length=500, return_tensors="pt")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
nlp = pipeline("feature-extraction", model=model, tokenizer=tokenizer, device=0)

# To call the pipeline
nlp("hello world")

I have tried several ways, like the options listed above, but was not able to get results with a constant output size. One can achieve a constant output size by setting the tokenizer arguments, but I have no idea how to give those arguments to the pipeline. Any idea?
The max_length tokenization parameter is not supported by default (i.e. no padding to max_length is applied), but you can create your own class and overwrite this behavior:

from transformers import AutoTokenizer, AutoModel
from transformers import FeatureExtractionPipeline
from transformers.tokenization_utils import TruncationStrategy

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

inputs = ['hello world']

class MyFeatureExtractionPipeline(FeatureExtractionPipeline):
    def _parse_and_tokenize(
        self, inputs, max_length, padding=True, add_special_tokens=True,
        truncation=TruncationStrategy.DO_NOT_TRUNCATE, **kwargs
    ):
        """
        Parse arguments and tokenize
        """
        # Parse arguments
        if getattr(self.tokenizer, "pad_token", None) is None:
            padding = False
        inputs = self.tokenizer(
            inputs,
            add_special_tokens=add_special_tokens,
            return_tensors=self.framework,
            padding=padding,
            truncation=truncation,
            max_length=max_length
        )
        return inputs

mynlp = MyFeatureExtractionPipeline(model=model, tokenizer=tokenizer)
o = mynlp("hello world", max_length=500, padding='max_length', truncation=True)

Let us compare the size of the output:

print(len(o))
print(len(o[0]))
print(len(o[0][0]))

Output:

1
500
768

Please note that this will only work with transformers 4.10.X and previous versions. The team is currently refactoring the pipeline classes, and future releases will require different adjustments (i.e. this will not work as soon as the refactored pipelines are released).
https://stackoverflow.com/questions/69196995/
Torchvision ImageFolder "Could not find any class folder"
The code below:

plastic_train_image_folder = torchvision.datasets.ImageFolder(plastic_dir, transform=transforms)

throws the following error:

Could not find any class folder in /Users/username/Documents/Jupyter/archive/Garbage classification/Garbage classification/plastic.

Yet, there are files there: the code below prints 482.

list_plastic = os.listdir(plastic_dir)
number_files_plastic = len(list_plastic)
print(number_files_plastic)

Why is this error happening?
As you can see in the documentation, the ImageFolder class expects images to be within directories, one for each class of interest: A generic data loader where the images are arranged in this way: root/dog/xxx.png root/dog/xxy.png root/dog/xxz.png root/cat/123.png root/cat/nsdf3.png root/cat/asd932_.png Your images are probably in the root directory, which is not the way it is expecting, hence the error.
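As a sketch (the class folder names here are only an assumption about that dataset's layout), pointing ImageFolder at the parent directory that contains one sub-folder per class should work, since each sub-folder name becomes a class label:

import torchvision
from torchvision import transforms

# Hypothetical layout one level above `plastic`:
#   .../Garbage classification/Garbage classification/
#       plastic/xxx.jpg
#       glass/yyy.jpg
#       ...
root = "/Users/username/Documents/Jupyter/archive/Garbage classification/Garbage classification"
dataset = torchvision.datasets.ImageFolder(root, transform=transforms.ToTensor())
print(dataset.classes)   # the folder names become the class labels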
https://stackoverflow.com/questions/69199273/
Send all parameters and objects of a class to same device in PyTorch
I have the following dummy code in PyTorch class inside(nn.Module): def __init__(self): super(inside, self).__init__() self.weight_h = nn.Parameter(SOMETHING GOES HERE) # send to CPU or GPU self.weight_v = nn.Conv2d(SOMETHING GOES HERE) # send to CPU or GPU def forward(self, x): ... return x class main_class(nn.Module): def __init__(self): super(main_class, self).__init__() self.paramA = nn.Conv2d(...) self.paramB = nn.Conv2d(...) self.in_class = inside() def forward(self, x): ... return x device = #Define what GPU device to use or CPU object = main_class() object = object.to(device) Suppose in this code the device is GPU2. Then I know that the parameters self.paramA and self.paramB have definitely been loaded to GPU2 and not on CPU or any other GPU. But what can be said of self.weight_h and self.weight_v? Are they guaranteed to be on GPU2 or do I need to explicitly state this for parameters of inside class? I am using PyTorch 1.8.1 but perhaps suggest a method which is quite general and which will be true for any PyTorch version>=1.0
When you say "this code", the term can be clarified a bit: there are two kinds of things you can put on a GPU. One is the data; you can keep your input tensors and the like on the GPU. The other is the model itself. In this case, when you call object.to(device) on the outer model, all the modules and parameters registered inside of it, as part of that final model, are transferred to the GPU as well. I distinguish these two because it is easy to mix them up. So the final answer is: yes, they are guaranteed to be on GPU2. The weights of the inner `inside` module are part of the larger model, so self.weight_h and self.weight_v move along with it.
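A small sketch to verify this, assuming the main_class and inside modules from the question are fully defined; after .to(device), every registered parameter, including those of the nested module, should report the same device:

import torch

device = torch.device("cuda:2" if torch.cuda.is_available() else "cpu")
model = main_class().to(device)

for name, p in model.named_parameters():
    print(name, p.device)   # the in_class.* parameters move along with the rest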
https://stackoverflow.com/questions/69202259/
torch.cuda.is_available() returns False why?
Has anyone encountered this? I tried updating drivers and reinstalling cuda Cuda Version: 11.4 GPU: GeForce RTX 3060 Laptop(6gb) OS: Windows 10 home torch.version: 1.9.0+cpu
You are using a PyTorch version compiled for CPU, you should install the appropriate version instead: Using conda: conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c conda-forge Using pip: python -m pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
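After reinstalling, a quick sanity check might be:

import torch

print(torch.__version__)           # should end in +cu111, not +cpu
print(torch.version.cuda)          # the CUDA version PyTorch was built against
print(torch.cuda.is_available())   # True if the build and the driver line up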
https://stackoverflow.com/questions/69204737/
Which version of pysyft works with torch 1.8 or 1.9?
Which version of PySyft works with torch 1.8 or 1.9? I have tried to install PySyft using the following commands:

!pip install syft
!pip install syft_proto

This installs an old version of torch. I want PySyft to install the latest version of torch.
The current latest version of syft declares compatibility with torch <= 1.8.1. Just install the latest syft: !pip install --upgrade --upgrade-strategy=eager syft
https://stackoverflow.com/questions/69208382/
Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead
This is my second question on this problem. Initially, I had the AttributeError error: 'numpy.ndarray' object has no attribute 'log'. Then U12-Forward helped me solve it. But a new problem has arisen. import torch import numpy as np import matplotlib.pyplot as plt x = torch.tensor([[5., 10.], [1., 2.]], requires_grad=True) var_history = [] fn_history = [] alpha = 0.001 optimizer = torch.optim.SGD([x], lr=alpha) def function_parabola(variable): return np.prod(np.log(np.log(variable + 7))) def make_gradient_step(function, variable): function_result = function(variable) function_result.backward() optimizer.step() optimizer.zero_grad() for i in range(500): var_history.append(x.data.numpy().copy()) fn_history.append(function_parabola(x).data.cpu().detach().numpy().copy()) make_gradient_step(function_parabola, x) print(x) def show_contours(objective, x_lims=[-10.0, 10.0], y_lims=[-10.0, 10.0], x_ticks=100, y_ticks=100): x_step = (x_lims[1] - x_lims[0]) / x_ticks y_step = (y_lims[1] - y_lims[0]) / y_ticks X, Y = np.mgrid[x_lims[0]:x_lims[1]:x_step, y_lims[0]:y_lims[1]:y_step] res = [] for x_index in range(X.shape[0]): res.append([]) for y_index in range(X.shape[1]): x_val = X[x_index, y_index] y_val = Y[x_index, y_index] res[-1].append(objective(np.array([[x_val, y_val]]).T)) res = np.array(res) plt.figure(figsize=(7,7)) plt.contour(X, Y, res, 100) plt.xlabel('$x_1$') plt.ylabel('$x_2$') show_contours(function_parabola) plt.scatter(np.array(var_history)[:,0], np.array(var_history)[:,1], s=10, c='r'); plt.show()
Modify function_parabola() to operate on PyTorch tensors and leverage PyTorch equivalents of the original numpy operations, like so: import torch import numpy as np import matplotlib.pyplot as plt x = torch.tensor([[5., 10.], [1., 2.]], requires_grad=True) var_history = [] fn_history = [] alpha = 0.001 optimizer = torch.optim.SGD([x], lr=alpha) def function_parabola(variable): return (torch.prod(torch.log(torch.log(torch.as_tensor(variable + 7))))) def make_gradient_step(function, variable): function_result = function(variable) function_result.backward() optimizer.step() optimizer.zero_grad() for i in range(500): var_history.append(x.data.numpy().copy()) fn_history.append(function_parabola(x).data.cpu().detach().numpy()) make_gradient_step(function_parabola, x) print(x) def show_contours(objective, x_lims=[-10.0, 10.0], y_lims=[-10.0, 10.0], x_ticks=100, y_ticks=100): x_step = (x_lims[1] - x_lims[0]) / x_ticks y_step = (y_lims[1] - y_lims[0]) / y_ticks X, Y = np.mgrid[x_lims[0]:x_lims[1]:x_step, y_lims[0]:y_lims[1]:y_step] res = [] for x_index in range(X.shape[0]): res.append([]) for y_index in range(X.shape[1]): x_val = X[x_index, y_index] y_val = Y[x_index, y_index] res[-1].append(objective(np.array([[x_val, y_val]]).T)) res = np.array(res) plt.figure(figsize=(7,7)) plt.contour(X, Y, res, 100) plt.xlabel('$x_1$') plt.ylabel('$x_2$') show_contours(function_parabola) plt.scatter(np.array(var_history)[:,0], np.array(var_history)[:,1], s=10, c='r'); plt.show()
https://stackoverflow.com/questions/69209038/
What is equivalent to pytorch lstm num_layers?
I'm a beginner in PyTorch. From the lstm description, I learned that I can create a stacked lstm with 3 layers by: layer = torch.nn.LSTM(128, 512, num_layers=3) Then in the forward function, I can do: def forward(x, state): x, state = layer(x, state) return x, (state[0].detach(), state[1].detach()) And I can pass state from batch to batch. But if I create 3 lstm layers, what is the equivalent to that if I want to implement the same stacked layers myself? layer1 = torch.nn.LSTM(128, 512, num_layers=1) layer2 = torch.nn.LSTM(128, 512, num_layers=1) layer3 = torch.nn.LSTM(128, 512, num_layers=1) In this case, what should go into the forward function and get the returned state? I also tried to look at the source code of pytorch lstm, but in the forward function it calls a _VF module which I cannot find where it is defined.
If you define state as a list of the 3 layers' states, then:

def forward(x, state):
    x, s0 = layer1(x, state[0])
    x, s1 = layer2(x, state[1])
    x, s2 = layer3(x, state[2])
    # each state is an (h, c) tuple, so detach both tensors
    return x, [(s0[0].detach(), s0[1].detach()),
               (s1[0].detach(), s1[1].detach()),
               (s2[0].detach(), s2[1].detach())]
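A runnable sketch of the same idea is below. Note that, since layer2 and layer3 consume layer1's output, their input size has to be 512 (the hidden size), not 128; the sequence and batch sizes here are only illustrative:

import torch

layer1 = torch.nn.LSTM(128, 512, num_layers=1)
layer2 = torch.nn.LSTM(512, 512, num_layers=1)   # input is layer1's output size
layer3 = torch.nn.LSTM(512, 512, num_layers=1)

seq_len, batch = 7, 4
x = torch.randn(seq_len, batch, 128)
# one (h, c) tuple per layer
state = [(torch.zeros(1, batch, 512), torch.zeros(1, batch, 512)) for _ in range(3)]

x, s0 = layer1(x, state[0])
x, s1 = layer2(x, state[1])
x, s2 = layer3(x, state[2])
print(x.shape)   # torch.Size([7, 4, 512])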
https://stackoverflow.com/questions/69209674/
Timing Neural Network Inference Standards
I need to measure neural network inference times for a project. I want my results presented to be aligned with the standard practices for measuring this in academic papers. What I have managed to figure out is that we first want to warm up the GPU with a few inferences before the timing, and I need to use the torch provided timing feature (instead of Python's time.time()). My questions are as follows: Is it standard to time with a batch size of 1, or with the best batch size for that hardware? Am I only timing the neural network inference, or am I also timing the moving of data to the GPU, as well as data transformations that precede inference? How many iterations would be reasonable to time to get a good average inference time? Any advice would be greatly appreciated. Thank you.
1. If you're concerned with inference time, batch size should be something to optimize for in the first place. Not all operations in a NN will be affected in the same way by a change in batch size (you could see no change thanks to parallelization, or a linear change if all kernels are busy, for instance). If you need to compare between models, I'd optimize per model. If you don't want to do that, then I'd use the train-time batch size. I think it would be unlikely that in production you'd have a batch size of 1, except if it does not fit in memory.

2. You should time both. If you're comparing models, data loading and transforms should not impact your decision, but in a production environment they will matter a lot. So report both numbers; in some settings, scaling up the data loading or the model may be easier than the other.

3. I would say around 100. It's just a rule of thumb. You want your numbers to be statistically significant. You can also report the std in addition to the average, or even plot the distribution (percentiles, histograms, or else).

You can also compare performance loss vs inference time gain when using half-precision float types for your data and model weights.
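For reference, a minimal timing sketch along these lines, using torch.cuda.Event with a warm-up phase. The model and input here are only stand-ins for your own network and data, and batch size 1 is used purely for illustration (see the batch-size discussion above):

import torch

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).cuda().eval()
x = torch.randn(1, 3, 224, 224).cuda()

with torch.no_grad():
    for _ in range(10):                  # warm-up iterations
        model(x)
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times_ms = []
    for _ in range(100):                 # ~100 timed runs, as suggested above
        start.record()
        model(x)
        end.record()
        torch.cuda.synchronize()
        times_ms.append(start.elapsed_time(end))   # milliseconds

times_ms = torch.tensor(times_ms)
print(f"mean {times_ms.mean():.3f} ms, std {times_ms.std():.3f} ms")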
https://stackoverflow.com/questions/69217453/
What is the Best way to define Adam Optimizer in PyTorch?
For most PyTorch codes we use the following definition of Adam optimizer, optim = torch.optim.Adam(model.parameters(), lr=cfg['lr'], weight_decay=cfg['weight_decay']) However, after repeated trials, I found that the following definition of Adam gives 1.5 dB higher PSNR which is huge. optim = torch.optim.Adam( [ {'params': get_parameters(model, bias=False)}, {'params': get_parameters(model, bias=True), 'lr': cfg['lr'] * 2, 'weight_decay': 0}, ], lr=cfg['lr'], weight_decay=cfg['weight_decay']) The Model is a usual U-net with parameters defined in init and forward action as in any other PyTorch model. The get_parameters is defined as below. def get_parameters(model, bias=False): for k, m in model._modules.items(): print("get_parameters", k, type(m), type(m).__name__, bias) if bias: if isinstance(m, nn.Conv2d): yield m.bias else: if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d): yield m.weight Could someone explain why the latter definition is better than the previous one?
In the second method, different configurations are being provided to update the weights and the biases. This is done using per-parameter options for the optimizer.

optim = torch.optim.Adam(
    [
        {'params': get_parameters(model, bias=False)},
        {'params': get_parameters(model, bias=True), 'lr': cfg['lr'] * 2, 'weight_decay': 0},
    ],
    lr=cfg['lr'], weight_decay=cfg['weight_decay'])

As per this, the learning rate for biases is 2 times that of weights, and their weight decay is 0. Now, the reason this is being done could be that the network was not learning properly otherwise. Read more: Why is the learning rate for the bias usually twice as large as the LR for the weights?
https://stackoverflow.com/questions/69217682/
Default input dimensions for an image in Neural networks?
Reading a PyTorch book, I came across this code where the authors change the order of the axis. img_t.permute(1, 2, 0) (Changes the order of the axes from C × H × W to H × W × C) Is H x W x C the default input dimensions for an input image in Neural networks?
In PyTorch, inputs are batches of shape N x C x H x W. N is the batch size, C is the number of image channels, and H and W are the height and width, as you know. But when you work with, for instance, cv2, the default shape of an image is H x W x C, so you need to swap the dimensions for PyTorch.
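For instance, converting an OpenCV-style array to the layout PyTorch expects could look like:

import torch

hwc = torch.randint(0, 256, (200, 390, 3), dtype=torch.uint8)   # H x W x C, as from cv2
chw = hwc.permute(2, 0, 1)          # C x H x W
batch = chw.unsqueeze(0)            # N x C x H x W, with N = 1
print(batch.shape)                  # torch.Size([1, 3, 200, 390])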
https://stackoverflow.com/questions/69218349/
Use of dim=0/1 in pytorch and nn.softmax?
When using nn.softmax(), we use dim=1 or 0. Here dim=0 should mean row according to intuition but seems it means along the column. Is this true? >>> x = torch.tensor([[1,2],[3,4]],dtype=torch.float) >>> F.softmax(x,dim=0) tensor([[0.1192, 0.1192], [0.8808, 0.8808]]) >>> F.softmax(x,dim=1) tensor([[0.2689, 0.7311], [0.2689, 0.7311]]) Here when dim=0, probabilities along the columns sum to 1. Similarly when dim=1 probabilities along the rows sum to 1. Can someone explain how dim is used in PyTorch?
Indeed, in the 2D case, row refers to axis=0, while column refers to axis=1. The dim option specifies along which dimension the softmax is applied, i.e. summing back over that same axis will give 1s:

>>> x = torch.arange(1, 7, dtype=float).reshape(2, 3)
tensor([[1., 2., 3.],
        [4., 5., 6.]], dtype=torch.float64)

On axis=0:

>>> F.softmax(x, dim=0).sum(0)
tensor([1.0000, 1.0000, 1.0000], dtype=torch.float64)

On axis=1:

>>> F.softmax(x, dim=1).sum(1)
tensor([1.0000, 1.0000], dtype=torch.float64)

This is the expected behavior for torch.nn.functional.softmax:

Parameters: dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
https://stackoverflow.com/questions/69222705/
Building a Neural Network for Binary Classification on Top of Pre-Trained Embeddings Not Working?
I am trying to build a Neural Network on top of the embeddings that a pre-trained model outputs. In specific: I have the logits of a base model saved to disk where each example is an array of shape 512 (which originally corresponds to an image) with an associated label (0 or 1). This is what I am doing right now: Here's the model definition and training loop that I have. Right now it is a simple Linear layer, just to make sure that it works, however, when I run this script, the loss starts at .4 and not ~.7 which is the standard for binary classification. Can anyone spot where I am going wrong? from transformers.modeling_outputs import SequenceClassifierOutput class ClassNet(nn.Module): def __init__(self, num_labels=2): super(ClassNet, self).__init__() self.num_labels = num_labels self.classifier = nn.Linear(512, num_labels) if num_labels > 0 else nn.Identity() def forward(self, inputs): logits = self.classifier(inputs) loss_fct = nn.CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) return SequenceClassifierOutput( loss=loss, logits=logits ) model = ClassNet() optimizer = optim.Adam(model.parameters(), lr=1e-4,weight_decay=5e-3) # L2 regularization loss_fct=nn.CrossEntropyLoss() for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(train_loader, 0): # get the inputs; data is a list of [inputs, labels] #data['embeddings'] -> torch.Size([1, 512]) #data['labels'] -> torch.Size([1]) inputs, labels = data['embeddings'], data['labels'] # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = model(inputs) loss = loss_fct(outputs.logits.squeeze(1), labels.squeeze()) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 an example of printing the outputs.logits.squeeze(1) and labels.squeeze(): #outputs.logits.squeeze(1) tensor([[-0.2214, 0.2187], [ 0.3838, -0.3608], [ 0.9043, -0.9065], [-0.3324, 0.4836], [ 0.6775, -0.5908], [-0.8017, 0.9044], [ 0.6669, -0.6488], [ 0.4253, -0.5357], [-1.1670, 1.1966], [-0.0630, -0.1150], [ 0.6025, -0.4755], [ 1.8047, -1.7424], [-1.5618, 1.5331], [ 0.0802, -0.3321], [-0.2813, 0.1259], [ 1.3357, -1.2737]], grad_fn=<SqueezeBackward1>) #labels.squeeze() tensor([1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0]) #loss tensor(0.4512, grad_fn=<NllLossBackward>)
You are only printing from the second iteration onwards. Since i starts at 0, the condition below effectively prints at steps 2000k + 1 (i.e. i = 1, 2001, 4001, ...):

if i % 2000 == 1:    # print every 2000 mini-batches
    print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))

In other words, one gradient descent step has already occurred by the time you first print. This might be enough to go from the initial loss value -log(1/2) ≈ 0.69 to the one you observed, ~0.45.
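For reference, a tiny illustration of where the ~0.69 comes from, with purely synthetic numbers:

import torch
import torch.nn as nn

# An untrained binary classifier that outputs uninformative (all-zero) logits
# gives exactly -log(1/2):
logits = torch.zeros(16, 2)
labels = torch.randint(0, 2, (16,))
print(nn.CrossEntropyLoss()(logits, labels))   # tensor(0.6931)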
https://stackoverflow.com/questions/69223955/
Difference In the output image when using traced model(.pt) with C++ and OpenCV
I have retrained the model based on EnlightenGAN. Further I have traced the model in order to execute it in a C++ application using libTorch v1.6. However, I am getting slightly different results as compared to the python(executing the traced model) version. The model requires the input RGB tensor and the attention map Image tensor as input. The attention map is basically to inform the model about the image region which requires contrast enhancement. Below is the code to get inference the output from PT model in python. def getTransform(): transform_list = [] transform_list += [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))] return transforms.Compose(transform_list) def convertToCV(tensor): tensor = torch.squeeze(tensor) tensor = tensor.cpu().float().detach() tensor = torch.unsqueeze(tensor, 0) tensor = tensor.permute(1, 2, 0) tensor = ((tensor +1)/2.0) * 255.0 tensor = tensor.numpy() return tensor def proprocess(image): transform = getTransform() trgbImage = transform(image) r,g,b = trgbImage[0]+1, trgbImage[1]+1, trgbImage[2]+1 tattentionImage = 1. - ((0.299*r+0.587*g+0.114*b)/2.) tattentionImage = torch.unsqueeze(tattentionImage, 0) trgbImage = torch.unsqueeze(trgbImage, 0) tattentionImage = torch.unsqueeze(tattentionImage, 0) return trgbImage, tattentionImage def run(inputPath, OutputPath): modelToLoad = torch.jit.load("./EGAN.pt") print("OK") count =0 for filename in os.listdir(inputPath): print("Processing Image : ", filename) inputImage = cv2.imread(os.path.join(inputPath,filename)) rgbImage, attentionImage = proprocess(inputImage) fake, real = modelToLoad.forward(rgbImage,attentionImage ) fake_B = convertToCV(fake) fake_B1 = cv2.cvtColor(fake_B, cv2.COLOR_RGB2BGR) cv2.imwrite(OutputPath + "pic1.png" , fake_B ) The C++ version for the inference code is below #define A 0.299 #define B 0.5870 #define C 0.114 cv::Mat torchTensortoCVMat1C(torch::Tensor& tensor) { tensor = tensor.squeeze(0); tensor = tensor.to(torch::kCPU).to(torch::kFloat32).detach(); tensor = tensor.permute({ 1, 2, 0 }).contiguous(); tensor = tensor.mul(0.5).add(0.5).mul(255.0); tensor = tensor.to(torch::kU8); int64_t height = tensor.size(0); int64_t width = tensor.size(1); cv::Mat mat = cv::Mat(cv::Size(width, height), CV_8UC3, tensor.data_ptr<uchar>()); return mat.clone(); } std::vector<torch::jit::IValue> CV2Tensor(const cv::Mat& cv_Image) { torch::Tensor tInputImage = (torch::from_blob(cv_Image.data, { cv_Image.rows, cv_Image.cols, cv_Image.channels() }, torch::kByte)); tInputImage = tInputImage.to(torch::kFloat).div(255); tInputImage = tInputImage.sub(0.5).div(0.5).permute({ 2, 0, 1 }); torch::Tensor red = tInputImage[0] + 1 ; torch::Tensor green = tInputImage[1] + 1 ; torch::Tensor blue = tInputImage[2] + 1; red = red.mul(A); green = green.mul(B); blue = blue.mul(C); torch::Tensor channelSum = red.add(green).add(blue); channelSum = channelSum.div(2.); torch::Tensor tGrayImage = 1. 
- channelSum; tGrayImage.unsqueeze_(0); tGrayImage.unsqueeze_(0); tInputImage.unsqueeze_(0); std::vector<torch::jit::IValue> input; input.push_back(tInputImage); input.push_back(tGrayImage); return input; } void enhanceImage(const std::string& Img, torch::jit::script::Module& network,const std::string& outputPath, std::string& fileName) { cv::Mat rgbImage; cv::Mat inputImage = cv::imread(Img); std::vector<torch::jit::IValue> input = CV2Tensor(inputImage); try { auto outputs = network.forward(input).toTuple(); torch::Tensor resultFake = outputs->elements()[0].toTensor(); cv::Mat output1 = torchTensortoCVMat(resultFake); cv::imshow("out1.png", output1); cv::waitKey(0); } catch (std::exception& e) { std::cout << e.what() << std::endl; } } I have also checked the tensor output at all the steps, and they are same. However, after the conversion the output image has color flowing out from brighter regions of input image as show below. Python Version C++ Version I have tried many attempt but I am totally puzzeled as to how I should solve the problem. Any help is most welcome. Thanks. PS : Let me know if more info is required.
I found the problem in the above implementation. In the C++ version, I was not clamping the values after de-normalisation. After adding a clamp it works as expected. The edit, in case anyone stumbles on the same problem, is below:

tensor = tensor.mul(0.5).add(0.5).mul(255.0);
-->
tensor = tensor.mul(0.5).add(0.5).mul(255.0).clamp(0, 255);

Without clamping, the conversion was overflowing in the brighter regions of the image.
https://stackoverflow.com/questions/69225244/
NumPy + PyTorch Tensor assignment
Let's assume we have a tensor representing an image of shape (910, 270, 1), which assigns a number (some index) to each pixel, with width=910 and height=270. We also have a numpy array of size (N, 3) which maps an index to a 3-tuple. I now want to create a new numpy array of shape (920, 270, 3) which has a 3-tuple per pixel, based on the original tensor's index and the index-to-3-tuple numpy array. How do I do this assignment without for loops and other consuming iterations? This would look something like:

color_image = np.zeros((self._w, self._h, 3), dtype=np.int32)
self._colors = np.array(N, 3)  # this is already present
indexed_image = torch.tensor(920, 270, 1)  # this is already present

# how do I assign it to this numpy array?
color_image[indexed_image.w, indexed_image.h] = self._colors[indexed_image.flatten()]
Assuming you have _colors and indexed_image, something resembling:

>>> indexed_image = torch.randint(0, 10, (920, 270, 1))
>>> _colors = np.random.randint(0, 255, (N, 3))

A common way of converting a dense map to an RGB map is to loop over the label set:

>>> _colors = torch.FloatTensor(_colors)
>>> rgb = torch.zeros(indexed_image.shape[:-1] + (3,))
>>> for lbl in range(N):
...     rgb[lbl == indexed_image[..., 0]] = _colors[lbl]
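If the pixel values are exactly row indices into _colors, the loop can also be replaced by a single advanced-indexing lookup; a sketch, assuming all indices lie in [0, N):

import torch

N = 10
indexed_image = torch.randint(0, N, (920, 270, 1))
_colors = torch.randint(0, 255, (N, 3), dtype=torch.float32)

# Advanced indexing broadcasts the (920, 270) index map over the rows of _colors:
rgb = _colors[indexed_image[..., 0]]
print(rgb.shape)   # torch.Size([920, 270, 3])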
https://stackoverflow.com/questions/69225294/
How does one install torchtext with cuda >=11.0 (and pytorch 1.9)?
I've tried multiple things e.g. conda install -y pytorch==1.9 torchvision torchaudio torchtext cudatoolkit=11.0 -c pytorch -c nvidia but it never ends up downloading the version with cuda 11.0 or above for some reason. The error message is too large to paste but you can see details here: https://github.com/pytorch/text/issues/1395 It should be easy to reproduce with an empty env as follow: conda create -n env_a40 python=3.9 conda activate env_a40 conda install -y pytorch==1.9 torchvision torchaudio torchtext cudatoolkit=11.0 -c pytorch -c nvidia crossposted: https://discuss.pytorch.org/t/how-does-one-install-a-torchtext-version-compatible-with-cuda-11-0/132276 https://github.com/pytorch/text/issues/1395 related: How does one install pytorch 1.9 in an HPC that seems to refuse to cooperate? https://github.com/pytorch/text/issues/1397 note you can also try it with pip: pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html with no success yet.
For me it worked with torchtext 0.10.1. The order of how I did things is install pytorch first with: pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html (probably use the most recent command from the pytorch website https://pytorch.org/get-started/locally/, but if that doesn't work then go to the torchtext website to see what versions of python and pytorch they support and install that. Hopefully in the future torchtext will be in line with the main pytorch branch https://github.com/pytorch/text) Then since I was using my personal library I installe it in editable mode: pip install -e ~/ultimate-utils/ultimate-utils-proj-src or from pypi pip install ultimate-utils then go to python to test the pytorch version: (uutils_env) miranda9~/type-parametric-synthesis $ python Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torchtext >>> >>> from torchtext.vocab import Vocab >>> it knows to install the right version due to my setup.py file. But you can install the right version as follows: pip install torchtext==0.10.1 In the future the above versions might change and you might have to open an issue in torchtext's github. Note: If you are using pip do: pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html pip3 install torchtext==0.10.1 probably can be compressed to: pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 torchtext==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html but have not tried it. Acks: In particular thanks @ivan for the help!
https://stackoverflow.com/questions/69229975/
How does one install pytorch 1.9 in an HPC that seems to refuse to cooperate?
I've been trying to install PyTorch 1.9 with Cuda (ideally 11) on my HPC but I cannot. The cluster says: Package typing-extensions conflicts for: typing-extensions torchvision -> pytorch==1.8.1 -> typing-extensionsThe following specifications were found to be incompatible with your system: - feature:/linux-64::__glibc==2.17=0 - feature:|@/linux-64::__glibc==2.17=0 - cffi -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - cudatoolkit=11.0 -> __glibc[version='>=2.17,<3.0.a0'] - cudatoolkit=11.0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - freetype -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - jpeg -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - lcms2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - libffi -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - libgcc-ng -> __glibc[version='>=2.17'] - libmklml -> libgcc-ng -> __glibc[version='>=2.17'] - libpng -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - libstdcxx-ng -> __glibc[version='>=2.17'] - libtiff -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - libwebp-base -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - lz4-c -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - mkl-service -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - mkl_fft -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - mkl_random -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - ncurses -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - ninja -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - numpy -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - numpy-base -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - openjpeg -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - openssl -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - pillow -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - python=3.9 -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - pytorch==1.9 -> cudatoolkit[version='>=11.1,<11.2'] -> __glibc[version='>=2.17,<3.0.a0'] - readline -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - sqlite -> libgcc-ng[version='>=7.5.0'] -> __glibc[version='>=2.17'] - tk -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - torchvision -> cudatoolkit[version='>=11.1,<11.2'] -> __glibc[version='>=2.17|>=2.17,<3.0.a0'] - xz -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - zlib -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] - zstd -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17'] Your installed version is: 2.17 but I don't understand how to use that info to install it. Is it something I can do for the system admins? When I try to install it with conda, I get a message telling me that it's already installed. However, a conda list greps shows the version is only CPU, not GPU: (metalearning_gpu) miranda9~/automl-meta-learning $ conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia Collecting package metadata (current_repodata.json): done Solving environment: done # All requested packages already installed. 
(metalearning_gpu) miranda9~/automl-meta-learning $ (metalearning_gpu) miranda9~/automl-meta-learning $ conda list | grep torch cpuonly 1.0 0 pytorch ffmpeg 4.3 hf484d3e_0 pytorch pytorch 1.9.0 py3.9_cpu_0 [cpuonly] pytorch torch 1.9.0+cpu pypi_0 pypi torchaudio 0.9.0 pypi_0 pypi torchmeta 1.7.0 pypi_0 pypi torchvision 0.10.0+cpu pypi_0 pypi Attempting to install it with pip completely fails: (metalearning_gpu) miranda9~/automl-meta-learning $ pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html Looking in links: https://download.pytorch.org/whl/torch_stable.html Collecting torch==1.9.0+cu111 ERROR: Exception: Traceback (most recent call last): File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 173, in _main status = self.run(options, args) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 203, in wrapper return func(self, options, args) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/commands/install.py", line 315, in run requirement_set = resolver.resolve( File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 94, in resolve result = self._result = resolver.resolve( File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 472, in resolve state = resolution.resolve(requirements, max_rounds=max_rounds) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 341, in resolve self._add_to_criteria(self.state.criteria, r, parent=None) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria if not criterion.candidates: File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__ return bool(self._sequence) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 140, in __bool__ return any(self) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 128, in <genexpr> return (c for c in iterator if id(c) not in self._incompatible_ids) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 32, in _iter_built candidate = func() File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 204, in _make_candidate_from_link self._link_candidate_cache[link] = LinkCandidate( File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 295, in __init__ super().__init__( File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__ self.dist = self._prepare() File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 227, in _prepare dist = self._prepare_distribution() 
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 305, in _prepare_distribution return self._factory.preparer.prepare_linked_requirement( File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 508, in prepare_linked_requirement return self._prepare_linked_requirement(req, parallel_builds) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 550, in _prepare_linked_requirement local_file = unpack_url( File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 239, in unpack_url file = get_http_url( File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 102, in get_http_url from_path, content_type = download(link, temp_dir.path) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/network/download.py", line 132, in __call__ resp = _http_get_download(self._session, link) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/network/download.py", line 115, in _http_get_download resp = session.get(target_url, headers=HEADERS, stream=True) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/requests/sessions.py", line 555, in get return self.request('GET', url, **kwargs) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/network/session.py", line 454, in request return super().request(method, url, *args, **kwargs) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/cachecontrol/adapter.py", line 44, in send cached_response = self.controller.cached_request(request) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_vendor/cachecontrol/controller.py", line 139, in cached_request cache_data = self.cache.get(cache_url) File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/pip/_internal/network/cache.py", line 54, in get return f.read() MemoryError Current install script: ## Installation script # to install do: bash ~/automl-meta-learning/install.sh #conda update conda #conda create -y -n metalearning_gpu python=3.9 #conda activate metalearning_gpu #conda remove --name metalearning_gpu --all module load cuda-toolkit/11.1 module load gcc/9.2.0 # A40, needs cuda at least 11.0, but 1.9 requires 11 conda activate metalearning_gpu conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html #conda activate metalearning_cpu #conda install pytorch torchvision torchaudio cpuonly -c pytorch #pip3 install torch==1.9.0+cpu torchvision==0.10.0+cpu torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html # uutils installs conda install -y dill conda install -y networkx>=2.5 
conda install -y scipy conda install -y scikit-learn conda install -y lark-parser -c conda-forge # due to compatibility with torch=1.7.x, https://stackoverflow.com/questions/65575871/torchtext-importerror-in-colab #conda install -y torchtext==0.8.0 -c pytorch conda install -y tensorboard conda install -y pandas conda install -y progressbar2 conda install -y transformers conda install -y requests conda install -y aiohttp conda install -y numpy conda install -y plotly conda install -y matplotlib pip install wandb # for automl conda install -y pyyml conda install -y torchviz #conda install -y graphviz #pip install tensorflow #pip install learn2learn #pip install -U git+https://github.com/brando90/pytorch-meta.git #pip install --no-deps torchmeta==1.6.1 pip install --no-deps torchmeta==1.7.0 # 'torch>=1.4.0,<1.9.0', # 'torchvision>=0.5.0,<0.10.0', #pip install -y numpy pip install Pillow pip install h5py #pip install requests pip install ordered-set pip install higher # 'torch' #pip install -U git+https://github.com/moskomule/anatome pip install --no-deps -U git+https://github.com/moskomule/anatome # 'torch>=1.9.0', # 'torchvision>=0.10.0', pip install tqdm # - using conda develop rather than pip because uutils installs incompatible versions with the vision cluster ## python -c "import sys; [print(p) for p in sys.path]" conda install conda-build # conda develop ~/ultimate-utils/ultimate-utils-proj-src # conda develop ~/automl-meta-learning/automl-proj-src # pip install ultimate-utils # -- extra notes # local editable installs # HAL installs, make sure to clone from wmlce 1.7.0 that has h5py ~= 2.9.0 and torch 1.3.1 and torchvision 0.4.2 # pip install torchmeta==1.3.1
First of all, as @Francois suggested, try to uninstall the CPU only version of pytorch. Also in your installation script, you should use either conda or pip3. Then you may want to try the following attempts: using conda: add conda-forge channel to your command (conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia -c conda-forge). And make sure conda is updated. using pip: insert --no-cache-dir into your command (pip3 --no-cache-dir install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html) to avoid the MemoryError.
https://stackoverflow.com/questions/69230502/
How to do batched dot product in PyTorch?
I have a input tensor that is of size [B, N, 3] and I have a test tensor of size [N, 3] . I want to apply a dot product of the two tensors such that I get [B, N] basically. Is this actually possible?
Yes, it's possible: a = torch.randn(5, 4, 3) b = torch.randn(4, 3) c = torch.einsum('ijk,jk->ij', a, b) # torch.Size([5, 4])
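Equivalently, a sketch using plain broadcasting instead of einsum:

import torch

a = torch.randn(5, 4, 3)    # [B, N, 3]
b = torch.randn(4, 3)       # [N, 3]

c1 = torch.einsum('ijk,jk->ij', a, b)
c2 = (a * b).sum(dim=-1)    # b broadcasts over the batch dimension

print(c1.shape)                  # torch.Size([5, 4])
print(torch.allclose(c1, c2))    # True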
https://stackoverflow.com/questions/69230570/
How to train a model with loss calculated by another model in pytorch?
There are two models A and B. Model A outputs a bike deployment plan for all stations in a city, and model B takes this plan as input and gives the evaluation of each station. Now, the model B is pretrained, and i want to use the evaluation given by model B as loss to optimize parameters of model A. Here is the sample code. A = modelA() B = modelB() optimizer = torch.optim.Adam(A.parameters()) def my_loss(deploy): shape = deploy.size() state = torch.zeros((shape[0], shape[1], 2 + shape[1]), dtype=torch.long) # Notice: this step will copy deploy state[:, :, 2:] = torch.reshape(deploy, (shape[0], 1, shape[1])) state[:, :, 0] = torch.arange(0, shape[1]) state = torch.reshape(state, (-1, 2 + shape[1])) eval = B(state) eval = torch.reshape(eval, (shape[0], shape[1])) return torch.mean(eval) # Train model A for epoch in range(EPOCHS): for batch_idx, (x, useless_y) in enumerate(dataloader): optimizer.zero_gard() pred = A(x) loss = my_loss(pred) loss.backward() optimizer.step() But in fact, during training, nothing happens, parameters of model A is not updated. I also tried optimizer = torch.optim.Adam([{'params': A.parameters()}, {'params': B.parameters(), 'lr':0}]) and nothing happens too. Any ideas?
The computational graph is cut off at state, so the loss does not backpropagate to A. Try:

state = torch.zeros((shape[0], shape[1], 2 + shape[1]), dtype=torch.long)
->
state = torch.zeros((shape[0], shape[1], 2 + shape[1]), requires_grad=True)  # add requires_grad=True; dtype=torch.long may throw an error

However, I still don't think it will work with your code as it is. Optional suggestions: I think state_tensors is not defined, and the in-place operation state[:, :, 2:] = ... may not be good (in my case, it threw an error). To copy a tensor, .expand() or .repeat() may be useful, and to expand a dimension, .unsqueeze() may help avoid this.
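One way to keep the graph intact is to build state out of deploy with differentiable ops (cat/expand) instead of writing into a pre-allocated tensor. A rough sketch, assuming deploy has shape (batch, n_stations) and that model B accepts floating-point input:

import torch

def my_loss(deploy, B):
    # deploy: (batch, n_stations), the differentiable output of model A
    batch, n = deploy.shape
    station_ids = torch.arange(n, dtype=deploy.dtype, device=deploy.device)
    station_ids = station_ids.view(1, n, 1).expand(batch, n, 1)
    zeros = torch.zeros(batch, n, 1, dtype=deploy.dtype, device=deploy.device)
    plan = deploy.unsqueeze(1).expand(batch, n, n)            # every station row sees the full plan
    state = torch.cat([station_ids, zeros, plan], dim=2)      # (batch, n, 2 + n), still on the graph
    eval_ = B(state.reshape(-1, 2 + n))
    return eval_.reshape(batch, n).mean()

Since only A's parameters are passed to the optimizer, B's weights stay fixed; you can additionally freeze them with p.requires_grad_(False) for p in B.parameters().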
https://stackoverflow.com/questions/69233188/
ValueError: Target size (torch.Size([128])) must be the same as input size (torch.Size([112]))
I have a training function, inside which there are two vectors:

d_labels_a = torch.zeros(128)
d_labels_b = torch.ones(128)

Then I have these features:

# Compute output
features_a = nets[0](input_a)
features_b = nets[1](input_b)
features_c = nets[2](inputs)

And then a domain classifier (nets[4]) makes predictions:

d_pred_a = torch.squeeze(nets[4](features_a))
d_pred_b = torch.squeeze(nets[4](features_b))
d_pred_a = d_pred_a.float()
d_pred_b = d_pred_b.float()
print(d_pred_a.shape)

The error is raised in the loss function:

pred_a = torch.squeeze(nets3)
pred_b = torch.squeeze(nets3)
pred_c = torch.squeeze(nets3)
loss = criterion(pred_a, labels_a) + criterion(pred_b, labels_b) + criterion(pred_c, labels) + d_criterion(d_pred_a, d_labels_a) + d_criterion(d_pred_b, d_labels_b)

The problem is that d_pred_a/b is different from d_labels_a/b, but only after a certain point. Indeed, when I print the shape of d_pred_a/b it is torch.Size([128]), but then it changes to torch.Size([112]) independently. It comes from here:

# Compute output
features_a = nets[0](input_a)
features_b = nets[1](input_b)
features_c = nets[2](inputs)

because if I print the shape of features_a it is torch.Size([128, 2048]), but it changes into torch.Size([112, 2048]). nets[0] is a VGG, like this:

class VGG16(nn.Module):
    def __init__(self, input_size, batch_norm=False):
        super(VGG16, self).__init__()
        self.in_channels, self.in_width, self.in_height = input_size
        self.block_1 = VGGBlock(self.in_channels, 64, batch_norm=batch_norm)
        self.block_2 = VGGBlock(64, 128, batch_norm=batch_norm)
        self.block_3 = VGGBlock(128, 256, batch_norm=batch_norm)
        self.block_4 = VGGBlock(256, 512, batch_norm=batch_norm)

    @property
    def input_size(self):
        return self.in_channels, self.in_width, self.in_height

    def forward(self, x):
        x = self.block_1(x)
        x = self.block_2(x)
        x = self.block_3(x)
        x = self.block_4(x)
        # x = self.avgpool(x)
        x = torch.flatten(x, 1)
        return x
I solved it. The problem was the last batch, which was smaller than the batch size. I used drop_last=True in the DataLoader and it worked.
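For reference, a minimal sketch of that change (train_dataset and the other DataLoader arguments here are placeholders, not taken from the question):

from torch.utils.data import DataLoader

# dropping the last incomplete batch keeps every batch at exactly 128 samples,
# so the fixed-size d_labels_a / d_labels_b tensors always match the prediction size
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True, drop_last=True)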
https://stackoverflow.com/questions/69233691/
How can I solve the problem that method is not iterable?
I have this Variational autoencoder and I want to use Adam for its optimizer but it has this error I don't know what is wrong here class VAE(nn.Module): def __init__(self): super().__init__() #encoder self.enc = nn.Sequential( nn.Linear(1200, 786), nn.ReLU(), nn.Flatten() ) self.mean = nn.Linear(1200, 2) self.log = nn.Linear(1200, 2) #decoder self.dec = nn.Sequential( nn.Linear(2, 1200), nn.ReLU(), ) def param(self, mu, Log): eps = torch.randn(2, 1200) z = mu + (eps * torch.exp(Log * 0.5)) return z def forward(self, x): x = self.enc(x) mu , log = self.mean(x), self.log(x) z = self.param(mu, log) x = self.dec(z) return x, mu, log model = VAE() optim = torch.optim.Adam(model.param, lr=0.01) criterion = nn.CrossEntropyLoss() and here is the error Traceback (most recent call last): File "C:\Users\khashayar\PycharmProjects\pythonProject2\VAE.py", line 40, in <module> optim = torch.optim.Adam(model.param, lr=0.01) File "C:\Users\khashayar\anaconda3\envs\deeplearning\lib\site-packages\torch\optim\adam.py", line 48, in __init__ super(Adam, self).__init__(params, defaults) File "C:\Users\khashayar\anaconda3\envs\deeplearning\lib\site-packages\torch\optim\optimizer.py", line 47, in __init__ param_groups = list(params) TypeError: 'method' object is not iterable how I can solve this?
The problem is probably in model.param. param is a method, and as the error says: "'method' object is not iterable". The optimizer should receive the model parameters, not the method "param" of the model class. Try changing optim = torch.optim.Adam(model.param, lr=0.01) to optim = torch.optim.Adam(model.parameters(), lr=0.01)
https://stackoverflow.com/questions/69234232/
pytorch custom loss function nn.CrossEntropyLoss
After studying autograd, I tried to make loss function myself. And here are my loss def myCEE(outputs,targets): exp=torch.exp(outputs) A=torch.log(torch.sum(exp,dim=1)) hadamard=F.one_hot(targets, num_classes=10).float()*outputs B=torch.sum(hadamard, dim=1) return torch.sum(A-B) and I compared with torch.nn.CrossEntropyLoss here are results for i,j in train_dl: inputs=i targets=j break outputs=model(inputs) myCEE(outputs,targets) : tensor(147.5397, grad_fn=<SumBackward0>) loss_func = nn.CrossEntropyLoss(reduction='sum') : tensor(147.5397, grad_fn=<NllLossBackward>) values were same. I thought, because those are different functions so grad_fn are different and it won't cause any problems. But something happened! After 4 epochs, loss values are turned to nan. Contrary to myCEE, with nn.CrossEntropyLoss learning went well. So, I wonder if there is a problem with my function. After read some posts about nan problems, I stacked more convolutions to the model. As a result 39-epoch training did not make an error. Nevertheless, I'd like to know difference between myCEE and nn.CrossEntropyLoss
torch.nn.CrossEntropyLoss is different from your implementation because it uses a trick to counter unstable computation of the exponential when using numerically big values. Given the logits output {l_1, ... l_j, ..., l_n}, the softmax is defined as: softmax(l_i) = exp(l_i) / sum_j(exp(l_j)) The trick is to multiply both the numerator and denominator by exp(-β): softmax(l_i) = exp(l_i)*exp(-β) / [sum_j(exp(l_j))*exp(-β)] = exp(l_i-β) / sum_j(exp(l_j-β)) Then the log-softmax comes down to: logsoftmax(l_i) = l_i - β - log[sum_j(exp(l_j-β))] In practice β is chosen as the highest logit value i.e. β = max_j(l_j). You can read more about it on this question: Numerically Stable Softmax.
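As a sketch of how the same trick could be applied to the custom loss from the question (keeping the original myCEE structure and only subtracting the per-row maximum before exponentiating; num_classes=10 is carried over from the question):

import torch
import torch.nn.functional as F

def myCEE_stable(outputs, targets):
    # beta is the row-wise maximum logit, so exp() never sees large positive values
    beta = outputs.max(dim=1, keepdim=True).values
    A = beta.squeeze(1) + torch.log(torch.sum(torch.exp(outputs - beta), dim=1))
    hadamard = F.one_hot(targets, num_classes=10).float() * outputs
    B = torch.sum(hadamard, dim=1)
    return torch.sum(A - B)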
https://stackoverflow.com/questions/69235383/
Dataloaders Zip
dataloaders = zip(labeled_trainloader, [None] * len(labeled_trainloader)) What does this line of code mean? I know how to zip 2 dataloaders using zip. But why are they using the length of the dataloader like this?
[None] * len(labeled_trainloader) will make a list of None values the same length as labeled_trainloader. For example, [None] * 3 results in [None, None, None]. The zip expression will result in each object in labeled_trainloader being paired with None. labeled_trainloader = [1, 2, 3] dataloaders = zip(labeled_trainloader, [None] * len(labeled_trainloader)) print(list(dataloaders)) Output: [(1, None), (2, None), (3, None)]
https://stackoverflow.com/questions/69236407/
How to vectorize this computation of a mask for triplet loss function
Suppose I have a list of numbers lst of length N, along with two numbers epsilon and tau. I want to find the (N,N,N) mask matrix mask such that mask[i][j][k]=1 if and only if abs(lst[i] - lst[j]) <= epsilon and abs(lst[i] - lst[k]) >= tau. This is what I tried: dmat = torch.cdist(lst.unsqueeze(0), lst.unsqueeze(0)) within_eps = torch.where(dmat <= eps, 1, 0) over_tau = torch.where(dmat >= tau, 1, 0) mask = torch.zeros((N,N,N)) for i in range(N): for j in range(N): for k in range(N): if within_eps[i][j] == 1 and over_tau[i][k] == 1: mask[i][j][k] = 1 else: mask[i][j][k] = 0 So basically I did it naively. Can you show me, step by step, how you would come up with a vectorization for this?
You successfully created the 2d dmat of pairwise distances. Now you can use torch.logical_and for creating the mask: mask = torch.logical_and(dmat[..., None] <= eps, dmat[:, None, :] >= tau) If you want to be explicit about the distance computation (and less efficient) you can: mask = torch.logical_and(torch.abs(lst[:, None, None] - lst[None, :, None]) <= eps, torch.abs(lst[:, None, None] - lst[None, None, :]) >= tau)
https://stackoverflow.com/questions/69239561/
TypeError in torch.argmax() when trying to find the tokens with the highest `start` score
I want to run this code for question answering using hugging face transformers. import torch from transformers import BertForQuestionAnswering from transformers import BertTokenizer #Model model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad') #Tokenizer tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad') question = '''Why was the student group called "the Methodists?"''' paragraph = ''' The movement which would become The United Methodist Church began in the mid-18th century within the Church of England. A small group of students, including John Wesley, Charles Wesley and George Whitefield, met on the Oxford University campus. They focused on Bible study, methodical study of scripture and living a holy life. Other students mocked them, saying they were the "Holy Club" and "the Methodists", being methodical and exceptionally detailed in their Bible study, opinions and disciplined lifestyle. Eventually, the so-called Methodists started individual societies or classes for members of the Church of England who wanted to live a more religious life. ''' encoding = tokenizer.encode_plus(text=question,text_pair=paragraph) inputs = encoding['input_ids'] #Token embeddings sentence_embedding = encoding['token_type_ids'] #Segment embeddings tokens = tokenizer.convert_ids_to_tokens(inputs) #input tokens start_scores, end_scores = model(input_ids=torch.tensor([inputs]), token_type_ids=torch.tensor([sentence_embedding])) start_index = torch.argmax(start_scores) but I get this error at the last line: Exception has occurred: TypeError argmax(): argument 'input' (position 1) must be Tensor, not str File "D:\bert\QuestionAnswering.py", line 33, in <module> start_index = torch.argmax(start_scores) I don't know what's wrong. can anyone help me?
BertForQuestionAnswering returns a QuestionAnsweringModelOutput object. Since you unpack the output of BertForQuestionAnswering into start_scores, end_scores, the returned QuestionAnsweringModelOutput object is iterated as a tuple of its string keys ('start_logits', 'end_logits'), causing the type mismatch error. The following should work: outputs = model(input_ids=torch.tensor([inputs]), token_type_ids=torch.tensor([sentence_embedding])) start_index = torch.argmax(outputs.start_logits)
https://stackoverflow.com/questions/69239925/
what is the best choice for an activation function in case of small sized neural networks
I am using pytorch and autograd to build my neural network architecture. It is a small 3-layered network with a single input and output. Suppose I have to predict some output function based on some initial conditions and I am using a custom loss function. The problem I am facing is: my loss converges initially but the gradients vanish eventually. I have tried the sigmoid and tanh activations; tanh gives slightly better results in terms of loss convergence. I tried using ReLU, but since I don't have many weights in my neural network, the weights become dead and it doesn't give good results. Is there any other activation function apart from sigmoid and tanh that handles the problem of vanishing gradients well enough for small-sized neural networks? Any suggestions on what else I can try?
In the deep learning world, ReLU is usually preferred over other activation functions because it overcomes the vanishing gradient problem, allowing models to learn faster and perform better. But it can have downsides. Dying ReLU problem The dying ReLU problem refers to the scenario where a large number of ReLU neurons only output values of 0. When most of these neurons return output zero, the gradients fail to flow during backpropagation and the weights do not get updated. Ultimately a large part of the network becomes inactive and it is unable to learn further. What causes the Dying ReLU problem? High learning rate: If the learning rate is set too high, there is a significant chance that new weights will end up in the negative value range. Large negative bias: A large negative bias term can indeed cause the inputs to the ReLU activation to become negative. How to solve the Dying ReLU problem? Use of a smaller learning rate: It can be a good idea to decrease the learning rate during training. Variations of ReLU: Leaky ReLU is a common, effective method to solve the dying ReLU problem, and it does so by adding a slight slope in the negative range. There are other variations like PReLU, ELU, GELU. If you want to dig deeper, check out this link. Modification of the initialization procedure: It has been demonstrated that the use of a randomized asymmetric initialization can help prevent the dying ReLU problem. Do check out the arXiv paper for the mathematical details. Sources: Practical guide for ReLU ReLU variants Dying ReLU problem
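As an illustration (the layer sizes below are hypothetical, since the original network isn't shown in the question), swapping the activation in a small fully connected net is a one-line change per layer:

import torch.nn as nn

# hypothetical 3-layer network with a single input and output, as described in the question
model = nn.Sequential(
    nn.Linear(1, 16),
    nn.LeakyReLU(negative_slope=0.01),  # small slope for x < 0 keeps gradients alive
    nn.Linear(16, 16),
    nn.LeakyReLU(negative_slope=0.01),
    nn.Linear(16, 1),
)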
https://stackoverflow.com/questions/69240517/
I am trying to import:from torchtext.legacy.data import Field, BucketIterator,Iterator,data, but get error 'No module named 'torchtext.legacy'
I am trying to execute the following code for an NLP project: import torchtext from torchtext.legacy.data import Field, BucketIterator, Iterator from torchtext.legacy import data ----> 6 from torchtext.legacy.data import Field, BucketIterator, Iterator 7 from torchtext.legacy import data 8 ModuleNotFoundError: No module named 'torchtext.legacy'. I have tried it on both a Kaggle notebook and a Jupyter notebook and found the same error in both. I even tried to install !pip install -qqq deepmatcher==0.1.1 in Kaggle to solve the issue but it still gives the same error. Is there any solution to this?
Before you import torchtext.legacy, you need to !pip install torchtext==0.10.0. Maybe legacy was removed in version 0.11.0.
https://stackoverflow.com/questions/69240815/
Defining a loss function such that an external array is used
In my neural network (RNN), I am defining the loss function such that the output of the neural network is used to find the index (binary) and then the index is used to extract the required element from an array which in turn will be used to calculate MSELoss. However, the program gives parameter().grad = None error which is mostly because the graph is breaking somewhere. What is the problem with the error function defined. Framework: Pytorch The codes are as follow: Neural Network: class RNN(nn.Module): def __init__(self): super(RNN, self).__init__() self.hidden_size = 8 # self.input_size = 2 self.h2o = nn.Linear(self.hidden_size, 1) self.h2h = nn.Linear(self.hidden_size, self.hidden_size) self.sigmoid = nn.Sigmoid() def forward(self,hidden): output = self.h2o(hidden) output = self.sigmoid(output) hidden = self.h2h(hidden) return output, hidden def init_hidden(self): return torch.zeros(1, self.hidden_size) Loss Function, train step and training rnn = RNN() criterion = nn.MSELoss() def loss_function(previous, output, index): code = 2*(output > 0.5).long() current = Q_m2[code:code+2, i] return criterion(current, previous), current def train_step(): hidden = rnn.init_hidden() rnn.zero_grad() # Q_m2.requires_grad = True # Q_m2.create_graph = True loss = 0 previous = Q_m[0:2, 0] for i in range(1, samples): output, hidden = rnn(hidden) l, previous = loss_function(previous, output, i) loss+=l loss.backward() # Q_m2.retain_grad() for p in rnn.parameters(): p.data.add_(p.grad.data, alpha=-0.05) return output, loss.item()/(samples - 1) def training(epochs): running_loss = 0 for i in range(epochs): output, loss = train_step() print(f'Epoch Number: {i+1}, Loss: {loss}') running_loss +=loss Q_m2 Q_m = np.zeros((4, samples)) for i in range(samples): Q_m[:,i] = q_x(U_m[:,i]) Q_m = torch.FloatTensor(Q_m) Q_m2 = Q_m Q_m2.requires_grad = True Q_m2.create_graph = True Error: <ipython-input-36-feefd257c97a> in train_step() 21 # Q_m2.retain_grad() 22 for p in rnn.parameters(): ---> 23 p.data.add_(p.grad.data, alpha=-0.05) 24 return output, loss.item()/(samples - 1) 25 AttributeError: 'NoneType' object has no attribute 'data'
This is a possible solution suggested to me by K. Frank at discuss.pytorch.org As I read it, code is calculated to be either 0 or 2. You could instead interpret output (processed appropriately, as necessary) to be the probability that code should be 0 vs. 2, and then use that probability to form a weighted average of the 0 and 2 entries in your Q_m2 array.
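A minimal sketch of that suggestion applied to the loss in the question (here output is assumed to be the sigmoid output of the RNN, read as the probability that code should be 2 rather than 0; Q_m2 and criterion are the objects already defined in the question):

def loss_function(previous, output, i):
    p = output.squeeze()                                  # probability that "code" is 2
    # soft, differentiable selection instead of the hard Q_m2[code:code+2, i] indexing
    current = (1 - p) * Q_m2[0:2, i] + p * Q_m2[2:4, i]
    return criterion(current, previous), current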
https://stackoverflow.com/questions/69240965/
is there any way to optimize pytorch inference in cpu?
I am going to serve a pytorch model (resnet18) on a website. However, inference on CPU (AMD 3600) requires 70% of the CPU resources. I don't think the server (Heroku) can handle this computation. Is there any way to optimize inference on CPU? Many thanks
Admittedly, I'm not an expert on Heroku but probably you can use OpenVINO. OpenVINO is optimized for Intel hardware but it should work with any CPU. It optimizes the inference performance by e.g. graph pruning or fusing some operations together. Here are the performance benchmarks for Resnet-18 converted from PyTorch. You can find a full tutorial on how to convert the PyTorch model here. Some snippets below. Install OpenVINO The easiest way to do it is using PIP. Alternatively, you can use this tool to find the best way in your case. pip install openvino-dev[pytorch,onnx] Save your model to ONNX OpenVINO cannot convert PyTorch model directly for now but it can do it with ONNX model. This sample code assumes the model is for computer vision. dummy_input = torch.randn(1, 3, IMAGE_HEIGHT, IMAGE_WIDTH) torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11) Use Model Optimizer to convert ONNX model The Model Optimizer is a command line tool which comes from OpenVINO Development Package so be sure you have installed it. It converts the ONNX model to OV format (aka IR), which is a default format for OpenVINO. It also changes the precision to FP16 (to further increase performance). Run in command line: mo --input_model "model.onnx" --input_shape "[1,3, 224, 224]" --mean_values="[123.675, 116.28 , 103.53]" --scale_values="[58.395, 57.12 , 57.375]" --data_type FP16 --output_dir "model_ir" Run the inference on the CPU The converted model can be loaded by the runtime and compiled for a specific device e.g. CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what is the best choice for you, just use AUTO. # Load the network ie = Core() model_ir = ie.read_model(model="model_ir/model.xml") compiled_model_ir = ie.compile_model(model=model_ir, device_name="CPU") # Get output layer output_layer_ir = compiled_model_ir.output(0) # Run inference on the input image result = compiled_model_ir([input_image])[output_layer_ir] Disclaimer: I work on OpenVINO.
https://stackoverflow.com/questions/69241400/
Why does the same notebook allocate very different amounts of VRAM in two different environments?
You can see that this notebook is trainable on Kaggle within Kaggle's 16 GB VRAM limit: https://www.kaggle.com/firefliesqn/g2net-gpu-newbie I just tried to run this same notebook locally on an RTX 3090 GPU, where I have torch 1.8 installed, and the same notebook allocates around 23.3 GB of VRAM. Why is this happening, and how can I make my local environment behave like Kaggle? Even if I reduce the batch size compared to what is used on Kaggle, my notebook still allocates around 23 GB of VRAM locally. On Kaggle I see torch 1.7 and tensorflow 2.4 installed; locally, since I use an RTX 3090, newer versions of torch and tf are recommended, hence I used torch 1.8.1 and tensorflow 2.6.
By default, TensorFlow allocates the maximum available memory it detects. When using TensorFlow, you can limit the memory used with the following snippet: gpus = tf.config.experimental.list_physical_devices('GPU') if gpus: # Restrict TensorFlow to only allocate 12GB of memory on the first GPU try: tf.config.experimental.set_virtual_device_configuration(gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=12288)]) where 12288 = 1024x12 Another solution (see discussion below), which works for the OP, is the following (use this only if you do not have a specific upper limit of memory to enforce): tf.config.experimental.set_memory_growth(gpus[0], True) https://www.tensorflow.org/api_docs/python/tf/config/experimental/set_memory_growth In PyTorch, this is even easier: import torch # use 1/2 of the memory of GPU 0 (should allocate a very similar amount to TF) torch.cuda.set_per_process_memory_fraction(0.5, 0) # You can then check with total_memory_available = torch.cuda.get_device_properties(0).total_memory
https://stackoverflow.com/questions/69241715/
Why hasn't a's ID changed?
a = torch.tensor([1.0, 2.0, 3.0], requires_grad=True) b = torch.tensor([5.0, 5.0, 5.0]) los = sum(a*b) los.backward() print(id(a.grad)) a.grad = torch.tensor([1., 2., 3.]) print(id(a.grad)) The output: 2792503915000 2792503915000 Why didn't the ID of a.grad change?
Python's id() simply returns a unique number for the object. IDs of objects with non-overlapping lifetimes can be the same; that is, when an object is destroyed, and then a new object is created, the new object can have the same ID as the last. Python object IDs are often compared to memory addresses in C, and (apparently) in some python implementations, they are just memory addresses. For instance, consider the following snippet: class a: pass my_a = a() print(id(my_a)) my_a = a() print(id(my_a)) my_a = a() print(id(my_a)) With my particular python implementation, an example output was: 139647922983888 139647922986768 139647922983888 The first ID was repeated in the third instance; this is permissible by the id() api.
https://stackoverflow.com/questions/69243357/
PyTorch DataLoader returning list instead of tensor on custom Dataset
I do a trial about my dataset, this is my complete code: data_root='D:/AuxiliaryDocuments/NYU/' raw_data_transforms=transforms.Compose([#transforms.ToPILImage(), transforms.CenterCrop((224,101)), transforms.ToTensor()]) depth_data_transforms=transforms.Compose([transforms.CenterCrop((74,55)), transforms.ToTensor()]) filename_txt={'image_train':'image_train.txt','image_test':'image_test.txt', 'depth_train':'depth_train.txt','depth_test':'depth_test.txt'} class Mydataset(Dataset): def __init__(self,data_root,transformation,data_type): self.transform=transformation self.image_path_txt=filename_txt[data_type] self.sample_list=list() f=open(data_root+'/'+data_type+'/'+self.image_path_txt) lines=f.readlines() for line in lines: line=line.strip() line=line.replace(';','') self.sample_list.append(line) f.close() def __getitem__(self, index): item=self.sample_list[index] img=Image.open(item) if self.transform is not None: img=self.transform(img) idx=index print(type(img)) return idx,img def __len__(self): return len(self.sample_list) I print the type of img that is <class 'torch.Tensor'>, then I used the coding below: test=Mydataset(data_root,raw_data_transforms,'image_train') test_1=Mydataset(data_root,depth_data_transforms,'depth_train') test2=DataLoader(test,batch_size=4,num_workers=0,shuffle=False) test_2=DataLoader(test_1,batch_size=4,num_workers=0,shuffle=False) print the information: for idx,data in enumerate(test_2): print(idx,data) print(type(data)) but the type of data is '<class 'list'>', which I need is tensor.
This is the expected output. DataLoader in your case is supposed to return a list. The output of DataLoader is (inputs batch, labels batch). e.g. for idx, data in enumerate(test_dataloader): if idx == 0: print(type(data)) print(len(data), data[0].shape, data[1].shape) <class 'list'> 2 torch.Size([64, 1, 28, 28]) torch.Size([64]) Here, the 64 labels corresponds to 64 images in the batch. In order to pass it to the model, you can do #If you return img first in your Dataset return img, idx # Either for idx, data in enumerate(test_dataloader): # pass inputs to model out = model(data[0]) # your labels are data[1] # Or for idx, (inputs, labels) in enumerate(test_dataloader): # pass inputs to model out = model(inputs) # your labels are in "labels" variable
https://stackoverflow.com/questions/69248704/
How to test a model before fine-tuning in Pytorch Lightning?
Doing things on Google Colab. transformers: 4.10.2 pytorch-lightning: 1.2.7 import torch from torch.utils.data import DataLoader from transformers import BertJapaneseTokenizer, BertForSequenceClassification import pytorch_lightning as pl dataset_for_loader = [ {'data':torch.tensor([0,1]), 'labels':torch.tensor(0)}, {'data':torch.tensor([2,3]), 'labels':torch.tensor(1)}, {'data':torch.tensor([4,5]), 'labels':torch.tensor(2)}, {'data':torch.tensor([6,7]), 'labels':torch.tensor(3)}, ] loader = DataLoader(dataset_for_loader, batch_size=2) for idx, batch in enumerate(loader): print(f'# batch {idx}') print(batch) category_list = [ 'dokujo-tsushin', 'it-life-hack', 'kaden-channel', 'livedoor-homme', 'movie-enter', 'peachy', 'smax', 'sports-watch', 'topic-news' ] tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME) max_length = 128 dataset_for_loader = [] for label, category in enumerate(tqdm(category_list)): # file ./text has lots of articles, categorized by category # and they are just plain texts, whose content begins from forth line for file in glob.glob(f'./text/{category}/{category}*'): lines = open(file).read().splitlines() text = '\n'.join(lines[3:]) encoding = tokenizer( text, max_length=max_length, padding='max_length', truncation=True ) encoding['labels'] = label encoding = { k: torch.tensor(v) for k, v in encoding.items() } dataset_for_loader.append(encoding) SEED=lambda:0.0 # random.shuffle(dataset_for_loader) # ランダムにシャッフル random.shuffle(dataset_for_loader,SEED) n = len(dataset_for_loader) n_train = int(0.6*n) n_val = int(0.2*n) dataset_train = dataset_for_loader[:n_train] dataset_val = dataset_for_loader[n_train:n_train+n_val] dataset_test = dataset_for_loader[n_train+n_val:] dataloader_train = DataLoader( dataset_train, batch_size=32, shuffle=True ) dataloader_val = DataLoader(dataset_val, batch_size=256) dataloader_test = DataLoader(dataset_test, batch_size=256) class BertForSequenceClassification_pl(pl.LightningModule): def __init__(self, model_name, num_labels, lr): super().__init__() self.save_hyperparameters() self.bert_sc = BertForSequenceClassification.from_pretrained( model_name, num_labels=num_labels ) def training_step(self, batch, batch_idx): output = self.bert_sc(**batch) loss = output.loss self.log('train_loss', loss) return loss def validation_step(self, batch, batch_idx): output = self.bert_sc(**batch) val_loss = output.loss self.log('val_loss', val_loss) def test_step(self, batch, batch_idx): labels = batch.pop('labels') output = self.bert_sc(**batch) labels_predicted = output.logits.argmax(-1) num_correct = ( labels_predicted == labels ).sum().item() accuracy = num_correct/labels.size(0) self.log('accuracy', accuracy) def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=self.hparams.lr) checkpoint = pl.callbacks.ModelCheckpoint( monitor='val_loss', mode='min', save_top_k=1, save_weights_only=True, dirpath='model/', ) trainer = pl.Trainer( gpus=1, max_epochs=10, callbacks = [checkpoint] ) model = BertForSequenceClassification_pl( MODEL_NAME, num_labels=9, lr=1e-5 ) ### (a) ### # I think this is where I am doing fine-tuning trainer.fit(model, dataloader_train, dataloader_val) # this is to score after fine-tuning test = trainer.test(test_dataloaders=dataloader_test) print(f'Accuracy: {test[0]["accuracy"]:.2f}') But I am not really sure how to do a test before fine-tuning, in order to compare two models before and after fine-tuning, in order to show how effective fine-tuning is. 
Inserting the following two lines to ### (a) ###: test = trainer.test(test_dataloaders=dataloader_test) print(f'Accuracy: {test[0]["accuracy"]:.2f}') I got this result: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-13-c8b2c67f2d5c> in <module>() 9 10 # 6-19 ---> 11 test = trainer.test(test_dataloaders=dataloader_test) 12 print(f'Accuracy: {test[0]["accuracy"]:.2f}') 13 /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in test(self, model, test_dataloaders, ckpt_path, verbose, datamodule) 896 self.verbose_test = verbose 897 --> 898 self._set_running_stage(RunningStage.TESTING, model or self.lightning_module) 899 900 # If you supply a datamodule you can't supply train_dataloader or val_dataloaders /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _set_running_stage(self, stage, model_ref) 563 the trainer and the model 564 """ --> 565 model_ref.running_stage = stage 566 self._running_stage = stage 567 AttributeError: 'NoneType' object has no attribute 'running_stage' I noticed that Trainer.fit() can take None as arguments other than model, so I tried this: trainer.fit(model) test=trainer.test(test_dataloaders=dataloader_test) print(f'Accuracy: {test[0]["accuracy"]:.2f}') The result: MisconfigurationException: No `train_dataloader()` method defined. Lightning `Trainer` expects as minimum a `training_step()`, `train_dataloader()` and `configure_optimizers()` to be defined. Thanks.
The Trainer needs to call its .fit() in order to set up a lot of things and then only you can do .test() or other methods. You are right about putting a .fit() just before .test() but the fit call needs to a valid one. You have to feed a dataloader/datamodule to it. But since you don't want to do a training/validation in this fit call, just pass limit_[train/val]_batches=0 while Trainer construction. trainer = Trainer(gpus=..., ..., limit_train_batches=0, limit_val_batches=0) trainer.fit(model, dataloader_train, dataloader_val) trainer.test(model, dataloader_test) # without fine-tuning The fit call here will just set things up for you and skip training/validation. And then the testing follows. Next time run the same code but without the limit_[train/val]_batches, this will do the pretraining for you trainer = Trainer(gpus=..., ...) trainer.fit(model, dataloader_train, dataloader_val) trainer.test(model, dataloader_test) # with fine-tuning Clarifying a bit about .fit() taking None for all but model: Its not quite true - you must provide either a DataLoader or a DataModule.
https://stackoverflow.com/questions/69249187/
BERT text clasisification using pytorch
I am trying to build a BERT model for text classification with the help of this code [https://towardsdatascience.com/bert-text-classification-using-pytorch-723dfb8b6b5b]. My dataset contains two columns(label, text). The labels can have three values of (0,1,2). The code works without any error but all values of confusion matrix are 0. Is there something wrong with my code? import matplotlib.pyplot as plt import pandas as pd import torch from torchtext.data import Field, TabularDataset, BucketIterator, Iterator import torch.nn as nn from transformers import BertTokenizer, BertForSequenceClassification import torch.optim as optim from sklearn.metrics import accuracy_score, classification_report, confusion_matrix import seaborn as sns torch.manual_seed(42) device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') MAX_SEQ_LEN = 128 PAD_INDEX = tokenizer.convert_tokens_to_ids(tokenizer.pad_token) UNK_INDEX = tokenizer.convert_tokens_to_ids(tokenizer.unk_token) label_field = Field(sequential=False, use_vocab=False, batch_first=True, dtype=torch.float) text_field = Field(use_vocab=False, tokenize=tokenizer.encode, lower=False, include_lengths=False, batch_first=True, fix_length=MAX_SEQ_LEN, pad_token=PAD_INDEX, unk_t> fields = [('label', label_field), ('text', text_field)] CLASSIFICATION_REPORT = "classification_report.jsonl" train, valid, test = TabularDataset.splits(path='', train='train.csv', validation='validate.csv', test='test.csv', format='CSV', fields=fields, skip_header=True) train_iter = BucketIterator(train, batch_size=16, sort_key=lambda x: len(x.text), device=device, train=True, sort=True, sort_within_batch=True) valid_iter = BucketIterator(valid, batch_size=16, sort_key=lambda x: len(x.text), device=device, train=True, sort=True, sort_within_batch=True) test_iter = Iterator(test, batch_size=16, device=device, train=False, shuffle=False, sort=False) class BERT(nn.Module): def __init__(self): super(BERT, self).__init__() options_name = "bert-base-uncased" self.encoder = BertForSequenceClassification.from_pretrained(options_name, num_labels = 3) def forward(self, text, label): loss, text_fea = self.encoder(text, labels=label)[:2] return loss, text_fea def train(model, optimizer, criterion = nn.BCELoss(), train_loader = train_iter, valid_loader = valid_iter, num_epochs = 5, eval_every = len(train_iter) // 2, file_pat> running_loss = 0.0 valid_running_loss = 0.0 global_step = 0 train_loss_list = [] valid_loss_list = [] global_steps_list = [] model.train() for epoch in range(num_epochs): for (label, text), _ in train_loader: label = label.type(torch.LongTensor) label = label.to(device) text = text.type(torch.LongTensor) text = text.to(device) output = model(text, label) loss, _ = output optimizer.zero_grad() loss.backward() optimizer.step() running_loss += loss.item() global_step += 1 if global_step % eval_every == 0: model.eval() with torch.no_grad(): for (label, text), _ in valid_loader: label = label.type(torch.LongTensor) label = label.to(device) text = text.type(torch.LongTensor) text = text.to(device) output = model(text, label) loss, _ = output valid_running_loss += loss.item() average_train_loss = running_loss / eval_every average_valid_loss = valid_running_loss / len(valid_loader) train_loss_list.append(average_train_loss) valid_loss_list.append(average_valid_loss) global_steps_list.append(global_step) # resetting running values running_loss = 0.0 valid_running_loss = 0.0 model.train() # print progress 
print('Epoch [{}/{}], Step [{}/{}], Train Loss: {:.4f}, Valid Loss: {:.4f}'.format(epoch+1, num_epochs, global_step, num_epochs*len(tra> if best_valid_loss > average_valid_loss: best_valid_loss = average_valid_loss print('Finished Training!') model = BERT().to(device) optimizer = optim.Adam(model.parameters(), lr=2e-5) train(model=model, optimizer=optimizer) def evaluate(model, test_loader): y_pred = [] y_true = [] model.eval() with torch.no_grad(): for (label, text), _ in test_loader: label = label.type(torch.LongTensor) label = label.to(device) text = text.type(torch.LongTensor) text = text.to(device) output = model(text, label) _, output = output y_pred.extend(torch.argmax(output, 2).tolist()) y_true.extend(label.tolist()) print('Classification Report:') print(classification_report(y_true, y_pred, labels=[0,1,2], digits=4)) best_model = BERT().to(device) evaluate(best_model, test_iter)
You are using criterion = nn.BCELoss(), i.e. binary cross entropy, for a multi-class classification problem ("the labels can have three values of (0,1,2)"). Use a loss function suited to multi-class classification, such as nn.CrossEntropyLoss().
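As a generic illustration (logits and labels below are placeholder names, not variables from the question), nn.CrossEntropyLoss expects raw logits of shape (batch, 3) and integer class labels of shape (batch,):

import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# logits: (batch_size, 3) raw, unnormalized scores from the model
# labels: (batch_size,) LongTensor with values in {0, 1, 2}
loss = criterion(logits, labels)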
https://stackoverflow.com/questions/69249665/
Resize feature vector from neural network
I am trying to perform a task of approximation of two embeddings (textual and visual). For the visual embedding, I am using VGG as the encoder. The output is a 1x1000 embedding. For the textual encoder, I am using a Transformer to which output is shaped 1x712. What I want is to convert both these vectors to the same dimension 512. img_features.shape, txt_features.shape = (1,1000),(1,712) How can I do it in PyTorch? Add a final layer in each architecture that models the output to 512?
You could either apply a differentiable PCA operator such as torch.pca_lowrank, or, as an easier solution, use two fully connected adapter layers to learn two mappings: one for your image features (1000 -> n), the other for the textual features (712 -> n). Then you can choose a fusion strategy to combine the two features shaped (1, n): either concatenation, or point-wise addition/multiplication (in those cases n should be equal to 512); else you can learn a final mapping n*2 -> 512.
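A minimal sketch of the adapter idea with n = 512 (the names img_features and txt_features follow the question; the fusion step is optional):

import torch
import torch.nn as nn

img_adapter = nn.Linear(1000, 512)   # 1x1000 VGG embedding         -> 1x512
txt_adapter = nn.Linear(712, 512)    # 1x712 transformer embedding  -> 1x512

img_512 = img_adapter(img_features)
txt_512 = txt_adapter(txt_features)

# optional fusion strategies on the two 1x512 vectors:
fused_sum = img_512 + txt_512                      # point-wise addition
fused_cat = torch.cat([img_512, txt_512], dim=1)   # 1x1024, could be mapped back to 512 by another nn.Linear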
https://stackoverflow.com/questions/69252928/
create a linear model with fixed weights in Pytorch
I want to create a linear network with a single layer under PyTorch, but I want the weights to be manually initialized and to remain fixed. For example the values of the weights with the model: layer = nn.Linear(4, 1, bias=False) weights = tensor([[ 0.6], [0.25], [ 0.1], [0.05]], dtype=torch.float64) Is this achievable? If so, how can I do it? Or is there an alternative linear function?
You can freeze your layer by setting the requires_grad to False: layer.requires_grad_(False) This way the gradients of the layer's parameters won't get computed. Or by directly defining so when initializing the parameter: layer = nn.Linear(4, 1, bias=False) layer.weight = nn.Parameter(weights, requires_grad=False) Alternatively, given an input x shaped (n, 4), you can compute the result with a simple matrix multiplication as: >>> x@weights # equivalent to torch.matmul(x, weights)
https://stackoverflow.com/questions/69253161/
How can I use different encoder and decoder transformers models
simply input is image ===> output text(feature extractor ) I want to use separate encoder and decoder models for Handwriting recognition TrOCR shows an error that the input image is diff size for each model How can I modify the config of model or do normalize fro input image to models from transformers import ( TrOCRConfig, TrOCRProcessor, TrOCRForCausalLM, ViTConfig, ViTModel, VisionEncoderDecoderModel, ) import requests import cv2 from PIL import Image # TrOCR is a decoder model and should be used within a VisionEncoderDecoderModel # init vision2text model with random weights encoder = ViTModel(ViTConfig()) decoder = TrOCRForCausalLM(TrOCRConfig()) model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder) # If you want to start from the pretrained model, load the checkpoint with `VisionEncoderDecoderModel` processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") # model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") tokenizer=processor.feature_extractor # load image from the IAM dataset url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") # image normlize with 224 x 224 image_0 = cv2.imread('/content/mm.png') pixel_values = processor(image_0, return_tensors="pt").pixel_values text = "industry, ' Mr. Brown commented icily. ' Let us have a" # training model.config.decoder_start_token_id = processor.tokenizer.cls_token_id model.config.pad_token_id = processor.tokenizer.pad_token_id model.config.vocab_size = model.config.decoder.vocab_size model.config.encoder.image_size = 224 # model.config.image_size = 384 labels = processor.tokenizer(text, return_tensors="pt").input_ids outputs = model(pixel_values, labels=labels) loss = outputs.loss round(loss.item(), 2) # inference generated_ids = model.generate(pixel_values) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] generated_text And this is the error I got /usr/local/lib/python3.7/dist-packages/transformers/models/vit/modeling_vit.py in forward(self, pixel_values, interpolate_pos_encoding) 171 if height != self.image_size[0] or width != self.image_size[1]: 172 raise ValueError( --> 173 f"Input image size ({height}{width}) doesn't match model" 174 f" ({self.image_size[0]}{self.image_size[1]})." 175 ) ValueError: Input image size (384384) doesn't match model (224224).
I think the way you've worded your question doesn't line up with the example you've given. Firstly, the example array you've given is 3D, not 2D. You can do >>> arr.shape (1,2,3) >>> arr.ndim 3 Presumably this is a mistake, and you want your array to be 2D, so you would do arr = np.array([[5., 2., -5.], [4., 3., 1.]]) instead. Secondly, if a and b are values that, if an element is between then to set that element to value c rather than a and b being indexes, then the np.where function is great for this. def overwrite_interval(arr , a , b , c): inds = np.where((arr >= a) * (arr <= b)) arr[inds] = c return arr np.where returns a tuple, so sometimes it can be easier to work with boolean arrays directly. In which case, the function would look like this def overwrite_interval(arr , a , b , c): inds = (arr >= a) * (arr <= b) arr[inds] = c return arr Does this work for you, and is this your intended meaning? Note that the solution I've provided would work as is if you still meant for the initial array to be a 3D array.
https://stackoverflow.com/questions/69258128/
Total Variation Regularization for Tensors in Python
[Formula image] Hi, I am trying to implement a total variation function for tensors, or more precisely, for multichannel images. I found that for the above total variation (in the picture) there is source code like this: def compute_total_variation_loss(img, weight): tv_h = ((img[:,:,1:,:] - img[:,:,:-1,:]).pow(2)).sum() tv_w = ((img[:,:,:,1:] - img[:,:,:,:-1]).pow(2)).sum() return weight * (tv_h + tv_w) Since I am a beginner in Python, I didn't understand how the indices correspond to i and j in the image. I also want to add total variation over c (besides i and j) but I don't know which index refers to c. Or, to be more concise, how do I write the following equation (shown in the second image) in Python?
This function assumes batched images. So img is a 4 dimensional tensor of dimensions (B, C, H, W) (B is the number of images in the batch, C the number of color channels, H the height and W the width). So, img[0, 1, 2, 3] is the pixel (2, 3) of the second color (green in RGB) in the first image. In Python (and Numpy and PyTorch), a slice of elements can be selected with the notation i:j, meaning that the elements i, i + 1, i + 2, ..., j - 1 are selected. In your example, : means all elements, 1: means all elements but the first and :-1 means all elements but the last (negative indices retrieves the elements backward). Please refer to tutorials on "slicing in NumPy". So img[:,:,1:,:] - img[:,:,:-1,:] is equivalent to the (batch of) images minus themselves shifted by one pixel vertically, or, in your notation X(i + 1, j, k) - X(i, j, k). Then the tensor is squared (.pow(2)) and summed (.sum()). Note that the sum is also over the batch in this case, so you receive the total variation of the batch, not of each images.
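To directly address the channel part of the question: dimension 1 of the (B, C, H, W) tensor indexes the color channel c, so a variation term over channels can be added the same way. A sketch (assuming you want squared differences between neighbouring channels, analogous to the i and j terms):

def compute_total_variation_loss(img, weight):
    # img: (B, C, H, W); dims are (batch, channel c, row i, column j)
    tv_h = ((img[:, :, 1:, :] - img[:, :, :-1, :]).pow(2)).sum()   # differences along i (height)
    tv_w = ((img[:, :, :, 1:] - img[:, :, :, :-1]).pow(2)).sum()   # differences along j (width)
    tv_c = ((img[:, 1:, :, :] - img[:, :-1, :, :]).pow(2)).sum()   # differences along c (channels)
    return weight * (tv_h + tv_w + tv_c)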
https://stackoverflow.com/questions/69260403/
Pytorch function calling
I am trying to compute a trigonometric function using pytorch, but I am having issues when calling it via a function; below is my code: def func(x,y): return torch.exp(torch.sin(x)/x-y) func(torch.tensor([2,3])) Error: TypeError - Traceback (most recent call last) <ipython-input-16-beb818f912f5> in <module>() ----> 1 func(torch.tensor([2, 3])) TypeError: newf() missing 1 required positional argument: 'y' What is incorrect in this code while calling the function?
You need to unpack the tensor so its two elements are passed as the two arguments x and y: func(*torch.tensor([2, 3]))
https://stackoverflow.com/questions/69262652/
Where is torch.matmul implemented?
Where is torch.matmul implemented, especially the part that runs on the GPU? The whole project is 2M lines of code. I tried to grep the sources of the 1.8.2 release, but have trouble finding this function. I'm guessing it's generated from something...
It should be implemented here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/LinearAlgebra.cpp#L1450 The matmul function is exposed through Functions.h, and ATen.h includes Functions.h. And inside the pytorch api (pytorch/torch/csrc/api/include/torch/linalg.h), it includes <ATen/ATen.h>. So: pytorch/torch/csrc/api/include/torch/linalg.h > ATen > Functions > LinearAlgebra > matmul
https://stackoverflow.com/questions/69264554/
Why does numpy and pytorch give different results after mean and variance normalization?
I am working on a problem in which a matrix has to be mean-var normalized row-wise. It is also required that the normalization is applied after splitting each row into tiny batches. The code seem to work for Numpy, but fails with Pytorch (which is required for training). It seems Pytorch and Numpy results differ. Any help will be greatly appreciated. Example code: import numpy as np import torch def normalize(x, bsize, eps=1e-6): nc = x.shape[1] if nc % bsize != 0: raise Exception(f'Number of columns must be a multiple of bsize') x = x.reshape(-1, bsize) m = x.mean(1).reshape(-1, 1) s = x.std(1).reshape(-1, 1) n = (x - m) / (eps + s) n = n.reshape(-1, nc) return n # numpy a = np.float32(np.random.randn(8, 8)) n1 = normalize(a, 4) # torch b = torch.tensor(a) n2 = normalize(b, 4) n2 = n2.numpy() print(abs(n1-n2).max())
In the first example you are calling normalize with a, a numpy.ndarray, while in the second you call normalize with b, a torch.Tensor. According to the documentation page of torch.std, Bessel’s correction is used by default to measure the standard deviation. As such the default behavior between numpy.ndarray.std and torch.Tensor.std is different. If unbiased is True, Bessel’s correction will be used. Otherwise, the sample deviation is calculated, without any correction. torch.std(input, dim, unbiased, keepdim=False, *, out=None) → Tensor Parameters input (Tensor) – the input tensor. unbiased (bool) – whether to use Bessel’s correction (δN = 1). You can try yourself: >>> a.std(), b.std(unbiased=True), b.std(unbiased=False) (0.8364538, tensor(0.8942), tensor(0.8365))
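If the goal is for the two paths to agree, one option (a sketch of a change inside normalize) is to disable Bessel's correction on the torch side, or equivalently enable it on the numpy side with ddof=1:

# inside normalize(), branch on the input type (illustrative only)
if isinstance(x, torch.Tensor):
    s = x.std(1, unbiased=False).reshape(-1, 1)   # matches numpy's default (ddof=0)
else:
    s = x.std(1).reshape(-1, 1)                   # numpy's default already uses ddof=0

# alternatively, keep torch's default and call np.std(x, axis=1, ddof=1) on the numpy side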
https://stackoverflow.com/questions/69264984/
'RuntimeError: mat1 and mat2 shapes cannot be multiplied', how do I solve it?
I'm trying to implement a ResNet1D, that should take as input a window of ECG signal ,containing a single heart beat, in my case with size 950 samples, and I want to predict the length of the QRS interval. Here's the code for the network implementation: class Bottleneck(nn.Module): expansion = 4 def __init__(self, in_planes, planes, stride=1): super(Bottleneck, self).__init__() self.conv1 = nn.Conv1d(in_planes, planes, kernel_size=1, bias=False) self.bn1 = nn.BatchNorm1d(planes) self.conv2 = nn.Conv1d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False) self.bn2 = nn.BatchNorm1d(planes) self.conv3 = nn.Conv1d(planes, self.expansion*planes, kernel_size=1, bias=False) self.bn3 = nn.BatchNorm1d(self.expansion*planes) self.shortcut = nn.Sequential() if stride != 1 or in_planes != self.expansion*planes: self.shortcut = nn.Sequential( nn.Conv1d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False), nn.BatchNorm1d(self.expansion*planes) ) def forward(self, x): out = F.relu(self.bn1(self.conv1(x))) out = F.relu(self.bn2(self.conv2(out))) out = self.bn3(self.conv3(out)) out += self.shortcut(x) out = F.relu(out) return out class ResNet(nn.Module): def __init__(self, block, num_blocks, num_classes=3): super(ResNet, self).__init__() self.in_planes = 64 self.avg1 = nn.AvgPool1d(1024, stride=2) self.conv1 = nn.Conv1d(1, 128, kernel_size=3, stride=1, padding=1, bias=False) self.bn1 = nn.BatchNorm1d(128) self.layer1 = self._make_layer(block, 128, num_blocks[0], stride=1) self.layer2 = self._make_layer(block, 256, num_blocks[1], stride=2) self.layer3 = self._make_layer(block, 512, num_blocks[2], stride=2) self.layer4 = self._make_layer(block, 1024, num_blocks[3], stride=2) self.linear1 = nn.Linear(19968*block.expansion, 1024) self.linear2 = nn.Linear(1024, num_classes) def _make_layer(self, block, planes, num_blocks, stride): strides = [stride] + [1]*(num_blocks-1) layers = [] for stride in strides: layers.append(block(self.in_planes, planes, stride)) self.in_planes = planes * block.expansion return nn.Sequential(*layers) def forward(self, x): out = self.avg1(x) out = F.rel(self.bn1(self.conv1(out))) out = self.layer1(out) out = self.layer2(out) out = self.layer3(out) out = self.layer4(out) out = F.avg_pool1d(out, 16) out = out.view(out.size(0), -1) out = self.linear1(out) out = self.linear2(out) return out def ResNet50(): return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=1) The input that I'm feeding to the network is a dataloader with batch size = 32, number of channels = 1 and sample length = 950. When training the network I get this error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x3584 and 19968x1024) I get that the error is in the Linear layer, but I don't understand how I should change the dimensions in order to make it work. Can you please explain this to me?
Matrices are multiplicable if the number of columns of the first matrix matches the number of rows of the second one. For instance, MxN * NxK. So here you have wrong shapes. You can always calculate the sizes of the outputs of each layer by yourself to make sure if your shapes are correct. So here I think it should work if you change this self.linear1 = nn.Linear(19968*block.expansion, 1024) to this self.linear1 = nn.Linear(3584*block.expansion, 1024)
https://stackoverflow.com/questions/69267461/
Can this matrix calculation be implemented or approximated without an intermediate 3D matrix?
Given an NxN matrix W, I'm looking to calculate an NxN matrix C given by the equation in this link: https://i.stack.imgur.com/dY7rY.png, or in LaTeX $$C_{ij} = \max_k \bigg\{ \sum_l \bigg( W_{ik}W_{kl}W_{lj} - W_{ik}W_{kj} \bigg) \bigg\}.$$ I have tried to implement this in PyTorch but I've either encountered memory problems by constructing an intermediate NxNxN 3D matrix which, for large N, causes my GPU to run out of memory, or used a for-loop over k which is then very slow. I can't work out how I can get round these. How might I implement this calculation, or an approximation of it, without a large intermediate matrix like this? Suggestions, pseudocode in any language or an implementation in any of Python/Numpy/PyTorch would be much appreciated.
The first solution using Numba (You can do the same using Cython or plain C) would be to formulate the problem using simple loops. import numpy as np import numba as nb @nb.njit(fastmath=True,parallel=True) def calc_1(W): C=np.empty_like(W) N=W.shape[0] for i in nb.prange(N): TMP=np.empty(N,dtype=W.dtype) for j in range(N): for k in range(N): acc=0 for l in range(N): acc+=W[i,k]*W[k,l]*W[l,j]-W[i,k]*W[k,j] TMP[k]=acc C[i,j]=np.max(TMP) return C Francesco provided a simplification which scales far better for larger array sizes. This leads to the following, where I also optimized away a small temporary array. @nb.njit(fastmath=True,parallel=True) def calc_2(W): C=np.empty_like(W) N=W.shape[0] M = np.dot(W,W) - N * W for i in nb.prange(N): for j in range(N): val=W[i,0]*M[0,j] for k in range(1,N): TMP=W[i,k]*M[k,j] if TMP>val: val=TMP C[i,j]=val return C This can be optimized further by partial loop unrolling and optimizing the array access. Some compilers may do this automatically. @nb.njit(fastmath=True,parallel=True) def calc_3(W): C=np.empty_like(W) N=W.shape[0] W=np.ascontiguousarray(W) M = np.dot(W.T,W.T) - W.shape[0] * W.T for i in nb.prange(N//4): for j in range(N): val_1=W[i*4+0,0]*M[j,0] val_2=W[i*4+1,0]*M[j,0] val_3=W[i*4+2,0]*M[j,0] val_4=W[i*4+3,0]*M[j,0] for k in range(1,N): TMP_1=W[i*4+0,k]*M[j,k] TMP_2=W[i*4+1,k]*M[j,k] TMP_3=W[i*4+2,k]*M[j,k] TMP_4=W[i*4+3,k]*M[j,k] if TMP_1>val_1: val_1=TMP_1 if TMP_2>val_2: val_2=TMP_2 if TMP_3>val_3: val_3=TMP_3 if TMP_4>val_4: val_4=TMP_4 C[i*4+0,j]=val_1 C[i*4+1,j]=val_2 C[i*4+2,j]=val_3 C[i*4+3,j]=val_4 #Remainder for i in range(N//4*4,N): for j in range(N): val=W[i,0]*M[j,0] for k in range(1,N): TMP=W[i,k]*M[j,k] if TMP>val: val=TMP C[i,j]=val return C Timings W=np.random.rand(100,100) %timeit calc_1(W) #16.8 ms ± 131 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit calc_2(W) #449 µs ± 25.7 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit calc_3(W) #259 µs ± 47.4 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) W=np.random.rand(2000,2000) #Temporary array would be 64GB in this case %timeit calc_2(W) #5.37 s ± 174 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit calc_3(W) #596 ms ± 30.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
https://stackoverflow.com/questions/69273679/
How to convert tokenized words back to the original ones after inference?
I'm writing a inference script for already trained NER model, but I have trouble with converting encoded tokens (their ids) into original words. # example input df = pd.DataFrame({'_id': [1], 'body': ['Amazon and Tesla are currently the best picks out there!']}) # calling method that handles inference: ner_model = NER() ner_model.recognize_from_df(df, 'body') # here is only part of larger NER class that handles the inference: def recognize_from_df(self, df: pd.DataFrame, input_col: str): predictions = [] df = df[['_id', input_col]].copy() dataset = Dataset.from_pandas(df) # tokenization, padding, truncation: encoded_dataset = dataset.map(lambda examples: self.bert_tokenizer(examples[input_col], padding='max_length', truncation=True, max_length=512), batched=True) encoded_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'], device=device) dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32) encoded_dataset_ids = encoded_dataset['_id'] for batch in dataloader: output = self.model(**batch) # decoding predictions and tokens for i in range(batch['input_ids'].shape[0]): tags = [self.unique_labels[label_id] for label_id in output[i]] tokens = [t for t in self.bert_tokenizer.convert_ids_to_tokens(batch['input_ids'][i]) if t != '[PAD]'] ... The results are close to what I need: # tokens: ['[CLS]', 'am', '##az', '##on', 'and', 'te', '##sla', 'are', 'currently', 'the', 'best', 'picks', 'out', 'there', ...] # tags: ['X', 'B-COMPANY', 'X', 'X', 'O', 'B-COMPANY', 'X', 'O', 'O', 'O', 'O', 'O', 'O', 'O', ...] How to combine 'am', '##az', '##on' and 'B-COMPANY', 'X', 'X' into one token/tag? I know that there is a method called convert_tokens_to_string in Tokenizer, but it returns just one big string, which is hard to map to tag. Regards
Provided you only want to "merge" company names one could do that in a linear time with pure Python. Skipping the beginning of sentence token [CLS] for brevity: tokens = tokens[1:] tags = tags[1:] The function below will merge company tokens and increase pointer appropriately: def merge_company(tokens, tags): generated_tokens = [] i = 0 while i < len(tags): if tags[i] == "B-COMPANY": company_token = [tokens[i]] for j in range(i + 1, len(tags)): i += 1 if tags[j] != "X": break else: company_token.append(tokens[j][2:]) generated_tokens.append("".join(company_token)) else: generated_tokens.append(tokens[i]) i += 1 return generated_tokens Usage is pretty simple, please notice tags need their Xs removed as well though: tokens = merge_company(tokens, tags) tags = [tag for tag in tags if tag != "X"] This would give you: ['amazon', 'and', 'tesla', 'are', 'currently', 'the', 'best', 'picks', 'out', 'there'] ['B-COMPANY', 'O', 'B-COMPANY', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
https://stackoverflow.com/questions/69274391/
How to extract loss and accuracy from logger by each epoch in pytorch lightning?
I want to extract all data to make the plot, not with tensorboard. My understanding is all log with loss and accuracy is stored in a defined directory since tensorboard draw the line graph. %reload_ext tensorboard %tensorboard --logdir lightning_logs/ However, I wonder how all log can be extracted from the logger in pytorch lightning. The next is the code example in training part. #model ssl_classifier = SSLImageClassifier(lr=lr) #train logger = pl.loggers.TensorBoardLogger(name=f'ssl-{lr}-{num_epoch}', save_dir='lightning_logs') trainer = pl.Trainer(progress_bar_refresh_rate=20, gpus=1, max_epochs = max_epoch, logger = logger, ) trainer.fit(ssl_classifier, train_loader, val_loader) I had confirmed that trainer.logger.log_dir returned directory which seems to save logs and trainer.logger.log_metrics returned <bound method TensorBoardLogger.log_metrics of <pytorch_lightning.loggers.tensorboard.TensorBoardLogger object at 0x7efcb89a3e50>>. trainer.logged_metrics returned only the log in the final epoch, like {'epoch': 19, 'train_acc': tensor(1.), 'train_loss': tensor(0.1038), 'val_acc': 0.6499999761581421, 'val_loss': 1.2171183824539185} Do you know how to solve the situation?
Lightning does not store all logs by itself. All it does is stream them into the logger instance, and the logger decides what to do. The best way to retrieve all logged metrics is by having a custom callback: class MetricTracker(Callback): def __init__(self): self.collection = [] def on_validation_batch_end(self, trainer, module, outputs, ...): vacc = outputs['val_acc'] # you can access them here self.collection.append(vacc) # track them def on_validation_epoch_end(self, trainer, module): elogs = trainer.logged_metrics # access it here self.collection.append(elogs) # do whatever is needed You can then access all logged stuff from the callback instance: cb = MetricTracker() Trainer(callbacks=[cb]) cb.collection # do your plotting and stuff
https://stackoverflow.com/questions/69276961/
Understanding the architecture of an LSTM for sequence classification
I have this model in pytorch that I have been using for sequence classification. class RoBERT_Model(nn.Module): def __init__(self, hidden_size = 100): self.hidden_size = hidden_size super(RoBERT_Model, self).__init__() self.lstm = nn.LSTM(768, hidden_size, num_layers=1, bidirectional=False) self.out = nn.Linear(hidden_size, 2) def forward(self, grouped_pooled_outs): # chunks_emb = pooled_out.split_with_sizes(lengt) # splits the input tensor into a list of tensors where the length of each sublist is determined by length seq_lengths = torch.LongTensor([x for x in map(len, grouped_pooled_outs)]) # gets the length of each sublist in chunks_emb and returns it as an array batch_emb_pad = nn.utils.rnn.pad_sequence(grouped_pooled_outs, padding_value=-91, batch_first=True) # pads each sublist in chunks_emb to the largest sublist with value -91 batch_emb = batch_emb_pad.transpose(0, 1) # (B,L,D) -> (L,B,D) lstm_input = nn.utils.rnn.pack_padded_sequence(batch_emb, seq_lengths, batch_first=False, enforce_sorted=False) # seq_lengths.cpu().numpy() packed_output, (h_t, h_c) = self.lstm(lstm_input, ) # (h_t, h_c)) # output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, padding_value=-91) h_t = h_t.view(-1, self.hidden_size) # (-1, 100) return self.out(h_t) # logits The issue that I am having is that I am not entirely convinced of what data is being passed to the final classification layer. I believe what is being done is that only the final LSTM cell in the last layer is being used for classification. That is there are hidden_size features that are passed to the feedforward layer. I have depicted what I believe is going on in this figure here: Is this understanding correct? Am I missing anything? Thanks.
Your code is a basic LSTM for classification, working with a single rnn layer. In your picture you have multiple LSTM layers, while, in reality, there is only one, H_n^0 in the picture. Your input to the LSTM is of shape (B, L, D), as correctly pointed out in the comment. packed_output and h_c are not used at all, hence you can change that line to: _, (h_t, _) = self.lstm(lstm_input) in order not to clutter the picture further. h_t is the output of the last step for each batch element; in general it has shape (D * num_layers, B, hidden_size). As this neural network is not bidirectional, D=1, and as you have a single layer, num_layers=1 as well, hence the output is of shape (1, B, hidden_size). This output is reshaped into an nn.Linear-compatible shape (this line: h_t = h_t.view(-1, self.hidden_size)) and will give you an output of shape (B, hidden_size). This input is fed to a single nn.Linear layer. In general, the output of the last time step from the RNN is used for each element in the batch (in your picture H_n^0) and simply fed to the classifier. By the way, having self.out = nn.Linear(hidden_size, 2) in classification is probably counter-productive; most likely you are performing binary classification and self.out = nn.Linear(hidden_size, 1) with torch.nn.BCEWithLogitsLoss might be used. A single logit contains the information of whether the label should be 0 or 1; everything smaller than 0 is more likely to be 0 according to the network, everything above 0 is considered a 1 label.
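A short sketch of that last suggestion (labels below is a placeholder for your binary targets of shape (B,) with values 0/1; the rest follows the names in the question):

# in RoBERT_Model.__init__: a single logit per sequence
self.out = nn.Linear(hidden_size, 1)

# in the training loop
criterion = torch.nn.BCEWithLogitsLoss()           # applies the sigmoid internally
logits = model(grouped_pooled_outs).squeeze(1)     # (B, 1) -> (B,)
loss = criterion(logits, labels.float())           # labels: (B,) of zeros and ones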
https://stackoverflow.com/questions/69277384/
How to obtain the path name of train_dataset after using random_split in torch
I have the following code: import torch, torchvision root_dataset ="./data" dataset = torchvision.datasets.folder.ImageFolder(root=root_dataset, transform=None, target_transform=None) train_dataset, valid_dataset = torch.utils.data.dataset.random_split( dataset=dataset, lengths=[num_train, num_valid] ) My question is: How can I obtain the name list of the path of train_dataset after using random_split in torch? Thank you.
The paths (and labels) are stored in dataset.imgs. For instance, for imagenet: In [ ]: print(dataset.imgs[0]) Out [ ]: ('/shareDB/imagenet/val/n01440764/ILSVRC2012_val_00000293.JPEG', 0) After splitting the dataset, each split points to the original dataset: In [ ]: len(train_dataset.dataset), len(valid_dataset.dataset) Out [ ]: (50000, 50000) However, each split also holds the indices of samples from the original dataset selected for the split. You can use these indices and the original dataset to get a list of the images selected for each split: valid_imgs = [valid_dataset.dataset.imgs[i_] for i_ in valid_dataset.indices] train_imgs = [train_dataset.dataset.imgs[i_] for i_ in train_dataset.indices]
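If you only need the file paths without the labels, strip the label from each (path, label) tuple:
train_paths = [path for path, _ in train_imgs]
valid_paths = [path for path, _ in valid_imgs]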
https://stackoverflow.com/questions/69283506/
Normalization of the dataset, Error: all elements of input should be between 0 and 1
I have a problem with data normalization in PyTorch when I try to execute the training. First thing you need to know is that the dataset is composed of 3024 signal windows (so 1 channel), each one with a length of 5000 samples, so the dimension of the CSV file is 5000x3024. Each signal has 1 label that needs to be predicted. Here is the code for how I load and normalize the data: class CSVDataset(Dataset): # load the dataset def __init__(self, path, normalize = False): # load the csv file as a dataframe df = read_csv(path) df = df.transpose() # store the inputs and outputs self.X = df.values[:, :-1] self.y = df.values[:, -1] print("Dataset length: ", self.X.shape[0]) # ensure input data is floats self.X = self.X.astype(np.float) self.y = self.y.astype(np.float) if normalize: self.X = self.X.reshape(self.X.shape[1], self.X.shape[0]) min_X = np.min(self.X,0) # returns an array of means for each signal window max_X = np.max(self.X,0) self.X = (self.X - min_X)/(max_X-min_X) min_y = np.min(self.y) max_y = np.max(self.y) self.y = (self.y - min_y)/(max_y-min_y) # reshape input data self.X = self.X.reshape(self.X.shape[0], 1, self.X.shape[1]) self.y = self.y.reshape(self.y.shape[0], 1) # label encode target and ensure the values are floats self.y = LabelEncoder().fit_transform(self.y) self.y = self.y.astype(np.float) # prepare the dataset def prepare_data(path): # load the dataset dataset = CSVDataset(path, normalize = True) # calculate split train, test = dataset.get_splits() # prepare data loaders train_dl = DataLoader(train, batch_size=32, shuffle=True) test_dl = DataLoader(test, batch_size=1024, shuffle=False) return train_dl, test_dl While the train method is: def train_model(train_dl, model): # define the optimization criterion = BCELoss() optimizer = SGD(model.parameters(), lr=0.01, momentum=0.9) model = model.float() # enumerate epochs for epoch in range(100): # enumerate mini batches for i, (inputs, targets) in enumerate(iter(train_dl)): targets = torch.reshape(targets, (32, 1)) # clear the gradients optimizer.zero_grad() # compute the model output yhat = model(inputs.float()) # calculate loss loss = criterion(yhat, targets.float()) # credit assignment loss.backward() # update model weights optimizer.step() The error that I get is in the line loss = criterion(yhat, targets.float()) and it says: RuntimeError: all elements of input should be between 0 and 1 I have tried inspecting the X in the variable explorer and it doesn't seem that there are any values that are not between 0 and 1. I don't know what I could have done wrong in normalization. Can you help me?
Builtin loss functions refer to input and target to designate the prediction and label instances respectively. The error message should be understood as "input of the criterion" i.e. yhat, and not as "input of the model". It seems yhat does not belong in [0, 1], while BCELoss expects a probability, not a logit. You can either add a sigmoid layer as the last layer of your model, or use nn.BCEWithLogitsLoss instead, which combines a sigmoid and the bce loss.
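Concretely, two sketches of the fix, assuming (as the question suggests) that the model ends in a single linear output per sample:
# Option 1: keep BCELoss and squash the model output into [0, 1] with a sigmoid
yhat = torch.sigmoid(model(inputs.float()))
loss = criterion(yhat, targets.float())

# Option 2 (numerically more stable): feed raw logits to BCEWithLogitsLoss
criterion = torch.nn.BCEWithLogitsLoss()
loss = criterion(model(inputs.float()), targets.float())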
https://stackoverflow.com/questions/69284230/
Linear autoencoder using Pytorch
How do we build a simple linear autoencoder and train it using torch.optim optimisers? How do I do it using autograd (.backward()) and optimising the MSE loss, and then learn the values of the weights and biases in the encoder, and the decoder (ie. 3 parameters in the encoder and 4 in the decoder)? And the data has to be randomized, for each run of learning, start from random weights and biases, such as: wEncoder = torch.randn(D,1, requires_grad=True) wDecoder = torch.randn(1,D, requires_grad=True) bEncoder = torch.randn(1, requires_grad=True) bDecoder = torch.randn(1,D, requires_grad=True) The target optimizer is SGD, learning rate 0.01, no momentum, and 1000 steps (from a random start), then how do we plot loss versus epochs (steps)? I tried this but the losses are the same for every epoch. D = 2 x = torch.rand(100,D) x[:,0] = x[:,0] + x[:,1] x[:,1] = 0.5*x[:,0] + x[:,1] loss_fn = nn.MSELoss() optimizer = optim.SGD([x[:,0],x[:,1]], lr=0.01) losses = [] for epoch in range(1000): running_loss = 0.0 inputs = x_reconstructed targets = x loss=loss_fn(inputs,targets) loss.backward(retain_graph=True) optimizer.step() optimizer.zero_grad() running_loss += loss.item() epoch_loss = running_loss / len(data) losses.append(running_loss)
This example should get you going. Please see code comments for further explanation: import torch # Use torch.nn.Module to create models class AutoEncoder(torch.nn.Module): def __init__(self, features: int, hidden: int): # Necessary in order to log C++ API usage and other internals super().__init__() self.encoder = torch.nn.Linear(features, hidden) self.decoder = torch.nn.Linear(hidden, features) def forward(self, X): return self.decoder(self.encoder(X)) def encode(self, X): return self.encoder(X) # Random data data = torch.rand(100, 4) model = AutoEncoder(4, 10) # Pass model.parameters() for increased readability # Weights of encoder and decoder will be passed optimizer = torch.optim.SGD(model.parameters(), lr=0.01) loss_fn = torch.nn.MSELoss() # Per-epoch losses are gathered # Loss is the mean of batch elements, in our case mean of 100 elements losses = [] for epoch in range(1000): reconstructed = model(data) loss = loss_fn(reconstructed, data) # No need to retain_graph=True as you are not performing multiple passes # of backpropagation loss.backward() optimizer.step() optimizer.zero_grad() losses.append(loss.item()) Please notice linear autoencoder is roughly equivalent to PCA decomposition, which is more efficient. You should probably use a non-linear autoencoder unless it is simply for training purposes.
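Since the question also asks how to plot loss versus steps, the gathered losses can be plotted afterwards, for example:
import matplotlib.pyplot as plt

plt.plot(range(1, len(losses) + 1), losses)
plt.xlabel("step")
plt.ylabel("MSE loss")
plt.show()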
https://stackoverflow.com/questions/69284837/
how to convert a csv file to character level one-hot-encode matrices?
I have a CSV file that looks like this I want to choose the last column and make character level one-hot-encode matrices of every sequence, I use this code and it doesn't work data = pd.read_csv('database.csv', usecols=[4]) alphabet = ['A', 'C', 'D', 'E', 'F', 'G','H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y'] charto = dict((c,i) for i,c in enumerate(alphabet)) iint = [charto[char] for char in data] onehot2 = [] for s in iint: lett = [0 for _ in range(len(alphabet))] lett[s] = 1 onehot2.append(lett) What do you suggest doing for this task? (by the way, I want to use this dataset for a PyTorch model)
I think it would be best to keep pd.DataFrame as is and do the transformation "on the fly" within a PyTorch Dataset. First, dummy data similar to yours:
df = pd.DataFrame(
    {
        "ID": [1, 2, 3],
        "Source": ["Serbia", "Poland", "Germany"],
        "Sequence": ["ABCDE", "EBCDA", "AAD"],
    }
)
After that, we can create a torch.utils.data.Dataset subclass (an example alphabet is shown, you might change it to anything you want):
class Dataset(torch.utils.data.Dataset):
    def __init__(self, df: pd.DataFrame):
        self.df = df
        # Change alphabet to anything you need
        alphabet = ["A", "B", "C", "D", "E", "F"]
        self.mapping = dict((c, i) for i, c in enumerate(alphabet))

    def __getitem__(self, index):
        sample = self.df.iloc[index]
        sequence = sample["Sequence"]
        target = torch.nn.functional.one_hot(
            torch.tensor([self.mapping[letter] for letter in sequence]),
            num_classes=len(self.mapping),
        )
        return sample.drop("Sequence"), target

    def __len__(self):
        return len(self.df)
This code simply transforms the indices of letters to their one-hot encoding via the torch.nn.functional.one_hot function. Usage is pretty simple:
ds = Dataset(df)
ds[0]
which returns (you might want to change how your sample is created though, as I'm not sure about the format and only focused on the one-hot-encoded targets) the following target (ID and Source omitted):
tensor([[1., 0., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0., 0.],
        [0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 1., 0., 0.],
        [0., 0., 0., 0., 1., 0.]])
https://stackoverflow.com/questions/69286139/
Why does Keras BatchNorm produce different output than PyTorch?
Torch:'1.9.0+cu111' Tensorflow-gpu:'2.5.0' I came across a strange thing, when using the Batch Normal layer of tensorflow 2.5 and the BatchNorm2d layer of Pytorch 1.9 to calculate the same Tensor , and the results were quite different (TensorFlow is close to 1, Pytorch is close to 0).I thought at first it was the difference between the momentum and epsilon , but after changing them to the same, the result was the same. from torch import nn import torch x = torch.ones((20, 100, 35, 45)) a = nn.Sequential( # nn.Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), padding=0, bias=True), nn.BatchNorm2d(100) ) b = a(x) import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras.layers import * x = tf.ones((20, 35, 45, 100)) a = keras.models.Sequential([ # Conv2D(128, (1, 1), (1, 1), padding='same', use_bias=True), BatchNormalization() ]) b = a(x)
Batchnormalization works differently in training and inference, During training (i.e. when using fit() or when calling the layer/model with the argument training=True), the layer normalizes its output using the mean and standard deviation of the current batch of inputs. That is to say, for each channel being normalized, the layer returns gamma * (batch - mean(batch)) / sqrt(var(batch) + epsilon) + beta where: epsilon is small constant (configurable as part of the constructor arguments) gamma is a learned scaling factor (initialized as 1), which can be disabled by passing scale=False to the constructor. beta is a learned offset factor (initialized as 0), which can be disabled by passing center=False to the constructor. During inference (i.e. when using evaluate() or predict() or when calling the layer/model with the argument training=False (which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns gamma * (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) + beta. self.moving_mean and self.moving_var are non-trainable variables that are updated each time the layer in called in training mode, as such: moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum) moving_var = moving_var * momentum + var(batch) * (1 - momentum) ref: https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization If you run the pytorch batchnorm in eval mode, you get close results (the rest of the discrepancy comes from the different internal implementation, parameter choices, etc.), from torch import nn import torch x = torch.ones((1, 2, 2, 2)) a = nn.Sequential( # nn.Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), padding=0, bias=True), nn.BatchNorm2d(2) ) a.eval() b = a(x) print(b) import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras.layers import * x = tf.ones((1, 2, 2, 2)) a = keras.models.Sequential([ # Conv2D(128, (1, 1), (1, 1), padding='same', use_bias=True), BatchNormalization() ]) b = a(x) print(b) out: tensor([[[[1.0000, 1.0000], [1.0000, 1.0000]], [[1.0000, 1.0000], [1.0000, 1.0000]]]], grad_fn=<NativeBatchNormBackward>) tf.Tensor( [[[[0.9995004 0.9995004] [0.9995004 0.9995004]] [[0.9995004 0.9995004] [0.9995004 0.9995004]]]], shape=(1, 2, 2, 2), dtype=float32)
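For completeness, the small residual gap in the eval-mode comparison above is just the different default epsilon (1e-3 in Keras vs 1e-5 in PyTorch): a freshly initialized layer has moving_mean=0, moving_var=1, gamma=1, beta=0, so an all-ones input comes out as 1/sqrt(1 + eps):
import math

print(1 / math.sqrt(1 + 1e-3))  # 0.9995004..., the value TF/Keras prints above
print(1 / math.sqrt(1 + 1e-5))  # 0.9999950..., the value PyTorch prints (shown rounded as 1.0000)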
https://stackoverflow.com/questions/69293455/
batch_first in PyTorch LSTM
I am new in this field, so I still don't understand about the batch_first in PyTorch LSTM. I tried the code that someone has referred to me, and it works on my train data when the batch_first = False, it produces the same output for Official LSTM and Manual LSTM. However, when I change the batch_first = True, it not produces the same value anymore, while I need to change the batch_first to True, because my dataset shape is tensor of (Batch, Sequences, Input size). Which part of the Manual LSTM needs to be changed to produces the same output as the Official LSTM produces when batch_first = True? Here is the code snippet: import numpy as np import torch import torch.nn as nn import torch.nn.functional as F train_x = torch.tensor([[[0.14285755], [0], [0.04761982], [0.04761982], [0.04761982], [0.04761982], [0.04761982], [0.09523869], [0.09523869], [0.09523869], [0.09523869], [0.09523869], [0.04761982], [0.04761982], [0.04761982], [0.04761982], [0.09523869], [0. ], [0. ], [0. ], [0. ], [0.09523869], [0.09523869], [0.09523869], [0.09523869], [0.09523869], [0.09523869], [0.09523869],[0.14285755], [0.14285755]]], requires_grad=True) seed = 23 torch.manual_seed(seed) np.random.seed(seed) pytorch_lstm = torch.nn.LSTM(1, 1, bidirectional=False, num_layers=1, batch_first=True) weights = torch.randn(pytorch_lstm.weight_ih_l0.shape,dtype = torch.float) pytorch_lstm.weight_ih_l0 = torch.nn.Parameter(weights) # Set bias to Zero pytorch_lstm.bias_ih_l0 = torch.nn.Parameter(torch.zeros(pytorch_lstm.bias_ih_l0.shape)) pytorch_lstm.weight_hh_l0 = torch.nn.Parameter(torch.ones(pytorch_lstm.weight_hh_l0.shape)) # Set bias to Zero pytorch_lstm.bias_hh_l0 = torch.nn.Parameter(torch.zeros(pytorch_lstm.bias_ih_l0.shape)) pytorch_lstm_out = pytorch_lstm(train_x) batch_size=1 # Manual Calculation W_ii, W_if, W_ig, W_io = pytorch_lstm.weight_ih_l0.split(1, dim=0) b_ii, b_if, b_ig, b_io = pytorch_lstm.bias_ih_l0.split(1, dim=0) W_hi, W_hf, W_hg, W_ho = pytorch_lstm.weight_hh_l0.split(1, dim=0) b_hi, b_hf, b_hg, b_ho = pytorch_lstm.bias_hh_l0.split(1, dim=0) prev_h = torch.zeros((batchsize,1)) prev_c = torch.zeros((batchsize,1)) i_t = torch.sigmoid(F.linear(train_x, W_ii, b_ii) + F.linear(prev_h, W_hi, b_hi)) f_t = torch.sigmoid(F.linear(train_x, W_if, b_if) + F.linear(prev_h, W_hf, b_hf)) g_t = torch.tanh(F.linear(train_x, W_ig, b_ig) + F.linear(prev_h, W_hg, b_hg)) o_t = torch.sigmoid(F.linear(train_x, W_io, b_io) + F.linear(prev_h, W_ho, b_ho)) c_t = f_t * prev_c + i_t * g_t h_t = o_t * torch.tanh(c_t) print('nn.LSTM output {}, manual output {}'.format(pytorch_lstm_out[0], h_t)) print('nn.LSTM hidden {}, manual hidden {}'.format(pytorch_lstm_out[1][0], h_t)) print('nn.LSTM state {}, manual state {}'.format(pytorch_lstm_out[1][1], c_t))
You have to iterate through each sequence element at a time and take the computed hidden and cell states as input in the next time step... h_t = torch.zeros((batch_size,1)) c_t = torch.zeros((batch_size,1)) hidden_seq = [] for t in range(30): x_t = train_x[:, t, :] i_t = torch.sigmoid(F.linear(x_t, W_ii, b_ii) + F.linear(h_t, W_hi, b_hi)) f_t = torch.sigmoid(F.linear(x_t, W_if, b_if) + F.linear(h_t, W_hf, b_hf)) g_t = torch.tanh(F.linear(x_t, W_ig, b_ig) + F.linear(h_t, W_hg, b_hg)) o_t = torch.sigmoid(F.linear(x_t, W_io, b_io) + F.linear(h_t, W_ho, b_ho)) c_t = f_t * c_t + i_t * g_t h_t = o_t * torch.tanh(c_t) hidden_seq.append(h_t.unsqueeze(0)) hidden_seq = torch.cat(hidden_seq, dim=0) hidden_seq = hidden_seq.transpose(0, 1).contiguous() print('nn.LSTM output {}, manual output {}'.format(pytorch_lstm_out[0], hidden_seq))
https://stackoverflow.com/questions/69293462/
Wrong "-1 background" annotations loaded from Custom COCO Dataset using Mmdetection
Introduction I'm working using Mmdetection to train a Deformable DETR model using a custom COCO Dataset. Meaning a Custom Dataset using the COCO format of annotations. The dataset uses the same images as the COCO with different "toy" annotations for a "playground" experiment and the annotation file was created using the packages pycocotools and json exclusively. I have made five variations of this playground dataset: 2 datasets with three classes (classes 1, 2, and 3), 1 dataset with six classes (classes 1 to 6) and 2 datasets with 7 classes (classes 1 to 7). The Problem Now, after creating the dataset in mmdetection using mmdet.datasets.build_dataset, I used the following code to check if everything was OK: from pycocotools.coco import COCO from os import path as osp from mmdet.datasets import build_dataset cfg = start_config() # this is simply a function to startup the config file ann_file = osp.join(cfg.data.train.data_root, cfg.data.train.ann_file) coco = COCO(ann_file) img_ids = coco.getImgIds() ann_ids = coco.getAnnIds(imgIds=img_ids) anns = coco.loadAnns(ids=ann_ids) cats_counter = {} for ann in anns: if ann['category_id'] in cats_counter: cats_counter[ann['category_id']]+=1 else: cats_counter[ann['category_id']] = 1 print(cats_counter) cats = {cat['id']:cat for cat in coco.loadCats(coco.getCatIds())} for i in range(len(cats_counter)): print("{} ({}) \t|\t{}".format(i, cats[i]['name'], cats_counter[i])) ds = build_dataset(cfg.data.train) print(ds) For three of the datasets the amounts from the json file and from the constructed mmdet dataset are almost exactly equal. However, for one of the 3-classes dataset and for the 6-classes dataset, the results are incredibly different, where this code returns the following: {3: 1843, 1: 659, 4: 1594, 2: 582, 0: 1421, 5: 498} 0 (1) | 1421 1 (2) | 659 2 (3) | 582 3 (4) | 1843 4 (5) | 1594 5 (6) | 498 loading annotations into memory... Done (t=0.06s) creating index... index created! CocoDataset Train dataset with number of images 1001, and instance counts: +---------------+-------+---------------+-------+---------------+-------+---------------+-------+---------------+-------+ | category | count | category | count | category | count | category | count | category | count | +---------------+-------+---------------+-------+---------------+-------+---------------+-------+---------------+-------+ | 0 [1] | 1421 | 1 [2] | 659 | 2 [3] | 581 | 3 [4] | 1843 | 4 [5] | 1594 | | | | | | | | | | | | | 5 [6] | 0 | -1 background | 45 | | | | | | | +---------------+-------+---------------+-------+---------------+-------+---------------+-------+---------------+-------+ and {1: 1420, 0: 4131, 2: 1046} 0 (1) | 4131 1 (2) | 1420 2 (3) | 1046 loading annotations into memory... Done (t=0.06s) creating index... index created! CocoDataset Train dataset with number of images 1001, and instance counts: +----------+-------+------------+-------+----------+-------+---------------+-------+----------+-------+ | category | count | category | count | category | count | category | count | category | count | +----------+-------+------------+-------+----------+-------+---------------+-------+----------+-------+ | | | | | | | | | | | | 0 [1] | 1419 | 1 [2] | 0 | 2 [3] | 0 | -1 background | 443 | | | +----------+-------+------------+-------+----------+-------+---------------+-------+----------+-------+ You can see that there is no "-1" id in the annotation json, and also some of the classes from the 3-classes dataset have 0 annotations, while the json clearly shows more than that. 
Has anyone encountered something similar using Mmdetection? What could be causing this problem?
There was a mismatch between the class names in the annotation file and the class names in the mmdetection config object. Correcting those solved the problem.
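As a rough sketch of what the fix looks like (hypothetical snippet: the exact keys depend on your config file, and the class names below are placeholders for your own categories):
# the names must match the "name" fields of the categories in the COCO json, in the same order
classes = ('1', '2', '3')
cfg.data.train.classes = classes
cfg.data.val.classes = classes
cfg.data.test.classes = classes
# and the head must predict the right number of classes
cfg.model.bbox_head.num_classes = len(classes)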
https://stackoverflow.com/questions/69293877/
Why does PyTorch's built-in loss function only work with the Long tensor type?
So, I'm using: torch.nn.CrossEntropyLoss(predictions, targets), and I wonder why exactly targets need to be a 64-bit integer and not 32-bit?
I wonder why exactly targets need to be a 64-bit integer and not 32-bit? This is because PyTorch is precompiled. Some other frameworks that preceded it (not Tensorflow) would invoke the compiler on-the-fly, which can cause a delay and some other unpleasantness. PyTorch doesn't. But this means the developers have to be mindful of the size of the precompiled library. They have to balance the utility of supporting yet another data type vs the increase in size that compiling everything for that data type would cause, and the decision here went against int32, uint32, uint64, etc.
https://stackoverflow.com/questions/69294379/
Why do you multiply the two images to see the correlation?
I have some questions about the CP Viton module: feature_A = feature_A.transpose(2,3).contiguous().view(b,c,h*w) feature_B = feature_B.view(b,c,h*w).transpose(1,2) # perform matrix mult. feature_mul = torch.bmm(feature_B,feature_A) print(feature_mul.size()) #torch.Size([4, 192, 192]) at this code For the multiplication of matrices, I don't know why they make it like b,hw,hw. It is said that multiplying the shape of the image as follows is to extract the correlation, but I don't know why. I'm talking about the bmm part.
The spatial correlation consists of computing the dot product of feature vectors of every (feature_A[k,:,i], feature_B[k,:,j]) feature pair. As such you first need to flatten the spatial dimension which results in a dimension of size h*w on both tensors. Your two operands will have a shape of (b, c, hw), and (b, hw, c). As a result of applying bmm, you end up with a tensor shaped (b, hw, hw)
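A small standalone example of the same computation, with made-up sizes:
import torch

b, c, h, w = 2, 8, 4, 4
feature_A = torch.randn(b, c, h, w)
feature_B = torch.randn(b, c, h, w)
fA = feature_A.transpose(2, 3).contiguous().view(b, c, h * w)  # (b, c, h*w)
fB = feature_B.view(b, c, h * w).transpose(1, 2)               # (b, h*w, c)
corr = torch.bmm(fB, fA)                                       # (b, h*w, h*w)
print(corr.shape)  # torch.Size([2, 16, 16])
# corr[k, i, j] is the dot product of feature_B's feature vector at location i
# with feature_A's feature vector at location j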
https://stackoverflow.com/questions/69295817/
How to update a pretrained model after Pruning of filters in its conv layer in PyTorch?
I have a pretrained model LeNet5 defined from scratch. I am performing pruning over filters in the convolution layers present in the model shown below. class LeNet5(nn.Module): def __init__(self, n_classes): super(LeNet5, self).__init__() self.feature_extractor = nn.Sequential( nn.Conv2d(in_channels=1, out_channels=20, kernel_size=5, stride=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2), nn.Conv2d(in_channels=20, out_channels=50, kernel_size=5, stride=1), nn.ReLU(), nn.MaxPool2d(kernel_size=2) ) self.classifier = nn.Sequential( nn.Linear(in_features=800, out_features=500), nn.ReLU(), nn.Linear(in_features=500, out_features=10), # 10 - possible classes ) def forward(self, x): #x = x.view(x.size(0), -1) x = self.feature_extractor(x) x = torch.flatten(x, 1) logits = self.classifier(x) probs = F.softmax(logits, dim=1) return logits, probs I have successfully removed 2 filters from 20 in layer 1 (now 18 filters in conv2d layer1) and 5 filters from 50 in layer 2 (now 45 filters in conv2d layer3). So, now I need to update the model with the changes done as follows - out_channel of layer 1 - 20 to 18 in_channel of layer 3 - 20 to 18 out_channel of layer 3 - 50 to 45 However, I'm unable to run the model as it gives dimension error. RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x720 and 800x500) How to update the no. of filters layers present in the model using Pytorch to perform pruning? Is there any library I can use for the same?
Assuming you do not want the model to automatically change structure during runtime, you can easily update the structure of the model by simply changing the input parameters to the constructor. For instance:
nn.Conv2d(in_channels=1, out_channels=18, kernel_size=5, stride=1),
nn.Conv2d(in_channels=18, out_channels=45, kernel_size=5, stride=1),
and so on. If you are retraining from scratch every time you change the model structure, that's all you need to do. However, if you would like to maintain portions of the already learned parameters when you change the model, you'll need to select the relevant values from the pretrained model and reassign them to the resized model's parameters. For instance, consider the parameters associated with the first convolutional layer: 1 input channel, 20 output channels, and a kernel size of 5. The weight and bias of this layer have shape [20, 1, 5, 5] and [20]. You need to slice these parameters so that they have shape [18, 1, 5, 5] and [18]. You'd thus need the indices of the particular kernels/filters you want to keep and of those you'd like to prune. The code syntax for doing this is roughly:
params = pretrained_net.state_dict()  # state dict of the original (unpruned) model
params["feature_extractor.0.weight"] = params["feature_extractor.0.weight"][:18]       # -> [18, 1, 5, 5]
params["feature_extractor.0.bias"] = params["feature_extractor.0.bias"][:18]           # -> [18]
params["feature_extractor.3.weight"] = params["feature_extractor.3.weight"][:45, :18]  # -> [45, 18, 5, 5]
params["feature_extractor.3.bias"] = params["feature_extractor.3.bias"][:45]           # -> [45]
# ... and so on for the remaining layers
resized_net.load_state_dict(params)  # the model built with the new channel counts
Here, I simply drop the last two kernels/bias values of the first convolutional layer (and the matching input channels of the second); in practice you would index with the filters you actually decided to keep. (Note that the exact state_dict key names follow how the layers are registered inside the nn.Sequential; print(pretrained_net.state_dict().keys()) shows them.)
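Separately, the RuntimeError in the question ("32x720 and 800x500") comes from the first fully-connected layer: with 45 filters left in the second conv layer, the flattened feature map has 45 * 4 * 4 = 720 values instead of 50 * 4 * 4 = 800, so the classifier has to be updated too (and, if you keep pretrained weights, the columns of its weight matrix that correspond to removed filters dropped as well):
self.classifier = nn.Sequential(
    nn.Linear(in_features=45 * 4 * 4, out_features=500),  # 720 inputs, matching the error message
    nn.ReLU(),
    nn.Linear(in_features=500, out_features=10),
)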
https://stackoverflow.com/questions/69299089/
Create tensor with arrays of different dimensions in PyTorch
I want to concatenate arrays of different dimensions to feed them to my neural network that will have as first layer the AdaptiveAveragePooling1d. I have a dataset that is composed of several signals (1D arrays), each one with a different length. For example: array1 = np.random.randn(1200,1) array2 = np.random.randn(950,1) array3 = np.random.randn(1000,1) I want to obtain a tensor in which I concatenate these three signals to obtain a 2D tensor. However if I try to do tensor = torch.Tensor([array1, array2, array3]) It gives me this error: ValueError: expected sequence of length 1200 at dim 2 (got 950) Is there a way to obtain such thing? EDIT More information about the dataset: Each signal window represents a heart beat on the ECG registration, taken from several patients, sampled with a sampling frequency of 1000Hz The beats can have different lengths, because it depends on the heart rate of the patient itself For each beat I need to predict the length of the QRS interval (the target of the network) that I have, expressed in milliseconds I have already thought of interpolating the shortest samples to the the length of the longest ones, but then I would also have to change the length of the QRS interval in the labels, is that right? I have read of this AdaptiveAveragePooling1d layer, that would allow me to input the network with samples of different sizes. But my problem is how do I input the network a dataset in which each sample has a different length? How do I group them without using a filling method with NaNs or zeros? I hope I explained myself.
This disobeys the definition of a tensor and is impossible. If a tensor is of shape (NxMx1), all of the N matrices must be of size (Mx1). There are still ways to get all your arrays to the same length. Look at where your data is coming from and what its structure is and figure out which of the following solutions would work. Some of these may change the signal's derivative in a way you don't like Cropping arrays to the same size (ie cutting start/end off) or zero padding the shorter ones to the length of the longer one (I really dislike this one and it would only work for very specific applications) 'Stretching' the arrays to the same size by using interpolation Shortening the arrays to the same size by subsampling For some applications, maybe even passing the coefficients of a fourier series from the signals EDIT For heart rate, which should be a roughly periodic signal, I'd definitely crop the signal which should work quite well. Passing FFT(equally cropped signals) or Fourier coefficients may also yield interesting results, but from my experience with neural spike data, training on the FFT of a signal like this doesn't perform any better when you have enough data to train off. Also if you're using a fully connected network, a using 1D convolutions is a good alternative to try.
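If you go the interpolation ("stretching") route, here is a minimal sketch using torch.nn.functional.interpolate on the arrays from the question; note that stretching rescales the time axis, so a QRS duration label given in milliseconds would have to be rescaled by the same factor, as you suspected:
import torch
import torch.nn.functional as F

target_len = 1000
signals = [array1, array2, array3]  # the (L_i, 1) numpy arrays from the question
resampled = [
    F.interpolate(torch.from_numpy(s).float().T.unsqueeze(0),  # (1, 1, L_i)
                  size=target_len, mode='linear', align_corners=False)
    for s in signals
]
batch = torch.cat(resampled, dim=0)  # (3, 1, 1000): batch x channel x length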
https://stackoverflow.com/questions/69300510/
Initializing model parameters in pytorch manually
I am creating a separate class to initializer model and adding layers in a list, but those layers are not being added to parameters, plz tell how to add them to parameters() of model. class Mnist_Net(nn.Module): def __init__(self,input_dim,output_dim,hidden_layers=2,neurons=128): super().__init__() layers = [] for i in range(hidden_layers): if len(layers) == 0: layers.append(nn.Linear(input_dim,neurons)) if i == hidden_layers-1: layers.append(nn.Linear(layers[-2].weight.shape[0],output_dim)) layers.append(nn.Linear(layers[i-1].weight.shape[0],neurons)) self.layers= layers When I print model.parameters() model = Mnist_Net(28*28,10,neurons=56) for t in model.parameters(): print(t) it shows nothing, but when I add layers in class like self.layer1 = nn.Linear(input_dim,neurons) It shows one layer in parameters.Plz tell How can I add all layers in self.layers in model.parameters()
To be registered in the parent module, your submodules should be nn.Modules themselves. In your case, you should wrap layers with nn.ModuleList: self.layers = nn.ModuleList(layers) Then, your layers will be registered: >>> model = Mnist_Net(28*28,10, neurons=56) >>> for t in model.parameters(): ... print(t.shape) torch.Size([56, 784]) torch.Size([56]) torch.Size([56, 56]) torch.Size([56]) torch.Size([10, 56]) torch.Size([10]) torch.Size([56, 56]) torch.Size([56])
https://stackoverflow.com/questions/69300595/
Spacy inference goes OOM when processing several documents
I'm using spacy to process documents that come through rest api. To be more specific, I'm using transformer based model en_core_web_trf for NER, running on GPU. Here is a code snippet of the spacy related class (It is packed inside some basic flask server and but I don't suppose that matters here) class SpacyExtractor(): def __init__(self): spacy.require_gpu() self.model = spacy.load('en_core_web_trf', disable=["tagger", "parser", "attribute_ruler", "lemmatizer"]) def get_named_entities(self, text: str): doc = self.model(text) entities = [] for ent in doc.ents: entities.append((ent.text, ent.label_)) return entities The problem is, with each call of get_named_entities, the amount of GPU memory allocated goes up. And it is like 2-3 GB every time (I checked this by repeatedly calling nvidia-smi while the app was processing the docs). So after a few calls, I get OOM error RuntimeError: CUDA out of memory. Tried to allocate 2.35 GiB (GPU 0; 10.76 GiB total capacity; 5.02 GiB already allocated; 1.18 GiB free; 8.41 GiB reserved in total by PyTorch) Documents are not huge at all, 1-100 pages of text for each one. I think I make some mistake, but I just don't see it. Environment: Ubuntu 18.04, Python 3.8, spacy 3.1.3, cuda 9.1, RTX 2080Ti 11GB RAM EDIT: Also, I found out the OOM error when processing a single really long document, presented as a single long string.
The problem is, with each call of get_named_entities, the amount of GPU memory allocated goes up. You should detach your data as explained in the FAQ: Don’t accumulate history across your training loop. By default, computations involving variables that require gradients will keep history. This means that you should avoid using such variables in computations which will live beyond your training loops, e.g., when tracking statistics. Instead, you should detach the variable or access its underlying data. Edit You can also use with torch.no_grad(): doc = self.model(text) EDIT: Also, I found out the OOM error when processing a single really long document, presented as a single long string. Well, this is to be expected.
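Applied to the class from the question, that just means wrapping the pipeline call (a small sketch of the modified method):
import torch

def get_named_entities(self, text: str):
    with torch.no_grad():  # do not build autograd history during inference
        doc = self.model(text)
    return [(ent.text, ent.label_) for ent in doc.ents]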
https://stackoverflow.com/questions/69301276/
Fastest method of reading tensor objects from files in python
I am training PyTorch models on various datasets. The datasets up to this point have been images so I can just read them on the fly when needed using cv2 or PIL which is fast. Now I am presented with a dataset of tensor objects of shape [400, 400, 8]. In the past I have tried to load these objects using PyTorch and NumPy's built-in tensor reading operations but these are generally much slower than reading images. The objects are currently stored in h5py compressed files where there are ~800 per file. My plan was to save the objects individually in some format and then read them on the fly but I am unsure of what format to save them in which is fastest. I would like to avoid keeping them all in memory as I believe the memory requirement would be too high.
If the data arrays are still "images", just 8-channel ones, you can split them into 3 image files:
a = x[:, :, 0:3]
b = x[:, :, 3:6]
c = x[:, :, 5:8]
c[:, :, 0] = 0  # channel 5 is already stored in b; zeroing the duplicate keeps the compressed size down
and store them using the conventional image libraries (cv2 and PIL). Images compress much better than general data (lossy 'jpeg' even more so), and therefore that reduces both the disk space and bandwidth, and has file system caching benefits.
https://stackoverflow.com/questions/69302032/
pytorch model predicts fixed label when it exports to onnx
I trained resnet-18 model in pytorch. And it works well in pytorch. But, when I converts it to onnx and predicts in cv2, model predicts only 1~2 label(it should predict 0~17 labels). this is my model export code model.eval() x = torch.randn(1, 3, 512, 384, requires_grad=True) # export model torch.onnx.export(model, x, "model.onnx", export_params=True, opset_version=10, do_constant_folding=True, input_names = ['input'], output_names = ['output']) And this is my code for inference in cv2 self.transform = albumentations.Compose([ albumentations.Resize(512, 384, cv2.INTER_LINEAR), albumentations.GaussianBlur(3, sigma_limit=(0.1, 2)), albumentations.Normalize(mean=(0.5), std=(0.2)), albumentations.ToFloat(max_value=255) ]) ... #image crop code: works fine in pytorch image = frame[ymin:ymax, xmin:xmax] #type(frame)==numpy.array, RGB form augmented = self.transform(image=image) image = augmented["image"] ... #inference code: does not work well net=cv2.dnn.readNet("Model.onnx") blob = cv2.dnn.blobFromImage(image, swapRB=False, crop=False) net.setInput(blob) label = np.array(net.forward()) text = 'Label: '+str(np.argmax(label[0])) All transform settings works well in pytorch. What can be the problem in this code?
The problem with your code probably has to do with preprocessing the images differently: self.transform rescales the image, but when you are reading blob, you are not doing that. To verify this, you can read the same image and check if the image and blob are equal (e.g. using torch.allclose), when the actual (random) augmentations are disabled.
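One way to see the mismatch is to push the same crop through both preprocessing paths and compare; this sketch reuses image, albumentations and cv2 from the question and leaves out the random GaussianBlur so the comparison is deterministic:
aug = albumentations.Compose([
    albumentations.Resize(512, 384, cv2.INTER_LINEAR),
    albumentations.Normalize(mean=(0.5), std=(0.2)),
    albumentations.ToFloat(max_value=255),
])
torch_input = aug(image=image)["image"].transpose(2, 0, 1)[None]  # (1, 3, 512, 384), small values around 0
cv_blob = cv2.dnn.blobFromImage(image, swapRB=False, crop=False)  # original size, values still in [0, 255]
print(torch_input.shape, torch_input.min(), torch_input.max())
print(cv_blob.shape, cv_blob.min(), cv_blob.max())
# the shapes and value ranges differ, so blobFromImage needs the same resize/scale/mean as the training transform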
https://stackoverflow.com/questions/69304079/
Using CUDA 11.x but getting error: Unknown CUDA arch (8.6) or GPU not supported
I'm setting up a conda environment to use pytorch 1.4.0 (on Ubuntu 20.04.2), but getting the error message: ValueError: Unknown CUDA arch (8.6) or GPU not supported I know this has been asked before, but no answer fits my case. This answer suggests that the CUDA version is too old. However, I updated my CUDA version to the most recent, and get the same error message. nvcc -V says I have CUDA 11 installed, and when I run nvidia-smi I get this info: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.84 Driver Version: 460.84 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ which, according to the NVIDIA docs, should work be compatible: Another auxilliary question: What does the "8.6" in CUDA arch (8.6) represent?
Specific versions of PyTorch work only with specific versions of CUDA, and each PyTorch release only knows about the GPU architectures that existed when it was built. The "8.6" in the error message is your GPU's CUDA compute capability (sm_86), i.e. an Ampere-generation card such as an RTX 30-series GPU; PyTorch 1.4.0 predates those cards, which is why it reports the arch as unknown. If you are using CUDA 11.x, you'll need a fairly recent version of PyTorch: upgrading PyTorch is the fix here (the usual alternative of downgrading CUDA won't help, since an sm_86 GPU itself requires CUDA 11.1 or newer).
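You can confirm what the 8.6 refers to by querying the device (with a PyTorch build recent enough to see the GPU):
import torch

print(torch.cuda.get_device_capability(0))  # e.g. (8, 6) on an Ampere card such as an RTX 30-series GPU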
https://stackoverflow.com/questions/69304277/
Getting RESNet18 to work with float32 data
I have float32 data that I am trying to get RESNet18 to work with. I am using the RESNet model in torchvision (and using pytorch lightning) and modified it to use one layer (grayscale) data like so: class ResNetMSTAR(pl.LightningModule): def __init__(self): super().__init__() # define model and loss self.model = resnet18(num_classes=3) self.model.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) self.loss = nn.CrossEntropyLoss() @auto_move_data # this decorator automatically handles moving your tensors to GPU if required def forward(self, x): return self.model(x) def training_step(self, batch, batch_no): # implement single training step x, y = batch logits = self(x) loss = self.loss(logits, y) return loss def configure_optimizers(self): # choose your optimizer return torch.optim.RMSprop(self.parameters(), lr=0.005) When I try to run this model I am getting the following error: File "/usr/local/lib64/python3.6/site-packages/torch/nn/functional.py", line 2824, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward Is there anything that I can do differently to keep this error from happening?
The problem is that the y you're feeding to your cross-entropy loss is not a LongTensor, but a FloatTensor. CrossEntropyLoss expects to be fed a LongTensor as the target, and raises the error otherwise. This is a quick (if ugly) fix:
x, y = batch
y = y.long()
But what I recommend is to go to where the dataset is defined and make sure you are generating long targets; that way you won't reproduce this error if you change how your training loop works.
https://stackoverflow.com/questions/69307041/
PyTorch Generating Matrix using a Kernel Function without For-Loops
I am trying to generate a matrix (tensor object on PyTorch) that is similar to Gram matrix except I need to apply a kernel function instead of inner product on my input matrix. For loops like the one below works: N = x.shape[0] # x.shape = (N,d) G = torch.zeros((N,N)) for i in range(N): for j in range(N): G[i][j] = K(x[i], x[j]) where x is my input tensor whose shape is (N,d) and the kernel function K(a,b) yields a real value after performing some math. For example: def K(a,b): return ((1+(a*b)).sum()).pow(2) #second degree polynomial. I want to generate this matrix, G without having to change the kernel function K() and of course, without for-loops! My initial attempt is to use a lambda approach but this code below obviously doesn't work as it only yields a list of k(x[i],x[i]). G = torch.tensor(list(map(lambda a,b: K(a,b),x,x)) How can I use the lambda function to yield N-by-N matrix? What would be some other ways to tackle this problem? Any insight would be appreciated.
You can calculate G from x in one shot with: G = (1 + torch.matmul(x, x.T)).pow(2). One caveat: as K is written, the 1 is added to every component before summing, so K(a, b) = (d + a·b)² with d = x.shape[1], not (1 + a·b)². To reproduce the posted K exactly, use G = (x.shape[1] + torch.matmul(x, x.T)).pow(2); if you actually meant the standard degree-2 polynomial kernel, change K to (1 + (a * b).sum()).pow(2) and the first line already matches.
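A quick check that the vectorized form matches the loop (using the d offset so it reproduces the posted K literally):
import torch

N, d = 5, 3
x = torch.randn(N, d)

def K(a, b):
    return ((1 + (a * b)).sum()).pow(2)

G_loop = torch.tensor([[K(x[i], x[j]).item() for j in range(N)] for i in range(N)])
G_vec = (d + torch.matmul(x, x.T)).pow(2)
print(torch.allclose(G_loop, G_vec, atol=1e-4))  # True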
https://stackoverflow.com/questions/69309085/
Synthesizing 1x1 convolution layer with fully connected layers
I'm trying to synthesize a 1x1 convolution layer with fully connected layers. This means a fully connected neural network deciding the parameters of a 1x1 convolution layer. Here is how I do. class Network(nn.Module): def __init__(self, len_input, num_kernels): self.input_layers = nn.Sequential( nn.Linear(len_input, num_kernels * 2), nn.ReLU(), nn.Linear(num_kernels * 2, num_kernels), nn.ReLU() ) self.synthesized_conv = nn.Conv2d(in_channels=3, out_channels=num_kernels, bias=False, kernel_size=1) self.conv_layers = nn.Sequential( nn.ReLU(), nn.Conv2d(in_channels=num_kernels, out_channels=3, kernel_size=1) ) def forward(self, x1, img): x = self.input_layer(x1.float()) with torch.no_grad(): self.synthesized_conv.weight = nn.Parameter(x.reshape_as(self.synthesized_conv.weight)) generated = self.conv_layer(self.synthesized_conv(img)) return generated There you can see that I'm initializing a 1x1 conv layer called "synthesized_conv" and trying to replace it's parameters with a fully connected network output called "self.input_layers" with call-by-reference. However, gradients doesn't seem like flowing through the fully connected network, but only flowing through convolutional layers. Here's how parameter histogram for fully connected layers looks like: This histogram comes as a strong indicator of those fully connected part is not learning at all. It's most likely a malpractice of convolution parameter update by fully connected network output. Can someone help me how can I do it without breaking the autograd graph?
The issue is you are redefining, again and again, the weight attribute of your model. An more direct solution would be to use the functional approach, i.e. torch.nn.functional.conv2d: class Network(nn.Module): def __init__(self, len_input, num_kernels): super().__init__() self.input_layers = nn.Sequential( nn.Linear(len_input, num_kernels * 2), nn.ReLU(), nn.Linear(num_kernels * 2, num_kernels * 3), nn.ReLU()) self.synthesized_conv = nn.Conv2d( in_channels=3, out_channels=num_kernels, kernel_size=1) self.conv_layers = nn.Sequential( nn.ReLU(), nn.Conv2d(in_channels=num_kernels, out_channels=3, kernel_size=1)) def forward(self, x1, img): x = self.input_layers(x1.float()) w = x.reshape_as(self.synthesized_conv.weight) generated = F.conv2d(img, w) return generated Also, I believe your input_layers will have to output num_kernels * 3 components in total since you have three channels total on your synthesized convolution. Here is a test example: >>> model = Network(10,3) >>> out = model(torch.rand(1,10), torch.rand(1,3,16,16)) >>> out.shape (torch.Size([1, 3, 16, 16]), <ThnnConv2DBackward at 0x7fe5d8e41450>) Of course, the parameters of synthesized_conv will never be changed, since they are never being used to infer the output. You can remove self.synthesized_conv altogether: class Network(nn.Module): def __init__(self, len_input, num_kernels): super().__init__() self.input_layers = nn.Sequential( nn.Linear(len_input, num_kernels * 2), nn.ReLU(), nn.Linear(num_kernels * 2, num_kernels*3), nn.ReLU()) self.syn_conv_shape = (num_kernels, 3, 1, 1) self.conv_layers = nn.Sequential( nn.ReLU(), nn.Conv2d(in_channels=num_kernels, out_channels=3, kernel_size=1)) def forward(self, x1, img): x = self.input_layers(x1.float()) generated = F.conv2d(img, x.reshape(self.syn_conv_shape)) return generated
https://stackoverflow.com/questions/69311127/
Why doesn't torch pruning actually remove filters or weights?
I work with one architecture and trying to sparse it via prune. I wrote functions for pruning, here is one of them: def prune_model_l1_unstructured(model, layer_type, proportion): for module in model.modules(): if isinstance(module, layer_type): prune.l1_unstructured(module, 'weight', proportion) prune.remove(module, 'weight') return model # prune model prune_model_l1_unstructured(model, nn.Conv2d, 0.5) It prunes some weights (change them to zeros). But prune.remove just deletes original weights and keeps zeros instead. Total amount of parameters still same (I checked it). The model's file (model.pt) size still the same too. And the model's "speed" still the same after it. I tried also global pruning and structured L1 pruning, results are the same. So how this can help to improve model's performance time? Why aren't the weights being removed and how to remove pruned connections?
TLDR; PyTorch prune's function just works as a weight mask, that's all it does. There are no memory savings associated with using torch.nn.utils.prune. As the documentation page for torch.nn.utils.prune.remove states: Removes the pruning reparameterization from a module and the pruning method from the forward hook. In effect, this means it will remove the mask - that prune.l1_unstructured added - from the parameter. As a side effect, removing the prune will imply having zeros on the previously masked values but these won't stay as 0s. In the end, PyTorch prune will only take more memory compared to not using it. So this is not actually the functionality you are looking for. You can read more on this comment. Here is an example: >>> module = nn.Linear(10,3) >>> prune.l1_unstructured(module, name='weight', amount=0.3) The weight parameters are masked: >>> module.weight tensor([[-0.0000, 0.0000, -0.1397, -0.0942, 0.0000, 0.0000, 0.0000, -0.1452, 0.0401, 0.1098], [ 0.2909, -0.0000, 0.2871, 0.1725, 0.0000, 0.0587, 0.0795, -0.1253, 0.0764, -0.2569], [ 0.0000, -0.3054, -0.2722, 0.2414, 0.1737, -0.0000, -0.2825, 0.0685, 0.1616, 0.1095]], grad_fn=<MulBackward0>) Here is the mask: >>> module.weight_mask tensor([[0., 0., 1., 1., 0., 0., 0., 1., 1., 1.], [1., 0., 1., 1., 0., 1., 1., 1., 1., 1.], [0., 1., 1., 1., 1., 0., 1., 1., 1., 1.]]) Notice that when applying prune.remove, the pruning is removed. And, the masked values remain at zero but are "unfrozen": >>> prune.remove(module, 'weight') >>> module.weight tensor([[-0.0000, 0.0000, -0.1397, -0.0942, 0.0000, 0.0000, 0.0000, -0.1452, 0.0401, 0.1098], [ 0.2909, -0.0000, 0.2871, 0.1725, 0.0000, 0.0587, 0.0795, -0.1253, 0.0764, -0.2569], [ 0.0000, -0.3054, -0.2722, 0.2414, 0.1737, -0.0000, -0.2825, 0.0685, 0.1616, 0.1095]], grad_fn=<MulBackward0>) And the mask is gone: >>> hasattr(module, 'weight_mask') False
https://stackoverflow.com/questions/69311857/
How to do a masked mean in PyTorch?
This is the forward pass of a bidirectional rnn where I want to take the avg pool of the output features. As you can see, I'm trying to exclude the time steps with a pad token from the calculation. def forward(self, text): # text is shape (B, L) embed = self.embed(text) rnn_out, _ = self.rnn(embed) # (B, L, 2*H) # Calculate average ignoring the pad token with torch.no_grad(): rnn_out[text == self.pad_token] *= 0 denom = torch.sum(text != self.pad_token, -1, keepdim=True) feat = torch.sum(rnn_out, dim=1) / denom feat = self.dropout(feat) return feat Backpropagation raises an exception because of the line rnn_out[text == self.pad_token] *= 0. Here's what it looks like: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32, 21, 128]], which is output 0 of CudnnRnnBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). What's the correct way to do this? Note: I know I can do this by doing and/or of the following: Provide text lengths as an input. Loop through batch dimension, finding the mean for each sequence then stack the result. But I want to know if there's a cleaner way not involving those.
You're modifying a vector in a context where you disable the building of a computational graph (and you modify it inplace using *=), this will wreak havoc on the computation of the gradient. Instead I'd suggest the following: mask = text != self.pad_token denom = torch.sum(mask, -1, keepdim=True) feat = torch.sum(rnn_out * mask.unsqueeze(-1), dim=1) / denom Maybe you have to tweak this snippet a little bit, I couldn't test it as you haven't provided a complete example, but it hopefully shows the technique you can use.
https://stackoverflow.com/questions/69314108/
PyTorch - discard dataloader batch
I have a custom Dataset that loads data from large files. Sometimes, the loaded data are empty and I don't want to use them for training. In Dataset I have: def __getitem__(self, i): (x, y) = self.getData(i) #getData loads data and handles problems return (x, y) which in case of bad data return (None, None) (x and y are both None). However, it later fails in DataLoader and I am not able to skip this batch entirely. I have the batch size set to 1. trainLoader = DataLoader(trainDataset, batch_size=1, shuffle=False) for x_batch, y_batch in trainLoader: #process and train
You could implement a custom IterableDataset and define a __next__ and __iter__ that would skip any instances for which your getData function has raised an error on: Here is a possible implementation with dummy data: class DS(IterableDataset): def __init__(self): self.data = torch.randint(0,3,(20,)) self._i = -1 def getData(self, index): x = self.data[index] if x == 0: raise ValueError return x def __iter__(self): return self def __next__(self): self._i += 1 if self._i == len(self.data): # out of instances self._i = -1 # reset the iterable raise StopIteration # stop the iteration try: return self.getData(self._i) except ValueError: return next(self) You would use it like: >>> trainLoader = DataLoader(DS(), batch_size=1, shuffle=False) >>> for x in trainLoader: ... print(x) tensor([1]) tensor([2]) tensor([2]) ... tensor([1]) tensor([1]) Here all 0 instances have been skipped in the iterable dataset. You can adapt this simple example to fit your needs.
https://stackoverflow.com/questions/69316021/
Intution behind weighted random sampler in PyTorch
I am trying to use WeightedRandomSampler for handling imbalance in the dataset (class1: 2555, class 2: 227, class 3: 621, class 4: 2552 images). However, I debugged the steps but the intuition behind it is not clear to me. My target labels are in form of one-hot encoded vectors as below. train_labels.head(5) I converted the labels to class index as: labels = np.argmax(train_labels.loc[:, 'none':'both'].values, axis=1) train_labels = torch.from_numpy(labels) train_labels tensor([0, 0, 1, ..., 1, 0, 0]) Below are the steps, I used to calculate for the weighted random sampler. Please correct me if I am wrong with the interpretation of any steps. Count the number of samples per class in the dataset class_sample_count = np.array(train_labels.value_counts()) class_sample_count array([2555, 2552, 621, 227]) Calculate the weight associated with each class weight = 1. / class_sample_count weight array([0.00039139, 0.00039185, 0.00161031, 0.00440529]) Calculate the weight for each of the samples in the dataset. samples_weight = np.array(weight[train_labels]) print(samples_weight[1], samples_weight[2] ) 0.0003913894324853229 0.00039184952978056425 Convert the np.array to tensor tensor([0.0004, 0.0004, 0.0004, ..., 0.0004, 0.0004, 0.0004], dtype=torch.float64) After conversion to tensor, all the samples appear to have the same value in all four entries? Then how does Weighted Random Sampling is helping to deal with the imbalanced dataset? I will be grateful for the help. Thank you.
This is because you are computing the weights on the one-hot encodings, and since there are four components (four classes) you end up with four identical weights per instance after the indexing weight[train_labels]. The fact that you have identical weights is perfectly fine because each instance should be assigned a unique weight. To the sampler, this weight corresponds to the probability of picking this instance. If a given class is prominent in the dataset, the associated frequency (i.e. the weight) will be low, and as such instances of that class will have a lower probability of getting sampled from the dataset. With a fairly large number of samples, the goal with this weighting scheme is to have a balanced sampling even though class representations are imbalanced. If you stick with one-hot encodings, you can just pick the first column: >>> sample_weights = np.array(weight[train_labels])[:,0] Then use WeightedRandomSampler to construct a sampler: >>> sampler = WeightedRandomSampler(sample_weights, len(train_labels)) Finally you can plug it into a dataloader: >>> DataLoader(dataset, batch_size, sampler=sampler)
https://stackoverflow.com/questions/69318733/
PyTorch TypeError: flatten() takes at most 1 argument (2 given)
I am trying to run this program in PyTorch which is custom: class NeuralNetwork(nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.flatten = nn.Flatten(start_dim=1) self.linear_relu_stack = nn.Sequential( nn.Linear(2142, 51), nn.ReLU(), nn.Linear(51, 1) ) def forward(self, x): x = self.flatten(x) logits = self.linear_relu_stack(x) return logits model = NeuralNetwork().to(device) The above code is the custom NeuralNetwork. I defined the training loop below. def train_loop(data, y, model, loss_fn, optimizer): for i in range(data.shape[0]): # Compute model prediction and loss pred = model(data[i, :, :]) loss = loss_fn(pred, y[i, :]) # Backpropagation optimizer.zero_grad() loss.backward() optimizer.step() print("Loss: {}".format(loss.item())) The following is how I would like to train it final_data = torch.randn(500, 42, 51) final_output = torch.randn(500, 1) learning_rate = 1e-3 batch_size = 1 epochs = 5 loss_fn = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) epochs = 10 for t in range(epochs): print(f"Epoch {t+1}\n-------------------------------") train_loop(final_data, final_output, model, loss_fn, optimizer) print('Done!!') The variable final_data is of shape 500x42x51. The variable final_output is of shape 500x1. I have been trying to run the above data for 10 epochs but I always end up with this error: Epoch 1 ------------------------------- --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-17-3fb934698ecf> in <module>() 9 for t in range(epochs): 10 print(f"Epoch {t+1}\n-------------------------------") ---> 11 train_loop(final_data, final_output, model, loss_fn, optimizer) 12 13 print('Done!!') 4 frames /usr/local/lib/python3.7/dist-packages/torch/nn/modules/flatten.py in forward(self, input) 38 39 def forward(self, input: Tensor) -> Tensor: ---> 40 return input.flatten(self.start_dim, self.end_dim) 41 42 def extra_repr(self) -> str: TypeError: flatten() takes at most 1 argument (2 given) The output is basically a classification between 0 or 1. I am still a newbie in terms of PyTorch and would like some help solving this issue and understanding what's wrong. Thank you
The error is a bit misleading, if you try running the code from a fresh kernel, the issues are elsewhere... There are multiple issues with your code: You are not using the correct shape for the target tensor y, here it should have a single dimension since the output tensor is two-dimensional. The target tensor should be of dtype Long When iterating over you data and selecting input (and target) with data[i, :, :] (and y[i, :]), you are essentially removing the batch axis. However all builtin nn.Module work with a batch axis. You can do a slice to avoid that side effect: with data[i:i+1] and y[i:i+1] respectively. Also do note that x[j, :, :] is identical to x[j]. That being said, the usage of the cross-entropy loss is not justified. You are outputting a single logit, so it doesn't make sense to use a cross-entropy loss. You can either output two logits on the last layer of your model, or switch to another loss function such as a binary cross-entropy loss (either using nn.BCELoss or a nn.BCEWithLogitsLoss which includes a sigmoid activation). In this case the target vector should be of dtype float, and its shape should equal that of pred. def train_loop(data, y, model, loss_fn, optimizer): for i in range(data.shape[0]): pred = model(data[i:i+1]) loss = loss_fn(pred, y[i:i+1].float()) # [...] final_data = torch.randn(500, 42, 51) final_output = torch.randint(0, 2, (500,1)) loss_fn = nn.BCEWithLogitsLoss() Then the following will work: >>> train_loop(final_data, final_output, model, loss_fn, optimizer)
https://stackoverflow.com/questions/69318900/
How can I build a neural network with 2 independent sets of weights and 2 losses?
Consider the following neural network: import torch import torch.nn as nn import torch.optim as optim class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.blue_1 = nn.Linear(2, 10) self.red_1 = nn.Linear(10, 5) self.blue_2 = nn.Linear(5, 4) self.red_2 = nn.Linear(4, 3) self.blue_3 = nn.Linear(3, 2) self.red_3 = nn.Linear(2, 1) def forward(self, x): x = torch.relu(self.blue_1(x)) x = self.red_1(x) x = self.blue_2(x) x = self.red_2(x) x = self.blue_3(x) x = self.red_3(x) return x net = Model() opt = optim.Adam(net.parameters()) features = torch.rand((10,2)) #10 inputs, each of 2D for epoch in range(3): x = net(features) loss = torch.sum(torch.randint(0,10,(10,)) - x) loss.backward() print(loss) I have 2 independent sets of weights, call them blues (e.g self.blue_1) and reds (e.g self.red_1). These 2 sets need to be multiplied in some combination (e.g see forward method). However, in comparison to what I have above I need the blue weights to be updated according to a certain loss function (e.g loss_blue = some_loss_function, and the red weights to be updated according to another loss function (e.g torch.sum(torch.randint(0,10,(10,)) - x)). It's important that the red loss doesn't propagate through the blue weights and vice versa. Is there a way to do this? I was thinking to even break it into 2 neural networks but I'm not sure if 1. it's the best approach, and 2. how to do it using that approach.
One way to do this would be, in four steps: do the first inference to compute loss_blue; backpropagate from loss_blue and update the blue parameters; then infer again to compute loss_red with the updated blue weights; backpropagate from loss_red and update the red parameters. This is similar to how you would go about training a GAN, alternating between generator and discriminator with successive backward passes and updates. Having two optimizers handling the two sets of parameters makes this easier to manage. Don't forget to clear the gradients before backpropagating so one loss doesn't pollute the other parameters with its gradient. Something like this should work out: net = Model() optim_blue = optim.Adam(net.blues()) # fetch all blue parameters optim_red = optim.Adam(net.reds()) # fetch all red parameters features = torch.rand((10,2)) #10 inputs, each of 2D # # inference, backprop and update on blue params out = net(features) loss_blue = torch.sum(torch.randint(0,10,(10,)) - out) optim_red.zero_grad() optim_blue.zero_grad() loss_blue.backward() optim_blue.step() # # inference, backprop and update on red params out = net(features) loss_red = out.mean() optim_red.zero_grad() optim_blue.zero_grad() loss_red.backward() optim_red.step() Edit based on comment: How do you specify which parameters/layers will be in the optimizer (optim_blue = optim.Adam(net.blues()))? Is it something along the lines of optim_blue = optim.Adam([net.blue_1, net.blue_2...])? Yes, something like that. For defining the optimizers, you can for example create two functions inside your model: reds and blues. def reds(self): return [*self.red_1.parameters(), *self.red_2.parameters(), *self.red_3.parameters()] def blues(self): return [*self.blue_1.parameters(), *self.blue_2.parameters(), *self.blue_3.parameters()] What makes the gradients separated when you call loss_blue.backward()? That is, what stops them from flowing through the reds? Is that the purpose of the 2 optimizers? When loss_blue.backward() is called, the gradient flows through all parameters of the model, including the red parameters. What makes all the difference is that optim_blue will only update the blue parameters, not the red ones.
https://stackoverflow.com/questions/69321536/
Fine Tuning Pretrained Model MobileNet_V3_Large PyTorch
I am trying to add a layer to fine-tune the MobileNet_V3_Large pre-trained model. I looked around at the PyTorch docs but they don't have a tutorial for this specific pre-trained model. I did find that I can fine-tune MobileNet_V2 with: model_ft = models.mobilenet_v2(pretrained=True, progress=True) model_ft.classifier[1] = nn.Linear(model_ft.last_channel, out_features=len(class_names)) but I am not sure what the linear layer for MobileNet V3 should look like.
For V3 Large, you should do model_ft = models.mobilenet_v3_large(pretrained=True, progress=True) model_ft.classifier[-1] = nn.Linear(1280, your_number_of_classes) (This would also work for V2, but the code you posted would not work for V3 correctly). To see the structure of your network, you can just do print(model_ft.classifier) or print(model_ft) For fine-tuning people often (but not always) freeze all layers except the last one. Again, the layer to not freeze is model_ft.classifier[-1] rather than model_ft.classifier[1]. Whether or not you should freeze layers depends on how much data you have, and is best determined empirically.
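If you want to go the freezing route, a minimal sketch could look like this (num_classes is a hypothetical value; only the new head is left trainable):

import torch.nn as nn
from torchvision import models

num_classes = 10  # hypothetical
model_ft = models.mobilenet_v3_large(pretrained=True, progress=True)

for param in model_ft.parameters():
    param.requires_grad = False  # freeze the pretrained backbone and head

model_ft.classifier[-1] = nn.Linear(1280, num_classes)  # the new layer is trainable by default

You would then pass only model_ft.classifier[-1].parameters() (or filter on requires_grad) to the optimizer.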
https://stackoverflow.com/questions/69321848/
RuntimeError: Trying to backward through the graph a second time
I'm trying to train 'trainable Bernoulli distribution' using 'pyro'. I want to train Bernoulli distribution's parameter(probability to win) using NLL loss. train_data is one-hot encoded sparse matrix(2034,19475) and train_labels has 4 value(4 class, [0,1,2,3]). import torch import pyro pyd = pyro.distributions print("torch version:", torch.__version__) print("pyro version:", pyro.__version__) import matplotlib.pyplot as plt import numpy as np torch.manual_seed(123) ### 0. define Negative Log Likelihood(NLL) loss function def nll(x_train, distribution): return -torch.mean(distribution.log_prob(torch.tensor(x_train, dtype=torch.float))) ### 1. initialize bernoulli distribution(trainable distribution) train_vars = (pyd.Uniform(low=torch.FloatTensor([0.01]), high=torch.FloatTensor([0.1])).rsample([train_data.shape[-1]]).squeeze()) distribution = pyd.Bernoulli(probs=train_vars) ### 2. initialize 'label 0' data class_mask = (train_labels==0) class_data = train_data[class_mask, :] ### 3. initialize optimizer optim = torch.optim.Adam([train_vars]) train_vars.requires_grad=True ### 4. train loop for i in range(0,100): loss = nll(class_data, distribution) loss.backward() When I run this code, I get RUNTIME ERROR like below.. How should I deal with this error case? your comment would be very very very appreciate. torch version: 1.9.0+cu102 pyro version: 1.7.0 --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-269-0081bb1bb843> in <module> 25 loss = nll(class_data, distribution) 26 ---> 27 loss.backward() 28 /nf/yes/lib/python3.8/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs) 253 create_graph=create_graph, 254 inputs=inputs) --> 255 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) 256 257 def register_hook(self, hook): /nf/yes/lib/python3.8/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 145 retain_graph = create_graph 146 --> 147 Variable._execution_engine.run_backward( 148 tensors, grad_tensors_, retain_graph, create_graph, inputs, 149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag RuntimeError: Trying to backward through the graph a second time (or directly access saved variables after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved variables after calling backward.
You need to move distribution = pyd.Bernoulli(probs=train_vars) inside the loop, because it uses train_vars, which requires_grad.
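A minimal sketch of that change, keeping the variable names from the question (only the training loop is shown; everything else stays as in the original code):

train_vars.requires_grad = True
optim = torch.optim.Adam([train_vars])
for i in range(100):
    # rebuild the distribution each iteration so its graph is tied to the current train_vars
    distribution = pyd.Bernoulli(probs=train_vars)
    loss = nll(class_data, distribution)
    optim.zero_grad()
    loss.backward()
    optim.step()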
https://stackoverflow.com/questions/69322200/
Some strange questions about copying a tensor in PyTorch
I am a bit confused about PyTorch's shared memory mechanism. a = torch.tensor([[1,0,1,0], [0,1,1,0]]) b = a b[b == 1] = 0 It's easy to see that a and b will simultaneously become tensor([[0,0,0,0],[0,0,0,0]]), because a and b share the same memory. When I changed the code to a = torch.tensor([[1,0,1,0], [0,1,1,0]]) b = a b = b - 1 b became tensor([[0,-1,0,-1],[-1,0,0,-1]]), but a is still torch.tensor([[1,0,1,0],[0,1,1,0]]). a and b are sharing the same memory. Why did b change, while a didn't?
In your second example a and b share the same reference, but b = b - 1 actually creates a copy. You are not affecting the underlying data of b (nor that of a, since it's the same buffer). You can look at it this way: >>> a = torch.tensor([[1,0,1,0], [0,1,1,0]]) >>> b1 = a >>> b2 = b1 - 1 Comparing their pointers to the data buffers: >>> a.data_ptr() == b1.data_ptr() True >>> b1.data_ptr() == b2.data_ptr() False If, in fact, you operate on b in place, you will of course change a as well: >>> a = torch.tensor([[1,0,1,0], [0,1,1,0]]) >>> b1 = a >>> b1.sub_(1) Then you haven't made a copy: >>> a.data_ptr() == b1.data_ptr() True
https://stackoverflow.com/questions/69323036/
Understanding of Pytorch NLLLOSS
PyTorch's negative log-likelihood loss, nn.NLLLoss is defined as: So, if the loss is calculated with the standard weight of one in a single batch the formula for the loss is always: -1 * (prediction of model for correct class) Example: Correct Class = 0 prediction of model for correct class = 0.5 loss = -1 * 0.5 So, why is it called the "negative log-likelihood loss", if there isn't a log function involved in calculating the loss? ​
Indeed no log is being used to compute the result of nn.NLLLoss so this can be a little confusing. However, I believe the reason why it was called this way is because it expects to receive log-probabilities: The input given through a forward call is expected to contain log-probabilities of each class. - docs In the end it does not make much sense to have it in the name since you might as well want to apply this function on non-log-probabilities...
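To illustrate the point (a small sketch, not part of the original answer): nn.NLLLoss applied to log-probabilities is equivalent to nn.CrossEntropyLoss applied to raw logits:

import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # batch of 4 samples, 3 classes
target = torch.tensor([0, 2, 1, 0])

log_probs = nn.LogSoftmax(dim=1)(logits)
loss_nll = nn.NLLLoss()(log_probs, target)        # expects log-probabilities
loss_ce = nn.CrossEntropyLoss()(logits, target)   # expects raw logits
print(torch.allclose(loss_nll, loss_ce))          # True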
https://stackoverflow.com/questions/69325760/
When I make a PyTorch attention model, can you give me an idea to fix the loss?
Here is my model class Build_Model(nn.Module): def __init__(self,args) : super(Build_Model, self).__init__() self.hidden_size = args.dec_size self.embedding = nn.Embedding(args.n_vocab, args.d_model) self.enc_lstm = nn.LSTM(input_size =args.d_model, hidden_size=args.d_model,batch_first=True) self.dec_lstm = nn.LSTM(input_size =args.d_model, hidden_size=args.d_model,batch_first=True) self.soft_prob = nn.Softmax(dim=-1) self.softmax_linear = nn.Linear(args.d_model*2,len(vocab)) self.softmax_linear_function = nn.Softmax(dim = -1) def forward(self, enc_inputs, dec_inputs) : enc_hidden = self.embedding(enc_inputs) dec_hidden = self.embedding(dec_inputs) enc_hidden , (enc_h_state,enc_c_state) = self.enc_lstm(enc_hidden) dec_hidden,(dec_h_state,dec_c_state) = self.dec_lstm(dec_hidden,(enc_h_state,enc_c_state)) attn_score = torch.matmul(dec_hidden, torch.transpose(enc_hidden,2,1)) attn_prob = self.soft_prob(attn_score) attn_out = torch.matmul(attn_prob,enc_hidden) cat_hidden = torch.cat((attn_out, dec_hidden),-1) y_pred = self.softmax_linear_function(self.softmax_linear(cat_hidden)) y_pred = torch.argmax(y_pred,dim =-1) print('y_pred = ',y_pred.shape) y_pred = y_pred.view(-1, 150) print('2y_pred = ',y_pred.shape) return y_pred Here is the loss function def lm_loss(y_true, y_pred): print(y_pred.shape) y_pred_argmax = y_pred #y_pred_argmax = y_pred_argmax.view(-1,150) print(y_true.shape, y_pred_argmax.shape) criterion = nn.CrossEntropyLoss(reduction="none") loss = criterion(y_true.float(), y_pred_argmax.float()[0]) #mask = tf.not_equal(y_true, 0) mask = torch.not_equal(y_pred_argmax,0) #mask = tf.cast(mask, tf.float32) mask = mask.type(torch.FloatTensor).to(device) loss *= mask #loss = tf.reduce_sum(loss) / tf.maximum(tf.reduce_sum(mask), 1) loss = torch.sum(loss) / torch.maximum(torch.sum(mask),1) return loss The last is evaluation optimizer.zero_grad() print(train_enc_inputs.shape,train_dec_inputs.shape, train_dec_labels.shape ) y_pred = model(train_enc_inputs,train_dec_inputs) #y_pred = torch.argmax(y_pred,dim =-1) print(y_pred.shape ) loss = lm_loss(train_dec_labels, y_pred) The output is here: torch.Size([32, 120]) torch.Size([32, 150]) torch.Size([32, 150]) y_pred = torch.Size([32, 150]) 2y_pred = torch.Size([32, 150]) torch.Size([32, 150]) torch.Size([32, 150]) torch.Size([32, 150]) torch.Size([32, 150]) The error traceback: ValueError Traceback (most recent call last) <ipython-input-159-cc8976139dd5> in <module>() 9 #y_pred = torch.argmax(y_pred,dim =-1) 10 print(y_pred.shape ) ---> 11 loss = lm_loss(train_dec_labels, y_pred) 12 n_step += 1 13 if n_step % 10 == 0: 3 frames <ipython-input-158-39ba03042d04> in lm_loss(y_true, y_pred) 15 print(y_true.shape, y_pred_argmax.shape) 16 criterion = nn.CrossEntropyLoss(reduction="none") ---> 17 loss = criterion(y_true.float(), y_pred_argmax.float()[0]) 18 #mask = tf.not_equal(y_true, 0) 19 mask = torch.not_equal(y_pred_argmax,0) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target) 1119 def forward(self, input: Tensor, target: Tensor) -> Tensor: 1120 return F.cross_entropy(input, target, 
weight=self.weight, -> 1121 ignore_index=self.ignore_index, reduction=self.reduction) 1122 1123 /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2822 if size_average is not None or reduce is not None: 2823 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2824 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2825 2826 ValueError: Expected input batch_size (32) to match target batch_size (150). How can I fix it?
There are a few issues with your usage of nn.CrossEntropyLoss: You are supposed to call nn.CrossEntropyLoss with criterion(y_pred, y_true); you seem to have switched the two. y_pred should contain the output logits of your network, i.e. it must not have been passed through a softmax: you need to remove self.softmax_linear_function from your model. Also, y_pred should contain all class components and not be the result of an argmax. y_true is passed in dense format: it contains the true class labels and has one dimension less than the prediction y_pred.
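A hedged sketch of how those points could translate into code; the names follow the question, and the exact shapes depend on the rest of your model:

# in the model, end with raw logits (drop the Softmax and the argmax):
#   y_pred = self.softmax_linear(cat_hidden)   # shape (batch, seq_len, vocab_size)

criterion = nn.CrossEntropyLoss(reduction="none")
# nn.CrossEntropyLoss wants (N, C) logits and (N,) class indices:
loss = criterion(y_pred.view(-1, y_pred.size(-1)),  # (batch*seq_len, vocab_size)
                 y_true.view(-1).long())            # (batch*seq_len,)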
https://stackoverflow.com/questions/69328824/
How to estimate the run time of a neural network on mobile phones?
I trained a neural network with approximately 26,000 parameters, and I'm intending to use it on mobile phones for real-time inference. I'm wondering if there is a way to estimate the run time of a neural network given the size of the network and the operating device.
You need to estimate the number of floating point ops (FLOPS) that running your model requires. For example, multiplying two N x N matrices counts as 2 N^3 FLOPS. (There are software packages that help you do that in PyTorch) 1 multiply-add counts as 2 FLOPS, by the way. Then, you need to know the capabilities of you target device. How many floating point ops can it do per second? This provides an upper bound on how fast your code can run. Will your code reach this theoretical limit? That is unclear, but this gives you something to shoot for. The simpler the problem (small tensors), the more likely you are to fall behind this limit. If you quantize your models for deployment, you need to adjust your calculation accordingly (use the speed of the relevant integer operations).
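As a rough back-of-the-envelope sketch (the 26,000-parameter count comes from the question; the device throughput is a made-up placeholder you would replace with your phone's actual figure):

params = 26_000
flops_per_inference = 2 * params     # roughly 2 FLOPs per multiply-add for dense layers
device_flops_per_sec = 5e9           # hypothetical sustained throughput of the target phone
lower_bound_seconds = flops_per_inference / device_flops_per_sec
print(f"theoretical lower bound: {lower_bound_seconds * 1e6:.2f} microseconds")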
https://stackoverflow.com/questions/69329675/
When training a deep learning model on multiple datasets, is it better to concatenate all datasets and shuffle them, or to train on the datasets sequentially?
So let's say that I have datasets A, B, and C to train a model. My current solution takes batches randomly from A, then from B, then from C. I wonder if concatenating all datasets and shuffling, so that training would be more random, would improve results.
As you pointed out in your comment, the samples in the datasets are drawn from slightly different "distributions" (e.g., real vs synthetic images). In this case, it is better to randomly sample points from all datasets for each batch, rather than going sequentially through the different datasets.
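In PyTorch, a simple way to get that behaviour is to wrap the datasets in a ConcatDataset and let the DataLoader shuffle across all of them — a minimal sketch, assuming the three datasets (placeholder names here) share the same sample format:

from torch.utils.data import ConcatDataset, DataLoader

combined = ConcatDataset([dataset_a, dataset_b, dataset_c])
loader = DataLoader(combined, batch_size=32, shuffle=True)

for x, y in loader:
    ...  # each batch mixes samples drawn from A, B and C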
https://stackoverflow.com/questions/69330797/
Import "torch" could not be resolved
Using VSCode 1.60.2. Running this code from the command line (after executing the "python" command) works, so I know that the library is properly installed. This is a problem specifically with VSCode. I restarted VSCode, and even restarted Windows after installing PyTorch with pip. That did not fix it. import torch # Model model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True) # Images imgs = ['https://ultralytics.com/images/zidane.jpg'] # batch of images # Inference results = model(imgs) # Results results.print() results.save() # or .show() results.xyxy[0] # img1 predictions (tensor) results.pandas().xyxy[0] # img1 predictions (pandas) # xmin ymin xmax ymax confidence class name # 0 749.50 43.50 1148.0 704.5 0.874023 0 person # 1 433.50 433.50 517.5 714.5 0.687988 27 tie # 2 114.75 195.75 1095.0 708.0 0.624512 0 person # 3 986.00 304.00 1028.0 420.0 0.286865 27 tie
You might have changed which interpreter you're using for Python. In VSCode go to the command palette and search for Python: Select Interpreter. If there's a default option go with that since that's where you might have installed the module to using pip. Should look something like this. Command palette can be reached by Ctrl+Shift+p. If that doesn't work set up a virtual environment for your project and install your module there.
https://stackoverflow.com/questions/69331364/
PyCox: Feature transforms with DataFrameMapper
I am trying to implement DeepSurv for survival analysis with the Python package pycox. The author of the package also provide also a notbook with a coding example so I tried to transfer the code to my data. However, there seems to be a problem defining x_train due to their proposed Feature transforms with DataFrameMapper. In the notbook it says: cols_standardize = ['x0', 'x1', 'x2', 'x3', 'x8'] cols_leave = ['x4', 'x5', 'x6', 'x7'] standardize = [([col], StandardScaler()) for col in cols_standardize] leave = [(col, None) for col in cols_leave] x_mapper = DataFrameMapper(standardize + leave) x_train = x_mapper.fit_transform(df_train).astype('float32') x_val = x_mapper.transform(df_val).astype('float32') x_test = x_mapper.transform(df_test).astype('float32') In the notbook they are standardizing the 5 numerical covariates but I have nothing to standardize. So I changed the code into: cols_standardize = [] cols_leave = df_train.columns.values.tolist() standardize = [([col], StandardScaler()) for col in cols_standardize] leave = [(col, None) for col in cols_leave] x_mapper = DataFrameMapper(standardize + leave) x_train = x_mapper.fit_transform(df_train).astype('float32') x_val = x_mapper.transform(df_val).astype('float32') x_test = x_mapper.transform(df_test).astype('float32') But when I execute training the model this error occurs: batch_size = 256 lrfinder = model.lr_finder(x_train, y_train, batch_size, tolerance=10) _ = lrfinder.plot() RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Long Is it maybe because of the batch_size? What does batch_size actually mean? However, I also tried to skip the whole Feature transforms step, so I just changed my dataframes into floats: x_train = df_train.astype('float32') x_val = df_val.astype('float32') x_test = df_test.astype('float32') But then if I go on training the modell it says: All objects in 'data' doest have the same type. I am really confused how to prepare my data to use pycox. Especially this label transforms step with standardization appears really confusing. I would be glad for any help!
There is no problem with the code. Try upgrading PyTorch, as suggested here: https://github.com/huggingface/transformers/issues/2126
https://stackoverflow.com/questions/69333370/
understanding transforms: resize and centercrop with same size
I am trying to understand this particular set of compose transforms: transform = transforms.Compose([transforms.Resize((224,224), interpolation=torchvision.transforms.InterpolationMode.BICUBIC), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) Does it make sense (and is it legit) to do a center crop after the resize - with the same size parameter? I would have thought resize itself is giving the center crop, but I see in the repos that CenterCrop is composed after Resize - both with exactly the same sizes. I wonder what is the use of doing such a thing. For the sake of completeness, I would like to add that my input image sizes vary (i.e. they are not all of the same dims). thanks!
I would have thought resize itself is giving the center crop. Function T.Resize won't center crop your image, the center will stay the same since you are only resizing the original image, i.e. proportions are kept and the original center remains at the center. Applying a crop of the same shape as the image - since it's just after the resize - with T.CenterCrop doesn't make any difference since you are cropping nothing out of the image. If you change the sizes of your T.CenterCrop, then this and the order you apply both transforms will matter greatly.
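For contrast, here is the more common pattern where the center crop actually removes something — a sketch with illustrative sizes:

import torchvision.transforms as T

transform = T.Compose([
    T.Resize(256),       # shorter side -> 256, aspect ratio preserved
    T.CenterCrop(224),   # now the crop cuts away the borders
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])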
https://stackoverflow.com/questions/69334048/
install pytorch c++ api CUDA11.4 for ubuntu
I'm trying to use the PyTorch C++ API on Ubuntu 18.04. I've installed CUDA 11.4 and cuDNN 8.2.4.15. The source I'm compiling is available here. Compiling CUDA with nvcc works and the cuDNN installation test succeeds. But I am unable to find good documentation for installing and compiling projects with the PyTorch C++ API on Ubuntu. Do you know any good ones? System configuration: OS: Ubuntu 18.04 GPU: 1 x NVIDIA Tesla P4 Machine Type: n1-standard-2 (2 vCPUs, 7.5 GB memory) on Google Cloud console CUDA: 11.4 cuDNN: 8.2.4.15
I used the following guides to install on Ubuntu 16; they may be helpful for you: PyTorch C++ API Ubuntu Installation Guide; tutorial to compile and use PyTorch on Ubuntu 16.04. It may also be useful to refer to the official documentation on using the PyTorch C++ API on Linux systems and to the GCP documentation.
https://stackoverflow.com/questions/69336152/
Perceptron with weights of bounded condition number
Let N be a (linear) single-layer perceptron with weight matrix w of dimension nxn. I want to train N under the Boolean constraint that the condition number k(w) of the weights w remain below a given threshold k_0 at each step of the optimisation. Is there a standard way to implement this constraint (in pytorch, say)?
After each optimizer step, go through the list of parameters and recondition all matrices: (code looked at for a few seconds, but not tested) def recondition_(x, max_cond): # would need to be fixed for non-square x u, s, vh = torch.linalg.svd(x) curr_cond = s[0] / s[-1] if curr_cond > max_cond: ratio = curr_cond / max_cond mult = torch.linspace(0, math.log(ratio), len(s)).exp() s = mult * s x[:] = torch.mm(u, torch.mm(torch.diag(s), vh)) Training loop: ... optimizer.step() with torch.no_grad(): for p in model.parameters(): if p.dim() == 2: recondition_(p, max_cond) ...
https://stackoverflow.com/questions/69340238/
pytorch: NLLLoss ignore_index default value
In the PyTorch NLLLoss doc the default of ignore_index is -100 instead of the usual None; are there any particular reasons? It seems like any negative value would be equivalent. BTW, what may be the reason that I would want to ignore an index? Thanks!
The value for ignore_index must be an int, that's why the default value is an int and not None. The default value is arbitrary, it could have been any negative number, i.e. anything that is not a "valid" class label. The function will ignore all elements for which the target instance has that class label. In practice, this option can be used to identify unlabeled pixels for example in dense prediction tasks. Edit: Tracing back the implementation of nn.NLLLoss, we can find this comment in the nll_loss implementation of torch/onnx/symbolic_opset12.py: # in onnx NegativeLogLikelihoodLoss specification, ignore_index is optional without default value. # therefore we need to set ignore_index attribute even if it is not specified (e.g. ignore_index=-100). ignore_index = sym_help._maybe_get_const(ignore_index, "i")
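A tiny sketch of the effect (not from the original answer): any element whose target equals ignore_index simply contributes nothing to the loss:

import torch
import torch.nn as nn

log_probs = torch.log_softmax(torch.randn(3, 5), dim=1)
target = torch.tensor([2, -100, 4])   # the second sample is "unlabeled"

loss = nn.NLLLoss(ignore_index=-100)(log_probs, target)
# identical to averaging the loss over samples 0 and 2 only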
https://stackoverflow.com/questions/69346001/
How can I control (or at least record) parameters passed to torchvision transform on an image by image basis?
I am studying the effects of blur and noise on an image classifier, and I would like to use torchvision transforms to apply varied amounts of Gaussian blur and Poisson noise to my images. It's pretty trivial to specify a probability distribution for the noise and blur parameters, but I can't figure out how to either control those parameters on an image-by-image basis or get PyTorch to record the parameters actually used for each image. Could I do this by defining my transform inside the dataset class rather than passing it to the dataloader, so that each time I load an image a custom transform is created and its parameters are returned with the image and its label?
The transform mechanism provided by PyTorch uses simple callable objects that are called automatically upon loading samples from Dataset. There is nothing fundamentally stopping you from doing all your transforms from Dataset itself. Since you haven't provided any code, I can only offer some pseudocode: from torchvision.transforms.functional import gaussian_blur class CoolDataset(Dataset): def __init__(self, root_dir): self.image_list = os.listdir(root_dir) self.sample_wise_blur_std = [0.1, 0.2, ..] def __getitem__(self, i): img = read_image(self.image_list[i]) # .shape is (C,H,W) blurred = gaussian_blur(img, (3,3), sigma=self.sample_wise_blur_std[i]) return blurred, self.sample_wise_blur_std[i] With transform parameters fused into your Dataset definition, your DataLoader will collate them for you: cooldl = DataLoader(CoolDataset('/path/to/images'), ...) for X, blurs in cooldl: # X.shape is (B,C,H,W) # blurs.shape is (B,) pass I hope this is what you are looking for.
https://stackoverflow.com/questions/69347979/
torch.hub.load() raises HTTPError: HTTP Error 404: Not Found when loading model
I had this simple piece of code found on the fairseq GitHub repository which basically loads the bart.base PyTorch model from torch.hub: bart = torch.hub.load('pytorch/fairseq', 'bart.base') This code was working perfectly around two weeks ago, now it raises the following error despite the fact that I didn't change anything: HTTPError Traceback (most recent call last) <ipython-input-7-68181b5f094c> in <module>() 1 # torch.cuda.empty_cache() ----> 2 bart = torch.hub.load('pytorch/fairseq', 'bart.base') #takes around two minutes 3 # bart.cuda() # use GPU ... ... /usr/lib/python3.7/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs) 647 class HTTPDefaultErrorHandler(BaseHandler): 648 def http_error_default(self, req, fp, code, msg, hdrs): --> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp) 650 651 class HTTPRedirectHandler(BaseHandler): HTTPError: HTTP Error 404: Not Found Also, I found out that this happens with other models on fairseq. All the following models raise the same error: >>> torch.hub.load('pytorch/fairseq', 'transformer.wmt16.en-de') # ERROR! >>> torch.hub.load('pytorch/fairseq', 'camembert') # ERROR! So, there must be something in common among all of them.
Apparently, the fairseq folks decided to change the default branch of their GitHub repository from master to main exactly 7 days ago. (check this commit). So, adding the main branch to the repo info will fix the problem: bart = torch.hub.load('pytorch/fairseq:main', 'bart.base') #<--- added :main And that's because in torch.hub.load() function the default branch name is master. So actually, you were calling pytorch/fairseq:master which doesn't exist anymore. And all other models are working now: torch.hub.load('pytorch/fairseq:main', 'transformer.wmt16.en-de') # WORKS! >>> torch.hub.load('pytorch/fairseq:main', 'camembert') # WORKS!
https://stackoverflow.com/questions/69349308/
Generator not working as expected in python
I've been working on a genetic algorithm in PyTorch, and I've run into an issue while trying to mutate my model's parameters. I've been using the .apply() function to randomly change a model's weights and biases. Here is the exact function I made: def mutate(m): if type(m) == nn.Linear: m.weight = nn.Parameter(m.weight+torch.randn(m.weight.shape)) m.bias = nn.Parameter(m.bias+torch.randn(m.bias.shape)) This function does work for sure, I've tested it, but this isn't the weird part. While trying to use this function for every model in a list, the same mutation happens to each and every model. I obviously don't want this, as I want variety in my population. Here is a reproduceable example: import torch import torch.nn as nn population_size = 5 #Size of the population population = [nn.Linear(1,1)]*population_size #Creating my population, each agent is a player in this list dummy_input = torch.rand(1) #Random input def mutate(m): #Mutation function if type(m) == nn.Linear: m.weight = nn.Parameter(m.weight+torch.randn(m.weight.shape)) m.bias = nn.Parameter(m.bias+torch.randn(m.bias.shape)) population = list(x.apply(mutate) for x in population) #This is the line I've been having issues with for i in population: print (i(dummy_input)) #This is here to show that all the models are mutating in the same way and outputting the same thing This code has the following output: tensor([-2.0366], grad_fn=<AddBackward0>) tensor([-2.0366], grad_fn=<AddBackward0>) tensor([-2.0366], grad_fn=<AddBackward0>) tensor([-2.0366], grad_fn=<AddBackward0>) tensor([-2.0366], grad_fn=<AddBackward0>) As you can see, all the models mutated in the same way, and are yielding the same output. This is running in Python 3.9, thank you all in advance.
It looks like creating a list of linear layers in the way you do simply repeats references to the same initial nn.Linear object. For example, setting population[0].weight = nn.Parameter() sets all linear layer weights in the population list to an empty parameter value. In your case, the final random weight and bias assigned by the mutate function are given to all layers in the population list since they are all the same object. Changing the fourth line of your code to population = [nn.Linear(1,1) for _ in range(population_size)] creates five unique linear layers and fixes this problem.
https://stackoverflow.com/questions/69350549/
Keras - bilinear data transformation (like in pytorch)
So I'm trying to implement in Keras (tf) "bilinear transformation to the incoming data" (coming from pytorch). Transformation is defined in torch.nn.Bilinear as y = x_1^T A x_2 + b I built custom layer in Keras that in method call() will return transformed data (I'd like to use this layer as a part of my model later on). I do have however problem with implementation of the transformation function itself due to 3d input I have. and shapes are as follows: x_1= TensorShape([72, 10, 6]) x_2= TensorShape([72, 10, 6]) self.w = TensorShape([24, 6, 6]) (24 coming from defined out_features=4*feature_vector) self.b = TensorShape([24]) output I'd like to use as an input for LSTM layer. So my formula in tf looks like that: a = tf.matmul(tf.matmul(input1, self.w), input2) + self.b and I tried transposing different parts but it doesn't work - I have a problem with not compatible dimensions and I'd really appreciate any hint
I have a detailed implementation of bilinear with pytorch and you can translate to keras easily a = torch.randn(2,3) # input1 b = torch.randn(3,7,5) # bilinear matrix weight (input1 dim, input2 dim, output dim) c = torch.randn(2,7) # input2 B = nn.Parameter(b) t = a @ B.permute(2,0,1) output = (t * c).sum(-1).t()
https://stackoverflow.com/questions/69352259/
What does the last line of this RNN function mean?
I am here to ask a noob question. class RNN(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(RNN, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True) self.fc = nn.Linear(hidden_size*2, num_classes) def forward(self, x): h0 = torch.zeros(self.num_layers*2, x.size(0), self.hidden_size).to(device) out, _ = self.rnn(x, h0) # out: tensor of shape (batch_size, seq_length, hidden_size) out = self.fc(out[:, -1, :]) return out Here what does out = self.fc(out[:, -1, :]) means? And also why there is a "_" in out, _ = self.rnn(x, h0) ?
The line out = self.fc(out[:, -1, :]) is using negative indexing: out is a tensor of shape batch_size x seq_length x hidden_size, so out[:, 0, :] would return the first element along the second dimension (or axis), and out[:, -1, :] returns the last element along the second dimension. It would be equivalent to out[:, seq_length-1, :]. The underscore in out, _ = self.rnn(x, h0) means that self.rnn(x, h0) returns two outputs; out is assigned the first output, and the second output isn't assigned to anything, so _ is a placeholder.
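A small illustration of the indexing with made-up sizes (a sketch, not from the original answer):

import torch

out = torch.randn(8, 28, 128)   # (batch_size, seq_length, hidden_size)
last = out[:, -1, :]            # hidden state at the last time step
print(last.shape)               # torch.Size([8, 128])
print(torch.equal(last, out[:, out.size(1) - 1, :]))  # True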
https://stackoverflow.com/questions/69355872/
Pytorch: TypeError: list is not a Module subclass
I want to extract some layers from a model, so I write nn.Sequential(list(model.children())[:7]) but get the error: list is not a Module subclass. I know I should write nn.Sequential(*list(model.children())[:7]), but why must I add the * ??? If I just want to get the list containing the layers, I must write layer = list(model.children())[:7] but not layer = *list(model.children())[:7]. In this situation, the * does not work and I get the error layer = *list(model_ft.children())[:3] ^ SyntaxError: can't use starred expression here Why???
list(model.children())[:7] returns a list, but the input of nn.Sequential() requires the modules to be an OrderedDict or to be added directly, not in a Python list. From the nn.Sequential docs: Modules will be added to it in the order they are passed in the constructor. Alternatively, an OrderedDict of modules can be passed in. # nn.Sequential(list(model.children())[:3]) means the following, which is wrong: nn.Sequential([Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3)), ReLU(inplace=True), MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)]) # nn.Sequential(*list(model.children())[:3]) means: nn.Sequential(Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3)), ReLU(inplace=True), MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)) That's why you need to unpack the list using *. The * can only be used inside a function call, hence it doesn't work in your last case. Read about * in Python.
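If you prefer to keep names attached to the layers instead of unpacking a plain list, an OrderedDict works too — a short sketch, assuming model is the one from the question:

from collections import OrderedDict
import torch.nn as nn

layers = list(model.children())[:7]
feature_extractor = nn.Sequential(OrderedDict(
    (f"layer{i}", layer) for i, layer in enumerate(layers)
))
# equivalent to nn.Sequential(*layers), just with named sub-modules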
https://stackoverflow.com/questions/69356296/
Concat two tensors
I want to concat two tensors of size a: torch.Size([16, 1]) and b: torch.Size([16, 120]) to be of size torch.Size([16, 121]). Could you please help with that?
Here you can use the torch.cat() function. Example: >>> a = torch.rand([16,1]) >>> b = torch.rand([16,120]) >>> a.size() torch.Size([16, 1]) >>> b.size() torch.Size([16, 120]) >>> c = torch.cat((a,b),dim=1) >>> c.size() torch.Size([16, 121]) What you want to do is to concatenate the tensors on the first dimension (dim=1).
https://stackoverflow.com/questions/69357502/
AttributeError: 'ReLU' object has no attribute 'dim'
I am building a GAN, and my discriminator function is defined as class Discriminator(nn.Module): def __init__(self): super(Discriminator, self).__init__() self.fc1 = nn.Linear(50*15, 32) self.fc2 = nn.Linear(32, 32) self.fc3 = nn.Linear(32, 1) def forward(self, x): x = x.flatten() x = torch.nn.ReLU(self.fc1(x)) x = torch.nn.ReLU(self.fc2(x)) return torch.nn.Sigmoid(self.fc3(x)) When I was testing the code, I got an error with the following command discriminator(gen_series) where gen_series is a tensor with the dimension 15*50. The error occurs as --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-99-fa68eff35865> in <module> 16 valid = Variable(Tensor(piece, time).fill_(1.0), requires_grad=False) 17 print(gen_series) ---> 18 discriminator(gen_series) 19 # g_loss = adversarial_loss(discriminator(gen_series), valid) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) <ipython-input-94-7c6c59da67f9> in forward(self, x) 27 x = x.flatten() 28 x = torch.nn.ReLU(self.fc1(x)) ---> 29 x = torch.nn.ReLU(self.fc2(x)) 30 31 return torch.nn.Sigmoid(self.fc3(x)) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input) 85 86 def forward(self, input): ---> 87 return F.linear(input, self.weight, self.bias) 88 89 def extra_repr(self): /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1366 - Output: :math:`(N, *, out\_features)` 1367 """ -> 1368 if input.dim() == 2 and bias is not None: 1369 # fused op is marginally faster 1370 ret = torch.addmm(bias, input, weight.t()) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 583 return modules[name] 584 raise AttributeError("'{}' object has no attribute '{}'".format( --> 585 type(self).__name__, name)) 586 587 def __setattr__(self, name, value): AttributeError: 'ReLU' object has no attribute 'dim' I didn't find any related questions. Any help is appreciated!!
You can use this code and I think it would work well class Discriminator(nn.Module): def __init__(self): super(Discriminator, self).__init__() self.fc = nn.Sequential( nn.Linear(50 * 15, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid() ) def forward(self, x): x = x.flatten() x = self.fc(x) return x
https://stackoverflow.com/questions/69360048/
My training and validation loss suddenly increased in power of 3
train function def train(model, iterator, optimizer, criterion, clip): model.train() epoch_loss = 0 for i, batch in enumerate(iterator): optimizer.zero_grad() output = model(batch.text) loss = criterion(output, torch.unsqueeze(batch.labels, 1)) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator) main_script def main( train_file, test_file, config_file, checkpoint_path, best_model_path ): device = 'cuda' if torch.cuda.is_available() else 'cpu' with open(config_file, 'r') as j: config = json.loads(j.read()) for k,v in config['model'].items(): v = float(v) if v < 1.0: config['model'][k] = float(v) else: config['model'][k] = int(v) for k,v in config['training'].items(): v = float(v) if v < 1.0: config['training'][k] = float(v) else: config['training'][k] = int(v) train_itr, val_itr, test_itr, vocab_size = data_pipeline( train_file, test_file, config['training']['max_vocab'], config['training']['min_freq'], config['training']['batch_size'], device ) model = CNNNLPModel( vocab_size, config['model']['emb_dim'], config['model']['hid_dim'], config['model']['model_layer'], config['model']['model_kernel_size'], config['model']['model_dropout'], device ) optimizer = optim.Adam(model.parameters()) criterion = nn.CrossEntropyLoss() num_epochs = config['training']['n_epoch'] clip = config['training']['clip'] is_best = False best_valid_loss = float('inf') model = model.to(device) for epoch in tqdm(range(num_epochs)): train_loss = train(model, train_itr, optimizer, criterion, clip) valid_loss = evaluate(model, val_itr, criterion) if (epoch + 1) % 2 == 0: print("training loss {}, validation_loss{}".format(train_loss,valid_loss)) I was training a Convolution Neural Network for binary Text classification. Given a sentence, it detects its a hate speech or not. Training loss and validation loss was fine till 5 epoch after that suddenly the training loss and validation loss shot up suddenly from 0.2 to 10,000. What could be the reason for such huge increase is loss suddenly?
The default learning rate of Adam is 0.001, which, depending on the task, might be too high. It looks like instead of converging, your neural network became divergent (it left the previous ~0.2 loss minimum and fell into a different region). Lowering your learning rate at some point (after 50% or 70% of training) would probably fix the issue. Usually people divide the learning rate by 10 (0.0001 in your case) or by half (0.0005 in your case). Try dividing by half and see if the issue persists; in general you want to keep your learning rate as high as possible until divergence occurs, as is probably the case here. This is what schedulers are for (gamma specifies the learning rate multiplier; you might want to change that to 0.5 first). One can think of the lower learning rate phase as fine-tuning an already found solution (placing the weights in a better region of the loss valley), and it might require some patience.
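A hedged sketch of the scheduler idea, reusing the names from the question (the milestone and gamma values are just illustrative):

optimizer = optim.Adam(model.parameters())  # default lr=1e-3
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[5], gamma=0.5)

for t in range(epochs):
    train_loop(final_data, final_output, model, loss_fn, optimizer)
    scheduler.step()  # lr is multiplied by 0.5 once epoch 5 is reached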
https://stackoverflow.com/questions/69361178/
Every one of my genetic algorithm models mutates in the same way
I've been working on a genetic algorithm, where I'm trying to take the best model of my population (based on score) and make the entire population into that model. I have done this successfully, but when I try to mutate each model separately, all the models mutate to the same parameters. I know this is because I use an object and just clone it into a list, but I don't know what to change so that it doesn't work this way. Below I've made a reproducible example to be run in Python 3.9. I know the code isn't particularly small, but this is as small as I can make it. Thanks in advance, any help is appreciated. import torch import torch.nn as nn torch.manual_seed(0) #Reproducibility population_size = 3 #Defining the size of my population population = [nn.Linear(1,1) for _ in range(population_size)] #Initializing the population input = torch.rand(1)# Creating dummy input def mutate(m): #Function to mutate a model if type(m) == nn.Linear: m.weight = nn.Parameter(m.weight+torch.randn(m.weight.shape)) m.bias = nn.Parameter(m.bias+torch.randn(m.bias.shape)) for i in population: print (i(input)) population = [x.apply(mutate) for x in population] print ('\n') for i in population: print (i(input)) #The above works as expected #I want to fill my entire population with that model. #I've been filling the population with the best model by doing the following: best_model = population[0] #Say the first model in the list was the best performing one population = [best_model for _ in range(population_size)] #This is the line I think needs to change, I just don't know what to change it to. #This does fill the population with my best model, but when I try to mutate it, every model is mutated to the same parameters population = [x.apply(mutate) for x in population] #I know this is because I am using best_model while replacing the population, but I don't know how else to copy the model for i in population: print (i(input)) #Just to show that the population all gives the same result
You can make a deep copy of the model. Make sure to import copy, and then change population = [best_model for _ in range(population_size)] to population = [copy.deepcopy(best_model) for _ in range(population_size)]
https://stackoverflow.com/questions/69364694/
Deployment with custom handler on Google Cloud Vertex AI
I'm trying to deploy a TorchServe instance on Google Vertex AI platform but as per their documentation (https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#response_requirements), it requires the responses to be of the following shape: { "predictions": PREDICTIONS } Where PREDICTIONS is an array of JSON values representing the predictions that your container has generated. Unfortunately, when I try to return such a shape in the postprocess() method of my custom handler, as such: def postprocess(self, data): return { "predictions": data } TorchServe returns: { "code": 503, "type": "InternalServerException", "message": "Invalid model predict output" } Please note that data is a list of lists, for example: [[1, 2, 1], [2, 3, 3]]. (Basically, I am generating embeddings from sentences) Now if I simply return data (and not a Python dictionary), it works with TorchServe but when I deploy the container on Vertex AI, it returns the following error: ModelNotFoundException. I assumed Vertex AI throws this error since the return shape does not match what's expected (c.f. documentation). Did anybody successfully manage to deploy a TorchServe instance with custom handler on Vertex AI?
Actually, making sure that TorchServe correctly processes the input dictionary (instances) solved the issue. It seems like what's in the article did not work for me.
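For reference, the gist is that Vertex AI wraps the payload as {"instances": [...]}, while a stock TorchServe handler just sees the raw request body, so the custom handler has to unwrap that key itself. A rough, untested sketch of what that could look like in the handler's preprocess step (the exact request layout depends on your TorchServe version and content type):

def preprocess(self, requests):
    inputs = []
    for req in requests:
        body = req.get("body") or req.get("data")
        # Vertex AI sends {"instances": [...]} -- unwrap it if present
        instances = body.get("instances", body) if isinstance(body, dict) else body
        inputs.extend(instances)
    return inputs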
https://stackoverflow.com/questions/69373666/