instruction | input | output | source
---|---|---|---|
How to prune weights less than a threshold in PyTorch? | How can I prune the weights of a CNN (convolutional neural network) model which are less than a threshold value (let's say we prune all weights which are <= 1)?
How can we achieve that for a weight file saved in .pth format in PyTorch?
| PyTorch since 1.4.0 provides model pruning out of the box, see the official tutorial.
As there is currently no threshold-based pruning method in PyTorch, you have to implement it yourself, though it's kinda easy once you get the overall idea.
Threshold Pruning method
Below is a code performing pruning:
import torch
from torch.nn.utils import prune

class ThresholdPruning(prune.BasePruningMethod):
    PRUNING_TYPE = "unstructured"

    def __init__(self, threshold):
        self.threshold = threshold

    def compute_mask(self, tensor, default_mask):
        return torch.abs(tensor) > self.threshold
Explanation:
PRUNING_TYPE can be one of global, structured, unstructured. global acts across the whole module (e.g. remove 20% of the weights with the smallest values), structured acts on whole channels/modules. We need unstructured as we would like to modify each connection in a specific parameter tensor (say weight or bias)
__init__ - pass here whatever you want or need to make it work, normal stuff
compute_mask - the mask to be used to prune the specific tensor. In our case all parameters below the threshold should be zeroed. I did it with the absolute value as it makes more sense. default_mask is not needed here, but is left as a named parameter as that's what the API requires atm.
Moreover, inheriting from prune.BasePruningMethod defines methods to apply the mask to each parameter, make pruning permanent etc. See base class docs for more info.
Example module
Nothing too fancy, you can put anything you want here:
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.first = torch.nn.Linear(50, 30)
        self.second = torch.nn.Linear(30, 10)

    def forward(self, inputs):
        return self.second(torch.relu(self.first(inputs)))

module = MyModule()
You can also load your module via module = torch.load('checkpoint.pth') if you need to; it doesn't matter.
Prune module's parameters
We should define which parameters of our module (and whether it's the weight or the bias) should be pruned, like this:
parameters_to_prune = ((module.first, "weight"), (module.second, "weight"))
Now we can apply our unstructured pruning globally to all defined parameters (threshold is passed as a kwarg to __init__ of ThresholdPruning):
prune.global_unstructured(
    parameters_to_prune, pruning_method=ThresholdPruning, threshold=0.1
)
Results
weight attribute
To see the effect, simply check the weights of the first submodule:
print(module.first.weight)
It is the weight with our pruning technique applied, but please notice it's not a torch.nn.Parameter anymore! Now it is simply an attribute of our module, hence it won't be registered as a learnable parameter.
weight_mask
We can check the created mask via module.first.weight_mask to see that everything is done correctly (it will be binary in this case).
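For example (my addition, not part of the original answer), the achieved sparsity can be read straight off the mask:

mask = module.first.weight_mask
sparsity = (mask == 0).float().mean().item()
print(f"sparsity: {100 * sparsity:.2f}%")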
weight_orig
Applying pruning creates a new torch.nn.Parameter with the original weights, named name + '_orig', in this case weight_orig. Let's see:
print(module.first.weight_orig)
This parameter will be used during training and evaluation! After applying pruning via the methods described above, forward_pre_hooks are added which "switch" the original weight to weight_orig (the pruned weight is recomputed from weight_orig and the mask on each forward call).
Thanks to this approach you can define and apply your pruning at any point of training or inference without "destroying" the original weights.
Applying pruning permanently
If you wish to apply pruning permanently simply issue:
prune.remove(module.first, "weight")
And now our module.first.weight is once again a parameter with entries appropriately pruned; module.first.weight_mask is removed and so is module.first.weight_orig. This is probably what you are after.
You can iterate over children to make it permanent:
for child in module.children():
    prune.remove(child, "weight")
You could define parameters_to_prune using the same logic:
parameters_to_prune = [(child, "weight") for child in module.children()]
Or if you want only convolution layers to be pruned (or anything else really):
parameters_to_prune = [
    (child, "weight")
    for child in module.children()
    if isinstance(child, torch.nn.Conv2d)
]
Advantages
uses "PyTorch way of pruning" so it's easier to communicate your intent to other programmers
define pruning on a per-tensor basis, single responsibility instead of going through everything
confine to predefined ways
pruning is not permanent hence you can recover from it if needed. Module can be saved with pruning masks and original weights so it leaves you some space to revert eventual mistake (e.g. threshold was too high and now all your weights are zero rendering results meaningless)
works with original weights during forward calls unless you want to finally change to pruned version (simple call to remove)
Disadvantages
IMO pruning API could be clearer
You can do it shorter (as provided by Shai)
might be confusing for those who do not know such a thing is "defined" by PyTorch (still, there are tutorials and docs so I don't think it's a major problem)
| https://stackoverflow.com/questions/61629395/ |
Implementation of the Dense Synthesizer | I'm trying to understand the Synthesizer paper (https://arxiv.org/pdf/2005.00743.pdf) and there's a description of the dense synthesizer mechanism that should replace the traditional attention model as described in the Transformer architecture.
The Dense Synthesizer is described as such:
So I tried to implement the layer and it looks like this but I’m not sure whether I’m getting it right:
class DenseSynthesizer(nn.Module):
    def __init__(self, l, d):
        super(DenseSynthesizer, self).__init__()
        self.linear1 = nn.Linear(d, l)
        self.linear2 = nn.Linear(l, l)

    def forward(self, x, v):
        # Equation (1) and (2)
        # Shape: l x l
        b = self.linear2(F.relu(self.linear1(x)))
        # Equation (3)
        # [l x l] x [l x d] -> [l x d]
        return torch.matmul(F.softmax(b), v)
Usage:
l, d = 4, 5
x, v = torch.rand(l, d), torch.rand(l, d)
synthesis = DenseSynthesizer(l, d)
synthesis(x, v)
Example:
x and v are tensors:
x = tensor([[0.0844, 0.2683, 0.4299, 0.1827, 0.1188],
[0.2793, 0.0389, 0.3834, 0.9897, 0.4197],
[0.1420, 0.8051, 0.1601, 0.3299, 0.3340],
[0.8908, 0.1066, 0.1140, 0.7145, 0.3619]])
v = tensor([[0.3806, 0.1775, 0.5457, 0.6746, 0.4505],
[0.6309, 0.2790, 0.7215, 0.4283, 0.5853],
[0.7548, 0.6887, 0.0426, 0.1057, 0.7895],
[0.1881, 0.5334, 0.6834, 0.4845, 0.1960]])
And passing them through a forward pass of the dense synthesizer returns:
>>> synthesis = DenseSynthesizer(l, d)
>>> synthesis(x, v)
tensor([[0.5371, 0.4528, 0.4560, 0.3735, 0.5492],
[0.5426, 0.4434, 0.4625, 0.3770, 0.5536],
[0.5362, 0.4477, 0.4658, 0.3769, 0.5468],
[0.5430, 0.4461, 0.4559, 0.3755, 0.5551]], grad_fn=<MmBackward>)
Is the implementation and understanding of the dense synthesizer correct?
Theoretically, how is that different from a multi-layered perceptron that takes in two different inputs and makes use of them at different points in the forward propagation?
| Is the implementation and understanding of the dense synthesizer correct?
Not exactly. According to the paper, linear1 = nn.Linear(d, d), not (d, l).
Of course this does not work if X.shape = (l,d) according to matrix multiplication rules.
This is because :
So F is applied to each Xi in X for i in [1,l]
The resulting matrix B is then passed to the softmax function and multiplied by G(x).
So you'd have to modify your code to sequentially process the input then use the returned matrix to compute Y.
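A minimal sketch of that correction (my code, not the answer author's; the layer sizes follow this answer's reading of the paper, and nn.Linear already applies F row-wise to each Xi):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseSynthesizerFixed(nn.Module):
    def __init__(self, l, d):
        super().__init__()
        self.linear1 = nn.Linear(d, d)  # per the correction above: d -> d
        self.linear2 = nn.Linear(d, l)  # then d -> l, giving an l x l matrix B

    def forward(self, x, v):
        # x: (l, d); F is applied to each row x_i, yielding b: (l, l)
        b = self.linear2(F.relu(self.linear1(x)))
        # softmax over the last dim, then weight the values G(x) = v: (l, d)
        return torch.matmul(F.softmax(b, dim=-1), v)

l, d = 4, 5
x, v = torch.rand(l, d), torch.rand(l, d)
print(DenseSynthesizerFixed(l, d)(x, v).shape)  # torch.Size([4, 5])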
how is that different from a multi-layered perceptron that takes in two different inputs and makes uses of it at different point in the forward propagation?
To understand, we need to put things into context, the idea of introducing attention mechanism was first described here in the context of Encoder - Decoder : https://arxiv.org/pdf/1409.0473.pdf
The core idea is to allow the model to have control over how the context vector from the encoder is retrieved using a neural network instead of relying solely on the last encoded state :
see this post for more detail.
The Transformers introduced the idea of using "Multi-Head Attention" (see graph below) to reduce the computational burden and focus solely on the attention mechanism itself. post
https://arxiv.org/pdf/1706.03762.pdf
So where does the Dense Synthesizer fit into all of that?
It simply replaces the dot product (as illustrated in the first pictures in your post) by F(.). If you replace what's inside the softmax by F, you get the equation for Y.
Conclusion
This is an MLP but applied step wise to the input in the context of sequence processing.
Thank you
| https://stackoverflow.com/questions/61630765/ |
No module named 'torch.autograd' | Working with torch package:
import torch
from torch.autograd import Variable
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = Variable(torch.Tensor([1.0]), requires_grad=True)

def forward(x):
    return x * w

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) * (y_pred - y)

print("my prediction before training", 4, forward(4))

for epoch in range(10):
    for x_val, y_val in zip(x_data, y_data):
        l = loss(x_val, y_val)
        l.backward()
        print("\tgrad: ", x_val, y_val, w.grad.data[0])
        w.data = w.data - 0.01 * w.grad.data
        w.grad.data.zero_()
    print("progress:", epoch, l.data[0])

print("my new prediction after training ", forward(4))
Got error:
runfile('C:/gdrive/python/temp2.py', wdir='C:/gdrive/python')
Traceback (most recent call last):
File "C:\gdrive\python\temp2.py", line 11, in <module>
from torch.autograd import Variable
ModuleNotFoundError: No module named 'torch.autograd'
Command conda list pytorch brings:
# packages in environment at C:\Users\g\.conda\envs\test:
#
# Name Version Build Channel
(test) PS C:\gdrive\python>
How to fix this problem?
| It seems to me that you have installed PyTorch using conda.
You might have a folder named torch in your current directory.
Try changing the directory, or try installing PyTorch using pip.
This https://github.com/pytorch/pytorch/issues/1851 might help you to solve your problem.
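Not part of the original answer, but a quick way to check whether a local torch folder is shadowing the installed package is to print where Python actually imported it from:

import torch
print(torch.__file__)  # should point into site-packages, not your project directory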
| https://stackoverflow.com/questions/61642363/ |
Pytorch - Distributed Data Parallel Confusion | I was just looking at the DDP Tutorial:
https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
According to this:
It’s common to use torch.save and torch.load to checkpoint modules
during training and recover from checkpoints. See SAVING AND LOADING
MODELS for more details. When using DDP, one optimization is to save
the model in only one process and then load it to all processes,
reducing write overhead. This is correct because all processes start
from the same parameters and gradients are synchronized in backward
passes, and hence optimizers should keep setting parameters to the
same values. If you use this optimization, make sure all processes do
not start loading before the saving is finished. Besides, when loading
the module, you need to provide an appropriate map_location argument
to prevent a process to step into others’ devices. If map_location is
missing, torch.load will first load the module to CPU and then copy
each parameter to where it was saved, which would result in all
processes on the same machine using the same set of devices. For more
advanced failure recovery and elasticity support, please refer to
TorchElastic.
I don't understand what this means. Shouldn't only one process/the first GPU be saving the model? Is saving and loading how weights are shared across the processes/GPUs?
| When you're using DistributedDataParallel you have the same model across multiple devices, which are being synchronised to have the exact same parameters.
When using DDP, one optimization is to save the model in only one process and then load it to all processes, reducing write overhead.
Since they are identical, it is unnecessary to save the models from all processes, as it would just write the same parameters multiple times. For example when you have 4 processes/GPUs you would write the same file 4 times instead of once. That can be avoided by only saving it from the main process.
That is an optimisation for the saving of the model. If you load the model right after you saved it, you need to be more careful.
If you use this optimization, make sure all processes do not start loading before the saving is finished.
If you save it in only one process, that process will take time to write the file. In the meantime all other processes continue and they might load the file before it was fully written to disk, which may lead to all sorts of unexpected behaviour or failure, whether that file does not exist yet, you are trying to read an incomplete file or you load an older version of the model (if you overwrite the same file).
Besides, when loading the module, you need to provide an appropriate map_location argument to prevent a process to step into others’ devices. If map_location is missing, torch.load will first load the module to CPU and then copy each parameter to where it was saved, which would result in all processes on the same machine using the same set of devices.
When saving the parameters (or any tensor for that matter) PyTorch includes the device where it was stored. Let's say you save it from the process that used GPU 0 (device = "cuda:0"), that information is saved and when you load it, the parameters are automatically put onto that device. But if you load it in the process that uses GPU 1 (device = "cuda:1"), you will incorrectly load them into "cuda:0". Now instead of using multiple GPUs, you have the same model multiple times in a single GPU. Most likely, you will run out of memory, but even if you don't, you won't be utilising the other GPUs anymore.
To avoid that problem, you should set the appropriate device for map_location of torch.load.
torch.load(PATH, map_location="cuda:1")
# Or load it on the CPU and later use .to(device) on the model
torch.load(PATH, map_location="cpu")
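Putting both points together, a typical save-then-load sketch (my illustration, assuming an initialised process group and a DDP-wrapped model called ddp_model):

import torch
import torch.distributed as dist

CHECKPOINT = "model_checkpoint.pt"  # hypothetical path
rank = dist.get_rank()

if rank == 0:
    torch.save(ddp_model.state_dict(), CHECKPOINT)  # only one process writes
dist.barrier()  # everyone waits until the file is fully written

map_location = {"cuda:0": f"cuda:{rank}"}  # remap tensors saved from rank 0
ddp_model.load_state_dict(torch.load(CHECKPOINT, map_location=map_location))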
| https://stackoverflow.com/questions/61642619/ |
Running and building Pytorch on Google Colab | I am trying to run a python package that requires pytorch-gpu. I have change the runtime type of my Colab notebook to GPU. When I run the command, I am facing the following error. Not sure if I am able to build pytorch on colab myself?
Traceback (most recent call last):
File "inference_unet.py", line 9, in <module>
import torchvision.transforms as transforms
File "/usr/local/lib/python3.6/dist-packages/torchvision/__init__.py", line 10, in <module>
from .extension import _HAS_OPS
File "/usr/local/lib/python3.6/dist-packages/torchvision/extension.py", line 58, in <module>
_check_cuda_version()
File "/usr/local/lib/python3.6/dist-packages/torchvision/extension.py", line 54, in _check_cuda_version
.format(t_major, t_minor, tv_major, tv_minor))
RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=10.2 and torchvision has CUDA Version=10.1. Please reinstall the torchvision that matches your PyTorch install.
| Now you can directly use pytorch-gpu on Google Colab, no need for installation.
Just change your runtime to GPU, import torch and torchvision, and you are done.
I have attached a screenshot doing just the same.
Hope you find the answer helpful.
But in case you want to install a different version of pytorch or any other package, you can install it using pip; just add ! before your pip command and run the cell.
for example,
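(The original answer shows the command as a screenshot; a typical cell would look like this, with the version numbers as placeholders.)

!pip install torch==1.5.0 torchvision==0.6.0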
| https://stackoverflow.com/questions/61643369/ |
What is vectorised way of doing this operation in pytorch instead of two FOR loops |
Hello,
I have a tensor 'A' in PyTorch of dimensions Batch x Channel x Height x Width. I want to reshape it into 'B' such that dimensions H and W are increased by a factor of 'r' and channels are reduced by a factor of 'r^2'. For 'r' = 2, the illustration is shown in the attached figure.
In the figure, if 'B' had 4 channels, with the 4th channel having a violet border, then the first four channels in 'A' would be the peach/skin-coloured pixels with border colours red, green, blue and violet, and the remaining channels adjusted accordingly.
I know the 'pack' and 'unpack' can each be done with 2 for loops. But that takes more time. There should be a vectorised way in PyTorch to switch between 'A' and 'B' with just reshape and permutation commands. Can someone help me with that?
In this example, the batch size is set to 1. But if the batch dimension is larger, I would like the operations shown in the figure to operate individually on each batch entry.
Can someone please help me with generic code to switch between A and B in a vectorised way in PyTorch? Better if it also works when the batch size is more than 1.
Please note the two operations cannot be done with the already implemented nn.PixelShuffle
Thanks a lot.
| Although this can be done with careful permutation and reshaping, pytorch has already implemented this with nn.PixelShuffle.
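A short sketch of both routes for r = 2 (my own illustration, not part of the original answer):

import torch
import torch.nn as nn

B, C, H, W, r = 1, 4, 3, 3, 2
a = torch.randn(B, C, H, W)

# Built-in: (B, C, H, W) -> (B, C // r**2, H * r, W * r)
b = nn.PixelShuffle(r)(a)

# Equivalent manual reshape + permute
b_manual = (a.view(B, C // r**2, r, r, H, W)
             .permute(0, 1, 4, 2, 5, 3)
             .reshape(B, C // r**2, H * r, W * r))

print(torch.equal(b, b_manual))  # True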
| https://stackoverflow.com/questions/61657947/ |
How to read numerical data from CSV in PyTorch? | I'm new to PyTorch; trying to implement a model I developed in TF and compare the results. The model is an Autoencoder model. The input data is a csv file including n samples each with m features (a n*m numerical matrix in a csv file). The targets (the labels) are in another csv file with the same format as the input file. I've been looking online but couldn't find a good documentation for reading non-image data from csv file with multiple labels. Any idea how can I read my data and iterate over it during training?
Thank you
| Might you be looking for something like TabularDataset?
class torchtext.data.TabularDataset(path, format, fields, skip_header=False, csv_reader_params={}, **kwargs)
Defines a Dataset of columns stored in CSV, TSV, or JSON format.
It will take a path to a CSV file and build a dataset from it. You also need to specify the names of the columns which will then become the data fields.
In general, all implementations of torch.Dataset for specific types of data are located outside of PyTorch itself, in the torchvision, torchtext, and torchaudio libraries.
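Alternatively, since the data is purely numerical, a plain torch.utils.data.Dataset over pandas works too. This is my own sketch (not part of the original answer), with hypothetical file names:

import pandas as pd
import torch
from torch.utils.data import DataLoader, Dataset

class CSVDataset(Dataset):
    def __init__(self, input_csv, target_csv):
        # both files are n x m numeric matrices, one row per sample
        self.x = torch.tensor(pd.read_csv(input_csv).values, dtype=torch.float32)
        self.y = torch.tensor(pd.read_csv(target_csv).values, dtype=torch.float32)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(CSVDataset("inputs.csv", "targets.csv"), batch_size=32, shuffle=True)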
| https://stackoverflow.com/questions/61661943/ |
How to move axis on simple numpy array | I'm having trouble moving the axis of size 3 to position 1. I would like to move the 3 in front of the two 69s. This is for a machine learning dataset and PyTorch will only accept the data if it's in a 3x69x69 format. Thanks for any help!
# To get the images and labels from file
with h5py.File(r"C:\Users\ajbur\Downloads\Galaxy10.h5", 'r') as F:
    images = np.array(F['images'])
    labels = np.array(F['ans'])

np.shape(images)
np.moveaxis(images, 0, -1).shape
np.shape(images)
output is [20000, 69, 69, 3]
I want it to be [20000, 3, 69, 69]
| The second and third arguments of moveaxis are source and destination. To move the last axis to the second position you could do:
a = np.empty([20000, 69, 69, 3])
np.moveaxis(a, -1, 1).shape
>>> (20000, 3, 69, 69)
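Since the data is headed for PyTorch anyway, the same move can also be done on the tensor itself (my addition, not from the original answer):

import torch

t = torch.from_numpy(images)  # `images` from the question, shape (20000, 69, 69, 3)
t = t.permute(0, 3, 1, 2)     # -> (20000, 3, 69, 69)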
| https://stackoverflow.com/questions/61664389/ |
How to sort a tensor by first dimension | I have a 2D tensor and I would like to sort by the first dimension like this example:
a = torch.FloatTensor(
[[5, 5],
[5, 3],
[3, 5],
[6, 4],
[3, 7]])
And I expected this result after sorting:
a = torch.FloatTensor(
[[3, 5],
[3, 7],
[5, 3],
[5, 5],
[6, 4]])
Is it possible to do this in pytorch? I know that it is possible in numpy, but I want to do it on the GPU using torch.
| Sort by first column and use the indices to then sort the whole array:
a[a[:, 0].sort()[1]]
Output:
tensor([[3., 5.],
[3., 7.],
[5., 5.],
[5., 3.],
[6., 4.]])
And if you really need the rows fully ordered (sort by the second column first, then by the first):
b = a[a[:, 1].sort()[1]]
b[b[:, 0].sort()[1]]
Output:
tensor([[3., 5.],
[3., 7.],
[5., 3.],
[5., 5.],
[6., 4.]])
| https://stackoverflow.com/questions/61665622/ |
ImportError: cannot import name 'mobilenet_v2' from 'torchvision.models' | I want to run a fastai deep learning model on my PC. Not train it, just run the pre-trained model on my PC. I have the .pth file. I tried to import the fastai module that I installed and I received the error:
ImportError: cannot import name 'mobilenet_v2' from 'torchvision.models' (C:\file_path\__init__.py)
The Code I tried to execute:
#From the fastai library
from fastai import *
from torchvision.models import *
from fastai.vision import *
I can't find any solutions as to why I am getting this error.
I'm running this code in Anaconda; to be specific, the Spyder IDE connected to my Anaconda environment. I will re-edit this if anyone needs more specifics. Thank you.
| I just finished fixing this problem on my system. Uninstall any pytorch and torchvision installed by conda and pip. Uninstall fastai as well.
Go to https://pytorch.org/get-started/locally/ and run the conda command there, based on your CUDA version etc. Then
conda install -c fastai fastai
| https://stackoverflow.com/questions/61666911/ |
Reading csv.gz file in torchtext | Pandas's read_csv works for csv.gz as well.
Is there a way to achieve something similar with PyTorch? https://torchtext.readthedocs.io/en/latest/data.html#torchtext.data.Dataset doesn't seem to have such an option.
| TLDR: No, this is not supported by TabularDataset
torchtext.data.TabularDataset uses csv.reader.
Using csvreader against a gzipped file in Python suggests that if you open the file with gzip.open, csv.reader can read it.
However, TabularDataset asks for a file path, not a file pointer. Digging into the source code, it uses
io.open(os.path.expanduser(path), encoding="utf8")
to open the file path. Since .gz is not utf-8, this won't read the file correctly.
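A simple workaround (my suggestion, not part of the original answer) is to decompress to a plain CSV first and point TabularDataset at that:

import gzip
import shutil

with gzip.open("data.csv.gz", "rb") as f_in, open("data.csv", "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)
# now pass "data.csv" to TabularDataset as usual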
| https://stackoverflow.com/questions/61675018/ |
Pytorch: Loading sample of images using DataLoader | I use standard DataLoader from torch.utils.data. I create dataset class and then build DataLoader this way:
train_dataset = LandmarksDataset(os.path.join(args.data, 'train'), train_transforms, split="train")
train_dataloader = data.DataLoader(train_dataset, batch_size=args.batch_size, num_workers=2,
pin_memory=True, shuffle=True, drop_last=True)
It works perfectly, but the dataset is big: 300k images. So reading the images via the DataLoader takes a lot of time. It is really wretched to build such a big DataLoader at the debugging stage! I just want to test some of my hypotheses, and I want to do it fast! I don't need to load the whole dataset for this.
I'm trying to find a way to load just a small, fixed part of the dataset without building the DataLoader on the whole dataset.
At the moment my only idea is to create another folder, copy some of the images there, and run the pipeline on that. But I suppose PyTorch is clever enough to have some built-in methods for loading just a part of the images from a big dataset. Can you give me advice on how to do that?
| As far as I am aware there's no mechanism that does this for you. Your problem is in the LandmarksDataset class, at the point where you're reading the paths of your train data folder. I assume os.listdir(train_data_folder).
Instead you could use the more efficient os.scandir(train_data_folder). This returns a generator; calling next() on it gives you paths to images within the train data. This way you can call next() as many times as you need without changing the structure of your train data folder, and build a subset of it, as sketched below.
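A sketch of that idea (my code, with hypothetical names):

import os
from itertools import islice

def first_n_paths(train_data_folder, n):
    # os.scandir is lazy, so only the first n directory entries are read
    with os.scandir(train_data_folder) as entries:
        return [entry.path for entry in islice(entries, n)]

paths = first_n_paths("data/train", 1000)  # debug on 1000 images only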
| https://stackoverflow.com/questions/61675646/ |
Pytorch: Convert 2D-CNN model to tflite | I'd like to convert a model (eg Mobilenet V2) from pytorch to tflite in order to run it on a mobile device.
Has anyone managed to do so?
All I found was a method that uses ONNX to convert the model into an in-between state. However, this seems not to work properly, as TensorFlow expects NHWC channel order whereas ONNX and PyTorch work with NCHW channel order.
There is a discussion on GitHub; however, in my case the conversion worked without complaints up to a "frozen tensorflow graph model". After trying to convert the model further to tflite, it complains about the channel order being wrong...
Here is my code so far:
import torch
import torch.onnx
import onnx
from onnx_tf.backend import prepare
# Create random input
input_data = torch.randn(1,3,224,224)
# Create network
model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()
# Forward Pass
output = model(input_data)
# Export model to onnx
filename_onnx = "mobilenet_v2.onnx"
filename_tf = "mobilenet_v2.pb"
torch.onnx.export(model, input_data, filename_onnx)
# Export model to tensorflow
onnx_model = onnx.load(filename_onnx)
tf_rep = prepare(onnx_model)
tf_rep.export_graph(filename_tf)
All working without errors until here (ignoring many tf warnings). Then I look up the names of the input and output tensors using netron ("input.1" and "473").
Finally I apply my usual tf-graph to tf-lite conversion script from bash:
tflite_convert \
--output_file=mobilenet_v2.tflite \
--graph_def_file=mobilenet_v2.pb \
--input_arrays=input.1 \
--output_arrays=473
My configuration:
torch 1.6.0.dev20200508 (needs pytorch-nightly to work with mobilenet V2 from torch.hub)
tensorflow-gpu 1.14.0
onnx 1.6.0
onnx-tf 1.5.0
Here is the exact error message I'm getting from tflite:
Unexpected value for attribute 'data_format'. Expected 'NHWC'
Fatal Python error: Aborted
UPDATE:
Updating my configuration:
torch 1.6.0.dev20200508
tensorflow-gpu 2.2.0
onnx 1.7.0
onnx-tf 1.5.0
using
tflite_convert \
--output_file=mobilenet_v2.tflite \
--graph_def_file=mobilenet_v2.pb \
--input_arrays=input.1 \
--output_arrays=473 \
--enable_v1_converter # <-- needed for conversion of frozen graphs
leading to another error:
Exception: <unknown>:0: error: loc("convolution"): 'tf.Conv2D' op is neither a custom op nor a flex op
Update:
Here is an onnx model of mobilenet v2 loaded via netron:
Here is a gdrive link to my converted onnx and pb file
| @Ahwar posted a nice solution to this using a Google Colab notebook.
It uses
torch 1.5.0+cu101
torchsummary 1.5.1
torchtext 0.3.1
torchvision 0.6.0+cu101
tensorflow 1.15.2
tensorflow-addons 0.8.3
tensorflow-estimator 1.15.1
onnx 1.7.0
onnx-tf 1.5.0
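The notebook's exact code isn't reproduced here, but with a TF 1.x stack like the one above, the usual in-Python route for the final step (my sketch, reusing the tensor names from the question; not necessarily what the notebook does) is:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "mobilenet_v2.pb",
    input_arrays=["input.1"],
    output_arrays=["473"],
)
tflite_model = converter.convert()
with open("mobilenet_v2.tflite", "wb") as f:
    f.write(tflite_model)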
The conversion works and the model can be tested on my computer. However, when pushing the model to the mobile phone it only works in CPU mode and is much slower (almost 10-fold) than a corresponding model created in TensorFlow directly. GPU mode does not work on my mobile phone (in contrast to the corresponding model created in TensorFlow directly).
Update:
Apparently, after converting the mobilenet v2 model, the tensorflow frozen graph contains many more convolution operations than the original pytorch model (~38,000 vs ~180), as discussed in this github issue.
| https://stackoverflow.com/questions/61679908/ |
How most efficiently compute the diagonal of a matrix product | I want to compute the following:
import numpy as np
n= 3
m = 2
x = np.random.randn(n,m)
#Method 1
y = np.zeros(m)
for i in range(m):
    y[i] = x[:,i] @ x[:,i]

#Method 2
y2 = np.diag(x.T @ x)
The first method has the problem that it uses a for loop, which can't be very efficient (I need to do this in PyTorch, on a GPU, millions of times).
The second method computes the full matrix product when I only need the diagonal entries, so that can't be very efficient either.
I'm wondering whether there exist any clever way of doing this?
| Use a manually constructed sum-product. You want the sums of the squares of the individual columns:
y = (x * x).sum(axis=0)
As Divakar suggests, np.einsum will likely offer a less memory-intensive option, since it does not require the temporary array x * x:
y = np.einsum('ij,ij->j', x, x)
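Since the question mentions doing this in PyTorch on a GPU, the same einsum carries over directly (my addition):

import torch

x = torch.randn(3, 2, device="cuda" if torch.cuda.is_available() else "cpu")
y = torch.einsum('ij,ij->j', x, x)  # column-wise squared norms, shape (2,)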
| https://stackoverflow.com/questions/61687914/ |
Bert pre-trained model giving random output each time | I was trying to add an additional layer after the huggingface bert transformer, so I used BertForSequenceClassification inside my nn.Module network. But I see the model giving me random outputs when compared to loading the model directly.
Model 1:
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels = 5) # as we have 5 classes
import torch
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode(texts[0], add_special_tokens=True, max_length = 512)).unsqueeze(0) # Batch size 1
print(model(input_ids))
Out:
(tensor([[ 0.3610, -0.0193, -0.1881, -0.1375, -0.3208]],
grad_fn=<AddmmBackward>),)
Model 2:
import torch
from torch import nn

class BertClassifier(nn.Module):
    def __init__(self):
        super(BertClassifier, self).__init__()
        self.bert = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels = 5)
        # as we have 5 classes
        # we want our output as probability so, in the evaluation mode, we'll pass the logits to a softmax layer
        self.softmax = torch.nn.Softmax(dim = 1) # last dimension

    def forward(self, x):
        print(x.shape)
        x = self.bert(x)
        if self.training == False: # in evaluation mode
            pass
            #x = self.softmax(x)
        return x

# create our model
bertclassifier = BertClassifier()
print(bertclassifier(input_ids))
torch.Size([1, 512])
torch.Size([1, 5])
(tensor([[-0.3729, -0.2192, 0.1183, 0.0778, -0.2820]],
grad_fn=<AddmmBackward>),)
They should be the same model, right. I found a similar issue here but no reasonable explanation https://github.com/huggingface/transformers/issues/2770
Does BERT have some randomized parameters? If so, how do I get reproducible output?
Why do the two models give me different outputs? Is there something I'm doing wrong?
| The reason is due to the random initialization of the classifier layer of Bert. If you print your model, you'll see
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=5, bias=True)
)
There is a classifier in the last layer, this layer is added after bert-base. Now, the expectation is you'll train this layer for your downstream task.
If you want to get more insight:
model, li = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels = 5, output_loading_info=True) # as we have 5 classes
print(li)
{'missing_keys': ['classifier.weight', 'classifier.bias'], 'unexpected_keys': ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'], 'error_msgs': []}
You can see the classifier.weight and bias are missing, so these part will be randomly initialized each time you call BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels = 5).
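If you additionally want the random initialisation itself to be repeatable across runs (my addition, not part of the original answer), seed the RNG before loading:

import torch

torch.manual_seed(0)  # fix the RNG before from_pretrained initialises the classifier
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=5)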
| https://stackoverflow.com/questions/61690689/ |
Neural network in pytorch | I wanna create a neural network in PyTorch that will have 2 inputs and 3 outputs with 1 hidden layer. The two inputs will be float numbers that represent features of an image, and the 3 outputs will be real numbers between 0 and 1. For example, the output (1, 0, 0) would mean that it is a square and (0, 1, 0) would mean it is a rectangle. Any idea how to do this in PyTorch?
| The network can be defined like this:
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, num_inputs=2, num_outputs=3, hidden_dim=5):
        # define your network here
        super(Net, self).__init__()
        self.layer1 = nn.Linear(num_inputs, hidden_dim)
        self.layer2 = nn.Linear(hidden_dim, num_outputs)

    def forward(self, x):
        # implement the forward pass
        x = F.relu(self.layer1(x))
        x = torch.sigmoid(self.layer2(x))  # keeps each output in (0, 1)
        return x
Although I have defined the network here, you should maybe look at some examples on the official pytorch website for example on how to train your model.
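To complement that, here is a minimal training sketch for this network (my code, with made-up data):

net = Net()
optimizer = optim.Adam(net.parameters(), lr=1e-3)
criterion = nn.BCELoss()  # suits sigmoid outputs in (0, 1)

x = torch.rand(8, 2)   # 8 samples, 2 features each
y = torch.zeros(8, 3)
y[:, 0] = 1.0          # pretend every sample is a square: (1, 0, 0)

for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(net(x), y)
    loss.backward()
    optimizer.step()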
| https://stackoverflow.com/questions/61694517/ |
'Net' object has no attribute 'parameters' | I am fairly new to machine learning. I learned to write this code from YouTube tutorials, but I keep getting this error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/aniket/Desktop/DeepLearning/PythonLearningPyCharm/CatVsDogs.py", line 109, in <module>
optimizer = optim.Adam(net.parameters(), lr=0.001) # tweaks the weights from what I understand
AttributeError: 'Net' object has no attribute 'parameters'
this is the Net class
class Net():
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 5)
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.conv3 = nn.Conv2d(64, 128, 5)

        self.to_linear = None
        x = torch.randn(50, 50).view(-1, 1, 50, 50)
        self.Conv2d_Linear_Link(x)

        self.fc1 = nn.Linear(self.to_linear, 512)
        self.fc2 = nn.Linear(512, 2)

    def Conv2d_Linear_Link(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))
        if self.to_linear is None:
            self.to_linear = x[0].shape[0] * x[0].shape[1] * x[0].shape[2]
        return x

    def forward(self, x):
        x = self.Conv2d_Linear_Link(x)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.softmax(x, dim=1)
and this is the function train
def train():
    for epoch in range(epochs):
        for i in tqdm(range(0, len(X_train), batch)):
            batch_x = train_X[i:i + batch].view(-1, 1, 50, 50)
            batch_y = train_y[i:i + batch]
            net.zero_grad() # i don't understand why we do this but we do we don't want the probabilites adding up
            output = net(batch_x)
            loss = loss_function(output, batch_y)
            loss.backward()
            optimizer.step()
        print(loss)
and the optimizer and loss functions and data
optimizer = optim.Adam(net.parameters(), lr=0.001) # tweaks the weights from what I understand
loss_function = nn.MSELoss() # gives the loss
| You're not subclassing nn.Module. It should look like this:
class Net(nn.Module):
    def __init__(self):
        super().__init__()
This allows your network to inherit all the properties of the nn.Module class, such as the parameters attribute.
| https://stackoverflow.com/questions/61703398/ |
ImportError: cannot import name 'Optional' from 'torch.jit.annotations' | I have installed CPU-only pytorch and torchvision in anaconda. But when I try to import torchvision I get the following error:
ImportError: cannot import name 'Optional' from 'torch.jit.annotations'(C:\Users\MSI\Anaconda3\lib\site-packages\torch\jit\annotations.py)
How can i fix this?
| Not sure if you are installing the correct versions of the libraries. This combination seems to work:
conda create --name test5 python=3.6
conda install -c pytorch pytorch torchvision cpuonly
python
>>> import torchvision
| https://stackoverflow.com/questions/61703503/ |
how to keep pytorch model in redis cache to access model faster for video streaming? | I have this code belonging to feature_extractor.py which is a part of this folder in here:
import torch
import torchvision.transforms as transforms
import numpy as np
import cv2
from .model import Net
class Extractor(object):
    def __init__(self, model_path, use_cuda=True):
        self.net = Net(reid=True)
        self.device = "cuda" if torch.cuda.is_available() and use_cuda else "cpu"
        state_dict = torch.load(model_path, map_location=lambda storage, loc: storage)['net_dict']
        self.net.load_state_dict(state_dict)
        print("Loading weights from {}... Done!".format(model_path))
        self.net.to(self.device)
        self.size = (64, 128)
        self.norm = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])

    def _preprocess(self, im_crops):
        def _resize(im, size):
            return cv2.resize(im.astype(np.float32) / 255., size)

        im_batch = torch.cat([self.norm(_resize(im, self.size)).unsqueeze(0) for im in im_crops], dim=0).float()
        return im_batch

    def __call__(self, im_crops):
        im_batch = self._preprocess(im_crops)
        with torch.no_grad():
            im_batch = im_batch.to(self.device)
            features = self.net(im_batch)
        return features.cpu().numpy()

if __name__ == '__main__':
    img = cv2.imread("demo.jpg")[:, :, (2, 1, 0)]
    extr = Extractor("checkpoint/ckpt.t7")
    feature = extr(img)
    print(feature.shape)
Now imagine 200 requests queued up for processing. Loading the model for each request makes the code run slowly.
So I thought it might be a good idea to keep the pytorch model in cache. I modified it like this:
from redis import Redis
import msgpack as msg
r = Redis('111.222.333.444')
class Extractor(object):
    def __init__(self, model_path, use_cuda=True):
        try:
            self.net = msg.unpackb(r.get('REID_CKPT'))
        finally:
            self.net = Net(reid=True)
            self.device = "cuda" if torch.cuda.is_available() and use_cuda else "cpu"
            state_dict = torch.load(model_path, map_location=lambda storage, loc: storage)['net_dict']
            self.net.load_state_dict(state_dict)
            print("Loading weights from {}... Done!".format(model_path))
            self.net.to(self.device)
            packed_net = msg.packb(self.net)
            r.set('REID_CKPT', packed_net)

        self.size = (64, 128)
        self.norm = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
Unfortunately this error comes up:
File "msgpack/_packer.pyx", line 286, in msgpack._cmsgpack.Packer.pack
File "msgpack/_packer.pyx", line 292, in msgpack._cmsgpack.Packer.pack
File "msgpack/_packer.pyx", line 289, in msgpack._cmsgpack.Packer.pack
File "msgpack/_packer.pyx", line 283, in msgpack._cmsgpack.Packer._pack
TypeError: can not serialize 'Net' object
The reason obviously is that it cannot convert the Net object (a pytorch nn.Module subclass) to bytes.
How can I efficiently keep the pytorch model in a cache (or somehow keep it in RAM) and call it for each request?
Thanks everyone.
| If you only need to keep model state on RAM, Redis is not necessary. You could instead mount RAM as a virtual disk and store model state there. Check out tmpfs.
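If you do want to keep it in Redis anyway: what msgpack can't serialise is the module object itself, but the state_dict can be round-tripped through torch.save into raw bytes. This is my own sketch, not part of the original answer:

import io

import torch
from redis import Redis

r = Redis('111.222.333.444')

# store once, after loading the checkpoint into `net` as usual
buf = io.BytesIO()
torch.save(net.state_dict(), buf)
r.set('REID_CKPT', buf.getvalue())

# per request: rebuild the module once, then restore weights from the cache
net = Net(reid=True)
net.load_state_dict(torch.load(io.BytesIO(r.get('REID_CKPT')), map_location='cpu'))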
| https://stackoverflow.com/questions/61708442/ |
Obtain torch.tensor from string of floats | We can convert a 1-dimensional array of floats, stored as space-separated numbers in a text file, into a numpy array or a torch tensor as follows.
line = "1 5 3 7 4"
np_array = np.fromstring(line, dtype='int', sep=" ")
np_array
>> array([1, 5, 3, 7, 4])
And to convert above numpy array to a torch tensor, we can do following :
torch_tensor = torch.tensor(np_array)
torch_tensor
>>tensor([1, 5, 3, 7, 4])
How can I convert a string of numbers separated by spaces into a torch.Tensor directly, without
converting them to a numpy array? We can also do this by first splitting the string at spaces, mapping the pieces to int or float, and then feeding the result to torch.tensor. But like numpy's fromstring, is there any such method in pytorch?
| What about
x = torch.tensor(list(map(float, line.split(' '))), dtype=torch.float32)
| https://stackoverflow.com/questions/61710826/ |
volatile was removed and now has no effect; use with torch.no_grad() instead | My torch program stopped at this point.
I guess I cannot use volatile=True anymore.
How should I change it, and what is the reason it stopped?
And how should I change this code?
images = Variable(images.cuda())
targets = [Variable(ann.cuda(), volatile=True) for ann in targets]
train.py:166: UserWarning: volatile was removed and now has no effect.
Use with torch.no_grad(): instead.
| Variable doesn't do anything and has been deprecated since pytorch 0.4.0. Its functionality was merged with the torch.Tensor class. Back then the volatile flag was used to disable the construction of the computation graph for any operation which the volatile variable was involved in. Newer pytorch has changed this behavior to instead use with torch.no_grad(): to disable construction of the computation graph for anything in the body of the with statement.
What you should change will depend on your reason for using volatile in the first place. No matter what though you probably want to use
images = images.cuda()
targets = [ann.cuda() for ann in targets]
During training you would use something like the following so that the computation graph is created (assuming standard variable names for model, criterion, and optimizer).
output = model(images)
loss = criterion(images, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
Since you don't need to perform backpropagation during evaluation you would use with torch.no_grad(): to disable the creation of the computation graph which reduces the memory footprint and speeds up computation.
with torch.no_grad():
output = model(images)
loss = criterion(images, targets)
| https://stackoverflow.com/questions/61720460/ |
computation graph of setting weights in pytorch | I need a clarification of code written for some function in FastAI2 library.
this is the code WeightDropout written in FastAI2 library.
class WeightDropout(Module):
    "A module that wraps another layer in which some weights will be replaced by 0 during training."

    def __init__(self, module, weight_p, layer_names='weight_hh_l0'):
        self.module, self.weight_p, self.layer_names = module, weight_p, L(layer_names)
        for layer in self.layer_names:
            # Makes a copy of the weights of the selected layers.
            w = getattr(self.module, layer)
            delattr(self.module, layer)
            self.register_parameter(f'{layer}_raw', nn.Parameter(w.data))
            setattr(self.module, layer, F.dropout(w.data, p=self.weight_p, training=False))
            if isinstance(self.module, (nn.RNNBase, nn.modules.rnn.RNNBase)):
                self.module.flatten_parameters = self._do_nothing

    def _setweights(self):
        "Apply dropout to the raw weights."
        for layer in self.layer_names:
            raw_w = getattr(self, f'{layer}_raw')
            setattr(self.module, layer, F.dropout(raw_w.data, p=self.weight_p, training=self.training))

    def forward(self, *args):
        self._setweights()
        with warnings.catch_warnings():
            # To avoid the warning that comes because the weights aren't flattened.
            warnings.simplefilter("ignore")
            return self.module.forward(*args)

    def reset(self):
        for layer in self.layer_names:
            raw_w = getattr(self, f'{layer}_raw')
            setattr(self.module, layer,
                    F.dropout(raw_w.data, p=self.weight_p, training=False))
        if hasattr(self.module, 'reset'): self.module.reset()

    def _do_nothing(self): pass
where the above code randomly drops weights in the weight matrices of the hidden layers. I am primarily interested in
def _setweights(self):
    "Apply dropout to the raw weights."
    for layer in self.layer_names:
        raw_w = getattr(self, f'{layer}_raw')
        setattr(self.module, layer, F.dropout(raw_w.data, p=self.weight_p, training=self.training))
my question is: is this operation of changing the weights recorded in the gradient computation?
| No, assigning a new weight is not tracked in the computational graph, because an assignment has no derivative, therefore it's impossible to get a gradient through it.
Then why does that code work? The model is not overwriting the actual parameters, but it's using a modified version for the calculations, while keeping the original weights unchanged. It's a little obscure, but the most important part is when the parameters are copied when the model is created:
#Makes a copy of the weights of the selected layers.
w = getattr(self.module, layer)
delattr(self.module, layer)
self.register_parameter(f'{layer}_raw', nn.Parameter(w.data))
What happens here is that for every parameter you create a copy which ends in _raw. For example, if you have a linear layer in your model (e.g. self.linear1 = nn.Linear(2, 4)), you have two parameters with the names linear1.weight and linear1.bias. Now they are copied to linear1.weight_raw and linear1.bias_raw. To be precise, they are not copied, but reassigned to the *_raw attributes and then the original ones are deleted, hence they are just moved from the original to the raw versions. The originals need to be deleted, since they are no longer parameters (which would be optimised/learned).
Afterwards, when the dropout is applied, the parameters that are optimised/learned (*_raw versions) are unchanged, but the weight used for the actual calculations is the one with some weights randomly dropped. In the example with the linear layer that would look as follows if you do the calculations manually:
# A dummy input
input = torch.randn(1, 2)
# The raw parameters of the linear layer, randomly initialised
weight_raw = nn.Parameter(torch.randn(4, 2))
bias_raw = nn.Parameter(torch.randn(4))
# Randomly dropping elements of the parameters with 50% probability
weight = F.dropout(weight_raw, p=0.5)
bias = F.dropout(bias_raw, p=0.5)
# Calculation of the linear layer (forward)
output = torch.matmul(input, weight.transpose(0, 1)) + bias
From this you can see that there is no actual reassignment, but just the regular computational flow that you are familiar with.
Now you might be wondering why these *_raw parameters are created instead of applying the dropout in the forward pass (like in the example above). The reason for that is to avoid having to reimplement the forward pass, otherwise every module would need to have their forward method modified, but since they differ widely across modules, that cannot be done in a generic manner. This approach essentially hijacks the parameters, so that the forward pass uses a modified version of them.
Continuing the example from above:
# Using the actual module for the same calculation
linear1 = nn.Linear(2, 4)
# Delete the parameters, so that regular tensors can be assigned to them
# Otherwise it throws an error that the tensor is not an nn.Parameter
del linear1.weight
del linear1.bias
# Assign the parameters with dropped elements
linear1.weight = weight
linear1.bias = bias
# Run the forward pass directly
output_linear1 = linear1(input)
torch.equal(output, output_linear1) # => True
The bottom line is that the parameters are extracted from the modules, and the forward pass uses a modified version (after dropout) for the calculations, they are no longer parameters but intermediate results.
| https://stackoverflow.com/questions/61722520/ |
Pytorch TypeError - eq() received an invalid combination of arguments | I'm working on a text classification problem with BERT. When training on my local machine everything works just fine, but when switching to the server, I get the following error:
<ipython-input-28-508d35ac5f5f> in flat_accuracy(preds, labels)
5 pred_flat = np.argmax(preds, axis=1).flatten()
6 labels_flat = labels.flatten()
----> 7 return np.sum(pred_flat == labels_flat) / len(labels_flat)
8
9 # Function to calculate the f1_score of our predictions vs labels
TypeError: eq() received an invalid combination of arguments - got (numpy.ndarray), but expected one of:
* (Tensor other)
didn't match because some of the arguments have invalid types: (numpy.ndarray)
* (Number other)
didn't match because some of the arguments have invalid types: (numpy.ndarray)
Code:
def flat_accuracy(preds, labels):
    pred_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return np.sum(pred_flat == labels_flat) / len(labels_flat)
Torch version on local machine: 1.4.0
Torch version on the server: 1.3.1
Any help would be greatly appreciated!
| It could be that the eq implementation of the torch version on your server no longer lets you do elementwise comparison between a torch.Tensor and a np.ndarray. You should coerce either pred_flat to be a torch.Tensor, or coerce labels_flat to be a numpy array. Since you're using np.sum in the return statement and you are just returning a scalar value, I'd just move everything to numpy, so
labels_flat = labels.numpy().flatten()
but if you're on the GPU you may need to call labels.cpu().numpy().flatten(), and if you're tracking gradients on labels you might need labels.detach().cpu().numpy().flatten().
| https://stackoverflow.com/questions/61733562/ |
Concatenating two torch tensors of different shapes in pytorch | I have two torch tensors. One with shape [64, 4, 300], and one with shape [64, 300]. How can I concatenate these two tensors to obtain the resultant tensor of shape [64, 5, 300]. I'm aware about the tensor.cat function used for this, but in order to use that function, I need to reshape the second tensor in order to match the number of dimensions of the tensor. I've heard that reshaping of the tensors should not be done, as it might mess up the data in the tensor. How can I do this concatenation?
I've tried reshaping, but following part makes me more doubtful about such reshaping.
a = torch.rand(64,300)
a1 = a.reshape(64,1,300)
list(a1[0]) == list(a)
Out[32]: False
| You have to use torch.cat along first dimension and do unsqueeze at the first one as well, like this:
import torch
first = torch.randn(64, 4, 300)
second = torch.randn(64, 300)
torch.cat((first, second.unsqueeze(dim=1)), dim=1)
# Shape: [64, 5, 300]
It won't mess up with your data, it's only adding superficial 1 dimension (reshape doesn't if done correctl anyway).
| https://stackoverflow.com/questions/61734347/ |
No module named 'torch.nn.functional' | I have python file with lines:
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
It generates errors:
File "C:\gdrive\python\a.py", line 5, in <module>
import torch.nn.functional as F
ModuleNotFoundError: No module named 'torch.nn.functional'
How to fix that error?
I have installed pytorch by using command:
conda install pytorch-cpu torchvision-cpu -c pytorch
| It looks like you have an outdated version of PyTorch. Conda - pytorch-cpu was last published over a year ago and its latest version of PyTorch is 1.1.0, whereas PyTorch is currently at version 1.5.0. That packages has been abandoned.
You should install PyTorch with the official instructions given on PyTorch - Get Started locally, by selecting the version you want. In your case that would be Conda with CUDA None (to get the CPU only version).
The resulting command is:
conda install pytorch torchvision cpuonly -c pytorch
| https://stackoverflow.com/questions/61736959/ |
Gradient clipping in pytorch has no effect (Gradient exploding still happens) | I have an exploding gradient problem when training the minibatch for 150-200 epochs with batch size = 256, and there are about 30-60 minibatches (this depends on my specific config). But I have exploding gradient issues even though I add the clipping code below (shown as an image in the original post).
As you can see in the images below, notice that around step 40k there are swings of gradients between roughly ±20k, 40k and 60k respectively. I don't know why this happens, because I use clip_grad_value_ as shown above. I also use learning rate decay from 0.01 to about 0.008 at step 40k.
Or do I need to update the weight parameters myself, with something like this?
[image of manual weight-update code]
But I think optimizer.step() should do the job, and clip_grad_value_ is an in-place operation, so I don't need to take the return value from the function. Please correct me if I did anything wrong. Thank you very much
Best regards,
Mint
| Your code looks right, but try using a smaller value for the clip-value argument. Here's the documentation on the clip_grad_value_() function you're using, which shows that each individual term in the gradient is set such that its magnitude does not exceed the clip value.
You have the clip value set to 100, so if you have 100 parameters then abs(gradient).sum() can be as large as 10,000 (100 * 100).
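For reference, a typical placement in the training loop with a smaller clip value (my sketch, not from the original answer):

optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=1.0)
optimizer.step()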
| https://stackoverflow.com/questions/61756557/ |
How does the Transformer Model Compute Self Attention? | In the transformer model, https://arxiv.org/pdf/1706.03762.pdf there is self-attention which is computed using softmax on Query (Q) and Key (K) vectors:
I am trying to understand the matrix multiplications:
Q = batch_size x seq_length x embed_size
K = batch_size x seq_length x embed_size
QK^T = batch_size x seq_length x seq_length
Softmax QK^T = Softmax (batch_size x seq_length x seq_length)
How is the softmax computed since there are seq_length x seq_length values per batch element?
A reference to Pytorch computation will be very helpful.
Cheers!
|
How is the softmax computed since there are seq_length x seq_length values per batch element?
The softmax is performed w.r.t. the last axis (torch.nn.Softmax(dim=-1)(tensor), where tensor is of shape batch_size x seq_length x seq_length) to get the probability of attending to every element, for each element in the input sequence.
Let's assume, we have a text sequence "Thinking Machines", so we have a matrix of shape "2 x 2" (where seq_length = 2) after performing QK^T.
I am using the following illustration (reference) to explain the self-attention computation. As you know, first the scaled dot-product QK^T / sqrt(d_k) is computed, and then softmax is applied for each sequence element.
Here, softmax is performed for the first sequence element, "Thinking". The raw scores of 14 and 12 are turned into probabilities of 0.88 and 0.12 by the softmax. These probabilities indicate that the token "Thinking" would attend to itself with 88% probability, and to the token "Machines" with 12% probability. Similarly, the attention probability is computed for the token "Machines" too.
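In PyTorch terms, the computation the question asks about looks like this (my sketch):

import torch
import torch.nn.functional as F

batch_size, seq_length, embed_size = 4, 2, 8
Q = torch.randn(batch_size, seq_length, embed_size)
K = torch.randn(batch_size, seq_length, embed_size)

scores = Q @ K.transpose(-2, -1) / embed_size ** 0.5  # (4, 2, 2)
attn = F.softmax(scores, dim=-1)  # normalise over the last axis
print(attn.sum(dim=-1))  # every row of probabilities sums to 1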
Note. I strongly suggest reading this excellent article on Transformer. For implementation, you can take a look at OpenNMT.
| https://stackoverflow.com/questions/61764582/ |
How to see the adapted learning rate for Adam in pytorch? | There are many different optimizers with adaptive learning rate methods. Is it possible to see the adapted value of the initial learning rate for Adam?
Here is a similar question about Adadelta and the answer was to search for ["acc_delta"] key, but Adam has no that key.
| AFAIK there is no super easy way to do this. However, you can recalculate the current learning rate of a certain parameter using the implementation of Adam in PyTorch: https://pytorch.org/docs/stable/_modules/torch/optim/adam.html
I came up with this minimal working example:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

def get_current_lr(optimizer, group_idx, parameter_idx):
    # Adam has different learning rates for each parameter. So we need to pick the
    # group and parameter first.
    group = optimizer.param_groups[group_idx]
    p = group['params'][parameter_idx]

    beta1, _ = group['betas']
    state = optimizer.state[p]

    bias_correction1 = 1 - beta1 ** state['step']
    current_lr = group['lr'] / bias_correction1
    return current_lr

x = Variable(torch.randn(100, 1)) # Just create a random tensor as input
model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

niter = 20
for _ in range(0, niter):
    out = model(x)

    optimizer.zero_grad()
    loss = criterion(out, x) # Here we learn the identity mapping
    loss.backward()
    optimizer.step()

    group_idx, param_idx = 0, 0
    current_lr = get_current_lr(optimizer, group_idx, param_idx)
    print('Current learning rate (g:%d, p:%d): %.4f | Loss: %.4f'%(group_idx, param_idx, current_lr, loss.item()))
which should output something like this:
Current learning rate (g:0, p:0): 0.0100 | Loss: 0.5181
Current learning rate (g:0, p:0): 0.0053 | Loss: 0.5161
Current learning rate (g:0, p:0): 0.0037 | Loss: 0.5141
Current learning rate (g:0, p:0): 0.0029 | Loss: 0.5121
Current learning rate (g:0, p:0): 0.0024 | Loss: 0.5102
Current learning rate (g:0, p:0): 0.0021 | Loss: 0.5082
Current learning rate (g:0, p:0): 0.0019 | Loss: 0.5062
Current learning rate (g:0, p:0): 0.0018 | Loss: 0.5042
Current learning rate (g:0, p:0): 0.0016 | Loss: 0.5023
Current learning rate (g:0, p:0): 0.0015 | Loss: 0.5003
Current learning rate (g:0, p:0): 0.0015 | Loss: 0.4984
Current learning rate (g:0, p:0): 0.0014 | Loss: 0.4964
Current learning rate (g:0, p:0): 0.0013 | Loss: 0.4945
Current learning rate (g:0, p:0): 0.0013 | Loss: 0.4925
Current learning rate (g:0, p:0): 0.0013 | Loss: 0.4906
Current learning rate (g:0, p:0): 0.0012 | Loss: 0.4887
Current learning rate (g:0, p:0): 0.0012 | Loss: 0.4868
Current learning rate (g:0, p:0): 0.0012 | Loss: 0.4848
Current learning rate (g:0, p:0): 0.0012 | Loss: 0.4829
Current learning rate (g:0, p:0): 0.0011 | Loss: 0.4810
Note that monitoring the learning rate of every individual parameter is probably neither feasible nor helpful for larger models.
| https://stackoverflow.com/questions/61773139/ |
How to get the probability of a particular token(word) in a sentence given the context | I'm trying to calculate the probability, or any type of score, for words in a sentence using NLP. I've tried this approach with the GPT2 model using the Huggingface Transformers library, but I couldn't get satisfactory results due to the model's unidirectional nature, which for me didn't seem to predict within context. So I was wondering whether there is a way to calculate the above using BERT, since it's bidirectional.
I've found this related post, which I randomly came across the other day, but didn't see any answer there that would be useful for me either.
Hope I will be able to receive ideas or a solution for this. Any help is appreciated. Thank you.
| BERT is trained as a masked language model, i.e., it is trained to predict tokens that were replaced by a [MASK] token.
import torch
from transformers import AutoTokenizer, BertForMaskedLM
tok = AutoTokenizer.from_pretrained("bert-base-cased")
bert = BertForMaskedLM.from_pretrained("bert-base-cased")
input_idx = tok.encode(f"The {tok.mask_token} were the best rock band ever.")
logits = bert(torch.tensor([input_idx]))[0]
prediction = logits[0].argmax(dim=1)
print(tok.convert_ids_to_tokens(prediction[2].numpy().tolist()))
It prints token no. 11581 which is:
Beatles
To get a normalized probability distribution over BERT's vocabulary, you can normalize the logits using the softmax function over the vocabulary dimension, i.e., F.softmax(logits, dim=-1) (assuming the standard import torch.nn.functional as F).
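For example, a minimal sketch (reusing tok, bert and logits from above; index 2 is the [MASK] position in this sentence) to read off the probability of a specific word filling the mask:
import torch.nn.functional as F
probs = F.softmax(logits[0, 2], dim=-1) # distribution over the vocabulary at the mask position
beatles_id = tok.convert_tokens_to_ids("Beatles")
print(probs[beatles_id].item()) # probability of "Beatles" filling the mask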
The tricky thing is that words might be split into multiple subwords. You can simulate that by adding multiple [MASK] tokens, but then you have the problem of how to reliably compare the scores of predictions of different lengths. I would probably average the probabilities, but maybe there is a better way.
| https://stackoverflow.com/questions/61787853/ |
Python get pytorch tensor size | I want to know how to get the shape of this tensor in Python. I have tried this:
> len(x)
But this prints 1. Why? I want to print the number of tuples here, which is 3, but len(x) prints only 1.
What's the problem?
Here's the tensor:
(x=array([[[[ 0.07499999, 0. ],
[ 0.0703125 , 0. ],
[ 0.0703125 , 0. ],
[ 0.09218752, 0. ],
[ 0.1953125 , 0. ],
[ 0.05312502, 0. ],
[ 0.2890625 , 0. ],
[ 0.015625 , 0. ],
[ 0.32656252, 0. ],
[ 0.09218752, 0. ],
[ 0.23906249, 0. ],
[ 0.09218752, 0. ],
[ 0.22812498, 0. ],
[ 0.06406248, 0. ],
[ 0.19062501, 0. ],
[ 0.02031249, 0. ],
[ 0.17343748, 0. ]],
[[ 0.06875002, 0. ],
[ 0.06875002, 0. ],
[ 0.06875002, 0. ],
[ 0.09062499, 0. ],
[ 0.19375002, 0. ],
[ 0.05781251, 0. ],
[ 0.2921875 , 0. ],
[ 0.01406252, 0. ],
[ 0.325 , 0. ],
[ 0.08437502, 0. ],
[ 0.23124999, 0. ],
[ 0.09531248, 0. ],
[ 0.22031248, 0. ],
[ 0.06406248, 0. ],
[ 0.18906248, 0. ],
[ 0.02031249, 0. ],
[ 0.171875 , 0. ]],
[[ 0.06718749, 0. ],
[ 0.06093752, 0. ],
[ 0.07187498, 0. ],
[ 0.078125 , 0. ],
[ 0.18593752, 0. ],
[ 0.03437501, 0. ],
[ 0.2765625 , 0. ],
[-0.00312501, 0. ],
[ 0.29843748, 0. ],
[ 0.078125 , 0. ],
[ 0.21718752, 0. ],
[ 0.078125 , 0. ],
[ 0.21249998, 0. ],
[ 0.07187498, 0. ],
[ 0.19062501, 0. ],
[ 0.13749999, 0. ],
[ 0.1796875 , 0. ]]]], dtype=float32), 0)
| It looks like your 3 tuples are located within the first (and only) index of x. In this case, len(x[0]) yields 3.
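To make the difference concrete, here is a small sketch (the shape is taken from the array above) showing the usual ways to inspect sizes:
import numpy as np
import torch
x = np.zeros((1, 3, 17, 2), dtype=np.float32) # same shape as the array in the question
print(len(x)) # 1 -- len() only looks at the first dimension
print(x.shape) # (1, 3, 17, 2)
print(x.shape[1]) # 3 -- the count you are after
t = torch.from_numpy(x)
print(t.shape) # torch.Size([1, 3, 17, 2]); t.size() is equivalent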
| https://stackoverflow.com/questions/61802892/ |
Pytorch RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn | This code is built up as follows: my robot takes a picture, some tf computer vision model calculates where in the picture the target object starts. This information (x1 and x2 coordinate) is passed to a pytorch model. It should learn to predict the correct motor activations, in order to get closer to the target. After the movement is executed, the robot takes a picture again and the tf cv model should calculate whether the motor activation brought the robot closer to the desired state (x1 coordinate at 10, x2 coordinate at 31).
However, every time I run the code, pytorch is not able to calculate the gradients.
I'm wondering if this is some data-type problem or if it is a more general one: Is it impossible to calculate the gradients if the loss is not calculated directly from the pytorch network's output?
Any help and suggestions will be greatly appreciated.
#define policy model (model to learn a policy for my robot)
import torch
import torch.nn as nn
import torch.nn.functional as F
class policy_gradient_model(nn.Module):
def __init__(self):
super(policy_gradient_model, self).__init__()
self.fc0 = nn.Linear(2, 2)
self.fc1 = nn.Linear(2, 32)
self.fc2 = nn.Linear(32, 64)
self.fc3 = nn.Linear(64,32)
self.fc4 = nn.Linear(32,32)
self.fc5 = nn.Linear(32, 2)
def forward(self,x):
x = self.fc0(x)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.relu(self.fc4(x))
x = F.relu(self.fc5(x))
return x
policy_model = policy_gradient_model().double()
print(policy_model)
optimizer = torch.optim.AdamW(policy_model.parameters(), lr=0.005, betas=(0.9,0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)
#make robot move as predicted by pytorch network (not all code included)
def move(motor_controls):
#define curvature
# motor_controls[0] = sigmoid(motor_controls[0])
activation_left = 1+(motor_controls[0])*99
activation_right = 1+(1- motor_controls[0])*99
print("activation left:", activation_left, ". activation right:",activation_right, ". time:", motor_controls[1]*100)
#start movement
#main
import cv2
import numpy as np
import time
from torch.autograd import Variable
print("start training")
losses=[]
losses_end_of_epoch=[]
number_of_steps_each_epoch=[]
loss_function = nn.MSELoss(reduction='mean')
#each epoch
for epoch in range(2):
count=0
target_reached=False
while target_reached==False:
print("epoch: ", epoch, ". step:", count)
###process and take picture
indices = process_picture()
###binary_network(sliced)=indices as input for policy model
optimizer.zero_grad()
###output: 1 for curvature, 1 for duration of movement
motor_controls = policy_model(Variable(torch.from_numpy(indices))).detach().numpy()
print("NO TANH output for motor: 1)activation left, 2)time ", motor_controls)
motor_controls[0] = np.tanh(motor_controls[0])
motor_controls[1] = np.tanh(motor_controls[1])
print("TANH output for motor: 1)activation left, 2)time ", motor_controls)
###execute suggested action
move(motor_controls)
###take and process picture2 (after movement)
indices = (process_picture())
###loss=(binary_network(picture2) - desired
print("calculate loss")
print("idx", indices, type(torch.tensor(indices)))
# loss = 0
# loss = (indices[0]-10)**2+(indices[1]-31)**2
# loss = loss/2
print("shape of indices", indices.shape)
array=np.zeros((1,2))
array[0]=indices
print(array.shape, type(array))
array2 = torch.ones([1,2])
loss = loss_function(torch.tensor(array).double(), torch.tensor([[10.0,31.0]]).double()).float()
print("loss: ", loss, type(loss), loss.shape)
# array2[0] = loss_function(torch.tensor(array).double(),
torch.tensor([[10.0,31.0]]).double()).float()
losses.append(loss)
#start line causing the error-message (still part of main)
###calculate gradients
loss.backward()
#end line causing the error-message (still part of main)
###apply gradients
optimizer.step()
#Output (so far as intented) (not all included)
#calculate loss
idx [14. 15.] <class 'torch.Tensor'>
shape of indices (2,)
(1, 2) <class 'numpy.ndarray'>
loss: tensor(136.) <class 'torch.Tensor'> torch.Size([])
#Error Message:
Traceback (most recent call last):
File "/home/pi/Desktop/GradientPolicyLearning/PolicyModel.py", line 259, in <module>
array2.backward()
File "/home/pi/.local/lib/python3.7/site-packages/torch/tensor.py", line 134, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/pi/.local/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in
backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
| If you call .detach() on the prediction, that will delete the gradients. Since you are first getting indices from the model and then trying to backprop the error, I would suggest
prediction = policy_model(torch.from_numpy(indices))
motor_controls = prediction.clone().detach().numpy()
This would keep the predictions as it is with the calculated gradients that can be backproped.
Now you can do
loss = loss_function(prediction, torch.tensor([[10.0,31.0]]).double()).float()
Note, you might want to call .double() on the prediction if it throws a dtype error.
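Putting it together, a minimal sketch of the corrected training step (the indices values and the move call are placeholders taken from the question):
import numpy as np
import torch
indices = np.array([14.0, 15.0]) # placeholder vision-model output
prediction = policy_model(torch.from_numpy(indices)) # keeps its grad_fn
motor_controls = prediction.clone().detach().numpy() # gradient-free copy for the robot
move(motor_controls) # placeholder robot call
optimizer.zero_grad()
loss = loss_function(prediction, torch.tensor([10.0, 31.0]).double())
loss.backward()
optimizer.step()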
| https://stackoverflow.com/questions/61808965/ |
How to extract position input-output indices from huggingface transformer text tokenizer? | I want to solve a stress prediction task with a pretrained Russian BERT.
Input data looks like this:
граммов сверху|000100000001000
Zeros mean no stress. Ones represent stress position character.
I want to map it as word -> vowel number index
So it will be like
граммов -> 1
сверху -> 1
So, for each token, it should be a linear layer with softmax.
I understand this part, but it's hard for me to deal with the text preprocessing, because the tokenizer can split a word into subword tokens.
The tokenizer takes an input string and returns token ids like this:
bert_tokenizer.encode('граммов сверху')
->
[101, 44505, 26656, 102]
So, how to get position mapping between input chars and words?
The desired output should be like [[0, 7], [8, 14]]
I tried to read docs
https://huggingface.co/transformers/main_classes/tokenizer.html
And found convert_ids_to_tokens function
It works like
encoded = bert_tokenizer.encode('граммов сверху')
bert_tokenizer.convert_ids_to_tokens(encoded)
->
['[CLS]', 'граммов', 'сверху', '[SEP]']
But I'm not sure how to use the original string and the stress indices to calculate the stress position number for each token.
| It turned out the tokenizer has a return_offsets_mapping parameter, which solved my problem.
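For example, a minimal sketch, assuming a fast (Rust-backed) tokenizer such as BertTokenizerFast, since only fast tokenizers support this parameter:
encoded = bert_tokenizer('граммов сверху', return_offsets_mapping=True)
print(encoded['offset_mapping'])
# something like [(0, 0), (0, 7), (8, 14), (0, 0)]
# the (0, 0) entries correspond to the special [CLS]/[SEP] tokens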
| https://stackoverflow.com/questions/61821515/ |
How to use map_location='cpu' due to "RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False" | I was trying to download the following model at https://pytorch.org/hub/nvidia_deeplearningexamples_tacotron2/
import torch
tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')
I received:
>>> import torch
>>> tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')
Using cache found in .cache\torch\hub\nvidia_DeepLearningExamples_torchhub
...
File "Anaconda3\envs\env3_pytorch\lib\site-packages\torch\serialization.py", line 79, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
So I used the following with map_location='cpu', but still get the same error.
>>> tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2', map_location='cpu')
torch.version.cuda shows a version but torch.cuda.is_available() is false.
>>> import torch
>>> torch.version.cuda
'9.0'
>>> torch.cuda.is_available()
False
How do I get around this error related to map_location, given that I used exactly the command the error message asked me to use?
How can I use cuda in loading the model?
| torch.hub.load does not specifically support map_location, it only forwards the extra arguments to the loading of the model, so it's implementation-dependent whether that is supported.
In this case it is not supported, the loading is implemented in NVIDIA/DeepLearningExamples:torchhub - hubconf.py and it does not pass any map_location to torch.load when the checkpoint is loaded.
That means you need to load the checkpoint manually and apply it to the model. Thankfully, the model can be loaded with the same configuration without loading the checkpoint, by setting pretrained=False and the checkpoint can be loaded separately with torch.hub.load_state_dict_from_url, which supports map_location. There is only a small change that is required to the state dict, because it was trained with DistributedDataParallel, which wraps the module, such that every layer in the model became module.layer. In the state dict that module. prefix needs to be stripped.
tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2', pretrained=False)
checkpoint = torch.hub.load_state_dict_from_url('https://api.ngc.nvidia.com/v2/models/nvidia/tacotron2pyt_fp32/versions/1/files/nvidia_tacotron2pyt_fp32_20190306.pth', map_location="cpu")
# Unwrap the DistributedDataParallel module
# module.layer -> layer
state_dict = {key.replace("module.", ""): value for key, value in checkpoint["state_dict"].items()}
# Apply the state dict to the model
tacotron2.load_state_dict(state_dict)
| https://stackoverflow.com/questions/61826246/ |
No matching distribution found for torch==1.5.0+cpu on Heroku | I am trying to deploy my Django app which uses a machine learning model. And the machine learning model requires pytorch to execute.
When I try to deploy, it gives me this error:
ERROR: Could not find a version that satisfies the requirement torch==1.5.0+cpu (from -r /tmp/build_4518392d43f43bc52f067241a9661c92/requirements.txt (line 23)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.4.1, 0.4.1.post2, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0)
ERROR: No matching distribution found for torch==1.5.0+cpu (from -r /tmp/build_4518392d43f43bc52f067241a9661c92/requirements.txt (line 23))
! Push rejected, failed to compile Python app.
! Push failed
My requirements.txt is
asgiref==3.2.7
certifi==2020.4.5.1
chardet==3.0.4
cycler==0.10.0
dj-database-url==0.5.0
Django==3.0.6
django-heroku==0.3.1
future==0.18.2
gunicorn==20.0.4
idna==2.9
imageio==2.8.0
kiwisolver==1.2.0
matplotlib==3.2.1
numpy==1.18.4
Pillow==7.1.2
psycopg2==2.8.5
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.1
requests==2.23.0
six==1.14.0
sqlparse==0.3.1
torch==1.5.0+cpu
torchvision==0.6.0+cpu
urllib3==1.25.9
whitenoise==5.0.1
And runtime.txt is python-3.7.5
However, installing it on my computer does not give any error when I use the command pip install torch==1.5.0+cpu. I am using Python 3.7.5 and pip 20.0.2.
Complete code is here.
How do I solve this issue? I really need to deploy my app. Thanks.
| PyTorch does not distribute the CPU-only versions over PyPI. They are only available through their custom registry.
If you select the CPU only version on PyTorch - Get Started Locally you get the following instructions:
pip install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
Since you're not manually executing the pip install, you cannot simply add the -f https://download.pytorch.org/whl/torch_stable.html.
As an alternative, you can put it into your requirements.txt as a standalone line. It shouldn't really matter where exactly you put it, but it is commonly put at the very top.
-f https://download.pytorch.org/whl/torch_stable.html
asgiref==3.2.7
certifi==2020.4.5.1
chardet==3.0.4
cycler==0.10.0
dj-database-url==0.5.0
Django==3.0.6
django-heroku==0.3.1
future==0.18.2
gunicorn==20.0.4
idna==2.9
imageio==2.8.0
kiwisolver==1.2.0
matplotlib==3.2.1
numpy==1.18.4
Pillow==7.1.2
psycopg2==2.8.5
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.1
requests==2.23.0
six==1.14.0
sqlparse==0.3.1
torch==1.5.0+cpu
torchvision==0.6.0+cpu
urllib3==1.25.9
whitenoise==5.0.1
| https://stackoverflow.com/questions/61841672/ |
Function AddBackward0 returned an invalid gradient at index 1 - expected type torch.FloatTensor but got torch.cuda.FloatTensor | My Model:
class myNet(nn.Module):
def __init__(self):
super(myNet,self).__init__()
self.act1=Dynamic_relu_b(64)
self.conv1=nn.Conv2d(3,64,3)
self.pool=nn.AdaptiveAvgPool2d(1)
self.fc=nn.Linear(128,20)
def forward(self,x):
x=self.conv1(x)
x=self.act1(x)
x=self.pool(x)
x=x.view(x.shape[0],-1)
x=self.fc(x)
return x
A code that replicates the experiment is provided:
def one_hot_smooth_label(x,num_class,smooth=0.1):
num=x.shape[0]
labels=torch.zeros((num,20))
for i in range(num):
labels[i][x[i]]=1
labels=(1-(num_class-1)/num_class*smooth)*labels+smooth/num_class
return labels
images=torch.rand((4,3,300,300))
images=images.cuda()
labels=torch.from_numpy(np.array([1,0,0,1]))
model=myNet()
model=model.cuda()
output=model(images)
labels=one_hot_smooth_label(labels,20)
labels = labels.cuda()
criterion=nn.BCEWithLogitsLoss()
loss=criterion(output,labels)
loss.backward()
The error:
RuntimeError Traceback (most recent call last)
<ipython-input-42-1268777e87e6> in <module>()
21
22 loss=criterion(output,labels)
---> 23 loss.backward()
1 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
98 Variable._execution_engine.run_backward(
99 tensors, grad_tensors, retain_graph, create_graph,
--> 100 allow_unreachable=True) # allow_unreachable flag
101
102
RuntimeError: Function AddBackward0 returned an invalid gradient at index 1 - expected type TensorOptions(dtype=float, device=cpu, layout=Strided, requires_grad=false) but got TensorOptions(dtype=float, device=cuda:0, layout=Strided, requires_grad=false) (validate_outputs at /pytorch/torch/csrc/autograd/engine.cpp:484)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7fcf7711b536 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x2d84224 (0x7fcfb1bad224 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #2: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 0x548 (0x7fcfb1baed58 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #3: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) + 0x3d2 (0x7fcfb1bb0ce2 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #4: torch::autograd::Engine::thread_init(int) + 0x39 (0x7fcfb1ba9359 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #5: torch::autograd::python::PythonEngine::thread_init(int) + 0x38 (0x7fcfbe2e8378 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0xbd6df (0x7fcfe23416df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #7: <unknown function> + 0x76db (0x7fcfe34236db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #8: clone + 0x3f (0x7fcfe375c88f in /lib/x86_64-linux-gnu/libc.so.6)
After many experiments, I found that act1 in the model was the problem. If you delete act1, the error will not appear!
But I don't know why act1 has this problem.
What seems to be the wrong part of the error is requires_grad=False, and I don't know which part set this.
This is the code about act1(Dynamic_relu_b):
class Residual(nn.Module):
def __init__(self, in_channel, R=8, k=2):
super(Residual, self).__init__()
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.relu = nn.ReLU(inplace=True)
self.R = R
self.k = k
out_channel = int(in_channel / R)
self.fc1 = nn.Linear(in_channel, out_channel)
fc_list = []
for i in range(k):
fc_list.append(nn.Linear(out_channel, 2 * in_channel))
self.fc2 = nn.ModuleList(fc_list)
def forward(self, x):
x = self.avg(x)
x = torch.squeeze(x)
x = self.fc1(x)
x = self.relu(x)
result_list = []
for i in range(self.k):
result = self.fc2[i](x)
result = 2 * torch.sigmoid(result) - 1
result_list.append(result)
return result_list
class Dynamic_relu_b(nn.Module):
def __init__(self, inchannel, R=8, k=2):
super(Dynamic_relu_b, self).__init__()
self.lambda_alpha = 1
self.lambda_beta = 0.5
self.R = R
self.k = k
self.init_alpha = torch.zeros(self.k)
self.init_beta = torch.zeros(self.k)
self.init_alpha[0] = 1
self.init_beta[0] = 1
for i in range(1, k):
self.init_alpha[i] = 0
self.init_beta[i] = 0
self.residual = Residual(inchannel)
def forward(self, input):
delta = self.residual(input)
in_channel = input.shape[1]
bs = input.shape[0]
alpha = torch.zeros((self.k, bs, in_channel))
beta = torch.zeros((self.k, bs, in_channel))
for i in range(self.k):
for j, c in enumerate(range(0, in_channel * 2, 2)):
alpha[i, :, j] = delta[i][:, c]
beta[i, :, j] = delta[i][:, c + 1]
alpha1 = alpha[0]
beta1 = beta[0]
max_result = self.dynamic_function(alpha1, beta1, input, 0)
for i in range(1, self.k):
alphai = alpha[i]
betai = beta[i]
result = self.dynamic_function(alphai, betai, input, i)
max_result = torch.max(max_result, result)
return max_result
def dynamic_function(self, alpha, beta, x, k):
init_alpha = self.init_alpha[k]
init_beta = self.init_beta[k]
alpha = init_alpha + self.lambda_alpha * alpha
beta = init_beta + self.lambda_beta * beta
bs = x.shape[0]
channel = x.shape[1]
results = torch.zeros_like(x)
for i in range(bs):
for c in range(channel):
results[i, c, :, :] = x[i, c] * alpha[i, c] + beta[i, c]
return results
How should I solve this problem?
| In PyTorch two tensors need to be on the same device to perform any mathematical operation between them. But in your case one is on the CPU and the other on the GPU. The error is not as clear as it normally is, because it happened in the backwards pass. You were (un)lucky that your forward pass did not fail. That's because there is an exception to the same device restriction, namely when using scalar values in the mathematical operation, e.g. tensor * 2, and it even occurs when the scalar is a tensor: cpu_tensor * tensor(2, device='cuda:0'). You are using a lot of loops and accessing individual scalars to calculate further results.
While the forward pass works like that, in the backward pass when the gradients are calculated, the gradients are multiplied with the previous ones (application of the chain rule). At that point, the two are on different devices.
You have identified that it's in the Dynamic_relu_b. In there you need to make sure that every tensor that you create, is on the same device as the input. The two tensors you create in the forward method are:
alpha = torch.zeros((self.k, bs, in_channel))
beta = torch.zeros((self.k, bs, in_channel))
These are created on the CPU, but your input is on the GPU, so you need to put them on the GPU as well. To be generic, they should be created on the same device as the input.
alpha = torch.zeros((self.k, bs, in_channel), device=input.device)
beta = torch.zeros((self.k, bs, in_channel), device=input.device)
The biggest problem in your code is the loops. Not only did they obfuscate a bug, they are also very harmful for performance, since they can neither be parallelised nor vectorised, and those are the reasons why GPUs are so fast. I'm certain that these loops can be replaced with more efficient operations, but you'll have to get out of the mindset of creating an empty tensor and then filling it one by one.
I'll give you one example from dynamic_function:
results = torch.zeros_like(x)
for i in range(bs):
for c in range(channel):
results[i, c, :, :] = x[i, c] * alpha[i, c] + beta[i, c]
You're multiplying x (size: [bs, channel, height, width]) with alpha (size: [bs, channel]), where every plane (height, width) of x is multiplied by a different element of alpha (a scalar). That would be the same as doing an element-wise multiplication with a tensor of the same size as the plane [height, width], but where all elements are the same scalar.
Thankfully, you don't need to repeat them yourself, since singular dimensions (dimensions with size 1) are automatically expanded to match the size of the other tensor, see PyTorch - Broadcasting Semantics for details. That means you only need to reshape alpha to have size [bs, channel, 1, 1].
The loop can therefore be replaced with:
results = x * alpha.view(bs, channel, 1, 1) + beta.view(bs, channel, 1, 1)
By eliminating that loop, you gain a lot of performance, and your initial error just got much clearer, because the forward pass would fail with the following message:
File "main.py", line 78, in dynamic_function
results = x * alpha.view(bs, channel, 1, 1) + beta.view(bs, channel, 1, 1)
RuntimeError: expected device cuda:0 but got device cpu
Now you would know that one of these is on the CPU and the other on the GPU.
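As a further illustration (a sketch, not part of the original code): the double loop in Dynamic_relu_b.forward that fills alpha and beta can be vectorized the same way, which also keeps everything on input.device automatically:
d = torch.stack(delta) # [k, bs, 2 * in_channel], already on input.device
alpha = d[:, :, 0::2] # even columns -> [k, bs, in_channel]
beta = d[:, :, 1::2] # odd columns -> [k, bs, in_channel]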
| https://stackoverflow.com/questions/61845974/ |
Why do some people chain the parameters of two different networks and train them with the same optimizer? | I was looking at CycleGAN's official pytorch implementation and there, the author chained the parameters of both networks and used a single optimizer for both. How does this work? Is it better than using two different optimizers for the two networks?
all_params = chain(module_a.parameters(), module_b.parameters())
optimizer = torch.optim.Adam(all_params)
| From chain documentation: https://docs.python.org/3/library/itertools.html#itertools.chain
itertools.chain(*iterables)
Make an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted.
As parameters() gives you an iterable, you can use a single optimizer to simultaneously optimize the parameters of both networks. So the same optimizer instance will manage the parameters of both models (Modules); if you use two different optimizers, the parameters will be optimized separately.
If you have a composite network, it becomes necessary to optimize the parameters (of all) at the same time, hence using a single optimizer for all of them is the way to go.
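A minimal self-contained sketch (note that the snippet in the question is missing the import):
from itertools import chain
import torch
module_a = torch.nn.Linear(10, 10)
module_b = torch.nn.Linear(10, 10)
all_params = chain(module_a.parameters(), module_b.parameters())
optimizer = torch.optim.Adam(all_params)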
| https://stackoverflow.com/questions/61846505/ |
NumPyro vs Pyro: Why is former 100x faster and when should I use the latter? | From Pytorch-Pyro's website:
We’re excited to announce the release of NumPyro, a NumPy-backed Pyro using JAX for automatic differentiation and JIT compilation, with over 100x speedup for HMC and NUTS!
My questions:
Where is the performance gain (which is sometimes 340x or 2X) of NumPyro (over Pyro) coming from exactly?
And more importantly, why (rather, where) would I continue to use Pyro?
Extra:
How should I view the performance and features of NumPyro compared to Tensorflow Probability, in deciding which to use where?
| That's a good question. I just asked the same question in Pyro's dedicated forum. Here's the answer of one of their core developers: "There are many cool stuffs in Pyro that do not appear in NumPyro, for example, see Contributed code section in Pyro docs. For me, while developing, it is much easier to debug PyTorch code than Jax code (though Jax team has put much effort to help debugging in recent releases). Hence to implement a new inference algorithm, it is easier for me to work in Pyro."
| https://stackoverflow.com/questions/61846620/ |
How to use TPUs with PyTorch? | I am trying to use a TPU through pytorch_xla, but it shows an import error in _XLAC.
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
import torch_xla
import torch_xla.core.xla_model as xm
ImportError Traceback (most recent call last)
<ipython-input-60-6a19e980152f> in <module>()
----> 1 import torch_xla
2 import torch_xla.core.xla_model as xm
/usr/local/lib/python3.6/dist-packages/torch_xla/__init__.py in <module>()
39 import torch
40 from .version import __version__
---> 41 import _XLAC
42
43 _XLAC._initialize_aten_bindings()
ImportError: /usr/local/lib/python3.6/dist-packages/_XLAC.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at6native6einsumENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN3c108ArrayRefINS_6TensorEEE
| Please try this:
!pip uninstall -y torch
!pip install torch==1.8.2+cpu -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install -q cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl
import torch_xla
It worked for me.
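Once the import succeeds, a quick sanity check (a sketch using the standard torch_xla API) is to allocate a tensor on the TPU:
import torch
import torch_xla.core.xla_model as xm
device = xm.xla_device() # grab a TPU core
t = torch.ones(2, 2, device=device)
print(t.device) # e.g. xla:1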
Source: googlecolab/colabtools#2237
| https://stackoverflow.com/questions/61847448/ |
Input dimension reshape when using PyTorch model with CoreML | I have a seq2seq model in PyTorch that I want to run with CoreML. When exporting the model to ONNX the input dimensions are fixed to the shape of the tensor used during export, and again with the conversion from ONNX to CoreML.
import torch
from onnx_coreml import convert
x = torch.ones((32, 1, 1000)) # N x C x W
model = Model()
torch.onnx.export(model, x, 'example.onnx')
mlmodel = convert(model='example.onnx', minimum_ios_deployment_target='13')
mlmodel.save('example.mlmodel')
For the ONNX export you can specify dynamic dimensions:
torch.onnx.export(
model, x, 'example.onnx',
input_names = ['input'],
output_names = ['output'],
dynamic_axes={
'input' : {0 : 'batch', 2: 'width'},
'output' : {0 : 'batch', 1: 'owidth'},
}
)
But this leads to a RuntimeWarning when converting to CoreML:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "compiler error: Blob with zero size found:
For inference in CoreML I would like the batch (first) and width (last) dimension to either be dynamic or have the ability to statically change them.
Is that possible?
| The dimensions of the input can be made dynamic in ONNX by specifying dynamic_axes for torch.onnx.export.
torch.onnx.export(
model,
x,
'example.onnx',
# Assigning names to the inputs to reference in dynamic_axes
# Your model only has one input: x
input_names=["input"],
# Define which dimensions should be dynamic
# Names of the dimensions are optional, but recommended.
# Could just be: {"input": [0, 2]}
dynamic_axes={"input": {0: "batch", 2: "width"}}
)
Now the exported model accepts inputs of size [batch, 1, width], where batch and width are dynamic.
| https://stackoverflow.com/questions/61850304/ |
getting unicode decode error while trying to load pre-trained model using torch.load(PATH) | Trying to load a pre-trained ResNet-18 model using torch.load(PATH), but getting a Unicode decode error. Please help.
Traceback (most recent call last):
File "main.py", line 312, in <module>
main()
File "main.py", line 138, in main
checkpoint = torch.load(args.resume)
File "F:\InsSoft\Anaconda\lib\site-packages\torch\serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "F:\InsSoft\Anaconda\lib\site-packages\torch\serialization.py", line 773, in _legacy_load
result = unpickler.load()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbe in position 2: invalid start byte
| This error hits whenever the model was saved with torch version < 0.4 but is loaded with torch version > 0.4 for testing/resuming.
So use checkpoint = torch.load(args.resume, encoding='latin1')
| https://stackoverflow.com/questions/61851244/ |
How to know node/feature contributions? | I'm working on a GCN (Graph Convolutional Network) in PyTorch. In my application, a patient is a graph, nodes represent their genes, and for each gene I have 2 features (gene structure and expression value).
The task is a regression model that predicts each patient's risk of getting a disease.
My question is,
1- how to know which nodes (genes) contribute to the prediction?
2- and which feature of the 2 that I have (gene structure and expression value) contribute to the prediction?
Any suggestions/ideas would be helpful, thanks.
| I am suggesting possibly the simplest solution. Nevertheless, it can work well.
According to your description of the problem, you want to learn a representation of the graph (which represents a patient) that can be used to predict the risk of getting a disease. As we know, a GCN (graph convolutional network) can provide vector representations for each node in the graph.
All the node representations can be turned into a single vector representation which would represent the entire graph and this can be done in many ways. For example, you can use max-pooling or self-attentive pooling. In both ways, you can identify which nodes contributed most to the final prediction.
For example, in self-attentive pooling, every node gets a weight and the single vector representation is a weighted sum of the node representations. So the weights can indicate the nodes' contributions. If we use max-pooling, then we can count how many of a node's features are pooled while applying the max-pooling. The count itself can indicate the contribution.
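For instance, a minimal self-attentive pooling sketch (the names here are illustrative, not from any specific library):
import torch
import torch.nn as nn
import torch.nn.functional as F
class AttentivePooling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)
    def forward(self, node_feats): # node_feats: [num_nodes, dim]
        weights = F.softmax(self.score(node_feats), dim=0) # [num_nodes, 1]
        graph_repr = (weights * node_feats).sum(dim=0) # [dim]
        return graph_repr, weights # weights ~ per-gene contribution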
Which feature of the 2 that I have (gene structure and expression value) contribute to the prediction?
You can apply the same idea as above. For example, you can have learnable weights for the 2 features to combine them while computing the single vector representation of the graph.
| https://stackoverflow.com/questions/61851325/ |
pytorch model not updating | I put my training code below. I am using torch.optim.SGD as the optimizer. I thought optimizer.step() would be doing the update, but the model accuracy seems to stay the same. My friend said he didn't use optimizer.step() and his code works fine.
I tried taking it out, still the same result. What can I be doing wrong?
I don't think there's a problem with the accuracy calculation.
class FNet(nn.Module):
def __init__(self, **kwargs):
super().__init__()
self.fc1 = nn.Linear(128*256, 1024)
self.fc2 = nn.Linear(1024, 256)
self.fc3 = nn.Linear(256, 2)
def forward(self, X):
X = F.relu(self.fc1(X))
X = F.relu(self.fc2(X))
X = self.fc3(X)
return F.softmax(X, dim=1)
def main():
learning_rate = 0.01
model = FNet()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=5e-04) # you can play with momentum and weight_decay parameters as well
accs = [0,]
for epoch in range(max_epoch):
train(epoch, model, optimizer, train_batch)
acc = test(model, val_batch)
accs.append(acc)
def train(epoch, model, optimizer, trainloader):
model.train()
optimizer.zero_grad()
for batch_idx, (data, labels) in enumerate(trainloader):
outputs = model(data)
loss = F.nll_loss(outputs, labels)
loss.backward()
optimizer.step()
def test(model, testloader):
correct = 0
total = 0
model.eval()
for batch_idx, (data, labels) in enumerate(testloader):
outputs = model(data.view(-1,128*256))
for sample_idx,output in enumerate(outputs):
if torch.argmax(output) == labels[sample_idx]:
correct = correct + 1
total = total + 1
accuracy = correct/total
return accuracy
| I think this line should be inside your for loop:
optimizer.zero_grad(). You need to clear the parameter gradients in every iteration.
Try this:
def train(epoch, model, optimizer, trainloader):
model.train()
for batch_idx, (data, labels) in enumerate(trainloader):
optimizer.zero_grad()
outputs = model(data.view(-1, 128*256)) # use 'model' (not the undefined 'net') and flatten the input as in test()
loss = F.nll_loss(outputs, labels)
loss.backward()
optimizer.step()
| https://stackoverflow.com/questions/61854692/ |
Is it mandatory in pytorch to add modules to ModuleList to access its parameters | I read some posts about ModuleList, and all of them said that adding modules to a ModuleList gives access to the parameters of the neural network. But in the “Training a classifier” example of the 60 mins blitz pytorch tutorial, the modules are not added to any ModuleList, and still the parameters could be accessed using
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
This is confusing. Please clarify how the parameters are accessible even though the modules have not been added to any ModuleList
import torch
import torch.nn as nn

class mmodel(nn.Module):
def __init__(self):
super(mmodel,self).__init__()
self.lst=[]
self.lst.append(nn.Linear(1,1))
self.lst.append(nn.Linear(1,1))
self.mlist = nn.ModuleList([nn.Linear(2,1,bias=False),nn.Linear(3,1)])
def forward(self,x):
for m in self.lst:
print(type(m))
x = m(x)
for m in self.mlist:
print(type(m))
mm = mmodel()
X = torch.rand(4, 1) # a dummy input matching Linear(1, 1)
mm(X)
print(list(mm.parameters()))
<class 'torch.nn.modules.linear.Linear'>
<class 'torch.nn.modules.linear.Linear'>
<class 'torch.nn.modules.linear.Linear'>
<class 'torch.nn.modules.linear.Linear'>
[Parameter containing:
tensor([[0.2302, 0.3712]], requires_grad=True), Parameter containing:
tensor([[-0.3451, -0.0274, 0.3990]], requires_grad=True), Parameter containing:
tensor([0.3258], requires_grad=True)]
As evident from the above output, the parameters of modules added to a plain Python list are not returned by mm.parameters(); only the parameters of the modules added to the ModuleList are. (The modules in the plain list are still usable in the forward method, as the printed types show.)
| Calling module.parameters() lists all nn.Parameter of the module. Concretely, every attribute on the module that is an instance of nn.Parameter will be in that list. Additionally to listing all the parameters of that module, it will also list all parameters of the submodules (unless module.parameters(recurse=False) is used). That means it will also collect all parameters of every attribute that is an instance of nn.Module, which includes all subclasses.
However, if you assign an ordinary list of modules to your module, they won't be included since that is an instance of list, but not nn.Module. For convenience, nn.ModuleList can be used in place of a regular Python list. nn.ModuleList is an instance of nn.Module but acts similar to a list, albeit much more restricted.
Let's take a look at an example to understand what's considered a submodule:
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
# This is an nn.Module, therefore it is considered a submodule
self.single_linear = nn.Linear(10, 20)
# This is a list, that happens to contain nn.Modules.
# This is not a submodule.
self.linears = [nn.Linear(20, 10), nn.Linear(20, 30), nn.Linear(20, 40)]
# A list with ints, clearly not a submodule either.
self.numbers = [0, 1, 2, 3]
# This is a module, but acts like an ordinary list
# So it's a submodule as well.
self.linears_module_list = nn.ModuleList(
[nn.Linear(20, 10), nn.Linear(20, 30), nn.Linear(20, 40)]
)
In this model we have assigned 4 new attributes, single_linear, linears, numbers and linears_module_list. When they are assigned, PyTorch checks whether they are instances of nn.Module, and if they are, they will be registered as submodules.
We can verify that (leaving out numbers, because that is pretty clear):
model = Model()
isinstance(model.single_linear, nn.Module) # => True
isinstance(model.linears, nn.Module) # => False
isinstance(model.linears, list) # => True
isinstance(model.linears_module_list, nn.Module) # => True
isinstance(model.linears_module_list, list) # => False
Only single_linear and linears_module_list are in fact submodules, as they are the only attributes that are instances of nn.Module. The submodules can be seen with model._modules, but that's an implementation detail and you shouldn't rely on it.
# The registered modules
model._modules
# => OrderedDict([
# ('single_linear', Linear(in_features=10, out_features=20, bias=True)),
# ('linears_module_list', ModuleList(
# (0): Linear(in_features=20, out_features=10, bias=True)
# (1): Linear(in_features=20, out_features=30, bias=True)
# (2): Linear(in_features=20, out_features=40, bias=True)
# )
# )])
| https://stackoverflow.com/questions/61855285/ |
BERT encoding layer produces same output for all inputs during evaluation (PyTorch) | I don't understand why my BERT model returns the same output during evaluation. The output of my model during training seems correct, as the values were different, but it is exactly the same for every input during evaluation.
Here is my BERT model class
class BERTBaseUncased(nn.Module):
def __init__(self):
super(BERTBaseUncased, self).__init__()
self.bert = BertModel.from_pretrained("bert-base-uncased")
self.bert_drop = nn.Dropout(0.3)
self.out = nn.Linear(768, 4)
def forward(self, ids, mask, token_type_ids):
_, o2 = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids) # Use one of the outputs
bo = self.bert_drop(o2)
return self.out(bo)
My dataset class
class BERTDataset:
def __init__(self, review, target, tokenizer, classes=4):
self.review = review
self.target = target
self.tokenizer = tokenizer
self.max_len = max_len
self.classes = classes
def __len__(self):
return len(self.review)
def __getitem__(self, item):
review = str(self.review)
review = " ".join(review.split())
inputs = self.tokenizer.encode_plus(review, None, add_special_tokens=True, max_length= self.max_len,
pad_to_max_length=True, return_token_type_ids=True,
return_attention_masks=True)
ids = inputs["input_ids"]
mask = inputs["attention_mask"]
token_type_ids = inputs["token_type_ids"]
return {
'ids': torch.tensor(ids, dtype=torch.long),
'mask': torch.tensor(mask, dtype=torch.long),
'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),
'targets': torch.tensor(to_categorical(self.target[item], self.classes), dtype=torch.float)
}
My evaluation function
def eval_fn(data_loader, model, device):
model.eval()
total_loss = 0.0
with torch.no_grad():
for bi, d in tqdm(enumerate(data_loader), total=len(data_loader)):
ids = d['ids']
token_type_ids = d['token_type_ids']
mask = d['mask']
targets = d['targets']
ids = ids.to(device, dtype=torch.long)
token_type_ids = token_type_ids.to(device, dtype=torch.long)
mask = mask.to(device, dtype=torch.long)
targets = targets.to(device, dtype=torch.float)
outputs = model(
ids=ids,
mask=mask,
token_type_ids=token_type_ids
)
loss = loss_fn(outputs, targets)
total_loss += loss.item()
And my training function
def train_fn(data_loader, model, optimizer, device, scheduler):
model.train()
total_loss = 0.0
for bi, d in tqdm(enumerate(data_loader), total=len(data_loader)):
ids = d['ids']
token_type_ids = d['token_type_ids']
mask = d['mask']
targets = d['targets']
ids = ids.to(device, dtype=torch.long)
token_type_ids = token_type_ids.to(device, dtype=torch.long)
mask = mask.to(device, dtype=torch.long)
targets = targets.to(device, dtype=torch.float)
optimizer.zero_grad()
outputs = model(
ids=ids,
mask=mask,
token_type_ids=token_type_ids
)
loss = loss_fn(outputs, targets)
total_loss += loss.item()
loss.backward()
optimizer.step()
scheduler.step()
return total_loss/len(data_loader)
Thanks!
| In case anybody else has the problem, perhaps you forgot to use one of the recommended learning rates from the official paper: 5e-5, 3e-5, 2e-5
Gradients seem to polarize if the learning rate is too high, such as 0.01, causing the model to repeatedly produce the same logits for the validation set.
| https://stackoverflow.com/questions/61855486/ |
Difficulty in Implementing a simple single-layer RNN using Pytorch's base “nn.Linear” class | While working on a simple RNN using PyTorch's nn.Linear class, I first initialized my weights as
self.W_x = nn.Linear(self.input_dim, self.hidden_dim, bias=True)
self.W_h = nn.Linear(self.hidden_dim, self.hidden_dim, bias=True)
Now in the main step where I am getting the result of the current state by using the previous state and the values of the weights using this code statement
h_t = np.tanh((inp * self.W_x) + (prev_h * self.W_h))
So here I am getting the python error as shown below
TypeError: mul(): argument 'other' (position 1) must be Tensor, not Linear
Can anyone help me with this?
| Your W_x and W_h are not weights, but linear layers, which use a weight and bias (since bias=True). They need to be called as a function.
Furthermore, you cannot mix NumPy operations with PyTorch tensors here: if you convert your tensors to NumPy arrays, you can't backpropagate through them, since only PyTorch operations are tracked in the computational graph. There is no need for np.tanh anyway, as PyTorch has torch.tanh as well.
h_t = torch.tanh(self.W_x(inp) + self.W_h(prev_h))
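For completeness, a small runnable sketch of one recurrent step (the dimensions are made up for illustration):
import torch
import torch.nn as nn
batch, input_dim, hidden_dim = 4, 8, 16
W_x = nn.Linear(input_dim, hidden_dim, bias=True)
W_h = nn.Linear(hidden_dim, hidden_dim, bias=True)
inp = torch.randn(batch, input_dim)
prev_h = torch.zeros(batch, hidden_dim)
h_t = torch.tanh(W_x(inp) + W_h(prev_h)) # [batch, hidden_dim]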
| https://stackoverflow.com/questions/61858053/ |
How to handle class imbalance in multi-label classification using pytorch | We are attempting to implement multi-label classification using CNN in pytorch. We have 8 labels and around 260 images using a 90/10 split for train/validation sets.
The classes are highly imbalanced with the most frequent class occurring in over 140 images. On the other hand, the least frequent class occurs in less than 5 images.
We attempted BCEWithLogitsLoss function initially that led to the model predicting the same label for all images.
We then implemented a focal loss approach to handle class imbalance as follows:
import torch.nn as nn
import torch
class FocalLoss(nn.Module):
def __init__(self, alpha=1, gamma=2):
super(FocalLoss, self).__init__()
self.alpha = alpha
self.gamma = gamma
def forward(self, outputs, targets):
bce_criterion = nn.BCEWithLogitsLoss()
bce_loss = bce_criterion(outputs, targets)
pt = torch.exp(-bce_loss)
focal_loss = self.alpha * (1 - pt) ** self.gamma * bce_loss
return focal_loss
This resulted in the model predicting empty sets (no labels) for every image, since it could not reach a confidence greater than 0.5 for any class.
Is there an approach in PyTorch to help address this situation?
| There are basically three ways of dealing with this.
Discard data from the more common class
Weight minority class loss values more heavily
Oversample the minority class
Option 1 is implemented by selecting the files you include in your Dataset.
Option 2 is implemented with the pos_weight parameter for BCEWithLogitsLoss
Option 3 is implemented with a custom Sampler passed to your Dataloader (options 2 and 3 are sketched below)
For deep learning, oversampling typically works best.
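A minimal sketch of options 2 and 3 (the class counts and sample weights below are placeholders; compute them from your actual labels):
import torch
import torch.nn as nn
from torch.utils.data import WeightedRandomSampler
# Option 2: per-class positive weights for 8 labels over 260 images
pos_counts = torch.tensor([140., 90., 60., 40., 25., 15., 8., 4.]) # placeholder counts
neg_counts = 260 - pos_counts
criterion = nn.BCEWithLogitsLoss(pos_weight=neg_counts / pos_counts)
# Option 3: oversample images that contain rare labels
sample_weights = torch.rand(260) # placeholder: one weight per image, higher for rare labels
sampler = WeightedRandomSampler(sample_weights, num_samples=260, replacement=True)
# loader = torch.utils.data.DataLoader(dataset, batch_size=16, sampler=sampler)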
| https://stackoverflow.com/questions/61879612/ |
Weird behavior when calling cuda() on different tensors in pytorch | I am trying to train a pytorch neural network on a GPU device. In order to do so, I load my inputs and network onto the default CUDA-enabled GPU device. However, when I load my inputs, the model's weights do not stay cuda tensors. Here is my train function:
def train(network: nn.Module, name: str, learning_cycles: dict, num_epochs):
# check we have a working gpu to train on
assert(torch.cuda.is_available())
# load model onto gpu
network = network.cuda()
# load train and test data with a transform
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,
shuffle=True, num_workers=2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(network.parameters(), lr=0.001, momentum=0.9)
for epoch in range(num_epochs):
for i, data in enumerate(train_loader, 0):
inputs, labels = data
# load inputs and labels onto gpu
inputs, labels = inputs.cuda(), labels.cuda()
optimizer.zero_grad()
outputs = network(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
When calling train, I get the following error.
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Interestingly, when I delete the line inputs, labels = inputs.cuda(), labels.cuda() I get the error RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
I would very much like to train my network, and I have searched the internet to no avail. Any good ideas?
| Given that a device mismatch crops up regardless of the device the inputs are on, it's likely that some of your model's parameters are not being moved over to the GPU when you call network = network.cuda(). You have model parameters on both the CPU and the GPU.
Post your model code. It's likely you have a PyTorch module in an incorrect container.
Lists of modules should be in an nn.ModuleList. Modules in a plain Python list will not be transferred over. Compare
layers1 = [nn.Linear(256, 256), nn.Linear(256, 256), nn.Linear(256, 256)]
layers2 = nn.ModuleList([nn.Linear(256, 256), nn.Linear(256, 256), nn.Linear(256, 256)])
If you called model.cuda() on a model with the above two lines, the layers in layer1 would remain on the CPU, while the layers in layer2 would be moved to the GPU.
Similarly, a list of nn.Parameter objects should be contained in an nn.ParameterList object.
There's also nn.ModuleDict and nn.ParameterDict for dictionary containers.
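A quick way to confirm the diagnosis (a short sketch) is to list any parameters that stayed on the CPU after calling .cuda():
for name, p in network.named_parameters():
    if not p.is_cuda:
        print(name, p.device) # anything printed here lives in a wrong container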
| https://stackoverflow.com/questions/61880544/ |
Training models interactively in Pytorch | I need to train two models in parallel. Each model has a different activation function with trainable parameters. I want to train model one and model two such that the parameters of the activation function in model one (e.g., alpha1) are separated from the parameters in model two (e.g., alpha2) by a gap of 2, i.e., |alpha_1 - alpha_2| > 2. I wonder how I could include this constraint in the loss function for training.
| Example module definition
I will use torch.nn.PReLU as the parametric activation you talk about.
get_weight is created for convenience.
import torch
class Module(torch.nn.Module):
def __init__(self, in_features, out_features):
super().__init__()
self.input = torch.nn.Linear(in_features, 2 * in_features)
self.activation = torch.nn.PReLU()
self.output = torch.nn.Linear(2 * in_features, out_features)
def get_weight(self):
return self.activation.weight
def forward(self, inputs):
return self.output(self.activation(self.input(inputs)))
Modules and setup
Here I'm using one optimizer to optimize the parameters of both modules you talk about. criterion can be mean squared error, cross-entropy, or anything else you need.
import itertools

module1 = Module(20, 1)
module2 = Module(20, 1)
optimizer = torch.optim.Adam(
itertools.chain(module1.parameters(), module2.parameters())
)
critertion = ...
Training
Here is a single step; you should pack it in a for-loop over your data, as is usually done. Hopefully it's enough for you to get the idea:
inputs = ...
targets = ...
output1 = module1(inputs)
output2 = module2(inputs)
loss1 = criterion(output1, targets)
loss2 = criterion(output2, targets)
total_loss = loss1 + loss2
total_loss += torch.nn.functional.relu(
2 - torch.abs(module1.get_weight() - module2.get_weight()).sum()
)
total_loss.backward()
optimizer.step()
This line is what you are after in this case:
total_loss += torch.nn.functional.relu(
2 - torch.abs(module1.get_weight() - module2.get_weight()).sum()
)
relu is used so the network won't reap infinite benefit solely from creating divergent weights. If it weren't there, the loss would become more and more negative the greater the difference between the weights. In this case the bigger the difference the better, but it makes no difference once the gap is greater than or equal to 2.
You may have to increase 2 to 2.1 or something if you have to pass the threshold of 2 as the incentive to optimize the value when it's close to 2.0 would be small.
Edit
Without an explicitly given threshold it might be hard, but maybe something like this would work:
w1, w2 = module1.get_weight(), module2.get_weight()
total_loss = (
(torch.abs(w1) + torch.abs(w2)).sum()
+ (1 / torch.abs(w1) + 1 / torch.abs(w2)).sum()
- torch.abs(w1 - w2).sum()
)
It's kinda hackish for the network, but might be worth a try (if you apply additional L2 regularization).
In essence, this loss has its optimum at -inf/+inf pairs of weights in the corresponding positions and will never be smaller than zero.
For those weights
weights_a = torch.tensor([-1000.0, 1000, -1000, 1000, -1000])
weights_b = torch.tensor([1000.0, -1000, 1000, -1000, 1000])
Loss for each part will be:
(torch.abs(weights_a) + torch.abs(weights_b)).sum() # 10000
(1 / torch.abs(weights_a) + 1 / torch.abs(weights_b)).sum() # 0.0100
torch.abs(weights_a - weights_b).sum() # 10000
In this case the network can reap easy benefits just by making the weights larger with opposite signs in the two modules, disregarding what you actually want to optimize (a large L2 penalty on the weights of both modules might help, and I think the optimal values would be 1/-1 if the L2 alpha is equal to 1), and I suspect the network might be highly unstable.
With this loss function, if the network gets the sign of a large weight wrong, it will be heavily penalized.
In this case you would be left with L2 alpha parameter to tune to make it work, which is not that strict, but still requires a hyperparameter choice.
| https://stackoverflow.com/questions/61888716/ |
trouble importing Pytorch in Jupyter notebook | I am new to deep learning and I am trying to import PyTorch in a Jupyter notebook.
I installed Pytorch with the following lines of code in Anaconda Prompt.
conda create -n pytorch_p37 python=3.7
conda activate pytorch_p37
conda install pytorch torchvision -c pytorch
conda install jupyter
conda list
It all executed well, but importing PyTorch shows errors.
import torch
It throws the error below:
OSError: [WinError 126] The specified module could not be found
| !pip install torch
It worked for me in an Anaconda Jupyter notebook.
| https://stackoverflow.com/questions/61897853/ |
How to solve UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1]))? | I am trying to run code from a book I purchased about reinforcement learning in Pytorch.
The code should work according to the book, but for me the model doesn't converge and the reward remains negative. I also get the following user warning:
/home/user/.local/lib/python3.6/site-packages/ipykernel_launcher.py:30: UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
I am a complete beginner in Pytorch, but I assume a size([]) is not a valid tensor size? I think something is going wrong in the code, but after trying to work through it for a while, I have yet to find anything. I also messaged the book publisher some time ago, but I unfortunately did not hear back from them.
That's why I would like to ask here if anyone has ever seen this error and perhaps knows how to fix it?
The code is for implementing A2C reinforcement learning on a mountain car gym environment. It can also be found here: https://github.com/PacktPublishing/PyTorch-1.x-Reinforcement-Learning-Cookbook/blob/master/Chapter08/chapter8/actor_critic_mountaincar.py
'''
Source codes for PyTorch 1.0 Reinforcement Learning (Packt Publishing)
Chapter 8: Implementing Policy Gradients and Policy Optimization
Author: Yuxi (Hayden) Liu
'''
import torch
import gym
import torch.nn as nn
import torch.nn.functional as F
env = gym.make('MountainCarContinuous-v0')
class ActorCriticModel(nn.Module):
def __init__(self, n_input, n_output, n_hidden):
super(ActorCriticModel, self).__init__()
self.fc = nn.Linear(n_input, n_hidden)
self.mu = nn.Linear(n_hidden, n_output)
self.sigma = nn.Linear(n_hidden, n_output)
self.value = nn.Linear(n_hidden, 1)
self.distribution = torch.distributions.Normal
def forward(self, x):
x = F.relu(self.fc(x))
mu = 2 * torch.tanh(self.mu(x))
sigma = F.softplus(self.sigma(x)) + 1e-5
dist = self.distribution(mu.view(1, ).data, sigma.view(1, ).data)
value = self.value(x)
return dist, value
class PolicyNetwork():
def __init__(self, n_state, n_action, n_hidden, lr=0.001):
self.model = ActorCriticModel(n_state, n_action, n_hidden)
self.optimizer = torch.optim.Adam(self.model.parameters(), lr)
def update(self, returns, log_probs, state_values):
"""
Update the weights of the Actor Critic network given the training samples
@param returns: return (cumulative rewards) for each step in an episode
@param log_probs: log probability for each step
@param state_values: state-value for each step
"""
loss = 0
for log_prob, value, Gt in zip(log_probs, state_values, returns):
advantage = Gt - value.item()
policy_loss = - log_prob * advantage
value_loss = F.smooth_l1_loss(value, Gt)
loss += policy_loss + value_loss
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
def predict(self, s):
"""
Compute the output using the continuous Actor Critic model
@param s: input state
@return: Gaussian distribution, state_value
"""
self.model.training = False
return self.model(torch.Tensor(s))
def get_action(self, s):
"""
Estimate the policy and sample an action, compute its log probability
@param s: input state
@return: the selected action, log probability, predicted state-value
"""
dist, state_value = self.predict(s)
action = dist.sample().numpy()
log_prob = dist.log_prob(action[0])
return action, log_prob, state_value
def actor_critic(env, estimator, n_episode, gamma=1.0):
"""
continuous Actor Critic algorithm
@param env: Gym environment
@param estimator: policy network
@param n_episode: number of episodes
@param gamma: the discount factor
"""
for episode in range(n_episode):
log_probs = []
rewards = []
state_values = []
state = env.reset()
while True:
state = scale_state(state)
action, log_prob, state_value = estimator.get_action(state)
action = action.clip(env.action_space.low[0],
env.action_space.high[0])
next_state, reward, is_done, _ = env.step(action)
total_reward_episode[episode] += reward
log_probs.append(log_prob)
state_values.append(state_value)
rewards.append(reward)
if is_done:
returns = []
Gt = 0
pw = 0
for reward in rewards[::-1]:
Gt += gamma ** pw * reward
pw += 1
returns.append(Gt)
returns = returns[::-1]
returns = torch.tensor(returns)
returns = (returns - returns.mean()) / (returns.std() + 1e-9)
estimator.update(returns, log_probs, state_values)
print('Episode: {}, total reward: {}'.format(episode, total_reward_episode[episode]))
break
state = next_state
import sklearn.preprocessing
import numpy as np
state_space_samples = np.array(
[env.observation_space.sample() for x in range(10000)])
scaler = sklearn.preprocessing.StandardScaler()
scaler.fit(state_space_samples)
def scale_state(state):
scaled = scaler.transform([state])
return scaled[0]
n_state = env.observation_space.shape[0]
n_action = 1
n_hidden = 128
lr = 0.0003
policy_net = PolicyNetwork(n_state, n_action, n_hidden, lr)
n_episode = 200
gamma = 0.9
total_reward_episode = [0] * n_episode
actor_critic(env, policy_net, n_episode, gamma)
| size([]) is valid, but it represents a single value, not an array, whereas size([1]) is a 1-dimensional array containing only one item. It is like comparing 5 to [5]. One solution to this is
returns = returns[::-1]
returns_amount = len(returns)
returns = torch.tensor(returns)
returns = (returns - returns.mean()) / (returns.std() + 1e-9)
returns.resize_(returns_amount, 1)
This converts returns into a 2-dimensional array, so that each Gt you get from it will be a 1-dimensional array, not a float.
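A quick illustration of the scalar vs. 1-dimensional distinction (just a sketch):
import torch

scalar = torch.tensor(5.0)    # shape torch.Size([]) - a 0-d tensor holding one value
vector = torch.tensor([5.0])  # shape torch.Size([1]) - a 1-d tensor with one item
print(scalar.shape, vector.shape)  # torch.Size([]) torch.Size([1])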
| https://stackoverflow.com/questions/61912681/ |
How to build a model to predict a graph (not a image) in time series? | There is an adjacent matrix dataset that is based on time series. I would like to know if it is possible to build a neural network model to predict tn time point's matrix by using the previous time-series data. In my opinion, traditional models such as CNN may not fit for the sparse matrix graph.
| Maybe you should take a look at Graph Neural Networks (especially Spatial-Temporal Graph Networks). They use temporal information about graphs and their adjacency matrices to predict future node states, such as the values at the next time step.
You can read this survey paper as a starting point and follow the works it cites from there.
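As a minimal sketch of the core idea (not any specific paper's method), a single graph-convolution step combines each node's features with its neighbours' via the adjacency matrix; stacking such steps over time gives a spatial-temporal model:
import torch

# hypothetical 5-node graph at one time step
A = torch.rand(5, 5)                       # (normalized) adjacency matrix
X = torch.rand(5, 8)                       # node features
W = torch.nn.Parameter(torch.randn(8, 8))  # learnable transform
H = torch.relu(A @ X @ W)                  # neighbourhood aggregation + transform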
| https://stackoverflow.com/questions/61925599/ |
Autoencoder to encode features/categories of data | My question is regarding the use of autoencoders (in PyTorch). I have a tabular dataset with a categorical feature that has 10 different categories. Names of these categories are quite different - some names consist of one word, some of two or three words. But all in all I have 10 unique category names. What I'm trying to do is to create an autoencoder which will encode names of these categories - for example, if I have a category named 'Medium size class', I want to see if it is possible to train autoencoder to encode this name as something like 'mdmsc' or something like that. The use of it would be to found out which data points are hard to encode or not typical or something like that. I tried to adapt autoencoder architectures from various tutorials online however nothing seems to work for me or I simply do not know how to use them as they are all about images. Maybe someone has any idea how this type of autoencoder might be accomplished if it is at all possible?
Edit: here's the model I have so far (I just tried to adapt some architectures I found online):
class Autoencoder(nn.Module):
def __init__(self, input_shape, encoding_dim):
super(Autoencoder, self).__init__()
self.encode = nn.Sequential(
nn.Linear(input_shape, 128),
nn.ReLU(True),
nn.Linear(128, 64),
nn.ReLU(True),
nn.Linear(64, encoding_dim),
)
self.decode = nn.Sequential(
nn.Linear(encoding_dim, 64),
nn.ReLU(True),
nn.Linear(64, 128),
nn.ReLU(True),
nn.Linear(128, input_shape)
)
def forward(self, x):
x = self.encode(x)
x = self.decode(x)
return x
model = Autoencoder(input_shape=10, encoding_dim=5)
And also I use LabelEncoder() and then OneHotEncoder() to give these features/categories I mentioned numerical form. However, after training, the output is the same as the input (no changes on the category name), but when I try to use only the encoder part I'm unable to apply LabelEncoder() and then OneHotEncoder() because of dimension issues. I feel like maybe I can do something differently at the beginning, when I try to give those features numerical form, however I'm not sure what I should do.
| First you will need to set up a train_loader depending on your data that will iterate over your data points.
Then you need to figure out what kind of loss you are going to use and optimizer:
import torch.nn as nn
import torch.optim as optim

# mean-squared error loss
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)  # learning rate depends on your task
Once you have that ready, you can train your autoencoder with basic steps:
for epoch in range(epochs):
for features in train_loader:
optimizer.zero_grad()
outputs = model(features)  # compute reconstructions for this batch
train_loss = criterion(outputs, features)
train_loss.backward()
optimizer.step()
Once the model is done training, you can examine the embeddings using:
embedding = model.encode(your_input)
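For the encoding-only step, a sketch of how a single category name could be turned into the one-hot input the encoder expects (the names here are made up; in practice you would fit the encoder on all 10 category names so the vector has length 10, matching input_shape=10):
import numpy as np
import torch
from sklearn.preprocessing import OneHotEncoder

all_names = np.array([['Medium size class'], ['Small size class']])  # all 10 names in practice
enc = OneHotEncoder(sparse=False).fit(all_names)
x = torch.tensor(enc.transform([['Medium size class']]), dtype=torch.float32)  # shape (1, n_categories)
embedding = model.encode(x)  # shape (1, encoding_dim)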
| https://stackoverflow.com/questions/61940062/ |
Pytorch runtime error: Cuda Out of memory. Works fine with jupyter notebook but doesn't as a script | I have a special kind of problem. I am able to run the code in a Jupyter notebook perfectly fine with no OOM error. However, when I run the same code as a script in Linux, it gives me the OOM error. Has anyone had the same issue? I tried gc.collect() and torch.cuda.empty_cache() inside the code and nothing helps.
It always gives me this error.
RuntimeError: CUDA out of memory. Tried to allocate 1.30 GiB (GPU 0; 7.79 GiB total capacity; 4.80 GiB already allocated; 922.69 MiB free; 6.12 GiB reserved in total by PyTorch
The code:
def lemmatize(phrase):
"""Return lematized words"""
spa = spacy.load("en_core_web_sm")
return " ".join([word.lemma_ for word in spa(phrase)])
def reading_csv(path_to_csv):
"""Return text column in csv"""
data = pd.read_csv(path_to_csv)
ctx_paragraph = []
for txt in data['text']:
if not pd.isna(txt):
ctx_paragraph.append(txt)
return ctx_paragraph
def processing_question(ques, paragraphs, domain_lemma_cache, domain_pickle):
"""Return answer"""
#Lemmatizing whole csv text column
lemma_cache = domain_lemma_cache
if not os.path.isfile(lemma_cache):
lemmas = [lemmatize(par) for par in tqdm(paragraphs)]
df = pd.DataFrame(data={'context': paragraphs, 'lemmas': lemmas})
df.to_feather(lemma_cache)
df = pd.read_feather(lemma_cache)
paragraphs = df.context
lemmas = df.lemmas
#Vectorizor cache
if not os.path.isfile(VEC_PICKLE_LOC):
vectorizer = TfidfVectorizer(
stop_words='english', min_df=5, max_df=.5, ngram_range=(1, 3))
vectorizer.fit_transform(lemmas)
pickle.dump(vectorizer, open(VEC_PICKLE_LOC, "wb"))
#Vectorized lemmas cache cache
if not os.path.isfile(domain_pickle):
tfidf = vectorizer.fit_transform(lemmas)
pickle.dump(tfidf, open(domain_pickle, "wb"))
vectorizer = pickle.load(open(VEC_PICKLE_LOC, "rb"))
tfidf = pickle.load(open(domain_pickle, "rb"))
question = ques
query = vectorizer.transform([lemmatize(question)])
(query > 0).sum(), vectorizer.inverse_transform(query)
scores = (tfidf * query.T).toarray()
results = (np.flip(np.argsort(scores, axis=0)))
qapipe = pipeline('question-answering',
model='distilbert-base-uncased-distilled-squad',
tokenizer='bert-base-uncased',
device=0)
candidate_idxs = [(i, scores[i]) for i in results[0:10, 0]]
contexts = [(paragraphs[i], s) for (i, s) in candidate_idxs if s > 0.01]
question_df = pd.DataFrame.from_records([{
'question': question,
'context': ctx
} for (ctx, s) in contexts])
preds = qapipe(question_df.to_dict(orient="records"))
answer_df = pd.DataFrame.from_records(preds)
answer_df["context"] = question_df["context"]
answer_df = answer_df.sort_values(by="score", ascending=False)
return answer_df
| I had a similar thing happen to me recently.
I would run my model in a Jupyter notebook, on an AWS EC2 p2.xlarge instance, and the model would run correctly. Then, I would ssh into the same instance, re-run a .py script of the same model, and receive the OOM errors that you described.
All I had to do was reset the kernel of the Jupyter notebook to get the .py script to work.
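If the script still hits OOM while the notebook works, it is worth checking whether a notebook kernel is still alive and holding GPU memory. From inside the script you can print what PyTorch itself is holding (a small sketch; on older PyTorch versions memory_reserved is called memory_cached):
import torch

print(torch.cuda.memory_allocated() / 1024**2, 'MiB allocated by tensors')
print(torch.cuda.memory_reserved() / 1024**2, 'MiB reserved by the caching allocator')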
| https://stackoverflow.com/questions/61944703/ |
Kernel size can't be greater than actual input size | I have data with depth = 3, and I want to pass it through 3 convolution layers with 3x3x3 kernels each.
My current code is below. The first input is
[batch_size=10, in_channels=1, depth=3, height=128, width=256]
and I notice after the first conv3d layer the output is [10,8,1,126,254]. Obviously it has now depth 1 and doesn't accept it for another 3x3x3 layer. How can I achieve this?
class CNet(nn.Module):
def __init__(self, **kwargs):
super().__init__()
self.conv1 = nn.Conv3d(1, 8, kernel_size=3, stride=1, padding=0)
self.conv2 = nn.Conv3d(8, 16, kernel_size=3, stride=1, padding=0)
self.conv3 = nn.Conv3d(16, 32, kernel_size=3, stride=1, padding=0)
self.fc1 = nn.Linear(value, 2)
def forward(self, X):
X = F.relu(self.conv1(X))
X = F.relu(self.conv2(X))
X = F.max_pool2d(X,2)
X = self.conv3(X)
X = F.max_pool2d(X,2)
X = self.fc1(X)
return F.softmax(X,dim =1)
| You need to use padding. If you only want to pad the input for the convolutions after the first one and only in the depth dimensions to get the minimum dimension of 3, you would use padding=(1, 0, 0) (it's 1 because the same padding is applied to both sides, i.e. (padding, input, padding) along that dimension).
self.conv2 = nn.Conv3d(8, 16, kernel_size=3, stride=1, padding=(1, 0, 0))
self.conv3 = nn.Conv3d(16, 32, kernel_size=3, stride=1, padding=(1, 0, 0))
However, it is common to use padding=1 for all dimensions when using kernel_size=3, because that keeps the dimensions unchanged, which makes it much easier to build deeper networks, as you don't need to worry about the sizes suddenly getting too small, as already happened for your depth dimension. Also, when no padding is used, the corners are only included in a single calculation, whereas all other elements contribute to multiple calculations. It is recommended to use kernel_size=3 and padding=1 for all your convolutions.
self.conv1 = nn.Conv3d(1, 8, kernel_size=3, stride=1, padding=1)
self.conv2 = nn.Conv3d(8, 16, kernel_size=3, stride=1, padding=1)
self.conv3 = nn.Conv3d(16, 32, kernel_size=3, stride=1, padding=1)
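As a quick sanity check (a sketch, not part of the model above), padding=1 with kernel_size=3 keeps every spatial dimension, including the depth of 3:
import torch

x = torch.randn(10, 1, 3, 128, 256)  # batch x channels x depth x height x width
conv = torch.nn.Conv3d(1, 8, kernel_size=3, stride=1, padding=1)
print(conv(x).shape)  # torch.Size([10, 8, 3, 128, 256]) - depth stays at 3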
| https://stackoverflow.com/questions/61945404/ |
Unable to import torch (ImportError: libcudart.so.10.0) | I'm currently working on a Nvidia Jetson Nano and I'm not very familiar with Linux. I am trying to run a python file which imports a package called torch. I have installed it alongside with torchvision while following the instructions from NVIDIA here.
When I run pip list on my terminal, I am able to see torch listed as one of the packages installed. However, I am unable to run the python file due to the error seen below. When I try to run it on python shell, the same error pops up.
FYI: Previously it had issues as the system was using python 2 by default but I have already fixed the path by switching to python 3 by editing the .bashrc file.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jiayi/.local/lib/python3.6/site-packages/torch/__init__.py", line 81, in <module>
from torch._C import *
ImportError: libcudart.so.10.0: cannot open shared object file: No such file or directory
I have tried uninstalling and installing via pip but to no avail. When I try to install the pytorch package (following the instructions from a github repo here), an error occurs as seen below and it is due to the same issue. It is able to detect that the torch package is installed but there seems to be an internal issue.
Requirement already satisfied: torch==1.4.0 from file:///home/jiayi/jetson-inference/build/torch-1.4.0-cp36-cp36m-linux_aarch64.whl in /home/jiayi/.local/lib/python3.6/site-packages (1.4.0)
[jetson-inference] cloning torchvision...
[sudo] password for jiayi:
Cloning into 'torchvision-36'...
remote: Enumerating objects: 71, done.
remote: Counting objects: 100% (71/71), done.
remote: Compressing objects: 100% (56/56), done.
remote: Total 8219 (delta 37), reused 29 (delta 15), pack-reused 8148
Receiving objects: 100% (8219/8219), 10.22 MiB | 3.60 MiB/s, done.
Resolving deltas: 100% (5631/5631), done.
[jetson-inference] building torchvision for Python 3.6...
Traceback (most recent call last):
File "setup.py", line 14, in <module>
import torch
File "/home/jiayi/.local/lib/python3.6/site-packages/torch/__init__.py", line 81, in <module>
from torch._C import *
ImportError: libcudart.so.10.0: cannot open shared object file: No such file or directory
[jetson-inference] installation complete, exiting with status code 0
[jetson-inference] to run this tool again, use the following commands:
$ cd <jetson-inference>/build
$ ./install-pytorch.sh
| I met the exact same problem. The problem seems to be CUDA 10.2. Downgrading to 10.0 does not help either. Probably the solution is to manually install everything from JetPack and make sure that the CUDA version installed is 10.0.
| https://stackoverflow.com/questions/61948074/ |
How to fine tune BERT on unlabeled data? | I want to fine tune BERT on a specific domain. I have texts of that domain in text files. How can I use these to fine tune BERT?
I am looking here currently.
My main objective is to get sentence embeddings using BERT.
| The important distinction to make here is whether you want to fine-tune your model, or whether you want to expose it to additional pretraining.
The former is simply a way to train BERT to adapt to a specific supervised task, for which you generally need on the order of 1000 or more samples including labels.
Pretraining, on the other hand, is basically trying to help BERT better "understand" data from a certain domain, by continuing its unsupervised training objective ([MASK]ing specific words and trying to predict what word should be there), for which you do not need labeled data.
If your ultimate objective is sentence embeddings, however, I would strongly suggest you have a look at Sentence Transformers, which is based on a slightly outdated version of Huggingface's transformers library, but primarily tries to generate high-quality embeddings. Note that there are ways to train with surrogate losses, where you try to emulate some form of loss that is relevant for embeddings.
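As a rough sketch of the Sentence Transformers workflow (the model name is just one example from their docs):
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bert-base-nli-mean-tokens')
embeddings = model.encode(['This is an example sentence.'])
print(embeddings.shape)  # (1, 768) for this model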
Edit: The author of Sentence-Transformers recently joined Huggingface, so I expect support to greatly improve over the upcoming months!
| https://stackoverflow.com/questions/61962710/ |
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first | The following is from a project that I'm doing in Udacity's Deep Learning program. The project is on generating TV scripts. The error that I encountered is the one below.
The following function is the one used after model training.
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
param decoder: The PyTorch Module that holds the trained neural network
param prime_id: The word id to start the first prediction
param int_to_vocab: Dict of word id keys to word values
param token_dict: Dict of puncuation tokens keys to puncuation values
param pad_value: The value used to pad a sequence
param predict_len: The length of text to generate
return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
after this the following code is run:
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
Upon running this code, I get the following error
TypeError Traceback (most recent call last)
<ipython-input-40-68a17c4d1704> in <module>()
7 """
8 pad_word = helper.SPECIAL_WORDS['PADDING']
----> 9 generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
10 print(generated_script)
3 frames
<ipython-input-39-b86c7a305356> in generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len)
53
54 # the generated word becomes the next "current sequence" and the cycle can continue
---> 55 current_seq = np.roll(current_seq, -1, 1)
56 current_seq[-1][-1] = word_i
57
<__array_function__ internals> in roll(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/numpy/core/numeric.py in roll(a, shift, axis)
1179
1180 """
-> 1181 a = asanyarray(a)
1182 if axis is None:
1183 return roll(a.ravel(), shift, 0).reshape(a.shape)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py in asanyarray(a, dtype, order)
136
137 """
--> 138 return array(a, dtype, copy=False, order=order, subok=True)
139
140
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in __array__(self, dtype)
490 def __array__(self, dtype=None):
491 if dtype is None:
--> 492 return self.numpy()
493 else:
494 return self.numpy().astype(dtype, copy=False)
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
Can anyone please help me out?
| np.roll(current_seq, -1, 1) requires the input to be a NumPy array, but current_seq is a tensor, so it tries to convert it to a NumPy array, which fails, because the tensor is on the GPU. In order to convert it to a NumPy array, you need to have the tensor on the CPU.
current_seq = np.roll(current_seq.cpu(), -1, 1)
| https://stackoverflow.com/questions/61964863/ |
Convert np array of arrays to torch tensor when inner arrays are of different sizes | I have several videos, which I have loaded frame by frame into a numpy array of arrays. For example, if I have 8 videos, they are converted into a numpy object array of 8 arrays, where each inner array has a different dimension depending on the number of frames of the individual video. When I print
array.shape
my output is (8,)
Now I would like to create a dataloader for this data, and for that I would like to convert this numpy array into a torch tensor. However when I try to convert it using the torch.from_numpy or even simply the torch.tensor functions I get the error
TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.
which I assume is because my inner arrays are of different sizes. One possible solution is to artificially add a dimension to my videos to make them be of the same size and then use np.stack but that may lead to possible problems later on. Is there any better solution?
Edit: Actually adding a dimension won't work because np.stack requires all dimensions to be the same.
Edit: Sample Array would be something like:
[ [1,2,3], [1,2], [1,2,3,4] ]
This is stored as a (3,) shaped np array. The real arrays are actually 4-dimensional( Frames x Height x Width x Channels), so this is just an example.
| You can use the RNN utility function pad_sequence to make them the same size.
ary
array([list([1, 2, 3]), list([1, 2]), list([1, 2, 3, 4])], dtype=object)
from torch.nn.utils.rnn import pad_sequence
t = pad_sequence([torch.tensor(x) for x in ary], batch_first=True)
t
tensor([[1, 2, 3, 0],
[1, 2, 0, 0],
[1, 2, 3, 4]])
t.shape
torch.Size([3, 4])
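The same works for the real 4-dimensional frames, since pad_sequence only pads along the first (frame) dimension as long as the trailing dimensions match:
import torch
from torch.nn.utils.rnn import pad_sequence

videos = [torch.randn(f, 64, 64, 3) for f in (5, 3, 7)]  # frames x H x W x C, hypothetical sizes
batch = pad_sequence(videos, batch_first=True)
print(batch.shape)  # torch.Size([3, 7, 64, 64, 3])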
| https://stackoverflow.com/questions/61970047/ |
Can I use a PyTorch or Tensorflow project on a machine without GPU? | I'm a noob when it comes to Python and machine learning. I'm trying to run two different projects that have to do with something called Deep Image Matting:
https://github.com/Joker316701882/Deep-Image-Matting with Tensorflow
https://github.com/huochaitiantang/pytorch-deep-image-matting with Pytorch
I'm just trying to run the tests in these projects but I run into various problems. Can I run these on a machine without GPU? I thought that GPU is only for speeding up processing, but I'm only interested in seeing these run before getting a machine with GPU.
I apologize in advance, as I know I'm a total noob in this
When I try the Tensorflow project:
I get an error with this line gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction = args.gpu_fraction) probably because I was on TF2 and this requires TF1
After I downgraded to TF1, when I try to run the test I get W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
and InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'MaxPoolWithArgmax' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels> and now I'm stuck because I have no clue what this means
When I try the Pytorch project:
First I get this error: RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
So I added map_location=torch.device('cpu') when the model is loaded, but now I get RuntimeError: Error(s) in loading state_dict for VGG16:
size mismatch for conv6_1.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). And I'm stuck again
Can someone help?
thank you in advance!
| For the PyTorch one, there were two problems and it looks like you've solved the first one on your own with map_location. The second problem is that the weights in your checkpoint and the weights in your model don't have the same shape! A quick detour to the github repo; let's visit net.py in core. Take a look at lines 26 to 28:
# model released before 2019.09.09 should use kernel_size=1 & padding=0
# self.conv6_1 = nn.Conv2d(512, 512, kernel_size=1, padding=0,bias=True)
self.conv6_1 = nn.Conv2d(512, 512, kernel_size=3, padding=1,bias=True)
I'm guessing the checkpoint is loading weights where conv6_1 has a kernel size of 1 rather than 3, like the commented out line of code. So try uncommenting the line with kernel_size=1 and comment out the line with kernel_size=3.
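For reference, loading the CUDA-saved checkpoint on a CPU-only machine then looks roughly like this (model stands for an instance of the repository's network class with the kernel fix applied; the checkpoint path is illustrative):
import torch

state = torch.load('checkpoint.pth', map_location=torch.device('cpu'))  # path depends on your setup
model.load_state_dict(state)  # shapes should now match after the kernel_size fix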
| https://stackoverflow.com/questions/61974153/ |
Install torch on python 3.8.1 windows 10 | I have been reading this post How to install pytorch in windows? but no one answer work for me on the versio 3.8.1 of python. Anything else I can do?
| Maybe this can help you.
pip install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
Please ensure that you have met the prerequisites, depending on your package manager. You can also use Anaconda as a package manager since it installs all dependencies.
| https://stackoverflow.com/questions/61981438/ |
PyTorch C++ FrontEnd returning multiple Tensors in forward | I was wonder how can I return a std::vector<torch::Tensor> in my forward pass of a Module Class,
I read about the Macro of FORWARD_HAS_DEFAULT_ARGS in the docs, but didn’t really
understand how to use it, and also how to use it for making it possible to return a vector in return.
Thank you in advance.
| FORWARD_HAS_DEFAULT_ARGS is a C++ macro and according to documentation:
This macro enables a module with default arguments in its forward
method to be used in a Sequential module.
So it's not what you are after.
I assume you are returning multiple torch::Tensor values contained in a std::vector. You could just do that, but you should unpack it appropriately after returning, like this:
// Interprets the returned IValue as your desired return type
// You may have to use module.forward(inputs) depending on how you loaded the model
auto outputs = module->forward(inputs).toTensorVector();
// Print the first tensor
std::cout << outputs[0] << std::endl;
If you want to return multiple values of different types from the forward method, you should just return a std::tuple containing your desired types.
After this you can unpack it like this (for two torch::Tensor return values) (source here):
auto outputs = module->forward(inputs).toTuple();
torch::Tensor out1 = outputs->elements()[0].toTensor();
torch::Tensor out2 = outputs->elements()[1].toTensor();
You could also concatenate PyTorch tensors (if that's all you are returning and they are of the same shape) and use view or similar methods to unpack it. The C++ frontend is pretty similar to the Python one, all in all; refer to the docs if in doubt.
| https://stackoverflow.com/questions/61988134/ |
PyTorch error - 'numpy.ndarray' object has no attribute 'relu' | I am testing my CNN model, but keep on getting error "AttributeError: 'numpy.ndarray' object has no attribute 'relu'".
my dataset is extracted by below code:
import torch
from torch.utils.data import Dataset, DataLoader
from torch.autograd import Variable
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import numpy as np
class MyDataset(Dataset):
def __init__(self, data, target, transform=None):
self.data = torch.from_numpy(data).float()
self.target = torch.from_numpy(target).long()
self.transform = transform
def __getitem__(self, index):
x = self.data[index]
y = self.target[index]
if self.transform:
x = self.transform(x)
return x, y
def __len__(self):
return len(self.data)
numpy_data = np.random.randn(100,3,224,224) # 10 samples, image size = 224 x 224 x 3
numpy_target = np.random.randint(0,5,size=(100))
dataset = MyDataset(numpy_data, numpy_target)
my model is very simple as below:
class Network(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 36, kernel_size = 100)
def forward(self, t):
t = self.conv1(t)
print(t.shape)
print(type(t))
t = F.relu(t)
print(t.shape)
return t
I test model using below:
sample, target = next(iter(dataset))
network=Network()
pred = network(sample.unsqueeze(0))
I got below result and error:
torch.Size([1, 6, 125, 125])
<class 'torch.Tensor'>
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-40-37e58cfe971f> in <module>
----> 1 pred = network(sample.unsqueeze(0))
C:\Miniconda\envs\py37_default\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-30-0d0592ad705d> in forward(self, t)
22 print(t.shape)
23 print(type(t))
---> 24 t = F.relu(t)
25 print(t.shape)
26 #t = F.max_pool2d(t, kernel_size=2, stride=2)
AttributeError: 'numpy.ndarray' object has no attribute 'relu'
I could not figure out why: type(t) outputs <class 'torch.Tensor'>, so why is the error saying it is a numpy.ndarray?
| Where is F defined? F seems to be the numpy array.
Did you maybe mean to do:
import torch.nn.functional as F? Otherwise, the relu function isn't defined anywhere.
| https://stackoverflow.com/questions/61990429/ |
Pytorch : GPU Memory Leak | I speculated that I was facing a GPU memory leak in the training of Conv nets using PyTorch framework. Below image
To resolve it, I added -
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
which resolved the memory problem, as shown below -
but as I was using torch.nn.DataParallel, so I expect my code to utilise all the GPUs, but now it is utilising only the GPU:1.
Before using os.environ['CUDA_LAUNCH_BLOCKING'] = "1", the GPU utilisation was below (which is equally bad)-
On digging further, I come to know that, when we use torch.nn.DataParallel, we are supposed to not use CUDA_LAUNCH_BLOCKING', because it puts the network in some deadlock mechanism.
So, now I have come back again in GPU memory issue, because I think my code is not utilising that much memory which it is showing without setting CUDA_LAUNCH_BLOCKING=1.
My code to use torch.nn.DataParallel-
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model_transfer = nn.DataParallel(model_transfer.cuda(),device_ids=range(torch.cuda.device_count()))
model_transfer.to(device)
How to resolve the GPU memory issue?
Edit:
Minimal code -
image_dataset = datasets.ImageFolder(train_dir_path,transform = transform)
train_loader = torch.utils.data.DataLoader(image_dataset['train'], batch_size=batch_size,shuffle = True)
model_transfer = models.resnet18(pretrained=True)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model_transfer = nn.DataParallel(model_transfer.cuda(),device_ids=range(torch.cuda.device_count()))
model_transfer.to(device)
## Training function
for epoch in range(1, n_epochs+1):
for batch_idx, (data, target) in enumerate(train_loader):
if use_cuda:
data, target = data.to('cuda',non_blocking = True), target.to('cuda',non_blocking = True)
optimizer.zero_grad()
output = model(data)
loss = criterion(output,target)
loss.backward()
optimizer.step()
train_loss += ((1 / (batch_idx + 1)) * (loss.item() - train_loss))
## Validation loop same as training loop so not mentioning here
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(valid_loss_min, valid_loss))
torch.save(model.state_dict(), 'case_3_model.pt')
valid_loss_min = valid_loss
| So the way I resolved some of my CUDA out of memory issues is by making sure to delete useless tensors and trim tensors that may stay referenced for some hidden reason. The problem may arise from either requesting more memory than you have capacity for, or an accumulation of garbage data that you don't need but that somehow is left behind in memory.
One of the most important aspects of this memory management is how you are loading in the data. Instead of reading the entire dataset, it may be more memory efficient to read from disk (using memmap when reading npy) or to do batch loading, where you only read a batch of images or whatever data you have at a time. Although this may be computationally slower, it does give you flexibility for not going out and buying more GPUs just to get enough memory to run your code.
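As a sketch of that batch-loading idea (file name and shapes are made up), a Dataset can read one sample at a time from a memory-mapped .npy file instead of loading everything up front:
import numpy as np
import torch
from torch.utils.data import Dataset

class LazyNpyDataset(Dataset):
    def __init__(self, path):
        self.data = np.load(path, mmap_mode='r')  # nothing is read into RAM yet
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return torch.from_numpy(np.array(self.data[idx]))  # copies only this sample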
We're not sure how your code is structured in terms of reading the data or training your CNN so this is as much advice I can give.
| https://stackoverflow.com/questions/61991467/ |
pytorch-lightning train_dataloader runs out of data | I started to use pytorch-lightning and faced a problem with my custom data loaders:
I'm using my own dataset with a common torch.utils.data.DataLoader. Basically the dataset takes a path and loads the data corresponding to a given index that the DataLoader requests.
def train_dataloader(self):
train_set = TextKeypointsDataset(parameters...)
train_loader = torch.utils.data.DataLoader(train_set, batch_size, num_workers)
return train_loader
When I use the pytorch-lightning modules train_dataloader and training_step, everything runs fine. When I add val_dataloader and validation_step, I'm facing this error:
Epoch 1: 45%|████▌ | 10/22 [00:02<00:03, 3.34it/s, loss=5.010, v_num=131199]
ValueError: Expected input batch_size (1500) to match target batch_size (5)
In this case my dataset is really small (just 84 samples, to test functionality) and my batch size is 8. The datasets for training and validation have the same length (again just for testing purposes).
So in total it's 84 * 2 = 168, and 168 / 8 (batch size) = 21, which is roughly the total number of steps (22) shown above. This means that after running on the training dataset 10 times (10 * 8 = 80) the loader expects a new full batch of 8, but since there are only 84 samples I get an error (at least this is my current understanding).
I faced a similar problem in my own implementation (not using pytorch-lightning) and used this pattern to solve it. Basically I am resetting the iterator when running out of data:
try:
data = next(data_iterator)
source_tensor = data[0]
target_tensor = data[1]
except StopIteration: # reinitialize data loader if num_iteration > amount of data
data_iterator = iter(data_loader)
Right now it seems like I'm facing something similar. I don't know how to reset/reinitialize the data loader in pytorch-lightning when my train_dataloader runs out of data. I guess there must be a more sophisticated way I'm not familiar with. Thank you
| The solution was:
I used source_tensor = source_tensor.view(-1, self.batch_size, self.input_size), which led to some errors later on; now I'm using source_tensor = source_tensor.permute(1, 0, 2), which fixed the problem.
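A small illustration of the difference (the shapes are made up): view re-groups the underlying elements and silently mixes samples across the batch, while permute only swaps axes:
import torch

x = torch.arange(2 * 3 * 4).reshape(2, 3, 4)  # batch x seq_len x features
mixed = x.view(-1, 2, 4)       # same numbers, regrouped - batches get scrambled
swapped = x.permute(1, 0, 2)   # seq_len x batch x features - data intact
print(torch.equal(mixed[:, 0], swapped[:, 0]))  # False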
| https://stackoverflow.com/questions/62006977/ |
Confusion regarding batch size while using DataLoader in pytorch | I am new to pytorch.
I am training an ANN for classification on the MNIST dataset.
train_loader = DataLoader(train_data, batch_size=6000, shuffle=True)
I am confused. The dataset is of 60,000 images and I have set batch size of 6000 and my model has 30 epochs.
Will every epoch see only 6000 images or will every epoch see 10 batches of 6000 images?
| Every call to the DataLoader iterator will return a batch of images of size batch_size. Hence you will have 10 batches per epoch until you exhaust all the 60,000 images.
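You can verify this directly (a small check, assuming batch_size=6000 as in the question):
print(len(train_loader))           # 10 - number of batches per epoch
images, labels = next(iter(train_loader))
print(images.shape[0])             # 6000 - samples in one batch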
| https://stackoverflow.com/questions/62012673/ |
Pytorch: How to know if GPU memory being utilised is actually needed or is there a memory leak | I have 3 Tesla V100s (16 GB each). I am doing transfer learning using EfficientNet (63 million parameters) on images of (512, 512) with a batch size of 20.
My GPU memory utilisation is below -
As you can see, it has almost filled up all 3 GPUs (almost 80%).
My question is: is there any theoretical way of calculating whether the GPU memory utilisation shown is what the model actually requires at a certain image size and batch size, or whether there is a memory leak on my GPU?
| I'm not sure that is what you asked for but have you tried doing something like:
memory_usage = number_of_variables * memory_usage_per_variable.
So if you use torch.float32 tensors (4 bytes each) and you send one billion (1e9) values to the GPU with .cuda(), then you are using about 4 GB of memory. You can compare that with how much memory is available on your GPU.
Another sanity check would be to track memory usage on the GPU per iteration of your model; if it keeps growing, you have a memory leak.
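A sketch of how to check both numbers at runtime:
import torch

t = torch.randn(1000, 1000, device='cuda')  # a hypothetical tensor
print(t.nelement() * t.element_size())      # 4000000 bytes for this tensor
print(torch.cuda.memory_allocated())        # total bytes currently allocated by PyTorch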
Hope this helps.
| https://stackoverflow.com/questions/62013841/ |
Pytorch could not find module | I have installed pytorch with command:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch -y
Python complains regarding line import torch with message:
Could not find module 'C:\ProgramData\Anaconda3\envs\edx\lib\site-packages\torch\lib\caffe2_nvrtc.dll' (or one of its dependencies). Try using the full path with constructor syntax
This directory contains library caffe2_nvrtc.dll. What might be wrong and how to fix this error?
| I faced the same problem. If your OS is Windows, then I would recommend using Anaconda and installing PyTorch in a separate conda environment. A quick solution is to search for the nvcuda.dll file on Google and download it. If you are running the code in a Jupyter notebook, the error output will give you the complete path to the 'lib' folder of the conda environment. By default it is 'C:\ProgramData\Anaconda3\envs\edx\lib\site-packages\torch\lib'. Go to this directory and paste the file into this folder. Rerun your code. Hopefully it will run.
| https://stackoverflow.com/questions/62021601/ |
pip install torch killed at 99% -- Excessive memory usage | This is while I was installing torch on my laptop. It was getting killed continuously so I thought I will check the memory usage. It hanged my laptop, I had to take a picture with my phone.
If you can't see the image below, it shows pip using 5.8 GiB memory out of 7.8 GiB available. That was a sudden spike at 99%.
System Monitor pip memory usage
| If you are running low on memory, you could try pip install package --no-cache-dir
| https://stackoverflow.com/questions/62030345/ |
Is torch.as_tensor() the same as torch.from_numpy() for a numpy array on a CPU? | On a CPU, is torch.as_tensor(a) the same as torch.from_numpy(a) for a numpy array, a? If not, then why not?
From the docs for torch.as_tensor
if the data is an ndarray of the corresponding dtype and
the device is the cpu, no copy will be performed.
From the docs for torch.from_numpy:
The returned tensor and ndarray share the same memory. Modifications to
the tensor will be reflected in the ndarray and vice versa.
In both cases, any changes the resulting tensor changes the original numpy array.
a = np.array([[1., 2], [3, 4]])
t1 = torch.as_tensor(a)
t2 = torch.from_numpy(a)
t1[0, 0] = 42.
print(a)
# prints [[42., 2.], [3., 4.]]
t2[1, 1] = 55.
print(a)
# prints [[42., 2.], [3., 55.]]
Also, in both cases, attempting to resize_ the tensor results in an error.
| They are basically the same, except that as_tensor is more generic:
Contrary to from_numpy, it supports a wide range of data types, including lists, tuples, and native Python scalars.
as_tensor supports changing dtype and device directly, which is very convenient in practice since the default dtype of Torch tensor is float32, while for Numpy array it is float64.
as_tensor shares memory with the original data if and only if the original object is a Numpy array and the requested dtype, if any, is the same as the original data's. Those are the same conditions as for from_numpy, but they are always satisfied by design for the latter (see the short demo below).
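A small demo of both behaviours (just a sketch):
import numpy as np
import torch

a = np.array([1.0, 2.0])                      # float64 by default
t1 = torch.as_tensor(a)                       # shares memory, stays float64
t2 = torch.as_tensor(a, dtype=torch.float32)  # dtype change forces a copy
t1[0] = 9.0
print(a[0])  # 9.0 - t1 shares memory with a
t2[1] = 7.0
print(a[1])  # 2.0 - t2 is a copy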
| https://stackoverflow.com/questions/62033283/ |
How embedding_bag exactly works in PyTorch | in PyTorch, torch.nn.functional.embedding_bag seems to be the main function responsible for doing the real job of embedding lookup. On PyTorch's documentation, it has been mentioned that embedding_bag does its job > without instantiating the intermediate embeddings. What does that exactly mean? Does this mean for example when the mode is "sum" it does in-place summation? or it just means that no additional Tensors will be produced when calling embedding_bag but still from the system's point of view all the intermediate row-vectors are already fetched into the processor to be used for calculating the final Tensor?
| In the simplest case, torch.nn.functional.embedding_bag is conceptually a two step process. The first step is to create an embedding and the second step is to reduce (sum/mean/max, according to the "mode" argument) the embedding output across dimension 0. So you can get the same result that embedding_bag gives by calling torch.nn.functional.embedding, followed by torch.sum/mean/max. In the following example, embedding_bag_res and embedding_mean_res are equal.
>>> weight = torch.randn(3, 4)
>>> weight
tensor([[ 0.3987, 1.6173, 0.4912, 1.5001],
[ 0.2418, 1.5810, -1.3191, 0.0081],
[ 0.0931, 0.4102, 0.3003, 0.2288]])
>>> indices = torch.tensor([2, 1])
>>> embedding_res = torch.nn.functional.embedding(indices, weight)
>>> embedding_res
tensor([[ 0.0931, 0.4102, 0.3003, 0.2288],
[ 0.2418, 1.5810, -1.3191, 0.0081]])
>>> embedding_mean_res = embedding_res.mean(dim=0, keepdim=True)
>>> embedding_mean_res
tensor([[ 0.1674, 0.9956, -0.5094, 0.1185]])
>>> embedding_bag_res = torch.nn.functional.embedding_bag(indices, weight, torch.tensor([0]), mode='mean')
>>> embedding_bag_res
tensor([[ 0.1674, 0.9956, -0.5094, 0.1185]])
However, the conceptual two step process does not reflect how it's actually implemented. Since embedding_bag does not need to return the intermediate result, it doesn't actually generate a Tensor object for the embedding. It just goes straight to computing the reduction, pulling in the appropriate data from the weight argument according to the indices in the input argument. Avoiding the creation of the embedding Tensor allows for better performance.
So the answer to your question (if I understand it correctly)
it just means that no additional Tensors will be produced when calling embedding_bag but still from the system's point of view all the intermediate row-vectors are already fetched into the processor to be used for calculating the final Tensor?
is yes.
| https://stackoverflow.com/questions/62052734/ |
PyTorch Model Training: RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR | After training a PyTorch model on a GPU for several hours, the program fails with the error
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
Training Conditions
Neural Network: PyTorch 4-layer nn.LSTM with nn.Linear output
Deep Q Network Agent (Vanilla DQN with Replay Memory)
state passed into forward() has the shape (32, 20, 15), where 32 is the batch size
50 seconds per episode
Error occurs after about 583 episodes (8 hours) or 1,150,000 steps, where each step involves a forward pass through the LSTM model.
My code also has the following values set before the training began
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(0)
How can we troubleshoot this problem? Since this occurred 8 hours into the training, some educated guess will be very helpful here!
Thanks!
Update:
Commenting out the 2 torch.backends.cudnn... lines did not work. CUDNN_STATUS_INTERNAL_ERROR still occurs, but much earlier at around Episode 300 (585,000 steps).
torch.manual_seed(0)
#torch.backends.cudnn.deterministic = True
#torch.backends.cudnn.benchmark = False
np.random.seed(0)
System
PyTorch 1.6.0.dev20200525
CUDA 10.2
cuDNN 7604
Python 3.8
Windows 10
nVidia 1080 GPU
Error Traceback
RuntimeError Traceback (most recent call last)
<ipython-input-18-f5bbb4fdfda5> in <module>
57
58 while not done:
---> 59 action = agent.choose_action(state)
60 state_, reward, done, info = env.step(action)
61 score += reward
<ipython-input-11-5ad4dd57b5ad> in choose_action(self, state)
58 if np.random.random() > self.epsilon:
59 state = T.tensor([state], dtype=T.float).to(self.q_eval.device)
---> 60 actions = self.q_eval.forward(state)
61 action = T.argmax(actions).item()
62 else:
<ipython-input-10-94271a92f66e> in forward(self, state)
20
21 def forward(self, state):
---> 22 lstm, hidden = self.lstm(state)
23 actions = self.fc1(lstm[:,-1:].squeeze(1))
24 return actions
~\AppData\Local\Continuum\anaconda3\envs\rl\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
575 result = self._slow_forward(*input, **kwargs)
576 else:
--> 577 result = self.forward(*input, **kwargs)
578 for hook in self._forward_hooks.values():
579 hook_result = hook(self, input, result)
~\AppData\Local\Continuum\anaconda3\envs\rl\lib\site-packages\torch\nn\modules\rnn.py in forward(self, input, hx)
571 self.check_forward_args(input, hx, batch_sizes)
572 if batch_sizes is None:
--> 573 result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
574 self.dropout, self.training, self.bidirectional, self.batch_first)
575 else:
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
Update: Tried try... except on my code where this error occurs, and in addition to RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR, we also get a second traceback for the error RuntimeError: CUDA error: unspecified launch failure
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-4-e8f15cc8cf4f> in <module>
61
62 while not done:
---> 63 action = agent.choose_action(state)
64 state_, reward, done, info = env.step(action)
65 score += reward
<ipython-input-3-1aae79080e99> in choose_action(self, state)
58 if np.random.random() > self.epsilon:
59 state = T.tensor([state], dtype=T.float).to(self.q_eval.device)
---> 60 actions = self.q_eval.forward(state)
61 action = T.argmax(actions).item()
62 else:
<ipython-input-2-6d22bb632c4c> in forward(self, state)
25 except Exception as e:
26 print('error in forward() with state:', state.shape, 'exception:', e)
---> 27 print('state:', state)
28 actions = self.fc1(lstm[:,-1:].squeeze(1))
29 return actions
~\AppData\Local\Continuum\anaconda3\envs\rl\lib\site-packages\torch\tensor.py in __repr__(self)
152 def __repr__(self):
153 # All strings are unicode in Python 3.
--> 154 return torch._tensor_str._str(self)
155
156 def backward(self, gradient=None, retain_graph=None, create_graph=False):
~\AppData\Local\Continuum\anaconda3\envs\rl\lib\site-packages\torch\_tensor_str.py in _str(self)
331 tensor_str = _tensor_str(self.to_dense(), indent)
332 else:
--> 333 tensor_str = _tensor_str(self, indent)
334
335 if self.layout != torch.strided:
~\AppData\Local\Continuum\anaconda3\envs\rl\lib\site-packages\torch\_tensor_str.py in _tensor_str(self, indent)
227 if self.dtype is torch.float16 or self.dtype is torch.bfloat16:
228 self = self.float()
--> 229 formatter = _Formatter(get_summarized_data(self) if summarize else self)
230 return _tensor_str_with_formatter(self, indent, formatter, summarize)
231
~\AppData\Local\Continuum\anaconda3\envs\rl\lib\site-packages\torch\_tensor_str.py in __init__(self, tensor)
99
100 else:
--> 101 nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
102
103 if nonzero_finite_vals.numel() == 0:
RuntimeError: CUDA error: unspecified launch failure
| The error RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR is notoriously difficult to debug, but surprisingly often it's an out of memory problem. Usually, you would get the out of memory error, but depending on where it occurs, PyTorch cannot intercept the error and therefore not provide a meaningful error message.
A memory issue seems likely in your case, because you are using a while loop until the agent is done, which might take long enough that you run out of memory; it's just a matter of time. That can also possibly occur rather late, once the model's parameters in combination with a certain input are unable to finish in time.
You can avoid that scenario by limiting the number of allowed actions instead of hoping that the actor will be done in a reasonable time.
What you also need to be careful about, is that you don't occupy unnecessary memory. A common mistake is to keep computing gradients of the past states in future iterations. The state from the last iteration should be considered constant, since the current action should not affect past actions, therefore no gradients are required. This is usually achieved by detaching the state from the computational graph for the next iteration, e.g. state = state_.detach(). Maybe you are already doing that, but without the code it's impossible to tell.
Similarly, if you keep a history of the states, you should detach them and even more importantly put them on the CPU, i.e. history.append(state.detach().cpu()).
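As a toy sketch of detaching the state at each step (the linear layer just stands in for your network):
import torch

net = torch.nn.Linear(15, 15)
state = torch.zeros(1, 15)
for step in range(100):              # bound the number of steps
    next_state = net(state)
    state = next_state.detach()      # cut the graph so past steps can be freed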
| https://stackoverflow.com/questions/62067849/ |
Pytorch: IndexError: index out of range in self. How to solve? | This training code is based on the run_glue.py script found here:
# Set the seed value all over the place to make this reproducible.
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# Store the average loss after each epoch so we can plot them.
loss_values = []
# For each epoch...
for epoch_i in range(0, epochs):
# ========================================
# Training
# ========================================
# Perform one full pass over the training set.
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
# Measure how long the training epoch takes.
t0 = time.time()
# Reset the total loss for this epoch.
total_loss = 0
# Put the model into training mode. Don't be mislead--the call to
# `train` just changes the *mode*, it doesn't *perform* the training.
# `dropout` and `batchnorm` layers behave differently during training
# vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
model.train()
# For each batch of training data...
for step, batch in enumerate(train_dataloader):
# Progress update every 100 batches.
if step % 100 == 0 and not step == 0:
# Calculate elapsed time in minutes.
elapsed = format_time(time.time() - t0)
# Report progress.
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
# Unpack this training batch from our dataloader.
#
# As we unpack the batch, we'll also copy each tensor to the GPU using the
# `to` method.
#
# `batch` contains three pytorch tensors:
# [0]: input ids
# [1]: attention masks
# [2]: labels
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
# Always clear any previously calculated gradients before performing a
# backward pass. PyTorch doesn't do this automatically because
# accumulating the gradients is "convenient while training RNNs".
# (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
model.zero_grad()
# Perform a forward pass (evaluate the model on this training batch).
# This will return the loss (rather than the model output) because we
# have provided the `labels`.
# The documentation for this `model` function is here:
# https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
# The call to `model` always returns a tuple, so we need to pull the
# loss value out of the tuple.
loss = outputs[0]
# Accumulate the training loss over all of the batches so that we can
# calculate the average loss at the end. `loss` is a Tensor containing a
# single value; the `.item()` function just returns the Python value
# from the tensor.
total_loss += loss.item()
# Perform a backward pass to calculate the gradients.
loss.backward()
# Clip the norm of the gradients to 1.0.
# This is to help prevent the "exploding gradients" problem.
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# Update parameters and take a step using the computed gradient.
# The optimizer dictates the "update rule"--how the parameters are
# modified based on their gradients, the learning rate, etc.
optimizer.step()
# Update the learning rate.
scheduler.step()
# Calculate the average loss over the training data.
avg_train_loss = total_loss / len(train_dataloader)
# Store the loss value for plotting the learning curve.
loss_values.append(avg_train_loss)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epcoh took: {:}".format(format_time(time.time() - t0)))
# ========================================
# Validation
# ========================================
# After the completion of each training epoch, measure our performance on
# our validation set.
print("")
print("Running Validation...")
t0 = time.time()
# Put the model in evaluation mode--the dropout layers behave differently
# during evaluation.
model.eval()
# Tracking variables
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
# Evaluate data for one epoch
for batch in validation_dataloader:
# Add batch to GPU
batch = tuple(t.to(device) for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Telling the model not to compute or store gradients, saving memory and
# speeding up validation
with torch.no_grad():
# Forward pass, calculate logit predictions.
# This will return the logits rather than the loss because we have
# not provided labels.
# token_type_ids is the same as the "segment ids", which
# differentiates sentence 1 and 2 in 2-sentence tasks.
# The documentation for this `model` function is here:
# https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask)
# Get the "logits" output by the model. The "logits" are the output
# values prior to applying an activation function like the softmax.
logits = outputs[0]
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
# Calculate the accuracy for this batch of test sentences.
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
# Accumulate the total accuracy.
eval_accuracy += tmp_eval_accuracy
# Track the number of batches
nb_eval_steps += 1
# Report the final accuracy for this validation run.
print(" Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps))
print(" Validation took: {:}".format(format_time(time.time() - t0)))
print("")
print("Training complete!")
The error is as follows, while running the training for text classification using bert models came across the follow.
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1722 # remove once script supports set_grad_enabled
1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1725
1726
IndexError: index out of range in self
How can I fix it?
| I think you have a mismatch between the input dimension declared in torch.nn.Embedding and your actual input. torch.nn.Embedding is a simple lookup table that stores embeddings of a fixed dictionary and size.
Any input index less than zero or greater than the declared input dimension minus one raises this error.
Compare your input and the dimension mentioned in torch.nn.Embedding.
Attached code snippet to simulate the issue.
import torch
from torch import nn
input_dim = 10
embedding_dim = 2
embedding = nn.Embedding(input_dim, embedding_dim)
err = True
if err:
#Any input more than input_dim - 1, here input_dim = 10
#Any input less than zero
input_to_embed = torch.tensor([10])
else:
input_to_embed = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
embed = embedding(input_to_embed)
print(embed)
Hope this will solve your issue.
| https://stackoverflow.com/questions/62081155/ |
forward() takes 1 positional argument but 2 were given | I'm trying to build a Model using EfficientNet-B0.
The details of the Model are shown in the code below.
I got the following error when I tried to learn.
TypeError Traceback (most recent call last)
'''
<ipython-input-17-fb3850894108> in forward(self, *x)
24 #x: bs*N x 3 x 128 x 128
25 print(x.shape) #([384, 3, 224, 224])
---> 26 x = self.enc(x)
27 #x: bs*N x C x 4 x 4
28 shape = x.shape
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
98 def forward(self, input):
99 for module in self:
--> 100 input = module(input)
101 return input
102
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
TypeError: forward() takes 1 positional argument but 2 were given
I suspect that m.children() may have an effect.
If anyone knows the cause of this error, please let me know.
Thank you.
class Model(nn.Module):
def __init__(self, arch='efficientnet-b0', n=6, pre=True):
super().__init__()
m = EfficientNet.from_pretrained('efficientnet-b0')
#print(*list(m.children())
nc = m._fc.in_features
print(nc)
self.enc = nn.Sequential(*list(m.children())[:-2])
#nc = list(m.children())[-2].in_features
self.head = nn.Sequential(*list(m.children())[-2:])
self.head._fc = nn.Linear(nc, n)
#self.enc = nn.Sequential(*list(m.children()))
#print('fc_infeatures : {}'.format(nc))
#self.head = nn.Sequential(Flatten(),nn.Linear(nc,512),
# relu(),nn.BatchNorm1d(512), nn.Dropout(0.5),nn.Linear(512,n))
def forward(self, *x):
print(x[0].shape) #([32, 3, 224, 224])
shape = x[0].shape
n = len(x) # n = 12
#torch.stack直後では32*12*3*224*224(bs x N x 3 x 224 x 224)
x = torch.stack(x,1).view(-1,shape[-3],shape[-2],shape[-1])
#x: bs*N x 3 x 128 x 128
print(x.shape) #([384, 3, 224, 224])
x = self.enc(x)
#x: bs*N x C x 4 x 4
shape = x.shape
#concatenate the output for tiles into a single map
x = x.view(-1,n,shape[1],shape[2],shape[3]).permute(0,2,1,3,4).contiguous()\
.view(-1,shape[1],shape[2]*n,shape[3])
#x: bs x C x N*4 x 4
x = self.head(x)
return x
| Now you can easily get the network without the last layers by using the include_top parameter:
m = EfficientNet.from_pretrained('efficientnet-b0', include_top=False)
What it does, as can be seen in the code, is skip the forward pass through the last layers (AveragePool, Dropout, FC).
Other alternative approaches can be summarized as follows:
Define a new MyEfficientNet that overrides the forward method to avoid calling the last layers
Overwrite the layers you don't need with an identity layer. For example:
m._fc = nn.Identity()
The first approach should be preferred when using the EfficientNet as a feature extractor, while the others should be used when more customisation is needed.
| https://stackoverflow.com/questions/62084245/ |
Is this the right way to compute gradients of two losses from two different NN's in pytorch? | I have a NN defined in pytorch and I have created two instances of that net as self.actor_critic_r1 and self.actor_critic_r2. I calculate the losses of each net i.e. loss1 and loss2 and I sum it up and calculate the grads in the following way,
loss_r1 = value_loss_r1 + action_loss_r1 - dist_entropy_r1 * args.entropy_coef
loss_r2 = value_loss_r2 + action_loss_r2 - dist_entropy_r2 * args.entropy_coef
self.optimizer_r1.zero_grad()
self.optimizer_r2.zero_grad()
loss = loss_r1 + loss_r2
loss.backward()
self.optimizer_r1.step()
self.optimizer_r2.step()
clip_grad_norm_(self.actor_critic_r1.parameters(), args.max_grad_norm)
clip_grad_norm_(self.actor_critic_r2.parameters(), args.max_grad_norm)
Alternatively, should I update the loss individually like this,
self.optimizer_r1.zero_grad()
(value_loss_r1 + action_loss_r1 - dist_entropy_r1 * args.entropy_coef).backward()
self.optimizer_r1.step()
clip_grad_norm_(self.actor_critic_r1.parameters(), args.max_grad_norm)
self.optimizer_r2.zero_grad()
(value_loss_r2 + action_loss_r2 - dist_entropy_r2 * args.entropy_coef).backward()
self.optimizer_r2.step()
clip_grad_norm_(self.actor_critic_r2.parameters(), args.max_grad_norm)
I am not sure if this is the right approach to update a network with multiple losses; please share your suggestions.
| It should be the sum approach. If there is no interplay then the gradient of the 'wrong' loss will be zero for the 'wrong' optimizer anyway, and if there is interplay you likely want to optimize for that interplay.
Only if you know that there is interplay but you do not want to optimize for it should you use approach #2.
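One practical note, not from the original answer: in both snippets, clip_grad_norm_ is called after optimizer.step(), so the clipping never affects the update that was just applied; clipping belongs between backward() and step(). A minimal sketch of the summed approach with that ordering (variable names taken from the question):
self.optimizer_r1.zero_grad()
self.optimizer_r2.zero_grad()
loss = loss_r1 + loss_r2
loss.backward()
# clip before stepping, so the optimizers consume the clipped gradients
clip_grad_norm_(self.actor_critic_r1.parameters(), args.max_grad_norm)
clip_grad_norm_(self.actor_critic_r2.parameters(), args.max_grad_norm)
self.optimizer_r1.step()
self.optimizer_r2.step()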
| https://stackoverflow.com/questions/62102840/ |
PyTorch paste values into tensor by row index with increasing column index | I have a tensor output into which I want to put some values. I know the row that each value should go in, but I don't have an index tensor describing the columns. Instead, if there are k values that belong to one row, they should go in columns 0, 1, ..., k-1. This is perhaps better explained with an example:
import torch
n = 4
max_cols = 5
output = torch.zeros(n, max_cols)
row_idx = torch.tensor([0, 0, 0, 0, 1, 1, 2, 2, 2, 3])
values = torch.arange(len(row_idx)).float() # the values could be anything, not just arange
# output[??] = values
Here the first 4 values should be in output at [0, 0], ... [0, 3], the next 2 values at [1, 0], [1, 1], and so on.
Here's how I'm doing this now
_, counts = torch.unique(row_idx, return_counts=True)
range_ = torch.arange(max_cols)
col_idx = torch.cat([range_[:c] for c in counts])
output[row_idx, col_idx] = values
output
tensor([[0., 1., 2., 3., 0.],
[4., 5., 0., 0., 0.],
[6., 7., 8., 0., 0.],
[9., 0., 0., 0., 0.]])
Is there any more efficient way to paste these values into the appropriate positions?
(feel free to suggest a better title for this if you can think of one)
| I think your solution has linear time complexity. So, I am not sure if it can be further improved. However, I think the solution you provided is not correct. Let me give an example.
For the following input:
row_idx = torch.tensor([0, 0, 1, 0, 0, 1, 2, 2, 2, 3])
Your solution outputs the following.
tensor([[4., 1., 3., 3., 0.],
[2., 5., 2., 0., 0.],
[6., 7., 8., 0., 0.],
[9., 0., 0., 0., 0.]])
However, I think your expected output is:
tensor([[0., 1., 3., 4., 0.],
[2., 5., 0., 0., 0.],
[6., 7., 8., 0., 0.],
[9., 0., 0., 0., 0.]])
So, I suggest the following solution that I believe is correct.
import torch

def helper(a):
    idx = a.cumsum(-1)
    id_arr = torch.ones(idx[-1], dtype=int)
    id_arr[0] = 0
    id_arr[idx[:-1]] = -a[:-1] + 1
    return id_arr.cumsum(-1)
n = 4
max_cols = 5
output = torch.zeros(n, max_cols)
row_idx = torch.tensor([0, 0, 1, 0, 0, 1, 2, 2, 2, 3])
values = torch.arange(len(row_idx)).float()
count = torch.unique(row_idx, return_counts=True)[1]
col_idx = helper(count)[row_idx.argsort().argsort()]
output[row_idx, col_idx] = values
print(output)
Update
You can simply add one line to your code as follows to make it work correctly.
_, counts = torch.unique(row_idx, return_counts=True)
range_ = torch.arange(max_cols)
col_idx = torch.cat([range_[:c] for c in counts])
col_idx = col_idx[row_idx.argsort().argsort()] # <== UPDATE
output[row_idx, col_idx] = values
print(output)
| https://stackoverflow.com/questions/62105292/ |
Calculate the standard deviation of a moving window using 2d convolution | I am doing an image processing project.
I want to calculate the standard deviation of a sliding window using 2D convolution. I can now calculate the mean, but I cannot find a way to calculate the standard deviation. Here is my code:
import torch
import numpy as np
import cv2
import matplotlib.pyplot as plt
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
torch.set_default_tensor_type('torch.cuda.FloatTensor')
testvideo=cv2.VideoCapture ("test.mp4")
if testvideo.isOpened():
    ret, frame = testvideo.read()
    if ret == True:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        imgarr = np.asarray(gray)
        #plt.imshow(imgarr, cmap="gray")
        imgtensor = torch.from_numpy(imgarr).float().to(device).unsqueeze(0).unsqueeze(0)
        print(imgtensor.device)
        #print(intensity.device)
        MeanConv = torch.nn.Conv2d(1, 1, kernel_size=100)
        kernel = (torch.ones(100, 100) / 10000).unsqueeze(0).unsqueeze(0)
        MeanConv.weight = torch.nn.Parameter(kernel)
        print(kernel)
        intensity = MeanConv(imgtensor)
        plt.imshow(intensity.squeeze(0).squeeze(0).detach().cpu().numpy(), cmap="gray")
        #for i in range(714):
        #    for j in range(714):
        #        intensity[i,j] = torch.mean(imgtensor[i:i+7, j:j+7])
        #        sd[i,j] = torch.std(imgtensor[i:i+7, j:j+7])
        #x = (sd / intensity).cpu()
        #plt.imshow(x.numpy(), cmap="gray")
testvideo.release()
| I think it can be done like this:
1. Compute the local mean of the image using convolution with a uniform kernel.
2. Subtract the result of step 1 from the original image.
3. Square each element.
4. Convolve again to get the local mean of the squared deviations from step 3.
5. Take the square root of the result of step 4; this is the local standard deviation.
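A minimal sketch of an equivalent formulation (my own variant, not spelled out in the original answer) that uses the identity std = sqrt(E[x^2] - E[x]^2), so only two convolutions with the same uniform kernel are needed:
import torch
import torch.nn.functional as F

def local_std(img, k=100):
    # img: (1, 1, H, W) float tensor, k: window size
    kernel = torch.ones(1, 1, k, k, device=img.device) / (k * k)
    mean = F.conv2d(img, kernel)            # E[x] over each window
    mean_sq = F.conv2d(img * img, kernel)   # E[x^2] over each window
    var = (mean_sq - mean * mean).clamp(min=0.0)  # guard against tiny negatives from float error
    return torch.sqrt(var)
Using F.conv2d directly avoids creating an nn.Conv2d module whose weight would otherwise require gradients.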
| https://stackoverflow.com/questions/62110234/ |
Multiple threads accessing same model on GPU for inference | I have a CNN model that is loaded onto the GPU, and for every image a new thread has to be created and detached to run the model on that image. Is this possible, and if so, is it safe?
| Yes, you definitely can. There are two aspects to it. If you want to run each model in parallel, then you have to load the same model in multiple GPUs. If you don't need that (just want the threading part), then you can load the model and use concurrent.futures.ThreadPoolExecutor(). In each call, you can pass an image.
I demonstrated one example with the darknet framework.
I loaded the model in two separate GPUs (for parallel operation, you can avoid that too) and each time I get a request, I use ThreadPoolExecutor to pass the images to the processing function.
from darknet import *
import cv2
import concurrent.futures
import time
# you can avoid this part if you don't need multiple GPUs
set_gpu(0) # running on GPU 0
net1 = load_net(b"cfg/yolov3-lp_vehicles.cfg", b"backup/yolov3-lp_vehicles.backup", 0)
meta1 = load_meta(b"data/lp_vehicles.data")
set_gpu(1) # running on GPU 1
net2 = load_net(b"cfg/yolov3-lp_vehicles.cfg", b"backup/yolov3-lp_vehicles.backup", 0)
meta2 = load_meta(b"data/lp_vehicles.data")
def f(x):
    if x[0] == 0:  # gpu 0
        return detect_np_lp(net1, meta1, x[1])
    else:
        return detect_np_lp(net2, meta2, x[1])

def func2():  # with threading
    a1 = cv2.imread("lp_tester/bug1.jpg")
    a2 = cv2.imread("lp_tester/bug2.jpg")
    nums = [(0, a1), (1, a2)]  # the first element in each tuple denotes the GPU ID
    with concurrent.futures.ThreadPoolExecutor() as executor:
        r_m = [val for val in executor.map(f, nums)]
    print('out f2')
    #return r_m
t1 = time.time()
func2()
t2 = time.time()
print(t2-t1)
| https://stackoverflow.com/questions/62111922/ |
fp16 inference on cpu Pytorch | I have a pretrained PyTorch model that I want to run in fp16 instead of fp32 for inference. I have already tried this on the GPU, but when I try it on the CPU I get:
"sum_cpu" not implemented for 'Half'
Any fixes?
| As far as I know, many CPU-based operations in PyTorch are not implemented to support FP16; it is NVIDIA GPUs that have hardware support for FP16 (e.g. tensor cores in the Turing architecture), which PyTorch has supported since roughly CUDA 7.0. To accelerate inference on CPU by quantizing to a 16-bit type, you may want to try the torch.bfloat16 dtype (https://github.com/pytorch/pytorch/issues/23509).
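A minimal sketch of the bfloat16 route (assuming every op in your model has a CPU bfloat16 kernel, which is not guaranteed for all layers):
import torch

model = ...  # your pretrained model (placeholder)
model = model.eval().to(torch.bfloat16)  # cast the weights to bfloat16
x = torch.randn(1, 3, 224, 224).to(torch.bfloat16)
with torch.no_grad():
    out = model(x)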
| https://stackoverflow.com/questions/62112534/ |
pytorch got None after backward() | I am learning PyTorch and wrote the simple code below.
import torch
x = torch.randn(3,requires_grad=True).cuda()
print(x)
y = x * x
print(y)
y.backward(torch.tensor([1,1.0,1]).cuda())
print(x.grad)
tensor([ 0.5934, -1.8813, -0.7817], device='cuda:0', grad_fn=<CopyBackwards>)
tensor([0.3521, 3.5392, 0.6111], device='cuda:0', grad_fn=<MulBackward0>)
None
if I change the code as
from torch.autograd import Variable
import torch
# x = torch.randn(3,requires_grad=True).cuda()
x = Variable(torch.randn(3).cuda(),requires_grad=True)
print(x)
y = x * x
print(y)
y.backward(torch.tensor([1,1.0,1]).cuda())
print(x.grad)
tensor([0.9800, 0.3597, 1.6315], device='cuda:0', requires_grad=True)
tensor([0.9605, 0.1294, 2.6617], device='cuda:0', grad_fn=<MulBackward0>)
tensor([1.9601, 0.7194, 3.2630], device='cuda:0')
The grad is ok. But why? I hate the Variable class.
env
python:3.8
pytorch:1.5
cuda :10.2
| I got it.
x = torch.randn(3, requires_grad=True).cuda()
x is created by .cuda(), which returns a new tensor that is the result of an operation, so x is not a leaf tensor, and .grad is only populated for leaf tensors.
Changing the code as below fixes it by creating the leaf tensor directly on the GPU:
x = torch.randn(3, requires_grad=True, device=0)
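A quick check of this (my own illustration) using is_leaf:
import torch

x = torch.randn(3, requires_grad=True, device="cuda")
print(x.is_leaf)  # True: created directly on the GPU, so x.grad gets populated

y = torch.randn(3, requires_grad=True).cuda()
print(y.is_leaf)  # False: .cuda() returned a non-leaf tensor, so .grad is not populated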
| https://stackoverflow.com/questions/62114631/ |
Training 1D CNN in Pytorch | I want to train the model given below. I am developing a 1D CNN model in PyTorch. Usually we use dataloaders in PyTorch, but I am not using them in my implementation. I need guidance on how I can train my model in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F
class CharCNN(nn.Module):
def __init__(self,num_labels=11):
super(CharCNN, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv1d(num_channels, depth_1, kernel_size=kernel_size_1, stride=stride_size),
nn.ReLU(),
nn.MaxPool1d(kernel_size=kernel_size_1, stride=stride_size),
nn.Dropout(0.1),
)
self.conv2 = nn.Sequential(
nn.Conv1d(depth_1, depth_2, kernel_size=kernel_size_2, stride=stride_size),
nn.ReLU(),
nn.MaxPool1d(kernel_size=kernel_size_2, stride=stride_size),
nn.Dropout(0.25)
)
self.fc1 = nn.Sequential(
nn.Linear(depth_2*kernel_size_2, num_hidden),
nn.ReLU(),
nn.Dropout(0.5)
)
self.fc2 = nn.Sequential(
nn.Linear(num_hidden, num_labels),
nn.ReLU(),
nn.Dropout(0.5)
)
def forward(self, x):
out = self.conv1(x)
out = self.conv2(out)
# collapse
out = x.view(x.size(0), -1)
# linear layer
out = self.fc1(out)
# output layer
out = self.fc2(out)
#out = self.log_softmax(x,dim=1)
return out
I am training my network like this:
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(),lr=learning_rate)
for e in range(training_epochs):
if(train_on_gpu):
net.cuda()
train_losses = []
for batch in iterate_minibatches(train_x, train_y, batch_size):
x, y = batch
inputs, targets = torch.from_numpy(x), torch.from_numpy(y)
if(train_on_gpu):
inputs, targets = inputs.cuda(), targets.cuda()
opt.zero_grad()
output = model(inputs, batch_size)
loss = criterion(output, targets.long())
train_losses.append(loss.item())
loss.backward()
opt.step()
val_losses = []
accuracy=0
f1score=0
print("Epoch: {}/{}...".format(e+1, training_epochs),
"Train Loss: {:.4f}...".format(np.mean(train_losses)))
But I am getting the following error:
TypeError Traceback (most recent call last)
<ipython-input-60-3a3df06ef2f8> in <module>
14 inputs, targets = inputs.cuda(), targets.cuda()
15 opt.zero_grad()
---> 16 output = model(inputs, batch_size)
17
18 loss = criterion(output, targets.long())
~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self,
* input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
TypeError: forward() takes 2 positional arguments but 3 were given
Please guide me on how I can resolve this issue.
| The forward method of your model only takes one argument, but you are calling it with two arguments:
output = model(inputs, batch_size)
It should be:
output = model(inputs)
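As a side note on the posted model (an observation, separate from the error above): in forward, the line out = x.view(x.size(0), -1) flattens the original input x and discards the convolution outputs entirely. It should presumably be:
out = out.view(out.size(0), -1)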
| https://stackoverflow.com/questions/62120826/ |
is there any similar function with clamp_ in tensorflow > 2.0 | I'm converting torch code to tensorflow 2.0
prior_boxes = torch.FloatTensor(prior_boxes).to(device) # (8732, 4)
prior_boxes.clamp_(0, 1) # (8732, 4)
is there any replacement of clamp_(0,1) in tensorflow > 2.0?
| Try tf.clip_by_value, though unlike clamp_, it is not in-place:
t = tf.constant([[-10., -1., 0.], [0., 2., 10.]])
t2 = tf.clip_by_value(t, clip_value_min=-1, clip_value_max=1)
t2.numpy()
# gives [[-1., -1., 0.], [0., 1., 1.]]
| https://stackoverflow.com/questions/62143092/ |
No performance improvement using quantization model in pytorch | I have trained a model in pytorch with float data type. I want to improve my inference time by converting this model to quantized model. I have used torch.quantization.convert api to convert my model's weight to uint8 data type. However, when I use this model for inference, I do not get any performance improvement. Am I doing something wrong here ?
The Unet Model code:
def gen_initialization(m):
if type(m) == nn.Conv2d:
sh = m.weight.shape
nn.init.normal_(m.weight, std=math.sqrt(2.0 / (sh[0]*sh[2]*sh[3])))
nn.init.constant_(m.bias, 0)
elif type(m) == nn.BatchNorm2d:
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
class TripleConv(nn.Module):
def __init__(self, in_ch, out_ch):
super(TripleConv, self).__init__()
mid_ch = (in_ch + out_ch) // 2
self.conv = nn.Sequential(
nn.Conv2d(in_ch, mid_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(num_features=mid_ch),
nn.LeakyReLU(negative_slope=0.1),
nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(num_features=mid_ch),
nn.LeakyReLU(negative_slope=0.1),
nn.Conv2d(mid_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(num_features=out_ch),
nn.LeakyReLU(negative_slope=0.1)
)
self.conv.apply(gen_initialization)
def forward(self, x):
return self.conv(x)
class Down(nn.Module):
def __init__(self, in_ch, out_ch):
super(Down, self).__init__()
self.triple_conv = TripleConv(in_ch, out_ch)
self.avg_pool_conv = nn.AvgPool2d(2, 2)
self.in_ch = in_ch
self.out_ch = out_ch
def forward(self, x):
self.cache = self.triple_conv(x)
pad = torch.zeros(x.shape[0], self.out_ch - self.in_ch, x.shape[2], x.shape[3], device=x.device)
x = torch.cat((x, pad), dim=1)
self.cache += x
return self.avg_pool_conv(self.cache)
class Center(nn.Module):
def __init__(self, in_ch, out_ch):
super(Center, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(num_features=out_ch),
nn.LeakyReLU(negative_slope=0.1, inplace=True)
)
self.conv.apply(gen_initialization)
def forward(self, x):
return self.conv(x)
class Up(nn.Module):
def __init__(self, in_ch, out_ch):
super(Up, self).__init__()
self.upsample = nn.Upsample(scale_factor=2, mode='bilinear',
align_corners=True)
self.triple_conv = TripleConv(in_ch, out_ch)
def forward(self, x, cache):
x = self.upsample(x)
x = torch.cat((x, cache), dim=1)
x = self.triple_conv(x)
return x
class UNet(nn.Module):
def __init__(self, in_ch, first_ch=None):
super(UNet, self).__init__()
if not first_ch:
first_ch = 32
self.down1 = Down(in_ch, first_ch)
self.down2 = Down(first_ch, first_ch*2)
self.down3 = Down(first_ch*2, first_ch*4)
self.down4 = Down(first_ch*4, first_ch*8)
self.center = Center(first_ch*8, first_ch*8)
self.up4 = Up(first_ch*8*2, first_ch*4)
self.up3 = Up(first_ch*4*2, first_ch*2)
self.up2 = Up(first_ch*2*2, first_ch)
self.up1 = Up(first_ch*2, first_ch)
self.output = nn.Conv2d(first_ch, in_ch, kernel_size=3, stride=1,
padding=1, bias=True)
self.output.apply(gen_initialization)
def forward(self, x):
x = self.down1(x)
x = self.down2(x)
x = self.down3(x)
x = self.down4(x)
x = self.center(x)
x = self.up4(x, self.down4.cache)
x = self.up3(x, self.down3.cache)
x = self.up2(x, self.down2.cache)
x = self.up1(x, self.down1.cache)
return self.output(x)
The inference code:
from tqdm import tqdm
import os
import numpy as np
import torch
import gan_network
import torch.nn.parallel
from torch.utils.data import DataLoader
import torch.utils.data as data
import random
import glob
import scipy.io
import time
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"
class DataFolder(data.Dataset):
def __init__(self, file):
super(DataFolder, self).__init__()
self.image_names = []
fid = file
for line in fid:
# line = line[:-1]
if line == '':
continue
# print(line)
self.image_names.append(line)
random.shuffle(self.image_names)
self.image_names = self.image_names[0:]
def __len__(self):
return len(self.image_names)
def __getitem__(self, index):
path = self.image_names[index]
img = np.load(path)
img = np.rollaxis(img, 2, 0)
img = torch.from_numpy(img[:, :, :])
return img, path
if __name__ == '__main__':
batch_size = 1
image_size = 2048
channels = 6
model_path = 'D:/WorkProjects/Network_Training_Aqusens/FullFovReconst/network/network_epoch9.pth'
test_data = glob.glob('D:/save/temp/*.npy')
dest_dir = 'D:/save/temp/results/'
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net = gan_network.UNet(6, 32)
if torch.cuda.device_count() > 1:
net = torch.nn.DataParallel(net)
net.to(device)
net.load_state_dict(torch.load(model_path))
quantized_model = torch.quantization.convert(net, {torch.nn.Conv2d, torch.nn.BatchNorm2d}, inplace=False)
dataset = DataFolder(file=test_data)
print(f'{len(dataset)}')
data_loader = DataLoader(dataset=dataset, num_workers=4,
batch_size=batch_size, shuffle=False,
drop_last=False, pin_memory=True)
input = torch.Tensor(batch_size, channels, image_size, image_size).to(device)
t0 = time.time()
with torch.no_grad():
for i, batch in enumerate(tqdm(data_loader)):
input.copy_(batch[0])
output = net(input).cpu().clone().numpy()
np.array(output)
output = np.rollaxis(output, 1, 4)
for num in range(batch_size):
arr = output[num, :, :, :]
file_name = os.path.basename(batch[1][num])
save_name = os.path.join(dest_dir, file_name)
save_name = save_name.replace(".npy", "")
scipy.io.savemat(save_name+'.mat', {'output': arr})
t1 = time.time()
print(f'Elapsed time = {t1-t0}')
For both net and the quantized model, I get an elapsed time of around 30 seconds for 12 images passed through them.
| PyTorch documentation suggests three ways to perform quantization. You are doing post-training dynamic quantization (the simplest quantization method available) which only supports torch.nn.Linear and torch.nn.LSTM layers as listed here. To quantize CNN layers, you would want to check out the other two techniques (these are the ones that support CNN layers): post-training static quantization and quantization aware training. This tutorial shows both these techniques applied on CNNs.
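For reference, a minimal sketch of the post-training static quantization flow (an outline only; a real model also needs QuantStub/DeQuantStub modules inserted around the quantized regions, as the linked tutorial shows, and the quantized model runs on CPU):
import torch

model.eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")  # x86 CPU backend
prepared = torch.quantization.prepare(model)

# calibrate the observers with a few representative batches
with torch.no_grad():
    for x, _ in calibration_loader:  # hypothetical DataLoader
        prepared(x)

quantized = torch.quantization.convert(prepared)  # int8 weights and activations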
| https://stackoverflow.com/questions/62143162/ |
Difference between src_mask and src_key_padding_mask | I am having a difficult time understanding transformers. Everything is getting clearer bit by bit, but one thing that makes me scratch my head is
what the difference is between src_mask and src_key_padding_mask, which are passed as arguments to the forward function in both the encoder layer and the decoder layer.
https://pytorch.org/docs/master/_modules/torch/nn/modules/transformer.html#Transformer
| Difference between src_mask and src_key_padding_mask
The general thing is to notice the difference between the use of the tensors _mask vs _key_padding_mask.
Inside the transformer, when attention is computed, we usually get a square intermediate tensor with all the pairwise comparisons,
of size [Tx, Tx] (for the input to the encoder), [Ty, Ty] (for the shifted output - one of the inputs to the decoder)
and [Ty, Tx] (for the memory mask - the attention between the output of the encoder/memory and the input to the decoder/shifted output).
So these are the uses for each of the masks in the transformer
(note the notation from the pytorch docs is as follows where Tx=S is the source sequence length
(e.g. max of input batches),
Ty=T is the target sequence length (e.g. max of target length),
B=N is the batch size,
D=E is the feature number):
src_mask [Tx, Tx] = [S, S] – the additive mask for the src sequence (optional).
This is applied when doing atten_src + src_mask. I'm not sure of an example input - see tgt_mask for an example
but the typical use is to add -inf so one could mask the src_attention that way if desired.
If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged.
If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged.
If a FloatTensor is provided, it will be added to the attention weight.
tgt_mask [Ty, Ty] = [T, T] – the additive mask for the tgt sequence (optional).
This is applied when doing atten_tgt + tgt_mask. An example use is the triangular (diagonal) mask that stops the decoder from cheating by attending to future positions.
Since the tgt is right shifted (the first token is the start-of-sequence embedding SOS/BOS), in each row the
entries up to the current position are zero while the remaining future positions are -inf. See the concrete example in the appendix.
If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged.
If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged.
If a FloatTensor is provided, it will be added to the attention weight.
memory_mask [Ty, Tx] = [T, S]– the additive mask for the encoder output (optional).
This is applied when doing atten_memory + memory_mask.
Not sure of an example use but as previously, adding -inf sets some of the attention weight to zero.
If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged.
If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged.
If a FloatTensor is provided, it will be added to the attention weight.
src_key_padding_mask [B, Tx] = [N, S] – the ByteTensor mask for src keys per batch (optional).
Since your src batch usually contains sequences of different lengths, it's common to mask out the padding vectors
you appended at the end.
For this you specify, for each example in your batch, which positions are padding.
See concrete example in appendix.
If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged.
If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged.
If a FloatTensor is provided, it will be added to the attention weight.
tgt_key_padding_mask [B, Ty] = [N, T] – the ByteTensor mask for tgt keys per batch (optional).
Same as previous.
See concrete example in appendix.
If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged.
If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged.
If a FloatTensor is provided, it will be added to the attention weight.
memory_key_padding_mask [B, Tx] = [N, S] – the ByteTensor mask for memory keys per batch (optional).
Same as previous.
See concrete example in appendix.
If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged.
If a BoolTensor is provided, positions with True is not allowed to attend while False values will be unchanged.
If a FloatTensor is provided, it will be added to the attention weight.
Appendix
Examples from pytorch tutorial (https://pytorch.org/tutorials/beginner/translation_transformer.html):
1 src_mask example
src_mask = torch.zeros((src_seq_len, src_seq_len), device=DEVICE).type(torch.bool)
returns a tensor of booleans of size [Tx, Tx]:
tensor([[False, False, False, ..., False, False, False],
...,
[False, False, False, ..., False, False, False]])
2 tgt_mask example
mask = (torch.triu(torch.ones((sz, sz), device=DEVICE)) == 1)
mask = mask.transpose(0, 1).float()
mask = mask.masked_fill(mask == 0, float('-inf'))
mask = mask.masked_fill(mask == 1, float(0.0))
generates the diagonal for the right shifted output which the input to the decoder.
tensor([[0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf,
-inf, -inf, -inf],
[0., 0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf,
-inf, -inf, -inf],
[0., 0., 0., -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf,
-inf, -inf, -inf],
...,
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., -inf],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0.]])
usually the right-shifted output has the BOS/SOS at the beginning, and the tutorial gets the right shift simply
by appending that BOS/SOS at the front and then trimming the last element with tgt_input = tgt[:-1, :].
3 _padding
The padding is just to mask the padding at the end.
The src padding is usually the same as the memory padding.
The tgt has it's own sequences and thus it's own padding.
Example:
src_padding_mask = (src == PAD_IDX).transpose(0, 1)
tgt_padding_mask = (tgt == PAD_IDX).transpose(0, 1)
memory_padding_mask = src_padding_mask
Output:
tensor([[False, False, False, ..., True, True, True],
...,
[False, False, False, ..., True, True, True]])
note that False means there is no padding token at that position (so the transformer forward pass does use that value), and True means there is a padding token (so it is masked out and does not affect the forward pass).
The answers are sort of spread around, but I found only these 3 references useful
(the docs for the separate layers weren't very useful, honestly):
long tutorial: https://pytorch.org/tutorials/beginner/translation_transformer.html
MHA docs: https://pytorch.org/docs/master/generated/torch.nn.MultiheadAttention.html#torch.nn.MultiheadAttention
transformer docs: https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html
| https://stackoverflow.com/questions/62170439/ |
Convolution in PyTorch with non-trainable pre-defined kernel | I would like to introduce a custom layer to my neural network. The mathematical operation should be a discrete 2D cross correlation (or convolution) with a non-trainable kernel. The values in the kernel depend on three things: kernel shape, strides and padding. I intend to multiply the output element-wise with a weight matrix.
PyTorch already has an implementation of a discrete 2D cross correlation class called 'Conv2d', however it generates a random kernel and trains using the entries of said kernel. If possible I would like a class similar to 'Conv2d' that does what I need, to make sure to use my GPU most effectively. I tried implementing this on my own, but couldn't figure out how to obtain the correct shapes for the input array. 'Conv2d' only uses 'in_channels', if I understood correctly.
| If I understand correctly, you want a Conv2d layer with a predefined kernel that should not be learnable.
In that case, you can use the conv2d function like this:
import torch.nn.functional as F
output_tensor = F.conv2d(input_tensor, your_kernel, ...)
The parameter your_kernel is your weight matrix; you also need to pass the other parameters of the function, such as padding and stride.
Then set the kernel's requires_grad attribute to False, and exclude it from the optimizer, if you don't want it to be learnable.
And about the shape issue, maybe you want to check this out.
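A minimal sketch of how this could be wrapped in a module (my own illustration; FixedKernelConv2d and out_shape are hypothetical names, and the learnable element-wise weight matches the question's setup):
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedKernelConv2d(nn.Module):
    def __init__(self, kernel, out_shape, stride=1, padding=0):
        super().__init__()
        # register_buffer: moves with .to(device)/.cuda() but is never trained
        self.register_buffer("kernel", kernel)  # shape (out_ch, in_ch, kH, kW)
        self.stride = stride
        self.padding = padding
        # learnable element-wise weights applied to the convolution output
        self.weight = nn.Parameter(torch.ones(out_shape))

    def forward(self, x):
        out = F.conv2d(x, self.kernel, stride=self.stride, padding=self.padding)
        return out * self.weight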
| https://stackoverflow.com/questions/62189366/ |
Pytorch: accessing a subtensor using lists of indices | I have a pair of tensors S and T of dimensions (s1,...,sm) and (t1,...,tn) with si < ti. I want to specify a list of indices in each dimension of T to "embed" S in T. If I1 is a list of s1 indices in (0,1,...,t1) and likewise for I2 up to In, I would like to do something like
T.select(I1,...,In)=S
that will have the effect that now T has entries equal to the entries of S over the indices (I1,...,In).
for example
S=
[[1,1],
[1,1]]
T=
[[0,0,0],
[0,0,0],
[0,0,0]]
T.select([0,2],[0,2])=S
T=
[[1,0,1],
[0,0,0],
[1,0,1]]
| If you're flexible with using NumPy only for the indices part, then here's one approach by constructing an open mesh using numpy.ix_() and using this mesh to fill-in the values from the tensor S. If this is not acceptable, then you can use torch.meshgrid()
Below is an illustration of both approaches with descriptions interspersed in comments.
# input tensors to work with
In [174]: T
Out[174]:
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
# I'm using unique tensor just for clarity; But any tensor should work.
In [175]: S
Out[175]:
tensor([[10, 11],
[12, 13]])
# indices where we want the values from `S` to be filled in, along both dimensions
In [176]: idxs = [[0,2], [0,2]]
Now we will leverage np.ix_() or torch.meshgrid() to generate an open mesh by passing in the indices:
# mesh using `np.ix_`
In [177]: mesh = np.ix_(*idxs)
# as an alternative, we can use `torch.meshgrid()`
In [191]: mesh = torch.meshgrid([torch.tensor(lst) for lst in idxs])
# replace the values from tensor `S` using basic indexing
In [178]: T[mesh] = S
# sanity check!
In [179]: T
Out[179]:
tensor([[10, 0, 11],
[ 0, 0, 0],
[12, 0, 13]])
| https://stackoverflow.com/questions/62200105/ |
How to solve this question "RuntimeError: CUDA out of memory."? | I'm going to extract features from pictures. I first define a tensor data_feature_map, and then use torch.cat to stack the features of each picture.
My code is :
data_feature_map = torch.ones(1,2048)
for i, data in enumerate(train_loader, 0):
    img, _ = data
    img.requires_grad_=False
    if torch.cuda.is_available():
        img = img.cuda()
    out = model(img)
    # out.shape = [1,2048]
    out = out.view(1, -1).cpu()
    data_feature_map = torch.cat((data_feature_map, out), 0)
But when I run it, it shows the error "RuntimeError: CUDA out of memory."
Please tell me why this error occurs. Thank you very much.
| Since your GPU is running out of memory, you can try a few things:
1.) Reduce your batch size
2.) Reduce your network size
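3.) One more thing worth checking in the posted loop (an addition, not in the original answer): out is computed with autograd enabled, so the computation graph of every batch is kept alive through data_feature_map and memory grows each iteration. For pure feature extraction, disable gradient tracking:
with torch.no_grad():  # no autograd graph is built, so per-batch GPU memory is freed
    for i, data in enumerate(train_loader, 0):
        img, _ = data
        if torch.cuda.is_available():
            img = img.cuda()
        out = model(img).view(1, -1).cpu()
        data_feature_map = torch.cat((data_feature_map, out), 0)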
| https://stackoverflow.com/questions/62210030/ |
Differential Privacy decreases the model performance significantly | Background Information
I trained a classifier to predict three labels: COVID/Pneumonia/Healthy based on chest X-Ray images. It's a PyTorch implementation of COVID-Net. I use a training set to train on, validation set to save the best performing model, and then a test set to measure the "real" performance of the model. However, I noticed that my model "learned" to classify normal/pneumonia really good, but it just ignored the underpopulated COVID set. Therefore I choose to undersample (reduce the number of training instances of the other classes (normal and pneumonia) in order to get equal populations). This worked well, but my sample set has been reduced to ~1500 samples (low!). The results are somewhat worse than COVID-Net, I achieve an accuracy of ~80% and lower sensitivity on underpopulated classes (COVID) then they report. I suppose that they report better performance because they do not use a validation-set and use the test-set each epoch. I figured that they might indirectly overfit on the test-set because of that. I have chosen to explain this so that the reader gets a context.
Question
I tried adding privacy to the training procedure by using Differential Privacy. Specifically, I used Facebook's PyTorch-DP module. Training works just as well if I choose to add almost no-privacy (this can be achieved by choosing a really low noise multiplier value (sigma), i.e. 1e-7) and a really high delta. So it's not that the module itself is not working/faulty, but, if I use a lower sigma (so I add more noise) then I get more privacy (epsilon decreases) but the model fails to fit the data at all.
The question is: how do I manage to add privacy to a somewhat meaningful degree while making sure that my model somewhat fits the data still?
Performance differences
Confusion Matrix of Model without Differential Privacy added. It's not "good" but it's at least somewhat meaningful and the model reaches an accuracy of ~80%.
Confusion Matrix of Model with Differential Privacy (epsilon: 2.3) after 100 epochs. It looks as if the model does not know what to do, at all.
Possible explanations
I read a paper that stated that adding Differential Privacy can cause bad performance in the sense that the accuracy decreases for underpopulated classes. But, I used undersampling and I think this should've solved that, but the accuracy stays bad (for all classes!).
Maybe because my sample set is so small, differential privacy is much harder to achieve, and therefore the performance is bad? However, even if add a really tiny bit of privacy, with an epsilon value >20000, the model still struggles in learning how to classify. So I'm not sure.
| It seems the PyTorch Differential Privacy library from Facebook Research is built on the concept of the Rényi differential privacy guarantee, which is well-suited for expressing guarantees of privacy-preserving algorithms and for composition of heterogeneous mechanisms. We need a good estimate of the heterogeneity in this COVID-Net dataset.
In particular, Rényi divergence satisfies the data processing inequality. It seems the current library is more suitable for machine learning problems with more heterogeneity in the datasets. The library uses an implementation of the differentially private stochastic gradient descent (DP-SGD) algorithm, which follows the sequence of random initialisation, gradient computation, gradient clipping, noise addition, and the descent step. The clipping and noise parameters may vary with the number of training steps and epochs.
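For intuition, here is a from-scratch sketch of that sequence (my own simplification: it clips the already-averaged batch gradient, whereas the library clips per-sample gradients; max_grad_norm and noise_multiplier are hypothetical hyperparameters):
import torch

max_grad_norm, noise_multiplier = 1.0, 1.1
for x, y in loader:  # hypothetical DataLoader
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    # clip the gradient norm
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    # add calibrated Gaussian noise before the descent step
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad += torch.randn_like(p.grad) * noise_multiplier * max_grad_norm / len(x)
    optimizer.step()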
The success of differential privacy on deep learning problems is driven by the extent of gradient pre-processing to protect privacy, and by privacy accounting, which keeps track of the privacy spending over the course of training. It is highlighted that in differentially private deep learning, model accuracy is more sensitive to training parameters such as batch size and noise level than to the structure of the neural network.
In the PyTorch library, we can see the examples on ImageNet, MNIST, DCGAN etc. In all these examples we can see how each of the mentioned parameters such as clipping, batch size etc. can be varied for getting the required accuracy levels. Kindly refer to the following example scripts in the PyTorch DP library.
PyTorch DP Example Scripts for Various Models
| https://stackoverflow.com/questions/62246851/ |
How to get the imagenet dataset on which pytorch models are trained on | Can anyone please tell me how to download the complete ImageNet dataset on which the PyTorch torchvision models are trained and on which their Top-1 error is reported?
I downloaded Tiny-ImageNet from the ImageNet website and used the pretrained ResNet-101 model, which gives only 18% Top-1 accuracy.
| Download the ImageNet dataset from http://www.image-net.org/ (you have to sign in)
Then, you should move validation images to labeled subfolders, which could be done automatically using the following shell script:
https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
| https://stackoverflow.com/questions/62248037/ |
Passing variable to nn.Conv2d arguments within a class init definition python | So I want to pass some new variables, such as kernel_size, when I instantiate a new object, say net = Net10(5, 2, 4, 3, 1, 1), so that I get an object of this class with the parameters I want rather than constants, because otherwise I would have to define lots of classes. Now, when I try to pass kernel_size within nn.Conv2d, I get a syntax error: positional argument follows keyword argument.
Does anyone know how to fix this? Should I change it all to functions instead of classes?
class Net10(nn.Module):
    def __init__(self, kernel_size, stride, pooling, num_classes, neurons, ActFunn, *args):
        super(Net10, self).__init__()
        self.kernel = kernel_size
        self.stride = stride
        self.pooling = pooling
        self.num_classes = num_classes
        self.neurons = neurons
        self.Actfun = ActFunn
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel, padding=2, stride=1)
        self.pool = nn.MaxPool2d(pooling, pooling)
        self.fcinput = round(28 / pooling)
| Try:
self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=self.kernel, padding=2, stride=1)
In more detail: Python functions (e.g., __init__ of the conv layer) can take input arguments in two "flavors". Positional arguments associate an input argument with a function parameter according to its position in the argument list.
The other "flavor" is keyword arguments: arguments that are given together with their keyword, e.g., in_channels=1.
As a rule, Python does not allow free mixing of positional and keyword arguments.
You can have positional arguments followed by keyword arguments, but you cannot have a positional argument once you have started declaring keyword arguments.
self.conv1 = nn.Conv2d(in_channels=1, # keyword argument
out_channels=32, # keyword argument
kernel, # positional argument (no "keyword" defining this argument)
padding=2, # keyword
stride=1) # keyword
| https://stackoverflow.com/questions/62248832/ |
How to load dataset from pickle files into PyTorch? | I have X_train (inputs) and Y_train (labels) in separate pickle files in the form of integer matrices. Now I need to load them and train using PyTorch. I tried torch.utils.data.DataLoader and torchvision.datasets.DatasetFolder, but nothing worked, or I might be going wrong somewhere. Please suggest a proper way to do this.
| You should really give a clear description of your problem with some examples. Anyway, as far as I understand you are looking for something like this.
import pickle
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class YourDataset(Dataset):
    def __init__(self, X_Train, Y_Train, transform=None):
        self.X_Train = X_Train
        self.Y_Train = Y_Train
        self.transform = transform

    def __len__(self):
        return len(self.X_Train)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        x = self.X_Train[idx]
        y = self.Y_Train[idx]
        if self.transform:
            x = self.transform(x)
            y = self.transform(y)
        return x, y
file = open('FILENAME_X_train', 'rb')
X_train = pickle.load(file)
file.close()
file = open('FILENAME_Y_train', 'rb')
Y_train = pickle.load(file)
file.close()
your_dataset = YourDataset(X_train, Y_train, transform=transforms.Compose([transforms.ToTensor()]))
your_data_loader = DataLoader(your_dataset, batch_size=8, shuffle=True, num_workers=0)
Note that I have not tested the code, but I think that it gives the general idea. Hope it helps.
| https://stackoverflow.com/questions/62260217/ |
Downloading transformers models to use offline | I have a trained transformers NER model that I want to use on a machine not connected to the internet. When loading such a model, currently it downloads cache files to the .cache folder.
To load and run the model offline, you need to copy the files in the .cache folder to the offline machine. However, these files have long, non-descriptive names, which makes it really hard to identify the correct files if you have multiple models you want to use. Any thoughts on this?
Example of model files
| One relatively easy way to deal with this issue is to simply "rename" the pretrained models, as is detailed in this thread.
Essentially, all you have to do is something like this for whatever model you're trying to work with:
from transformers import BertModel
model = BertModel.from_pretrained("bert-base-uncased")
model.save_pretrained("./my_named_bert")
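To use it offline later, from_pretrained also accepts that local folder path (standard library behavior, shown here as a usage note):
model = BertModel.from_pretrained("./my_named_bert")  # loads from disk, no internet needed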
The thread also details how the local model folders are named, see LysandreJik's post:
Hi, they are named as such because that's a clean way to make sure the model on the S3 is the same as the model in the cache. The name is created from the etag of the file hosted on the S3. [...]
| https://stackoverflow.com/questions/62261602/ |
what is torch's unsqueeze equivalence with tensorflow? | what is torch's unsqueeze equivalence with tensorflow?
#tensorflow auto-broadcasts singleton dimensions
lower_bounds = tf.argmax(set_1[:, :2].unsqueeze(1), set_2[:, :2].unsqueeze(0)) # (n1, n2, 2)
upper_bounds = tf.argmin(set_1[:, 2:].unsqueeze(1), set_2[:, 2:].unsqueeze(0)) # (n1, n2, 2)
| Maybe you wanna try this:
tf.expand_dims(x, axis)
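For example (a small illustration of the shapes, assuming TF 2.x eager mode):
import tensorflow as tf

x = tf.zeros([8732, 2])
print(tf.expand_dims(x, 1).shape)  # (8732, 1, 2), like x.unsqueeze(1) in PyTorch
print(tf.expand_dims(x, 0).shape)  # (1, 8732, 2), like x.unsqueeze(0)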
| https://stackoverflow.com/questions/62273504/ |
PyTorch LSTM dropout vs Keras LSTM dropout | I'm trying to port my sequential Keras network to PyTorch. But I'm having trouble with the LSTM units:
LSTM(512,
stateful = False,
return_sequences = True,
dropout = 0.5),
LSTM(512,
stateful = False,
return_sequences = True,
dropout = 0.5),
How should I formulate this in PyTorch? Especially dropout appears to work very differently in PyTorch than it does in Keras.
| The following should work for you.
lstm = nn.LSTM(
input_size = ?,
hidden_size = 512,
num_layers = 1,
batch_first = True,
dropout = 0.5
)
You need to set the input_size. Check out the documentation on LSTM.
Update
In a 1-layer LSTM, there is no point in assigning dropout since dropout is applied to the outputs of intermediate layers in a multi-layer LSTM module. So, PyTorch may complain about dropout if num_layers is set to 1. If we want to apply dropout at the final layer's output from the LSTM module, we can do something like below.
lstm = nn.Sequential(
nn.LSTM(
input_size = ?,
hidden_size = 512,
num_layers = 1,
batch_first = True
),
nn.Dropout(0.5)
)
According to the above definition, the output of the LSTM would pass through a Dropout layer.
| https://stackoverflow.com/questions/62274014/ |
Where is the numpy data stored? | In Python, if I only import torch (but not numpy), calling .numpy() on a tensor still works. Does that mean the numpy data can be stored and displayed without the numpy package? Where is the numpy data stored, and how is it displayed (without the numpy package)?
example codes:
import torch
a = torch.tensor([[1,2,3],[4,5,6]])
a = a.numpy()
print(a)
array([[1, 2, 3],
[4, 5, 6]])
| PyTorch uses NumPy internally. You don't need to manually import everything a package uses, that is one of the core principles of modules. It's still an object of the same NumPy class and you need to have NumPy installed for it to work, otherwise you would get an import error, just that the import happens in one of PyTorch's files, rather than your own.
| https://stackoverflow.com/questions/62274612/ |
what is the torch's torch.cat equivalence with tensorflow? | def cxcy_to_xy(cxcy):
    """
    Convert bounding boxes from center-size coordinates (c_x, c_y, w, h) to boundary coordinates (x_min, y_min, x_max, y_max).
    :param cxcy: bounding boxes in center-size coordinates, a tensor of size (n_boxes, 4)
    :return: bounding boxes in boundary coordinates, a tensor of size (n_boxes, 4)
    """
    return torch.cat([cxcy[:, :2] - (cxcy[:, 2:] / 2),  # x_min, y_min
                      cxcy[:, :2] + (cxcy[:, 2:] / 2)], 1)  # x_max, y_max
I want to replace this torch.cat with a TensorFlow 2.0 equivalent.
| Few options depending on the API in TF you're using:
tf.concat - most similar to torch.cat:
tf.concat(values, axis, name='concat')
tf.keras.layers.concatenate - if you're using Keras sequential API:
tf.keras.layers.concatenate(values, axis=-1, **kwargs)
tf.keras.layers.Concatenate - if you're using Keras functional API:
x = tf.keras.layers.Concatenate(axis=-1, **kwargs)(values)
If you're using the Keras API, this answer is informative for understanding the differences between all the Keras concatenation functions.
| https://stackoverflow.com/questions/62274656/ |
what does clamp_ do in pytorch and how to change it to tensorflow 2.0? | prior_boxes = torch.FloatTensor(prior_boxes).to(device) # (8732, 4)
prior_boxes.clamp_(0, 1) # (8732, 4)
What does clamp_ do in PyTorch, and how can I change it to TensorFlow 2.0?
I'm not sure what clamp_ does exactly.
| clamp_(0, 1) clamps all elements of prior_boxes into the range [0, 1], in place; the trailing underscore is PyTorch's convention for in-place operations.
Tensorflow:
tf.clip_by_value
https://www.tensorflow.org/api_docs/python/tf/clip_by_value
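Applied to the snippet in the question (a one-line sketch; note it returns a new tensor instead of modifying in place):
prior_boxes = tf.clip_by_value(prior_boxes, clip_value_min=0.0, clip_value_max=1.0)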
| https://stackoverflow.com/questions/62275778/ |
Array Slicing with step 2 | Have array like
arr = [1,2,3,4,5,6,7,8,9,10].
How I can get array like this:
[1,2,5,6,9,10]
take 2 elements with step 2(::2)
I tried something like arr[:2::2], but it doesn't work.
| [:2::2] is not valid Python syntax. A slice only takes 3 values - start, stop, step. You are trying to provide 4.
Here's what you need to do:
In [233]: arr = np.arange(1,11)
In [234]: arr
Out[234]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
first reshape to form groups of 2:
In [235]: arr.reshape(5,2)
Out[235]:
array([[ 1, 2],
[ 3, 4],
[ 5, 6],
[ 7, 8],
[ 9, 10]])
now slice to get every other group:
In [236]: arr.reshape(5,2)[::2 ,:]
Out[236]:
array([[ 1, 2],
[ 5, 6],
[ 9, 10]])
and then back to 1d:
In [237]: arr.reshape(5,2)[::2,:].ravel()
Out[237]: array([ 1, 2, 5, 6, 9, 10])
You have to step back a bit, imagine the array as a whole, and ask how to make it fit the desired pattern.
| https://stackoverflow.com/questions/62275877/ |
Pytorch loss doesn't change in vgg 19 model | In PyTorch I made a VGG19 model for classification on Tiny ImageNet:
model = nn.Sequential(
nn.BatchNorm2d(3),
nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=64, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.MaxPool2d((2,2)),
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=128, out_channels=128, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.MaxPool2d((2,2)),
nn.Conv2d(in_channels=128, out_channels=256, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256, out_channels=256, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.MaxPool2d((2,2)),
nn.Conv2d(in_channels=256, out_channels=512, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.MaxPool2d((2,2)),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3,3), padding=1),
nn.ReLU(),
nn.MaxPool2d((2,2)),
nn.Flatten(),
nn.Linear(25088, 4096),
nn.Linear(4096, 1000),
nn.Linear(1000, 200),
nn.Softmax(),
nn.Dropout2d(),
)
In the learning process, the loss remains at approximately the same value (5.3 +- 0.01).
###############
iter = 4000 / 80000
loss = 5.295811176300049
###############
iter = 4800 / 80000
loss = 5.298299789428711
###############
iter = 5600 / 80000
loss = 5.309792995452881
###############
iter = 6400 / 80000
loss = 5.3179707527160645
###############
iter = 7200 / 80000
loss = 5.3179707527160645
I have already tried increasing and decreasing lr and batch_size, but I still have no idea how to fix it.
Training code
(epochs = 10, loss = cross_entropy, optim = Adam(lr = 0.01), X_batch.shape = (batch_size, 3, 224, 224)):
for epoch in range(num_epochs):
i = 0
for (X_batch, y_batch) in train_batch_gen:
X_batch = Variable(torch.FloatTensor(X_batch)).cuda()
y_batch = Variable(torch.LongTensor(y_batch)).cuda()
logits = model.cuda().forward(X_batch)
opt.zero_grad()
loss = lossFunc(logits, y_batch)
loss.backward()
opt.step()
train_loss.append(loss.data.cpu().numpy())
if i % (batch_size*100) == 0:
print("###############")
print(f"iter = {i} / {80000}")
print(f"loss = {np.mean(train_loss[-len(train_dataset) // batch_size :])}")
i += batch_size
| nn.CrossEntropyLoss applies log-softmax, but you also apply softmax in the model:
nn.Linear(1000, 200),
nn.Softmax(),
nn.Dropout2d(),
The output of your model must be the raw logits, without the nn.Softmax().
Additionally, dropout should not be used just before the output of the model, since that effectively wipes out some of the classes, making the loss punish something that would have been correct otherwise. Dropout should only be used between layers as a regularisation.
| https://stackoverflow.com/questions/62284832/ |
Why torch.dot(a,b) makes requires_grad=False | I have some losses computed in a loop and stored in a tensor loss. Now I want to multiply a weight tensor with the loss tensor to get the final loss, but after torch.dot(), the resulting scalar, ll_new, has requires_grad=False. The following is my code:
loss_vector = torch.FloatTensor(total_loss_q)
w_norm = F.softmax(loss_vector, dim=0)
ll_new = torch.dot(loss_vector,w_norm)
How can I have requires_grad=True for ll_new after doing the above?
| I think the issue is in the line loss_vector = torch.FloatTensor(total_loss_q), as requires_grad for loss_vector is False (the default value). Note that the legacy torch.FloatTensor constructor does not accept a requires_grad keyword, so build the tensor with torch.tensor instead:
loss_vector = torch.tensor(total_loss_q, dtype=torch.float, requires_grad=True)
| https://stackoverflow.com/questions/62294833/ |
Whenever I try to install torch, it displays killed | I just want to install pytorch, I ran this in the terminal:
pip install torch
And it displays:
Collecting torch
Killed
What is the problem?
| It means your free RAM is not enough to install the package, but there is a way you can still install it: disable pip's cache.
pip install torch --no-cache-dir
| https://stackoverflow.com/questions/62301268/ |