instruction | input | output | source
---|---|---|---
nn.Linear should mismatch, but it works successfully | I'm confused about the in_features of nn.Linear. After the model VGG-19's last nn.MaxPool2d, the output tensor size is (512, 7, 7). The model below uses a pooling function, reshapes the tensor to (512, 49), then uses nn.Linear(512, 7) directly. How can it work successfully without a size mismatch problem?
source
'''VGG11/13/16/19 in Pytorch.'''
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
cfg = {
'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
class VGG(nn.Module):
    def __init__(self, vgg_name):
        super(VGG, self).__init__()
        self.features = self._make_layers(cfg[vgg_name])
        self.classifier = nn.Linear(512, 7)

    def forward(self, x):
        out = self.features(x)
        out = out.view(out.size(0), -1)
        out = F.dropout(out, p=0.5, training=self.training)
        out = self.classifier(out)
        return out

    def _make_layers(self, cfg):
        layers = []
        in_channels = 3
        for x in cfg:
            if x == 'M':
                layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
            else:
                layers += [nn.Conv2d(in_channels, x, kernel_size=3, padding=1),
                           nn.BatchNorm2d(x),
                           nn.ReLU(inplace=True)]
                in_channels = x
        layers += [nn.AvgPool2d(kernel_size=1, stride=1)]
        return nn.Sequential(*layers)
| Why do you assume that this code works? I tested it and got the following shapes, and the expected size mismatch error.
def forward(self, x):
    out = self.features(x)                               # torch.Size([1, 512, 7, 7])
    out = out.view(out.size(0), -1)                      # torch.Size([1, 25088])
    out = F.dropout(out, p=0.5, training=self.training)  # torch.Size([1, 25088])
    out = self.classifier(out)                           # RuntimeError: size mismatch, m1: [1 x 25088], m2: [512 x 7]
    return out
One mistake you made when inferring the sizes is that you omitted the batch dimension. That is why you may falsely conclude that the shape change with out.view(out.size(0), -1) is [512, 7, 7] -> [512, 49] instead of the correct [b, 512, 7, 7] -> [b, 25088], where b is the batch size.
As expected when classifier is changed to
self.classifier = nn.Linear(25088, 7)
then the forward function works, no size mismatch error.
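As a hedged aside (my addition, not part of the original answer): if you want to keep the classifier at nn.Linear(512, 7), you can instead pool the spatial dimensions away before flattening, e.g. with adaptive average pooling. A minimal sketch:
def forward(self, x):
    out = self.features(x)               # [b, 512, 7, 7]
    out = F.adaptive_avg_pool2d(out, 1)  # [b, 512, 1, 1]
    out = out.view(out.size(0), -1)      # [b, 512], so nn.Linear(512, 7) now fits
    out = F.dropout(out, p=0.5, training=self.training)
    return self.classifier(out)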
| https://stackoverflow.com/questions/55462619/ |
Why does multi-class classification fails with sigmoid? | MNIST trained with Sigmoid fails while Softmax works fine
I am trying to investigate how different activation affects the final results, so I implemented a simple net for MNIST with PyTorch.
I am using NLLLoss (Negative log likelihood) as it implements Cross Entropy Loss when used with softmax.
When I have softmax as activation of the last layer, it works great.
But when I used sigmoid instead, I noticed that things fall apart.
Here is my network code
def forward(self, x):
    x = F.relu(F.max_pool2d(self.conv1(x), 2))
    x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
    x = x.view(-1, 80)
    x = F.relu(self.fc1(x))
    x = F.dropout(x, training=self.training)
    x = self.fc2(x)
    return F.XXXX(x)
where XXXX is the activation function
Both Sigmoid and Softmax output values in (0, 1). Yes, Softmax guarantees a sum of 1, but I am not sure if this answers why the training fails with Sigmoid.
Is there any detail I am not catching here?
| Sigmoid + crossentropy can be used for multilabel classification (assume a picture with a dog and a cat, you want the model to return "dog and cat"). It works when the classes aren't mutually exclusive or the samples contain more than one object that you want to recognize.
In your case MNIST has mutually exclusive classes and in each image there is only one number, so it is better to use logsoftmax + negative loglikelihood, which assume that the classes are mutually exclusive and there is only one correct label associated to the image.
So, you can't really expect to have that behavior from sigmoid.
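A minimal sketch of the two setups (my hedged addition, not part of the original answer): log-softmax + NLLLoss for mutually exclusive classes versus sigmoid + binary cross entropy for multi-label targets.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)  # batch of 4, 10 classes

# mutually exclusive classes (MNIST): log-softmax + negative log likelihood
labels = torch.tensor([3, 7, 0, 1])
loss_exclusive = F.nll_loss(F.log_softmax(logits, dim=1), labels)

# multi-label (a dog AND a cat may both be present): sigmoid + binary cross entropy
multi_labels = torch.randint(0, 2, (4, 10)).float()
loss_multilabel = F.binary_cross_entropy(torch.sigmoid(logits), multi_labels)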
| https://stackoverflow.com/questions/55463251/ |
Applying Kullback-Leibler (aka kl divergence) element-wise in Pytorch | I have two tensors named x_t, x_k with the following shapes NxHxW and KxNxHxW respectively, where K is the number of autoencoders used to reconstruct x_t (if you have no idea what this is, assume they're K different nets aiming to predict x_t; this probably has nothing to do with the question anyway), N is batch size, H matrix height, W matrix width.
I'm trying to apply Kullback-Leibler divergence algorithm to both tensors (after broadcasting x_t as x_k along the Kth dimension) using Pytorch's nn.functional.kl_div method.
However, it does not seem to be working as I expected. I'm looking to calculate the kl_div between each observation in x_t and x_k, resulting in a tensor of size KxN (i.e., kl_div of each observation for each of the K autoencoders).
The actual output is a single value if I use the reduction argument, and the same tensor size (i.e., KxNxHxW) if I do not use it.
Has anyone tried something similar?
Reproducible example:
import torch
import torch.nn.functional as F
# K N H W
x_t = torch.randn( 10, 5, 5)
x_k = torch.randn( 3, 10, 5, 5)
x_broadcasted = x_t.expand_as(x_k)
loss = F.kl_div(x_t, x_k, reduction="none") # or "batchmean", or there are many options
| It's unclear to me what exactly constitutes a probability distribution in your model. With reduction='none', kl_div, given log(x_n) and y_n, computes kl_div = y_n * (log(y_n) - log(x_n)), which is the "summed" part of the actual Kullback-Leibler divergence. Summation (or, in other words, taking the expectation) is up to you. If your point is that H, W are the two dimensions over which you want to take expectation, it's as simple as
loss = F.kl_div(x_t, x_k, reduction="none").sum(dim=(-1, -2))
Which is of shape [K, N]. If your network output is to be interpreted differently, you need to better specify which are the event dimensions and which are sample dimensions of your distribution.
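A self-contained sketch tying this together (my hedged addition; note that kl_div expects its first argument in log-space, which the question's raw randn tensors are not, so each HxW map is first normalized into a distribution here):
import torch
import torch.nn.functional as F

x_t = torch.randn(10, 5, 5)     # N, H, W
x_k = torch.randn(3, 10, 5, 5)  # K, N, H, W

# treat each HxW map as a distribution: log-probs for the input, probs for the target
log_p = F.log_softmax(x_t.view(10, -1), dim=-1).view(10, 5, 5).expand_as(x_k)
q = F.softmax(x_k.view(3, 10, -1), dim=-1).view(3, 10, 5, 5)

loss = F.kl_div(log_p, q, reduction="none").sum(dim=(-1, -2))
print(loss.shape)  # torch.Size([3, 10]), i.e. [K, N]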
| https://stackoverflow.com/questions/55466270/ |
Pytorch: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead | Calling tensor.numpy() gives the error:
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
tensor.cpu().detach().numpy() gives the same error.
| Error reproduced
import torch
tensor1 = torch.tensor([1.0,2.0],requires_grad=True)
print(tensor1)
print(type(tensor1))
tensor1 = tensor1.numpy()
print(tensor1)
print(type(tensor1))
which leads to the exact same error for the line tensor1 = tensor1.numpy():
tensor([1., 2.], requires_grad=True)
<class 'torch.Tensor'>
Traceback (most recent call last):
File "/home/badScript.py", line 8, in <module>
tensor1 = tensor1.numpy()
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
Process finished with exit code 1
Generic solution
This was suggested in your error message; just replace var with your variable name:
import torch
tensor1 = torch.tensor([1.0,2.0],requires_grad=True)
print(tensor1)
print(type(tensor1))
tensor1 = tensor1.detach().numpy()
print(tensor1)
print(type(tensor1))
which returns as expected
tensor([1., 2.], requires_grad=True)
<class 'torch.Tensor'>
[1. 2.]
<class 'numpy.ndarray'>
Process finished with exit code 0
Some explanation
You need to convert your tensor to another tensor that isn't requiring a gradient in addition to its actual value definition. This other tensor can be converted to a numpy array. Cf. this discuss.pytorch post. (I think, more precisely, that one needs to do that in order to get the actual tensor out of its pytorch Variable wrapper, cf. this other discuss.pytorch post).
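A hedged addendum (my addition): for a tensor living on the GPU you also need to move it to the host; the usual chain, assuming a CUDA device is available, is:
import torch

tensor1 = torch.tensor([1.0, 2.0], requires_grad=True, device="cuda")
array = tensor1.detach().cpu().numpy()  # detach from the graph, move to host memory, convert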
| https://stackoverflow.com/questions/55466298/ |
How can I reduce a tensor's last dimension in PyTorch? | I have tensor of shape (1, 3, 256, 256, 3). I need to reduce one of the dimensions to obtain the shape (1, 3, 256, 256). How can I do it?
Thanks!
| If you intend to apply mean over the last dimension, then you can do so with:
In [18]: t = torch.randn((1, 3, 256, 256, 3))
In [19]: t.shape
Out[19]: torch.Size([1, 3, 256, 256, 3])
# apply mean over the last dimension
In [23]: t_reduced = torch.mean(t, -1)
In [24]: t_reduced.shape
Out[24]: torch.Size([1, 3, 256, 256])
# equivalently
In [32]: torch.mean(t, t.ndimension()-1).shape
Out[32]: torch.Size([1, 3, 256, 256])
| https://stackoverflow.com/questions/55471260/ |
Code conversion from python 2 to python 3 | I'm setting up a new algorithm which combines an object detector (bounding box detector) written in Python 3 and a mask generator written in Python 2. The problem is that I have several Python 2 files which are required for the mask generation algorithm, so I tried 2to3 to convert all my Python 2 files to Python 3. The script seemed to be working, but since it is a deep learning algorithm (for mask generation when bounding box coordinates are given as input) which needs some PyTorch weights to be loaded, while testing the model in Python 3 the program threw an error like
"RuntimeError: Expected object of type torch.FloatTensor but found
type torch.cuda.FloatTensor for argument #2 ‘weight’"
I have searched in the PyTorch forums but none of the posts were useful to me. Is it because my mask generation code was trained in Python 2?
Does that mean that while loading the weights and testing the model I should use Python 2, not Python 3? It would be great if someone could shed some light on this. As a workaround I can still use the object detector code downgraded to Python 2, but I still want to know why it was throwing the error.
| I just resolved the issue by re-installing torch (0.4.0) and torchvision (0.2.1) for my conda environment; I had to downgrade the versions of both of them. Finally I was successful in converting my Python 2.7 code to Python 3, thanks to the 2to3 library. This error was actually happening in the image normalize function of PyTorch, an internal function which accepts the image array as tensors.
tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor
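For context (a hedged aside, not part of the original answer): this error is not about Python 2 vs 3; it means one tensor lives on the GPU (torch.cuda.FloatTensor) and another on the CPU (torch.FloatTensor). The generic fix is to put the model and its inputs on the same device, e.g.:
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # assumes `model` is the loaded network
image = image.to(device)  # assumes `image` is the input tensor
output = model(image)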
| https://stackoverflow.com/questions/55471459/ |
Error libtorch_python.so: cannot open shared object file: No such file or directory | I'm trying to implement the fastai pretrained language model and it requires torch to work. After running the code, I got a problem with the import of torch._C.
I run it on Linux, Python 3.7.1, via pip: torch 1.0.1.post2, CUDA V7.5.17. I'm getting this error:
Traceback (most recent call last):
File "pretrain_lm.py", line 7, in <module>
import fastai
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/__init__.py", line 1, in <module>
from .basic_train import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/basic_train.py", line 2, in <module>
from .torch_core import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/torch_core.py", line 2, in <module>
from .imports.torch import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/imports/__init__.py", line 2, in <module>
from .torch import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/imports/torch.py", line 1, in <module>
import torch, torch.nn.functional as F
File "/home/andira/anaconda3/lib/python3.7/site-packages/torch/__init__.py", line 84, in <module>
from torch._C import *
ImportError: libtorch_python.so: cannot open shared object file: No such file or directory
So I tried to run this line:
from torch._C import *
and got same result
ImportError: libtorch_python.so: cannot open shared object file: No such file or directory
I checked /home/andira/anaconda3/lib/python3.7/site-packages/torch/lib and there are only libcaffe2_gpu.so and libshm.so files, and I can't find libtorch_python.so either. My question is, what is actually libtorch_python.so? I've read some articles, and most of them talked about undefined symbol, not cannot open shared object file: No such file or directory like mine. I'm new to Python and torch, so I really appreciate your answer.
| My problem is solved. I uninstalled torch twice:
pip uninstall torch
pip uninstall torch
and then re-installed it:
pip install torch==1.0.1.post2
| https://stackoverflow.com/questions/55476131/ |
Matrix-vector multiplication for only one dimension in a tensor | Is it possible to multiply only one (last) dimension in a tensor alone with other vectors?
For example, assume a tensor T=[100, 20, 400] and a matrix M =[400, 400].
Is it possible to make the operation h_{transpose}*M*h, where h is the last dimension in the tensor T? In other words, is it possible to make use of (possibly pytorch) built-in functions to get the resulting tensor of size [100, 20, 1]?
| I think the easiest (certainly the shortest) solution is with einsum.
import torch
T = torch.randn(100, 20, 400)
M = torch.randn(400, 400)
res = torch.einsum('abc,cd,abd->ab', (T, M, T)).unsqueeze(-1)
It basically says "for all (a, b, c, d) in bounds, multiply T[a, b, c] with M[c, d] and T[a, b, d] and accumulate it in res[a, b]".
Since einsum is implemented in terms of basic building blocks like mm, transpose etc, this could certainly be unrolled into a more "classical" solution, but right now my brain fails me at that.
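For completeness, a hedged sketch of one such "classical" unrolling (my addition, not the original answerer's): compute the batched product T @ M, then take the row-wise dot product with T.
import torch

T = torch.randn(100, 20, 400)
M = torch.randn(400, 400)

TM = T @ M                                # [100, 20, 400], batched vector-matrix products
res = (TM * T).sum(dim=-1, keepdim=True)  # h^T M h for every h = T[a, b, :], shape [100, 20, 1]

# agrees with the einsum version up to floating-point noise
ref = torch.einsum('abc,cd,abd->ab', T, M, T).unsqueeze(-1)
print(torch.allclose(res, ref, atol=1e-4))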
| https://stackoverflow.com/questions/55476990/ |
Regarding number of epochs for torchvision models | I was trying to find how many epochs the pretrained Alexnet model (available from torchvision) was trained for on Imagenet, and also what learning rate was used. I tried checking the checkpoint keys to see if any epoch info was stored.
Any suggestions on how to find it out?
| According to this comment on GitHub by a PyTorch team member, most of the training was done with a variant of https://github.com/pytorch/examples/tree/master/imagenet. All the models were trained on Imagenet. According to the file:
The default learning rate schedule starts at 0.1 and decays by a factor of 10 every 30 epochs, though they recommend using 0.01 for Alexnet as initial learning rate.
The default value for epochs is 90.
| https://stackoverflow.com/questions/55476998/ |
Pytorch - add rows of a 2D tensor element-wise | I have the following tensor :
ts = torch.tensor([[1,2,3],[4,6,7],[8,9,10]])
> tensor([[ 1, 2, 3],
[ 4, 6, 7],
[ 8, 9, 10]])
I am looking for a pytorch generic operation that adds all rows element-wise like that:
ts2 = ts[0]+ts[1]+ts[2]
print(ts2)
> tensor([13, 17, 20])
In reality, the number of rows corresponds to the batch size, which varies.
| You can sum over an axis/dimension like so:
torch.sum(ts, dim=0)
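Applied to the example above (a quick check, my addition):
ts = torch.tensor([[1, 2, 3], [4, 6, 7], [8, 9, 10]])
print(torch.sum(ts, dim=0))  # tensor([13, 17, 20])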
| https://stackoverflow.com/questions/55500527/ |
Pytorch loss function error in the last batch | Assume that I have 77 samples to train my CNN, and my batch size is 10. Then the last batch has a batch size of 7 instead of 10. Somehow when I pass it to the loss function such as nn.MSELoss(), it gives me the error:
RuntimeError: The size of tensor a (10) must match the size of tensor
b (7) at non-singleton dimension 1
So pytorch doesn't support batches with different sizes?
My code in doubt:
import numpy as np
import torch
from torch import nn
import torchvision
import torch.nn.functional as F
import torch.optim as optim
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, (5, 4))
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(64, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, x.shape[1] * x.shape[2] * x.shape[3])
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
model = Net()
batch_size = 10
# Generating Artifical data
x_train = torch.randn((77,1,20,20))
y_train = torch.randint(0,10,size=(77,),dtype=torch.float)
trainset = torch.utils.data.TensorDataset(x_train,y_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=0)
# testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=0)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
for epoch in range(20):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 10 == 0:
            print('epoch{}, step{}, loss: {}'.format(epoch + 1, i + 1, running_loss))
            # print("frac post = {}".format(frac_post))
            running_loss = 0.0
| The problem is not due to the batch size, but to a failure to broadcast properly between the 10 outputs of your CNN and the single label provided in each example.
If you look at the model output and label tensor shapes during the batch where the error is thrown,
print(outputs.shape, labels.shape)
#out: torch.Size([7, 10]) torch.Size([7])
you'll see that the labels are stored in a singleton tensor. According to pytorch broadcasting rules, to be broadcastable two tensors have to be compatible in all trailing dimensions. In this case, the trailing dimension of the model output (10) is incompatible with that of the label (7).
To fix, either add a dummy dimension to the label (assuming you actually want to broadcast the labels to match your ten network outputs), or define a network with scalar outputs. For example:
y_train = torch.randint(0,10,size=(77,1),dtype=torch.float)
results in
print(outputs.shape, labels.shape)
#out: torch.Size([7, 10]) torch.Size([7,1])
# these are broadcastable
| https://stackoverflow.com/questions/55507391/ |
how to load the gpu trained model into the cpu? | I am using PyTorch. I want to use a model that was already trained on multiple GPUs, on a CPU. How do I do this task?
I tried it on Anaconda 3 and CPU-only PyTorch; I don't have a GPU.
model = models.get_pose_net(config, is_train=False)
gpus = [int(i) for i in config.GPUS.split(',')]
model = torch.nn.DataParallel(model, device_ids=gpus).cuda()
print('Created model...')
print(model)
checkpoint = torch.load(config.MODEL.RESUME)
model.load_state_dict(checkpoint)
model.eval()
print('Loaded pretrained weights...')
The error I got is:
AssertionError Traceback (most recent call last)
<ipython-input-15-bbfcd201d332> in <module>()
2 model = models.get_pose_net(config, is_train=False)
3 gpus = [int(i) for i in config.GPUS.split(',')]
----> 4 model = torch.nn.DataParallel(model, device_ids=gpus).cuda()
5 print('Created model...')
6 print(model)
C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in cuda(self, device)
258 Module: self
259 """
--> 260 return self._apply(lambda t: t.cuda(device))
261
262 def cpu(self):
C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in
_apply(self, fn)
185 def _apply(self, fn):
186 for module in self.children():
--> 187 module._apply(fn)
188
189 for param in self._parameters.values():
C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
185 def _apply(self, fn):
186 for module in self.children():
--> 187 module._apply(fn)
188
189 for param in self._parameters.values():
C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
191 # Tensors stored in modules are graph leaves, and we don't
192 # want to create copy nodes, so we have to unpack the data.
--> 193 param.data = fn(param.data)
194 if param._grad is not None:
195 param._grad.data = fn(param._grad.data)
C:\Users\psl\Anaconda3\lib\site-packages\torch\nn\modules\module.py in <lambda>(t)
258 Module: self
259 """
--> 260 return self._apply(lambda t: t.cuda(device))
261
262 def cpu(self):
C:\Users\psl\Anaconda3\lib\site-packages\torch\cuda\__init__.py in _lazy_init()
159 raise RuntimeError(
160 "Cannot re-initialize CUDA in forked subprocess. " + msg)
--> 161 _check_driver()
162 torch._C._cuda_init()
163 _cudart = _load_cudart()
C:\Users\psl\Anaconda3\lib\site-packages\torch\cuda\__init__.py in _check_driver()
80 Found no NVIDIA driver on your system. Please check that you
81 have an NVIDIA GPU and installed a driver from
---> 82 http://www.nvidia.com/Download/index.aspx""")
83 else:
84 # TODO: directly link to the alternative bin that needs install
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
| To force-load the saved model onto the CPU, use the following command:
torch.load('/path/to/saved/model', map_location='cpu')
In your case change it to
torch.load(config.MODEL.RESUME, map_location='cpu')
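One more hedged note (my addition): since the checkpoint was saved from a torch.nn.DataParallel model, the parameter names may carry a module. prefix. Assuming the checkpoint is the raw state dict (as in the question), a sketch for loading it into a plain, non-DataParallel model on the CPU:
import torch

checkpoint = torch.load(config.MODEL.RESUME, map_location='cpu')
# strip the 'module.' prefix that torch.nn.DataParallel adds to parameter names
state_dict = {k.replace('module.', '', 1): v for k, v in checkpoint.items()}
model.load_state_dict(state_dict)
model.eval()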
| https://stackoverflow.com/questions/55511857/ |
How to dynamically index the tensor in pytorch? | For example, I got a tensor:
tensor = torch.rand(12, 512, 768)
And I got an index list, say it is:
[0,2,3,400,5,32,7,8,321,107,100,511]
I wish to select 1 element out of 512 elements on dimension 2 given the index list. And then the tensor's size would become (12, 1, 768).
Is there a way to do it?
| There is also a way just using PyTorch and avoiding the loop using indexing and torch.split:
tensor = torch.rand(12, 512, 768)
# create tensor with idx
idx_list = [0,2,3,400,5,32,7,8,321,107,100,511]
# convert list to tensor
idx_tensor = torch.tensor(idx_list)
# indexing and splitting
list_of_tensors = tensor[:, idx_tensor, :].split(1, dim=1)
When you call tensor[:, idx_tensor, :] you will get a tensor of shape (12, len_of_idx_list, 768), where the second dimension depends on your number of indices.
Using torch.split this tensor is split into a list of tensors of shape: (12, 1, 768).
So finally list_of_tensors contains tensors of the shape:
[torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768]),
torch.Size([12, 1, 768])]
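If instead you want one element per batch row (i.e. tensor[i, idx[i], :] for each i), giving a single tensor of shape (12, 1, 768) rather than a list, a hedged alternative (my addition) using advanced indexing:
import torch

tensor = torch.rand(12, 512, 768)
idx_tensor = torch.tensor([0, 2, 3, 400, 5, 32, 7, 8, 321, 107, 100, 511])

selected = tensor[torch.arange(12), idx_tensor].unsqueeze(1)
print(selected.shape)  # torch.Size([12, 1, 768])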
| https://stackoverflow.com/questions/55529236/ |
Cannot Obtain Similar DL Prediction Result in Pytorch C++ API Compared to Python | I have trained a deep learning model using a U-Net architecture in order to segment the nuclei in Python and PyTorch. I would like to load this pretrained model and make predictions in C++. For this reason, I obtained a trace file (with .pt extension). Then I ran this code:
#include <iostream>
#include <torch/script.h> // One-stop header.
#include <iostream>
#include <memory>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main(int argc, const char* argv[]) {
    Mat image;
    image = imread("C:/Users/Sercan/PycharmProjects/samplepyTorch/test_2.png", CV_LOAD_IMAGE_COLOR);

    std::shared_ptr<torch::jit::script::Module> module = torch::jit::load("C:/Users/Sercan/PycharmProjects/samplepyTorch/epistroma_unet_best_model_trace.pt");
    module->to(torch::kCUDA);

    std::vector<int64_t> sizes = { 1, 3, image.rows, image.cols };
    torch::TensorOptions options(torch::ScalarType::Byte);
    torch::Tensor tensor_image = torch::from_blob(image.data, torch::IntList(sizes), options);
    tensor_image = tensor_image.toType(torch::kFloat);

    auto result = module->forward({ tensor_image.to(at::kCUDA) }).toTensor();
    result = result.squeeze().cpu();
    result = at::sigmoid(result);

    cv::Mat img_out(image.rows, image.cols, CV_32F, result.data<float>());
    cv::imwrite("img_out.png", img_out);
}
Image outputs (first image: test image, second image: Python prediction result, third image: C++ prediction result):
As you see, C++ prediction output is not similar to python prediction output. Could you offer a solution to fix this problem?
| Even though the question is old it might be useful to some. This answer is based on pytorch 1.5.0 release (and first stable version of C++ frontend), the case might be a little different in previous versions (though 1.4.0+ would work the same IIRC).
PyTorch C++ frontend code
No need to explicitly create a torch::TensorOptions object if you only want to specify the type in torch::from_blob. Check the Configuring Properties of Tensor notes in PyTorch, which will clear it up further. Basically, you can just use torch::ScalarType::Byte.
This type is equivalent to torch::kUInt8, which is easier to find in the docs IMO.
No need to create a std::vector object to keep the shape, as torch::from_blob has its second argument of type IntArrayRef, which is a typedef for ArrayRef<int64_t> (see the ArrayRef documentation). This class, in turn, has multiple overloaded constructors, one of which takes std::initializer_list (which is exactly yours: { 1, 3, image.rows, image.cols })
With all that in mind you can create tensor_image in a single line like so (added auto as returned type is IMO obvious and const as it won't be modified further as the type is changed in the same line):
const auto tensor_image =
    torch::from_blob(image.data, {1, 3, image.rows, image.cols},
                     torch::kUInt8)
        .toType(torch::kFloat);
Actual error
OpenCV loads images in BGR (blue-green-red) format, while PyTorch usually uses RGB (say in torchvision in Python). Solution is to permute your image so the colors match.
Including above change, whole code becomes:
const auto tensor_image =
    torch::from_blob(image.data, {1, 3, image.rows, image.cols},
                     torch::kUInt8)
        .toType(torch::kFloat)
        .permute({0, 3, 2, 1});
And you should be fine now with your predictions. Maybe it would be beneficial to get tensor > 0 instead of sigmoid as it's probably binary classification and there is no need for this operation per se.
Other PyTorch related stuff
There is no need to use at (ATen - as described in docs, foundational tensor and mathematical operation library on which all else is built) namespace anymore as torch:: namespace redirects to it.
Clearer and less confusing options would be:
torch::kCUDA instead of at::kCUDA
torch::sigmoid instead of at::sigmoid
Also .data<T> is deprecated in favor of .data_ptr<T>
All in all you rarely need to use different namespace than torch:: and it's sub-namespaces.
| https://stackoverflow.com/questions/55531432/ |
Understanding Gradient in Pytorch | I have some Pytorch code which demonstrates the gradient calculation within Pytorch, but I am thoroughly confused about what got calculated and how it is used. This post here demonstrates the usage of it, but it does not make sense to me in terms of the back propagation algorithm. Looking at the gradient of in1 and in2 in the example below, I realized the gradient of in1 and in2 is the derivative of the loss function, but my understanding is that the update also needs to account for the actual loss value. Where is the loss value getting used? Am I missing something here?
in1 = torch.randn(2,2,requires_grad=True)
in2 = torch.randn(2,2,requires_grad=True)
target = torch.randn(2,2)
l1 = torch.nn.L1Loss()
l2 = torch.nn.MSELoss()
out1 = l1(in1,target)
out2 = l2(in2,target)
out1.backward()
out2.backward()
in1.grad
in2.grad
| Backpropagation is based on the chain-rule for calculating derivatives. This means the gradients are computed step-by-step from tail to head and always passed back to the previous step ("previous" w.r.t. to the preceding forward pass).
For scalar outputs the process is initiated by assuming a gradient of d(out1)/d(out1) = 1. If you're calling backward on a (non-scalar) tensor, though, you need to provide the initial gradient yourself, since it is ambiguous.
Let's look at an example that involves more steps to compute the output:
a = torch.tensor(1., requires_grad=True)
b = a**2
c = 5*b
c.backward()
print(a.grad) # Prints: 10.
So what happens here?
The process is initiated by using d(c)/d(c) = 1.
Then the previous gradient is computed as d(c)/d(b) = 5 and multiplied with the downstream gradient (1 in this case), i.e. 5 * 1 = 5.
Again the previous gradient is computed as d(b)/d(a) = 2*a = 2 and multiplied again with the downstream gradient (5 in this case), i.e. 2 * 5 = 10.
Hence we arrive at a gradient value of 10 for the initial tensor a.
Now in effect this calculates d(c)/d(a) and that's all there is to it. It is the gradient of c with respect to a and hence no notion of a "target loss" is used (even if the loss was zero that doesn't mean the gradient has to be; it is up to the optimizer to step into the correct (downhill) direction and to stop once the loss got sufficiently small).
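To connect this back to the original question (my hedged addition): the loss value itself never enters the update; what flows backward is the derivative of the loss, which for MSE is proportional to the error (prediction minus target):
import torch

in2 = torch.randn(2, 2, requires_grad=True)
target = torch.randn(2, 2)

out2 = torch.nn.MSELoss()(in2, target)  # loss = mean((in2 - target)**2)
out2.backward()

# the gradient is 2 * (in2 - target) / numel, independent of the loss's own magnitude
print(torch.allclose(in2.grad, 2 * (in2 - target).detach() / in2.numel()))  # True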
| https://stackoverflow.com/questions/55543786/ |
How do I flatten a tensor in pytorch? | Given a tensor of multiple dimensions, how do I flatten it so that it has a single dimension?
torch.Size([2, 3, 5]) ⟶ flatten ⟶ torch.Size([30])
| TL;DR: torch.flatten()
Use torch.flatten() which was introduced in v0.4.1 and documented in v1.0rc1:
>>> t = torch.tensor([[[1, 2],
                       [3, 4]],
                      [[5, 6],
                       [7, 8]]])
>>> torch.flatten(t)
tensor([1, 2, 3, 4, 5, 6, 7, 8])
>>> torch.flatten(t, start_dim=1)
tensor([[1, 2, 3, 4],
        [5, 6, 7, 8]])
For v0.4.1 and earlier, use t.reshape(-1).
With t.reshape(-1):
If the requested view is contiguous in memory, this will be equivalent to t.view(-1) and memory will not be copied. Otherwise it will be equivalent to t.contiguous().view(-1).
Other non-options:
t.view(-1) won't copy memory, but may not work depending on original size and stride
t.resize(-1) gives RuntimeError (see below)
t.resize(t.numel()) warning about being a low-level method
(see discussion below)
(Note: pytorch's reshape() may change data but numpy's reshape() won't.)
t.resize(t.numel()) needs some discussion. The torch.Tensor.resize_ documentation says:
The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged)
Given the current strides will be ignored with the new (1, numel()) size, the elements may appear in a different order than with reshape(-1). However, "size" may mean the memory size, rather than the tensor's size.
It would be nice if t.resize(-1) worked for both convenience and efficiency, but with torch 1.0.1.post2, t = torch.rand([2, 3, 5]); t.resize(-1) gives:
RuntimeError: requested resize to -1 (-1 elements in total), but the given
tensor has a size of 2x2 (4 elements). autograd's resize can only change the
shape of a given tensor, while preserving the number of elements.
I raised a feature request for this here, but the consensus was that resize() was a low level method, and reshape() should be used in preference.
| https://stackoverflow.com/questions/55546873/ |
4d Input Tensor vs 1d Input Tensor (aka vector) to a neural network | Reading about machine learning, I keep seeing references to the "input vector" or "feature vector", a 1d tensor that holds the input to the neural network. So for example a 28x28 grayscale image would be a 784 dimensional vector.
Then I also keep seeing references to images being a 4 dimensional tensor with the dimensions being number in batch, color channel, height, and width. For example, this is how it's described in "Deep Learning with Python, by Francois Chollet".
I'm wondering, why is it described in these different ways? When would one be used vs. the other?
| There are two main considerations.
First is due to batching. Since we usually want to perform each optimization step based on gradient calculation for a number of training examples (and not just one), it is helpful to run the calculations for all of them at once. Therefore the standard approach in many libraries is that the first dimension is the batch dimension, and all operations are applied independently for each subtensor along the first dimension. Therefore most tensors in the actual code are at least 2-dimensional: [batch, any_other_dimensions...]. However, from the perspective of the neural network, batching is an implementation detail, so it is often skipped for clarity. Your link talks about 784-dimensional vectors, which are in practice almost undoubtedly processed in batches, so example tensors with batch size of 16 would be of size [batch, features] = [16, 784]. Summing up, we have the first dimension explained as batch, and then there are the any_other_dimensions... which in the above example happens to be a single features dimension of size 784.
Then come the 4-dimensional tensors, which arise when using convolutional neural networks, instead of fully connected ones. A fully connected network uses full matrices, which means that every neuron of the previous layer contributes to every neuron of the following layer. Convolutional neural networks can be seen as using a specially structured sparse matrix, where each neuron of the previous layer influences only some neurons of the following layer, namely those within some fixed distance of its location. Therefore, convolutions impose a spatial structure, which needs to be reflected in the intermediate tensors. Instead of [batch, features], we therefore need [batch, x, y] to reflect the spatial structure of the data. Finally, convolutional neural networks, in everyday practice, have a bit of admixture of fully-connected ones: they have the notion of multiple "features" which are localized spatially - giving rise to the so-called "feature maps" - and the tensor grows to 4d: [batch, feature, x, y]. Each value tensor_new[b, f, x, y] is calculated based on all previous values tensor_previous[b', f', x', y'], subject to the following constraints:
b = b': we do not mix the batch elements
x' is at most some distance away from x and similarly for y': we only use the values in the spatial neighborhood
All f's are used: this is the "fully connected" part.
Convolutional neural networks are better suited to visual tasks than fully connected ones, which become infeasible for large enough images (imagine storing a fully connected matrix of size (1024 * 1024) ^ 2 for a 1024 x 1024px image). 4d tensors in CNNs are specific to 2d vision, you can encounter 3d tensors in 1d signal processing (for example sound): [batch, feature, time], 5d in 3d volume processing [batch, feature, x, y, z] and entirely different layouts in other kinds of networks which are neither fully-connected nor convolutional.
Summing up: if somebody tells you they are using 1d vectors, that's a simplification: almost surely they use at least two, for batching. Then, in the context of 2d computer vision, convolutional networks are the standard and they come with 4d tensors. In other scenarios, you may see even different layouts and dimensionalities. Keywords to google for more reading: fully connected neural networks, convolutional neural networks, minibatching or stochastic gradient descent (these two are closely related).
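A quick hedged illustration of the two layouts in PyTorch (my addition):
import torch
import torch.nn as nn

batch = torch.randn(16, 1, 28, 28)  # [batch, feature/channel, x, y]

flat = batch.view(16, -1)           # [batch, features] = [16, 784]
fc = nn.Linear(784, 10)
print(fc(flat).shape)               # torch.Size([16, 10])

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
print(conv(batch).shape)            # torch.Size([16, 8, 28, 28]) -- spatial structure preserved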
| https://stackoverflow.com/questions/55547943/ |
Pytorch doesn't support one-hot vector? | I am very confused by how Pytorch deals with one-hot vectors. In this tutorial, the neural network will generate a one-hot vector as its output. As far as I understand, the schematic structure of the neural network in the tutorial should be like:
However, the labels are not in one-hot vector format. I get the following sizes:
print(labels.size())
print(outputs.size())
output>>> torch.Size([4])
output>>> torch.Size([4, 10])
Miraculously, when I pass the outputs and labels to criterion = CrossEntropyLoss(), there's no error at all.
loss = criterion(outputs, labels) # How come it has no error?
My hypothesis:
Maybe pytorch automatically convert the labels to one-hot vector form. So, I try to convert labels to one-hot vector before passing it to the loss function.
def to_one_hot_vector(num_class, label):
    b = np.zeros((label.shape[0], num_class))
    b[np.arange(label.shape[0]), label] = 1
    return b
labels_one_hot = to_one_hot_vector(10,labels)
labels_one_hot = torch.Tensor(labels_one_hot)
labels_one_hot = labels_one_hot.type(torch.LongTensor)
loss = criterion(outputs, labels_one_hot) # Now it gives me error
However, I got the following error
RuntimeError: multi-target not supported at
/opt/pytorch/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
So, are one-hot vectors not supported in Pytorch? How does Pytorch calculate the cross entropy for the two tensors outputs = [[1,0,0],[0,0,1]] and labels = [0,2]? It doesn't make sense to me at all at the moment.
| PyTorch states in its documentation for CrossEntropyLoss that
This criterion expects a class index (0 to C-1) as the target for each value of a 1D tensor of size minibatch
In other words, it has your to_one_hot_vector function conceptually built into CEL and does not expose the one-hot API. Notice that one-hot vectors are memory-inefficient compared to storing class labels.
If you are given one-hot vectors and need to go to class labels format (for instance to be compatible with CEL), you can use argmax like below:
import torch
labels = torch.tensor([1, 2, 3, 5])
one_hot = torch.zeros(4, 6)
one_hot[torch.arange(4), labels] = 1
reverted = torch.argmax(one_hot, dim=1)
assert (labels == reverted).all().item()
| https://stackoverflow.com/questions/55549843/ |
Dict[str, Any] or Dict[str, Field] in pytext | I'm reading the document of pytext (NLP modeling framework built on PyTorch) and this simple method from_config, a factory method to create a component from a config, has lines like Dict[str, Field] = {ExtraField.TOKEN_RANGE: RawField()}.
@classmethod
def from_config(cls, config: Config, model_input_config, target_config, **kwargs):
    model_input_fields: Dict[str, Field] = create_fields(
        model_input_config,
        {
            ModelInput.WORD_FEAT: TextFeatureField,
            ModelInput.DICT_FEAT: DictFeatureField,
            ModelInput.CHAR_FEAT: CharFeatureField,
        },
    )
    target_fields: Dict[str, Field] = {WordLabelConfig._name: WordLabelField.from_config(target_config)}
    extra_fields: Dict[str, Field] = {ExtraField.TOKEN_RANGE: RawField()}
    kwargs.update(config.items())
    return cls(
        raw_columns=config.columns_to_read,
        targets=target_fields,
        features=model_input_fields,
        extra_fields=extra_fields,
        **kwargs,
    )
and
def preprocess(self, data: List[Dict[str, Any]]):
    tokens = []
    for row in data:
        tokens.extend(self.preprocess_row(row))
    return [{"text": tokens}]
How can a dictionary have keys with 2 items? What exactly is this?
I would appreciate any pointer!
| What you're seeing are python type annotations. You can read about the syntax, design and rationale here and about the actual implementation (possible types, how to construct custom ones, etc) here. Note that here List and Dict are upper cased - Dict[str, Any] is meant to construct the type "a dictionary with string keys and Any values" and not to access an instance of that type.
Those are optional and by default are not used for anything (so you can just ignore them when reading your code, because python also does). However, there are tools like mypy which can interpret these type annotations and check whether they are consistent.
I don't know for sure how they are used in torchtext - I don't use it myself and I haven't found anything quickly searching the documentation - but they are likely helpful to the developers who use some special tooling. But they can also be helpful to you! From your perspective, they are best treated as comments rather than code. Reading the signature of preprocess you know that data should be a list of dicts with str keys and any value type. If you have bugs in your code and find that data is a str itself, you know for sure that it is a bug (perhaps not the only one).
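A tiny hedged example of the annotation syntax in isolation (my addition):
from typing import Any, Dict

# annotation only: "counts is a dict mapping str keys to int values"
counts: Dict[str, int] = {"a": 1, "b": 2}

def preprocess(data: Dict[str, Any]) -> Dict[str, Any]:
    # annotations are not enforced at runtime; they document intent (and feed tools like mypy)
    return data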
| https://stackoverflow.com/questions/55556562/ |
How to import Python package (Pytorch-neat) that is not installable from pip/conda repositories? | I am trying to use the Pytorch-neat package https://github.com/uber-research/PyTorch-NEAT but I don't understand the workflow of using it. I have already installed the python-neat package and I can import it using import neat in my Jupyter notebook. But what should I do with the Pytorch-neat code? There is no pytorch-neat package in the Conda or pip repositories, so I guess that this Pytorch-neat code is not compiled and distributed as a Python package for Jupyter notebooks. But what should I do with this code? E.g., the sample script contains the code:
import neat
from pytorch_neat.multi_env_eval import MultiEnvEvaluator
So - neat is a package and I am importing it. But how should I understand the from clause? Should I load the Pytorch-neat scripts somehow in the previous cells of my notebook so that I can then use this from clause? Or maybe I should build the Pytorch-neat package locally, install it from the local repository, and import it similarly to the neat package? But if so, why do the examples use the from clause?
I am starting to use Python and I am greatly confused with all of this!
| To import from pytorch_neat you have to clone the repository and manually copy the directory pytorch_neat into your site-packages (or any directory in sys.path).
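A hedged sketch of the sys.path route (my addition), which avoids copying files into site-packages:
import sys

# assumes you cloned https://github.com/uber-research/PyTorch-NEAT to this (hypothetical) path
sys.path.append("/path/to/PyTorch-NEAT")

from pytorch_neat.multi_env_eval import MultiEnvEvaluator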
| https://stackoverflow.com/questions/55563322/ |
Pytorch. How does pin_memory work in Dataloader? | I want to understand how pin_memory in Dataloader works.
According to the documentation:
pin_memory (bool, optional) – If True, the data loader will copy tensors into CUDA pinned memory before returning them.
Below is a self-contained code example.
import torchvision
import torch
print('torch.cuda.is_available()', torch.cuda.is_available())
train_dataset = torchvision.datasets.CIFAR10(root='cifar10_pytorch', download=True, transform=torchvision.transforms.ToTensor())
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=64, pin_memory=True)
x, y = next(iter(train_dataloader))
print('x.device', x.device)
print('y.device', y.device)
Producing the following output:
torch.cuda.is_available() True
x.device cpu
y.device cpu
But I was expecting something like this, because I specified flag pin_memory=True in Dataloader.
torch.cuda.is_available() True
x.device cuda:0
y.device cuda:0
Also I run some benchmark:
import torchvision
import torch
import time
import numpy as np
pin_memory = True

train_dataset = torchvision.datasets.CIFAR10(root='cifar10_pytorch', download=True, transform=torchvision.transforms.ToTensor())
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=64, pin_memory=pin_memory)

print('pin_memory:', pin_memory)

times = []
n_runs = 10

for i in range(n_runs):
    st = time.time()
    for bx, by in train_dataloader:
        bx, by = bx.cuda(), by.cuda()
    times.append(time.time() - st)

print('average time:', np.mean(times))
I got the following results.
pin_memory: False
average time: 6.5701503753662
pin_memory: True
average time: 7.0254474401474
So pin_memory=True only makes things slower.
Can someone explain me this behaviour?
| The documentation is perhaps overly laconic, given that the terms used are fairly niche. In CUDA terms, pinned memory does not mean GPU memory but non-paged CPU memory. The benefits and rationale are provided here, but the gist of it is that this flag allows the x.cuda() operation (which you still have to execute as usually) to avoid one implicit CPU-to-CPU copy, which makes it a bit more performant. Additionally, with pinned memory tensors you can use x.cuda(non_blocking=True) to perform the copy asynchronously with respect to host. This can lead to performance gains in certain scenarios, namely if your code is structured as
x.cuda(non_blocking=True)
perform some CPU operations
perform GPU operations using x.
Since the copy initiated in 1. is asynchronous, it does not block 2. from proceeding while the copy is underway and thus the two can happen side by side (which is the gain). Since step 3. requires x to be already copied over to GPU, it cannot be executed until 1. is complete - therefore only 1. and 2. can be overlapping, and 3. will definitely take place afterwards. The duration of 2. is therefore the maximum time you can expect to save with non_blocking=True. Without non_blocking=True your CPU would be waiting idle for the transfer to complete before proceeding with 2..
Note: perhaps step 2. could also comprise GPU operations, as long as they do not require x - I am not sure if this is true, so please don't quote me on that.
Edit: I believe you're missing the point with your benchmark. There are three issues with it
You're not using non_blocking=True in your .cuda() calls.
You're not using multiprocessing in your DataLoader, which means that most of the work is done synchronously on the main thread anyway, trumping the memory transfer costs.
You're not performing any CPU work in your data loading loop (aside from .cuda() calls) so there is no work to be overlaid with memory transfers.
A benchmark closer to how pin_memory is meant to be used would be
import torchvision, torch, time
import numpy as np
pin_memory = True
batch_size = 1024 # bigger memory transfers to make their cost more noticable
n_workers = 6 # parallel workers to free up the main thread and reduce data decoding overhead
train_dataset =torchvision.datasets.CIFAR10(
root='cifar10_pytorch',
download=True,
transform=torchvision.transforms.ToTensor()
)
train_dataloader = torch.utils.data.DataLoader(
train_dataset,
batch_size=batch_size,
pin_memory=pin_memory,
num_workers=n_workers
)
print('pin_memory:', pin_memory)
times = []
n_runs = 10
def work():
    # emulates the CPU work done
    time.sleep(0.1)

for i in range(n_runs):
    st = time.time()
    for bx, by in train_dataloader:
        bx, by = bx.cuda(non_blocking=pin_memory), by.cuda(non_blocking=pin_memory)
        work()
    times.append(time.time() - st)

print('average time:', np.mean(times))
which gives an average of 5.48s for my machine with memory pinning and 5.72s without.
| https://stackoverflow.com/questions/55563376/ |
how to avoid split and sum of pieces in pytorch or numpy | I want to split a long vector into smaller unequal pieces, do a summation on each piece and gather the results into a new vector.
I need to do this in pytorch but I am also interested to see how this is done with numpy.
This can easily be accomplished by splitting the vector.
sizes = [3, 7, 5, 9]
X = torch.ones(sum(sizes))
Y = torch.tensor([s.sum() for s in torch.split(X, sizes)])
or with np.ones and np.split.
Is there a more efficient way to do this?
Edit:
Inspired by the first comment:
indices = np.cumsum([0]+sizes)[:-1]
Y = np.add.reduceat(X, indices.tolist())
solves it for numpy. I am still looking for a solution with pytorch.
| index_add_ is your friend!
# inputs
sizes = torch.tensor([3, 7, 5, 9], dtype=torch.long)
x = torch.ones(sizes.sum())
# prepare an index vector for summation (what elements of x are summed to each element of y)
ind = torch.zeros(sizes.sum(), dtype=torch.long)
ind[torch.cumsum(sizes, dim=0)[:-1]] = 1
ind = torch.cumsum(ind, dim=0)
# prepare the output
y = torch.zeros(len(sizes))
# do the actual summation
y.index_add_(0, ind, x)
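A quick check against the list-comprehension version from the question (my addition):
y_ref = torch.tensor([s.sum() for s in torch.split(x, sizes.tolist())])
print(torch.allclose(y, y_ref))  # True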
| https://stackoverflow.com/questions/55567838/ |
1D correlation between 2 matrices | I want to find 1D correlation between two matrices. These two matrices are the output of a convolution operation on two different images. Let's call the first matrix as matrix A and the other one as matrix B. Both these matrices have the shape 100 x 100 x 64 (say).
I've been following a research paper which basically computes 1D correlation between these two matrices (matrix A and matrix B) in one of the steps and the output of the correlation operation is also a matrix with the shape 100 x 100 x 64. The link to the paper can be found here. The network can be found on Page 4. The correlation part is in the bottom part of the network. A couple of lines have been mentioned about it in the 2nd paragraph of section 3.3 (on the same page, below the network).
I am not really sure what they mean by 1D correlation and more so how to implement it in Python. I am also confused as to how the shape of the output remains the same as the input after applying correlation. I am using the PyTorch library for implementing this network.
Any help will be appreciated. Thanks.
| So they basically have 1 original image, which they treat as the left side view for the depth perception algorithm, but since you need stereo vision to calculate depth in a still image they use a neural structure to synthesise a right side view.
1 Dimensional Correlation takes 2 sequences and calculates the correlation at each point giving you another 1D sequence of the same length as the 2 inputs. So if you apply this correlation along a certain axis of a tensor the resultant tensor does not change shape.
Intuitively they thought it made sense to correlate the images along the horizontal axis, a bit like reading a book, but in this instance it should have an effect akin to identifying that things that are further away also appear as points that are closer together in the left and right side views. The correlation is probably higher for left and right side data-points that are further away, and this makes the depth classification much easier for the neural network.
| https://stackoverflow.com/questions/55574457/ |
Conv1D with kernel_size=1 vs Linear layer | I'm working on very sparse vectors as input. I started working with simple Linear (dense/fully connected layers) and my network yielded pretty good results (let's take accuracy as my metric here, 95.8%).
I later tried to use a Conv1d with a kernel_size=1 and a MaxPool1d, and this network works slightly better (96.4% accuracy).
Question: How are these two implementations different? Shouldn't a Conv1d with a unit kernel_size do the same as a Linear layer?
I've tried multiple runs, the CNN always yields slightly better results.
| nn.Conv1d with a kernel size of 1 and nn.Linear give essentially the same results. The only differences are the initialization procedure and how the operations are applied (which has some effect on the speed). Note that using a linear layer should be faster as it is implemented as a simple matrix multiplication (+ adding a broadcasted bias vector)
@RobinFrcd your answers are either different due to MaxPool1d or due to the different initialization procedure.
Here are a few experiments to prove my claims:
def count_parameters(model):
    """Count the number of parameters in a model."""
    return sum([p.numel() for p in model.parameters()])
conv = torch.nn.Conv1d(8,32,1)
print(count_parameters(conv))
# 288
linear = torch.nn.Linear(8,32)
print(count_parameters(linear))
# 288
print(conv.weight.shape)
# torch.Size([32, 8, 1])
print(linear.weight.shape)
# torch.Size([32, 8])
# use same initialization
linear.weight = torch.nn.Parameter(conv.weight.squeeze(2))
linear.bias = torch.nn.Parameter(conv.bias)
tensor = torch.randn(128,256,8)
permuted_tensor = tensor.permute(0,2,1).clone().contiguous()
out_linear = linear(tensor)
print(out_linear.mean())
# tensor(0.0067, grad_fn=<MeanBackward0>)
out_conv = conv(permuted_tensor)
print(out_conv.mean())
# tensor(0.0067, grad_fn=<MeanBackward0>)
Speed test:
%%timeit
_ = linear(tensor)
# 151 µs ± 297 ns per loop
%%timeit
_ = conv(permuted_tensor)
# 1.43 ms ± 6.33 µs per loop
As Hanchen's answer shows, the results can differ very slightly due to numerical precision.
| https://stackoverflow.com/questions/55576314/ |
torch.nn.sequential vs. combination of multiple torch.nn.linear | I'm trying to create a multi layer neural net class in pytorch. I want to know if the following 2 pieces of code create the same network.
Model 1 with nn.Linear
class TestModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(TestModel, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        x = nn.functional.relu(self.fc1(x))
        x = nn.functional.softmax(self.fc2(x))
        return x
Model 2 with nn.Sequential
class TestModel2(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(TestModel2, self).__init__()
        self.seq = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
            nn.Softmax()
        )

    def forward(self, x):
        return self.seq(x)
| Yes, these two pieces of code create the same network.
One way to convince yourself that this is true is to save both models to ONNX.
import torch
import torch.nn as nn

class TestModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(TestModel, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        x = nn.functional.relu(self.fc1(x))
        x = nn.functional.softmax(self.fc2(x))
        return x

class TestModel2(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(TestModel2, self).__init__()
        self.seq = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
            nn.Softmax()
        )

    def forward(self, x):
        return self.seq(x)

m = TestModel(1, 2, 3)
m2 = TestModel2(1, 2, 3)

torch.onnx.export(m, torch.Tensor([0]), "test.onnx", verbose=True)
/opt/anaconda3/envs/py36/bin/ipython:9: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
graph(%0 : Float(1)
%1 : Float(2, 1)
%2 : Float(2)
%3 : Float(3, 2)
%4 : Float(3)) {
%5 : Float(1!, 2) = onnx::Transpose[perm=[1, 0]](%1), scope: TestModel/Linear[fc1]
%6 : Float(2) = onnx::MatMul(%0, %5), scope: TestModel/Linear[fc1]
%7 : Float(2) = onnx::Add(%6, %2), scope: TestModel/Linear[fc1]
%8 : Float(2) = onnx::Relu(%7), scope: TestModel
%9 : Float(2!, 3!) = onnx::Transpose[perm=[1, 0]](%3), scope: TestModel/Linear[fc2]
%10 : Float(3) = onnx::MatMul(%8, %9), scope: TestModel/Linear[fc2]
%11 : Float(3) = onnx::Add(%10, %4), scope: TestModel/Linear[fc2]
%12 : Float(3) = onnx::Softmax[axis=0](%11), scope: TestModel
return (%12);
}
torch.onnx.export(m2, torch.Tensor([0]), "test.onnx", verbose=True)
/opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py:475: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
result = self._slow_forward(*input, **kwargs)
graph(%0 : Float(1)
%1 : Float(2, 1)
%2 : Float(2)
%3 : Float(3, 2)
%4 : Float(3)) {
%5 : Float(1!, 2) = onnx::Transpose[perm=[1, 0]](%1), scope: TestModel2/Sequential[seq]/Linear[0]
%6 : Float(2) = onnx::MatMul(%0, %5), scope: TestModel2/Sequential[seq]/Linear[0]
%7 : Float(2) = onnx::Add(%6, %2), scope: TestModel2/Sequential[seq]/Linear[0]
%8 : Float(2) = onnx::Relu(%7), scope: TestModel2/Sequential[seq]/ReLU[1]
%9 : Float(2!, 3!) = onnx::Transpose[perm=[1, 0]](%3), scope: TestModel2/Sequential[seq]/Linear[2]
%10 : Float(3) = onnx::MatMul(%8, %9), scope: TestModel2/Sequential[seq]/Linear[2]
%11 : Float(3) = onnx::Add(%10, %4), scope: TestModel2/Sequential[seq]/Linear[2]
%12 : Float(3) = onnx::Softmax[axis=0](%11), scope: TestModel2/Sequential[seq]/Softmax[3]
return (%12);
}
So both models result in the same ONNX graph with the same operations.
| https://stackoverflow.com/questions/55584747/ |
How do I update a tensor in Pytorch after indexing twice? | I know how to update a tensor after indexing into part of it like this:
import torch
b = torch.tensor([0, 1, 0, 1], dtype=torch.uint8)
b[b] = 2
b
# tensor([0, 2, 0, 2], dtype=torch.uint8)
but is there a way I can update the original tensor after indexing into it twice? E.g.
i = 1
b = torch.tensor([0, 1, 0, 1], dtype=torch.uint8)
b[b][i] = 2
b
# tensor([0, 1, 0, 1], dtype=torch.uint8)
What I'd like is for b to be tensor([0, 1, 0, 2]) at the end. Is there a way to do this?
I know that I can do
masked = b[b]
masked[i] = 2
b[b] = masked
b
# tensor([0, 1, 0, 2], dtype=torch.uint8)
but is there any better way? It seems that this must be inefficient; if masked is very large, I'm updating many locations in b when I've really only changed one.
(In case a different approach than indexing twice would work better, the general problem I have is how to change the value in an original tensor at the ith location of a masked version of that tensor.)
| I adopted another solution from here, and compared it to your solution:
Solution:
b[b.nonzero()[i]] = 2
Runtime comparison:
import torch as t
import numpy as np
import timeit

if __name__ == "__main__":
    np.random.seed(12345)
    b = t.tensor(np.random.randint(0, 2, [1000]), dtype=t.uint8)
    # inconvenient way to think of a random index halfway that is 1.
    halfway = np.array(list(range(len(b))))[b == 1][len(b[b == 1]) // 2]

    runs = 100000
    elapsed1 = timeit.timeit("mask=b[b]; mask[halfway] = 2; b[b] = mask",
                             "from __main__ import b, halfway", number=runs)
    print("Time taken (original): {:.6f} ms per call".format(elapsed1 / runs))

    elapsed2 = timeit.timeit("b[b.nonzero()[halfway]]=2",
                             "from __main__ import b, halfway", number=runs)
    print("Time taken (improved): {:.6f} ms per call".format(elapsed2 / runs))
Results:
Time taken (original): 0.000096 ms per call
Time taken (improved): 0.000047 ms per call
Results for vector of length 100000
Time taken: 0.010284 ms per call
Time taken: 0.003667 ms per call
So the solutions differ only by a factor of about 2 to 3. I'm not sure if this is the optimal solution, but depending on your size (and how often you call the function) it should give you a rough idea of what you're looking at.
| https://stackoverflow.com/questions/55584779/ |
PyTorch transforms on TensorDataset | I'm using TensorDataset to create dataset from numpy arrays.
# convert numpy arrays to pytorch tensors
X_train = torch.stack([torch.from_numpy(np.array(i)) for i in X_train])
y_train = torch.stack([torch.from_numpy(np.array(i)) for i in y_train])
# reshape into [C, H, W]
X_train = X_train.reshape((-1, 1, 28, 28)).float()
# create dataset and dataloaders
train_dataset = torch.utils.data.TensorDataset(X_train, y_train)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64)
How do I apply data augmentation (transforms) to TensorDataset?
For example, using ImageFolder, I can specify transforms as one of its parameters torchvision.datasets.ImageFolder(root, transform=...).
According to this reply by one of PyTorch's team members, it's not supported by default. Is there any alternative way to do so?
Feel free to ask if more code is needed to explain the problem.
|
By default transforms are not supported for TensorDataset, but we can create a custom class to add that option. Keep in mind, though, that most transforms are developed for PIL.Image. Anyway, here is a very simple MNIST example with very dummy transforms. csv file with MNIST here.
Code:
import numpy as np
import torch
from torch.utils.data import Dataset, TensorDataset
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
# Import mnist dataset from cvs file and convert it to torch tensor
with open('mnist_train.csv', 'r') as f:
mnist_train = f.readlines()
# Images
X_train = np.array([[float(j) for j in i.strip().split(',')][1:] for i in mnist_train])
X_train = X_train.reshape((-1, 1, 28, 28))
X_train = torch.tensor(X_train)
# Labels
y_train = np.array([int(i[0]) for i in mnist_train])
y_train = y_train.reshape(y_train.shape[0], 1)
y_train = torch.tensor(y_train)
del mnist_train
class CustomTensorDataset(Dataset):
"""TensorDataset with support of transforms.
"""
def __init__(self, tensors, transform=None):
assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors)
self.tensors = tensors
self.transform = transform
def __getitem__(self, index):
x = self.tensors[0][index]
if self.transform:
x = self.transform(x)
y = self.tensors[1][index]
return x, y
def __len__(self):
return self.tensors[0].size(0)
def imshow(img, title=''):
"""Plot the image batch.
"""
plt.figure(figsize=(10, 10))
plt.title(title)
plt.imshow(np.transpose( img.numpy(), (1, 2, 0)), cmap='gray')
plt.show()
# Dataset w/o any tranformations
train_dataset_normal = CustomTensorDataset(tensors=(X_train, y_train), transform=None)
train_loader = torch.utils.data.DataLoader(train_dataset_normal, batch_size=16)
# iterate
for i, data in enumerate(train_loader):
x, y = data
imshow(torchvision.utils.make_grid(x, 4), title='Normal')
break # we need just one batch
# Let's add some transforms
# Dataset with flipping tranformations
def vflip(tensor):
"""Flips tensor vertically.
"""
tensor = tensor.flip(1)
return tensor
def hflip(tensor):
"""Flips tensor horizontally.
"""
tensor = tensor.flip(2)
return tensor
train_dataset_vf = CustomTensorDataset(tensors=(X_train, y_train), transform=vflip)
train_loader = torch.utils.data.DataLoader(train_dataset_vf, batch_size=16)
result = []
for i, data in enumerate(train_loader):
x, y = data
imshow(torchvision.utils.make_grid(x, 4), title='Vertical flip')
break
train_dataset_hf = CustomTensorDataset(tensors=(X_train, y_train), transform=hflip)
train_loader = torch.utils.data.DataLoader(train_dataset_hf, batch_size=16)
result = []
for i, data in enumerate(train_loader):
x, y = data
imshow(torchvision.utils.make_grid(x, 4), title='Horizontal flip')
break
Output: image grids of the batch as plotted above (unchanged, vertically flipped, and horizontally flipped).
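As a side note, torchvision's PIL-based transforms can also be plugged into the same class by converting tensors to PIL images and back; a sketch, assuming single-channel uint8 image data:
pil_transform = transforms.Compose([
    transforms.ToPILImage(),       # CxHxW uint8 tensor -> PIL image
    transforms.RandomRotation(15),
    transforms.ToTensor(),         # PIL image -> CxHxW float tensor in [0, 1]
])
train_dataset_pil = CustomTensorDataset(tensors=(X_train.byte(), y_train),
                                        transform=pil_transform)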
| https://stackoverflow.com/questions/55588201/ |
Fixed Gabor Filter Convolutional Neural Networks | I'm trying to build a CNN with some conv layers where half of the filters in the layer are fixed and the other half is learnable while training the model. But I didn't find anything about that.
what I'm trying to do is similar to what they did in this paper https://arxiv.org/pdf/1705.04748.pdf
Is there a way to do that in Keras, Pytorch...
| Sure. In PyTorch you can use nn.Conv2d and
set its weight parameter manually to your desired filters
exclude these weights from learning
A simple example would be:
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv_learning = nn.Conv2d(1, 5, 3, bias=False)
        self.conv_gabor = nn.Conv2d(5, 5, 3, bias=False)  # takes the 5-channel output of conv_learning
        # weights HAVE TO be wrapped in `nn.Parameter` even if they are not learning;
        # note the (out_channels, in_channels, kH, kW) shape that Conv2d expects
        self.conv_gabor.weight = nn.Parameter(torch.randn(5, 5, 3, 3))
def forward(self, x):
y = self.conv_learning(x)
y = torch.sigmoid(y)
y = self.conv_gabor(y)
return y.mean()
model = Model()
xs = torch.randn(10, 1, 30, 30)
ys = torch.randn(10)
loss_fn = nn.MSELoss()
# we can exclude parameters from being learned here, by filtering them
# out based on some criterion. For instance if all your fixed filters have
# "gabor" in name, the following will do
learning_parameters = (param for name, param in model.named_parameters()
if 'gabor' not in name)
optim = torch.optim.SGD(learning_parameters, lr=0.1)
epochs = 10
for e in range(epochs):
y = model(xs)
loss = loss_fn(y, ys)
model.zero_grad()
loss.backward()
optim.step()
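An alternative to filtering parameters by name is to freeze the fixed filters directly and select trainable parameters by requires_grad; a sketch of the same idea:
model.conv_gabor.weight.requires_grad = False
optim = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.1)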
| https://stackoverflow.com/questions/55592324/ |
nvcc and clang are not working well together when installing pytorch-gpu | I am trying to install pytorch with gpu support on my MacBook Pro following official instructions.
Things go smoothly until an error occurred:
[ 70%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/caffe2_gpu_generated_THCTensorMath.cu.o
nvcc fatal : The version ('90000') of the host compiler ('Apple clang') is not supported
nvcc fatal : The version ('90000') of the host compiler ('Apple clang') is not supported
CMake Error at caffe2_gpu_generated_THCBlas.cu.o.Release.cmake:219 (message):
Error generating
/Users/username/Dev/pytorch-gpu/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/./caffe2_gpu_generated_THCBlas.cu.o
CMake Error at caffe2_gpu_generated_THCSleep.cu.o.Release.cmake:219 (message):
It seems that CUDA and clang are not working well together.
I searched over internet and found these posts, but they did not solve my problem:
Revert Apple Clang Version For NVCC
https://github.com/pytorch/pytorch/issues/3047
Here's my environment:
macOS Sierra 10.12.6 (16G1618)
NVIDIA GeForce GT 750M
CUDA Driver Version: 387.178
GPU Driver Version: 378.05.05.25f11
Cuda compilation tools, release 8.0, V8.0.61
(Previous)Apple LLVM version 9.0.0 (clang-900.0.39.2)
(After downgrade)Apple LLVM version 8.1.0 (clang-802.0.42)
Xcode Version 9.2 (9C40b)
| I am answering my own question.
Incorrect CUDA installation on macOS could be a nightmare. The versions of CUDA, Xcode, clang and macOS really matter. Here are some of the official tested ones:
+------+--------------+------------+---------------------------------+--------+
| CUDA | Xcode | Apple LLVM | Mac OSX Version (native x86_64) | Yes/No |
+------+--------------+------------+---------------------------------+--------+
| 8.0 | 7.2 | 7.0.3 | 10.11 | YES |
| 8.0 | 7.2 | 7.0.3 | 10.12 | NO |
| 8.0 | 8.2 | 8.0.0 | 10.11 | NO |
| 8.0 | 8.2 | 8.0.0 | 10.12 | YES |
| 9.0 | 8.3.3 | 8.1.0 | 10.12 | YES |
| 9.1 | 9.2 | 9.0.0 | 10.13.3 | YES |
| 9.2 | 9.2 | 9.0.0 | 10.13.5 | YES |
| 10.0 | 9.4 | 9.0.0 | 10.13.6 | YES |
| 10.1 | 10.1 (10B61) | 10.0.0 | 10.13.6 (17G2307) | YES |
+------+--------------+------------+---------------------------------+--------+
For CUDA Releases before 8.0, please search for NVIDIA CUDA INSTALLATION GUIDE FOR MAC OS X plus the CUDA version number, there should be a table of version matching in that PDF file.
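To check and switch the host compiler that nvcc picks up, commands along these lines can be used (the Xcode path is an example; adjust it to the version installed on your machine):
clang --version
sudo xcode-select -s /Applications/Xcode8.3.3.app/Contents/Developer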
| https://stackoverflow.com/questions/55594309/ |
How to visualise filters in a CNN with PyTorch | I'm new to deep learning and PyTorch. I want to visualise the filters in my CNN model, so I tried to iterate over the layers of the CNN model that I defined. But I got the error below.
error: 'CNN' object is not iterable
The CNN object is my model.
My iteration code is like below:
for index, layer in enumerate(self.model):
# Forward pass layer by layer
x = layer(x)
my model code like below:
class CNN(nn.Module):
def __init__(self):
super(CNN,self).__init__()
self.Conv1 = nn.Sequential(      # input image size (1, 28, 20)
nn.Conv2d(1, 16, 5, 1, 2),       # output size (16, 28, 20)
nn.ReLU(),
nn.MaxPool2d(2),                 # output size (16, 14, 10)
)
self.Conv2 = nn.Sequential(      # input size (16, 14, 10)
nn.Conv2d(16, 32, 5, 1, 2),      # output size (32, 14, 10)
nn.ReLU(),
nn.MaxPool2d(2),                 # output size (32, 7, 5)
)
self.fc1 = nn.Linear(32 * 7 * 5, 800)
self.fc2 = nn.Linear(800,500)
self.fc3 = nn.Linear(500,10)
#self.fc4 = nn.Linear(200,10)
def forward(self,x):
x = self.Conv1(x)
x = self.Conv2(x)
x = x.view(x.size(0), -1)
x = self.fc1(x)
x = F.dropout(x)
x = F.relu(x)
x = self.fc2(x)
x = F.dropout(x)
x = F.relu(x)
x = self.fc3(x)
#x = F.relu(x)
#x = self.fc4(x)
return x
Can anyone tell me how to solve this problem?
| Essentially, you will need to access the features in your model and transpose those matrices into the right shape first, then you can visualise the filters
import numpy as np
import matplotlib.pyplot as plt
from torchvision import utils
def visTensor(tensor, ch=0, allkernels=False, nrow=8, padding=1):
n,c,w,h = tensor.shape
if allkernels: tensor = tensor.view(n*c, -1, w, h)
elif c != 3: tensor = tensor[:,ch,:,:].unsqueeze(dim=1)
rows = np.min((tensor.shape[0] // nrow + 1, 64))
grid = utils.make_grid(tensor, nrow=nrow, normalize=True, padding=padding)
plt.figure( figsize=(nrow,rows) )
plt.imshow(grid.numpy().transpose((1, 2, 0)))
if __name__ == "__main__":
layer = 1
filter = model.features[layer].weight.data.clone()
visTensor(filter, ch=0, allkernels=False)
plt.axis('off')
plt.ioff()
plt.show()
You should be able to get a grid visual.
There are a few more visualisation techniques, you can study them here
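As for the 'CNN' object is not iterable error itself: an nn.Module cannot be iterated over directly, but its submodules can, via model.children() or model.modules(). A sketch:
# nn.Module itself is not iterable; iterate over its submodules instead
for index, layer in enumerate(model.children()):
    print(index, layer)
# note: replaying x = layer(x) this way still needs the
# x.view(x.size(0), -1) reshape between the conv and fc layers,
# exactly as in the model's forward()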
| https://stackoverflow.com/questions/55594969/ |
How to do numerical integration with pytorch similar to numpy's trapz function? | Title says it all. Is there a convenient function in pytorch that can do something like np.trapz(y, x) (integrating over the the points in x and y via trapezoidal rule)?
| There is no built-in tool for that, but it should not be difficult to implement it yourself, especially using the numpy code as a guideline.
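For example, a minimal hand-rolled sketch mirroring np.trapz(y, x) for 1-D tensors (not an official PyTorch function); since it only uses tensor ops, it also stays differentiable:
import torch

def trapz(y, x):
    # trapezoidal rule: sum of 0.5 * (y[i] + y[i+1]) * (x[i+1] - x[i])
    dx = x[1:] - x[:-1]
    return torch.sum((y[:-1] + y[1:]) * dx / 2)

x = torch.linspace(0, 3.14159, 100)
print(trapz(torch.sin(x), x))  # ~2.0, matching np.trapz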
| https://stackoverflow.com/questions/55605577/ |
Why is the derivative of f(x) with respect to 'x' 'x' and not 1 in pytorch? | I am trying to understand pytorch's autograd in full and I stumbled with this: let f(x)=x, from basic maths we know that f'(x)=1, however when I do that exercise in pytorch I get that f'(x) = x.
z = torch.linspace(-1, 1, steps=5, requires_grad=True)
y = z
y.backward(z)
print("Z tensor is: {} \n Gradient of y with respect to z is: {}".format(z, z.grad))
I would expect to get a tensor of size 5 full of 1 but instead I get:
Z tensor is: tensor([-1.0000, -0.5000, 0.0000, 0.5000, 1.0000], requires_grad=True)
Gradient of y with respect to z is: tensor([-1.0000, -0.5000, 0.0000, 0.5000, 1.0000])
Why is this the behavior of pytorch?
First of all, given z = torch.linspace(-1, 1, steps=5, requires_grad=True) and y = z, the function is a vector-valued function, so the derivative of y w.r.t z is not as simple as 1 but a Jacobian matrix. Actually in your case z = [z1, z2, z3, z4, z5]T, where the superscript T means z is a column vector. Here is what the official doc says:
Secondly, notice the official doc says: Now in this case y is no longer a scalar. torch.autograd could not compute the full Jacobian directly, but if we just want the vector-Jacobian product, simply pass the vector to backward as argument link. In that case x.grad is not the actual gradient value (matrix) but the vector-Jacobian product.
EDIT:
x.grad is the actual gradient if your output y is a scalar.
See the example here:
z = torch.linspace(-1, 1, steps=5, requires_grad=True)
y = torch.sum(z)
y.backward()
z.grad
This will output:
tensor([1., 1., 1., 1., 1.])
As you can see, it is the gradient. Notice the only difference is that y is a scalar value here while a vector value in your example. grad can be implicitly created only for scalar outputs
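For your original vector-valued y, you can recover the same all-ones gradient by passing the vector v = [1, 1, 1, 1, 1] explicitly: since the Jacobian of y = z is the identity, the vector-Jacobian product is just v.
z = torch.linspace(-1, 1, steps=5, requires_grad=True)
y = z
y.backward(torch.ones_like(z))  # vector-Jacobian product with v = ones
print(z.grad)  # tensor([1., 1., 1., 1., 1.])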
You might wonder what if the gradient is not a constant, like dependent on input z as in this case
z = torch.linspace(-1, 1, steps=5, requires_grad=True)
y = torch.sum(torch.pow(z,2))
y.backward()
z.grad
The output is:
tensor([-2., -1., 0., 1., 2.])
It is the same as
z = torch.linspace(-1, 1, steps=5, requires_grad=True)
y = torch.sum(torch.pow(z,2))
y.backward(torch.tensor(1.))
z.grad
The blitz tutorial is kind of brief so it is actually quite hard to understand for beginners.
| https://stackoverflow.com/questions/55613439/ |
Why isn't there inplace flag in F.sigmoid in pytorch? | Both relu, leakyrelu have inplace flag, so why not sigmoid?
Signature: F.sigmoid(input)
F.relu(input, inplace=False)
| According to docs:
nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
If you need in-place version, use sigmoid_:
import torch
torch.manual_seed(0)
a = torch.randn(5)
print(a)
a.sigmoid_()
print(a)
tensor([ 1.5410, -0.2934, -2.1788, 0.5684, -1.0845])
tensor([0.8236, 0.4272, 0.1017, 0.6384, 0.2527])
sigmoid docs
| https://stackoverflow.com/questions/55615813/ |
Why is torch.nn.Sigmoid a class instead of a method? | I'm trying to understand how pytorch works a little bit better. Usually, when defining a neural network class, in the init() constructor, people write self.sigmoid = nn.Sigmoid(), so that in the forward() method they can call the sigmoid function multiple times with having to reinstantiate nn.Sigmoid() every time.
But why isn't nn.Sigmoid just a method to begin with, instead of a class?
Also, I was curious what to refer to 'nn' in torch.nn as (package? library?).
Thanks!
| My understanding is that the nn.Sigmoid exists to be composable with other nn layers, like this:
net = nn.Sequential(
nn.Linear(3, 4),
nn.Sigmoid())
If you don't need this, you can just use torch.sigmoid function.
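A minimal sketch contrasting the two styles (the layer sizes are arbitrary):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Linear(3, 4)

    def forward(self, x):
        # plain function call; equivalent to routing through an nn.Sigmoid() module
        return torch.sigmoid(self.fc(x))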
| https://stackoverflow.com/questions/55621322/ |
Evaluating pytorch models: `with torch.no_grad` vs `model.eval()` | When I want to evaluate the performance of my model on the validation set, is it preferred to use with torch.no_grad: or model.eval()?
| TL;DR:
Use both. They do different things, and have different scopes.
with torch.no_grad - disables tracking of gradients in autograd.
model.eval() changes the forward() behaviour of the module it is called upon
eg, it disables dropout and has batch norm use the entire population statistics
with torch.no_grad
The torch.autograd.no_grad documentation says:
Context-manager that disabled [sic] gradient calculation.
Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.
model.eval()
The nn.Module.eval documentation says:
Sets the module in evaluation mode.
This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
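As a concrete sketch of how the two combine in a validation loop (model and val_loader are placeholders):
model.eval()           # switch dropout / batchnorm layers to eval behaviour
with torch.no_grad():  # stop autograd from recording operations
    for x, y in val_loader:
        out = model(x)
        # ... compute metrics ...
model.train()          # restore training behaviour afterwards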
The creator of pytorch said the documentation should be updated to suggest the usage of both, and I raised the pull request.
| https://stackoverflow.com/questions/55627780/ |
Indexing a 3d tensor using a 2d tensor | I have a 3d tensor, source of shape (bsz x slen1 x nhd) and a 2d tensor, index of shape (bsz x slen2). More specifically, I have:
source = 32 x 20 x 768
index = 32 x 16
Each value in the index tensor is in between [0, 19] which is the index of the desired vector according to the 2nd dim of the source tensor.
After indexing, I am expecting an output tensor of shape, 32 x 16 x 768.
Currently I am doing this:
bsz, _, nhid = source.size()
_, slen = index.size()
source = source.reshape(-1, nhid)
source = source[index.reshape(-1), :]
source = source.reshape(bsz, slen, nhid)
So, I am converting the 3d source tensor to a 2d tensor and 2d indexing tensor to a 1d tensor and then perform the indexing. Is this correct?
Is there any better way to do it?
Update
I checked that my code is not giving the expected result. To explain what I want, I am providing the following code snippet.
source = torch.FloatTensor([
[[ 0.2413, -0.6667, 0.2621],
[-0.4216, 0.3722, -1.2258],
[-0.2436, -1.5746, -0.1270],
[ 1.6962, -1.3637, 0.8820],
[ 0.3490, -0.0198, 0.7928]],
[[-0.0973, 2.3106, -1.8358],
[-1.9674, 0.5381, 0.2406],
[ 3.0731, 0.3826, -0.7279],
[-0.6262, 0.3478, -0.5112],
[-0.4147, -1.8988, -0.0092]]
])
index = torch.LongTensor([[0, 1, 2, 3],
[1, 2, 3, 4]])
And I want the output tensor as:
torch.FloatTensor([
[[ 0.2413, -0.6667, 0.2621],
[-0.4216, 0.3722, -1.2258],
[-0.2436, -1.5746, -0.1270],
[ 1.6962, -1.3637, 0.8820]],
[[-1.9674, 0.5381, 0.2406],
[ 3.0731, 0.3826, -0.7279],
[-0.6262, 0.3478, -0.5112],
[-0.4147, -1.8988, -0.0092]]
])
| Update:
source[torch.arange(source.shape[0]).unsqueeze(-1), index]
Note that torch.arange(source.shape[0]).unsqueeze(-1) gives:
tensor([[0],
[1]]) # 2 x 1
and index is:
tensor([[0, 1, 2, 3],
[1, 2, 3, 4]]) # 2 x 4
The arange indexes the batch dimension while index simultaneously indexes the slen1 dimension. The unsqueeze call adds the extra x 1 dimension to the arange result so that the two can be broadcast together.
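An equivalent formulation uses torch.gather, which avoids advanced indexing (shapes as in the question: source is bsz x slen1 x nhid, index is bsz x slen2):
idx = index.unsqueeze(-1).expand(-1, -1, source.size(-1))  # bsz x slen2 x nhid
out = source.gather(1, idx)  # out[b, s, :] == source[b, index[b, s], :]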
| https://stackoverflow.com/questions/55628014/ |
Shape of tensor | I came across this piece of code
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print("Shape of x_train: " + str(x_train.shape))
print("Shape of y_train: " + str(y_train.shape))
And found that the output looks like this
(60000, 28, 28)
(60000,)
For the first line of output
So far, to my understanding, does it mean that the 1st dimension can hold 60k items, then the next dimension can hold 28 "arrays of 60k items",
and finally, the last dimension can hold 28 "arrays of 28 "arrays of 60k items""?
What I want to clarify is: is this 60k samples of 28x28 data, or something else?
For the second line of output, it seems like its just a 1d array of 60k items. So what does it actually represents? (i know that in x_train it was handwritten numbers and each number represents the intensity of grey in that cell)
Please note I have taken this code from some online example(i don't remember and won't mind if you want your credit to be added to this) and public dataset
tf.keras.datasets.mnist
You are right: the first line gives 60K items of 28x28 data, thus (60000, 28, 28).
y_train holds the labels of x_train; it is therefore one-dimensional with 60k entries.
For example: If the first item of the x_train is a handwritten image of 3, then the first item of y_train will be '3' which is the label.
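A quick way to confirm this (the printed label value is just an example):
print(x_train.shape)     # (60000, 28, 28): 60k images, each 28x28 pixels
print(x_train[0].shape)  # (28, 28): one image
print(y_train[0])        # the label of that image, e.g. 5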
| https://stackoverflow.com/questions/55629163/ |
Issues converting Keras code into PyTorch code (shaping) | I have some keras code that I need to convert to Pytorch. I am new to pytorch and I am having trouble wrapping my head around how to take in input the same way that I did in keras. I have spent many hours on this; any tips or help is very appreciated.
Here is the keras code I am dealing with. The input shape is (5000,1)
def build(input_shape, classes):
model = Sequential()
filter_num = ['None',32,64,128,256]
kernel_size = ['None',8,8,8,8]
conv_stride_size = ['None',1,1,1,1]
pool_stride_size = ['None',4,4,4,4]
pool_size = ['None',8,8,8,8]
# Block1
model.add(Conv1D(filters=filter_num[1], kernel_size=kernel_size[1], input_shape=input_shape,
strides=conv_stride_size[1], padding='same',
name='block1_conv1'))
model.add(BatchNormalization(axis=-1))
model.add(ELU(alpha=1.0, name='block1_adv_act1'))
model.add(Conv1D(filters=filter_num[1], kernel_size=kernel_size[1],
strides=conv_stride_size[1], padding='same',
name='block1_conv2'))
model.add(BatchNormalization(axis=-1))
model.add(ELU(alpha=1.0, name='block1_adv_act2'))
model.add(MaxPooling1D(pool_size=pool_size[1], strides=pool_stride_size[1],
padding='same', name='block1_pool'))
model.add(Dropout(0.1, name='block1_dropout'))
# Block 2
model.add(Conv1D(filters=filter_num[2], kernel_size=kernel_size[2],
strides=conv_stride_size[2], padding='same',
name='block2_conv1'))
model.add(BatchNormalization())
model.add(Activation('relu', name='block2_act1'))
model.add(Conv1D(filters=filter_num[2], kernel_size=kernel_size[2],
strides=conv_stride_size[2], padding='same',
name='block2_conv2'))
model.add(BatchNormalization())
model.add(Activation('relu', name='block2_act2'))
model.add(MaxPooling1D(pool_size=pool_size[2], strides=pool_stride_size[3],
padding='same', name='block2_pool'))
model.add(Dropout(0.1, name='block2_dropout'))
# Block 3
model.add(Conv1D(filters=filter_num[3], kernel_size=kernel_size[3],
strides=conv_stride_size[3], padding='same',
name='block3_conv1'))
model.add(BatchNormalization())
model.add(Activation('relu', name='block3_act1'))
model.add(Conv1D(filters=filter_num[3], kernel_size=kernel_size[3],
strides=conv_stride_size[3], padding='same',
name='block3_conv2'))
model.add(BatchNormalization())
model.add(Activation('relu', name='block3_act2'))
model.add(MaxPooling1D(pool_size=pool_size[3], strides=pool_stride_size[3],
padding='same', name='block3_pool'))
model.add(Dropout(0.1, name='block3_dropout'))
# Block 4
model.add(Conv1D(filters=filter_num[4], kernel_size=kernel_size[4],
strides=conv_stride_size[4], padding='same',
name='block4_conv1'))
model.add(BatchNormalization())
model.add(Activation('relu', name='block4_act1'))
model.add(Conv1D(filters=filter_num[4], kernel_size=kernel_size[4],
strides=conv_stride_size[4], padding='same',
name='block4_conv2'))
model.add(BatchNormalization())
model.add(Activation('relu', name='block4_act2'))
model.add(MaxPooling1D(pool_size=pool_size[4], strides=pool_stride_size[4],
padding='same', name='block4_pool'))
model.add(Dropout(0.1, name='block4_dropout'))
# FC #1
model.add(Flatten(name='flatten'))
model.add(Dense(512, kernel_initializer=glorot_uniform(seed=0), name='fc1'))
model.add(BatchNormalization())
model.add(Activation('relu', name='fc1_act'))
model.add(Dropout(0.7, name='fc1_dropout'))
#FC #2
model.add(Dense(512, kernel_initializer=glorot_uniform(seed=0), name='fc2'))
model.add(BatchNormalization())
model.add(Activation('relu', name='fc2_act'))
model.add(Dropout(0.5, name='fc2_dropout'))
# Classification
model.add(Dense(classes, kernel_initializer=glorot_uniform(seed=0), name='fc3'))
model.add(Activation('softmax', name="softmax"))
return model
Here are the results of model.summary() from the keras code
Layer (type) Output Shape Param #
=================================================================
block1_conv1 (Conv1D) (None, 5000, 32) 288
_________________________________________________________________
batch_normalization_1 (Batch (None, 5000, 32) 128
_________________________________________________________________
block1_adv_act1 (ELU) (None, 5000, 32) 0
_________________________________________________________________
block1_conv2 (Conv1D) (None, 5000, 32) 8224
_________________________________________________________________
batch_normalization_2 (Batch (None, 5000, 32) 128
_________________________________________________________________
block1_adv_act2 (ELU) (None, 5000, 32) 0
_________________________________________________________________
block1_pool (MaxPooling1D) (None, 1250, 32) 0
_________________________________________________________________
block1_dropout (Dropout) (None, 1250, 32) 0
_________________________________________________________________
block2_conv1 (Conv1D) (None, 1250, 64) 16448
_________________________________________________________________
batch_normalization_3 (Batch (None, 1250, 64) 256
_________________________________________________________________
block2_act1 (Activation) (None, 1250, 64) 0
_________________________________________________________________
block2_conv2 (Conv1D) (None, 1250, 64) 32832
_________________________________________________________________
batch_normalization_4 (Batch (None, 1250, 64) 256
_________________________________________________________________
block2_act2 (Activation) (None, 1250, 64) 0
_________________________________________________________________
block2_pool (MaxPooling1D) (None, 313, 64) 0
_________________________________________________________________
block2_dropout (Dropout) (None, 313, 64) 0
_________________________________________________________________
block3_conv1 (Conv1D) (None, 313, 128) 65664
_________________________________________________________________
batch_normalization_5 (Batch (None, 313, 128) 512
_________________________________________________________________
block3_act1 (Activation) (None, 313, 128) 0
_________________________________________________________________
block3_conv2 (Conv1D) (None, 313, 128) 131200
_________________________________________________________________
batch_normalization_6 (Batch (None, 313, 128) 512
_________________________________________________________________
block3_act2 (Activation) (None, 313, 128) 0
_________________________________________________________________
block3_pool (MaxPooling1D) (None, 79, 128) 0
_________________________________________________________________
block3_dropout (Dropout) (None, 79, 128) 0
_________________________________________________________________
block4_conv1 (Conv1D) (None, 79, 256) 262400
_________________________________________________________________
batch_normalization_7 (Batch (None, 79, 256) 1024
_________________________________________________________________
block4_act1 (Activation) (None, 79, 256) 0
_________________________________________________________________
block4_conv2 (Conv1D) (None, 79, 256) 524544
_________________________________________________________________
batch_normalization_8 (Batch (None, 79, 256) 1024
_________________________________________________________________
block4_act2 (Activation) (None, 79, 256) 0
_________________________________________________________________
block4_pool (MaxPooling1D) (None, 20, 256) 0
_________________________________________________________________
block4_dropout (Dropout) (None, 20, 256) 0
_________________________________________________________________
flatten (Flatten) (None, 5120) 0
_________________________________________________________________
fc1 (Dense) (None, 512) 2621952
_________________________________________________________________
batch_normalization_9 (Batch (None, 512) 2048
_________________________________________________________________
fc1_act (Activation) (None, 512) 0
_________________________________________________________________
fc1_dropout (Dropout) (None, 512) 0
_________________________________________________________________
fc2 (Dense) (None, 512) 262656
_________________________________________________________________
batch_normalization_10 (Batc (None, 512) 2048
_________________________________________________________________
fc2_act (Activation) (None, 512) 0
_________________________________________________________________
fc2_dropout (Dropout) (None, 512) 0
_________________________________________________________________
fc3 (Dense) (None, 101) 51813
_________________________________________________________________
softmax (Activation) (None, 101) 0
=================================================================
Total params: 3,985,957
Trainable params: 3,981,989
Non-trainable params: 3,968
Here is what I have made in pytorch
class model(torch.nn.Module):
def __init__(self, input_channels, kernel_size, stride, pool_kernel, pool_stride, dropout_p, dropout_inplace=False):
super(model, self).__init__()
self.encoder = nn.Sequential(
BasicBlock1(input_channels, kernel_size, stride, pool_kernel, pool_stride, dropout_p),
BasicBlock(input_channels//4, kernel_size, stride, pool_kernel, pool_stride, dropout_p),
BasicBlock(input_channels//16, kernel_size, stride, pool_kernel, pool_stride, dropout_p),
BasicBlock(input_channels//16//4, kernel_size, stride, pool_kernel, pool_stride, dropout_p)
)
self.decoder = nn.Sequential(
nn.Linear(5120, 512),
nn.BatchNorm1d(512),
nn.ReLU(),
nn.Dropout(p=dropout_p, inplace=dropout_inplace),
nn.Linear(512, 512),
nn.BatchNorm1d(512),
nn.ReLU(),
nn.Dropout(p=dropout_p, inplace=dropout_inplace),
nn.Linear(512, 101),
nn.Softmax(dim=101)
)
def forward(self, x):
x = self.encoder(x)
x = x.view(x.size(0), -1) # flatten
x = self.decoder(x)
return x
def BasicBlock(input_channels, kernel_size, stride, pool_kernel, pool_stride, dropout_p, dropout_inplace=False):
return nn.Sequential(
nn.Conv1d(in_channels=input_channels, out_channels=input_channels, kernel_size=kernel_size, stride=stride,
padding=get_pad_size(input_channels, input_channels, kernel_size)),
nn.BatchNorm1d(32),
nn.ReLU(),
nn.Conv1d(in_channels=input_channels, out_channels=input_channels, kernel_size=kernel_size, stride=stride,
padding=get_pad_size(input_channels, input_channels, kernel_size)),
nn.BatchNorm1d(32),
nn.ReLU(),
nn.MaxPool1d(kernel_size=pool_kernel, stride=pool_stride,
padding=get_pad_size(input_channels, input_channels/4, kernel_size)),
nn.Dropout(p=dropout_p, inplace=dropout_inplace)
)
def BasicBlock1(input_channels, kernel_size, stride, pool_kernel, pool_stride, dropout_p, dropout_inplace=False):
return nn.Sequential(
nn.Conv1d(in_channels=1, out_channels=input_channels, kernel_size=kernel_size, stride=stride,
padding=get_pad_size(input_channels, input_channels, kernel_size)),
nn.BatchNorm1d(32),
nn.ReLU(),
nn.Conv1d(in_channels=input_channels, out_channels=input_channels, kernel_size=kernel_size, stride=stride,
padding=get_pad_size(input_channels, input_channels, kernel_size)),
nn.BatchNorm1d(32),
nn.ReLU(),
nn.MaxPool1d(kernel_size=pool_kernel, stride=pool_stride,
padding=get_pad_size(input_channels, input_channels/4, kernel_size)),
nn.Dropout(p=dropout_p, inplace=dropout_inplace)
)
def get_pad_size(input_shape, output_shape, kernel_size, stride=1, dilation=1):
"""
Gets the right padded needed to maintain same shape in the conv layers
BEWARE: works only on odd size kernel size
:param input_shape: the input shape to the conv layer
:param output_shape: the desired output shape of the conv layer
:param kernel_size: the size of the kernel window, has to be odd
:param stride: Stride of the convolution
:param dilation: Spacing between kernel elements
:return: the appropriate pad size for the needed configuration
:Author: Aneesh
"""
if kernel_size % 2 == 0:
raise ValueError(
"Kernel size has to be odd for this function to work properly. Current Value is %d." % kernel_size)
return (int((output_shape * stride - stride + kernel_size - input_shape + (kernel_size - 1) * (dilation - 1)) / 2))
Lastly here is the model summary for what my pytorch model creates
model(
(encoder): Sequential(
(0): Sequential(
(0): Conv1d(1, 5000, kernel_size=(7,), stride=(1,), padding=(3,))
(1): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv1d(5000, 5000, kernel_size=(7,), stride=(1,), padding=(3,))
(4): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
(6): MaxPool1d(kernel_size=8, stride=4, padding=-1872, dilation=1, ceil_mode=False)
(7): Dropout(p=0.1)
)
(1): Sequential(
(0): Conv1d(1250, 1250, kernel_size=(7,), stride=(1,), padding=(3,))
(1): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv1d(1250, 1250, kernel_size=(7,), stride=(1,), padding=(3,))
(4): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
(6): MaxPool1d(kernel_size=8, stride=4, padding=-465, dilation=1, ceil_mode=False)
(7): Dropout(p=0.1)
)
(2): Sequential(
(0): Conv1d(312, 312, kernel_size=(7,), stride=(1,), padding=(3,))
(1): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv1d(312, 312, kernel_size=(7,), stride=(1,), padding=(3,))
(4): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
(6): MaxPool1d(kernel_size=8, stride=4, padding=-114, dilation=1, ceil_mode=False)
(7): Dropout(p=0.1)
)
(3): Sequential(
(0): Conv1d(78, 78, kernel_size=(7,), stride=(1,), padding=(3,))
(1): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv1d(78, 78, kernel_size=(7,), stride=(1,), padding=(3,))
(4): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
(6): MaxPool1d(kernel_size=8, stride=4, padding=-26, dilation=1, ceil_mode=False)
(7): Dropout(p=0.1)
)
)
(decoder): Sequential(
(0): Linear(in_features=5120, out_features=512, bias=True)
(1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Dropout(p=0.1)
(4): Linear(in_features=512, out_features=512, bias=True)
(5): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU()
(7): Dropout(p=0.1)
(8): Linear(in_features=512, out_features=101, bias=True)
(9): Softmax()
)
)
| I think your fundamental problem is that you confuse in_channels and out_channels with Keras shapes. Let's just take the first convolutional layer as an example. In Keras you have:
Conv1D(filters=32, kernel_size=8, input_shape=(5000,1), strides=1, padding='same')
The PyTorch equivalent should be (changing the kernel size to 7 like you did, we'll come back to it later):
nn.Conv1d(in_channels=1, out_channels=32, kernel_size=7, stride=1, padding=3) # different kernel size
Note that you don't need to give the shape of your input sequence for pytorch. Now let's see how it compares to what you did:
nn.Conv1d(in_channels=1, out_channels=5000, kernel_size=7, stride=1, padding=0) # note padding
You just created a huge network. While the correct implementation produces an output of [b, 32, 5000] where b is the batch size, your output is [b, 5000, 5000].
Hope this example helps you to correct the rest of your implementation.
Finally, some notes on replicating same padding in pytorch. With even kernel sizes, to preserve the size of your input you need asymmetric padding. This I think might not be available when you create the layer. I see you instead changed the kernel size to 7, but it can actually be done with the original kernel size of 8. You can use padding in your forward() function to create the required asymmetric padding.
layer = nn.Conv1d(in_channels=1, out_channels=32, kernel_size=8, stride=1, padding=0) # layer without padding
x = torch.empty(1, 1, 5000).normal_() # random input
# forward run
x_padded = torch.nn.functional.pad(x, (3,4))
y = layer(x_padded)
print(y.shape) # torch.Size([1, 32, 5000])
| https://stackoverflow.com/questions/55636138/ |
Preventing PyTorch Dataset iteration from exceeding length of dataset | I am using a custom PyTorch Dataset with the following:
class ImageDataset(Dataset):
def __init__(self, input_dir, input_num, input_format, transform=None):
self.input_num = input_num
# etc
def __len__ (self):
return self.input_num
def __getitem__(self,idx):
targetnum = idx % self.input_num
# etc
However, when I iterate over this dataset, iteration loops back to the start of the dataset instead of terminating at the end of the dataset. This effectively becomes an infinite loop in the iterator, with the epoch print statement never occurring for subsequent epochs.
train_dataset=ImageDataset(input_dir = 'path/to/directory',
input_num = 300, input_format = "mask") # Size 300
num_epochs = 10
for epoch in range(num_epochs):
print("EPOCH " + str(epoch+1) + "\n")
num = 0
for data in train_dataset:
print(num, end=" ")
num += 1
# etc
Print output (... for values in between):
EPOCH 1
0 1 2 3 4 5 6 7 ... 298 299 300 301 302 303 304 305 ... 597 598 599 600 601 602 603 604 ...
Why is the basic iteration over the Dataset continuing past the defined __len__ of the DataSet, and how can I ensure that iteration over the dataset terminates after hitting the length of the dataset when using this method (or is manually iterating over the range of the dataset length the only solution)?
Thank you.
The Dataset class does not implement the iterator protocol, so there is no StopIteration signal: a plain for loop falls back to calling __getitem__ with indices 0, 1, 2, ... until an IndexError is raised. Because your __getitem__ computes targetnum = idx % self.input_num, it never raises IndexError, which is exactly why the iteration never stops.
The for loop listens for StopIteration. The purpose of the for statement is to loop over the sequence provided by an iterator and the exception is used to signal that the iterator is now done...
More: Why does next raise a 'StopIteration', but 'for' do a normal return? | The Iterator Protocol
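A minimal sketch of making the plain for loop terminate, by raising IndexError past the end in the __getitem__ above (alternatively, just wrap the dataset in a DataLoader, which iterates exactly len(dataset) samples per epoch):
def __getitem__(self, idx):
    if idx >= self.input_num:
        raise IndexError  # ends the legacy __getitem__-based iteration
    targetnum = idx % self.input_num
    # etc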
| https://stackoverflow.com/questions/55637271/ |
Cuda Runtime/Driver incompatibility in docker container | I'm trying to run this simple line of code in a docker container that comes with Pytorch.
import torch
torch.cuda.set_device(0)
I get this error:
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at torch/csrc/cuda/Module.cpp:32
Running torch.cuda.is_available() returns False.
The host machine has the most up-to-date Nvidia drivers. Pytorch ships with Cuda, so there should be no incompatibility issues.
What could cause this problem?
Edit:
@Patel Sunil's answer to this question answers my question, but I didn't come across this question in my search because their question is broad, while my question is specific to the cuda runtime/driver error. I posted this as a separate question for those who come across this error but don't know what it is a symptom of (namely, forgetting to use nvidia-docker).
| The problem was that I was running the container with docker, not nvidia-docker. Running the docker container with nvidia-docker fixed the problem.
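For example (using the official pytorch/pytorch image as a placeholder for whatever image you run):
nvidia-docker run -it --rm pytorch/pytorch python -c "import torch; print(torch.cuda.is_available())"
This should then print True, given a working driver setup on the host.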
| https://stackoverflow.com/questions/55641418/ |
Compute Optical Flow corresponding to data in the torch.utils.data.DataLoader | I have built a CNN model for action recognition in videos in PyTorch. I'm loading the data for training using the torch dataloader module.
train_loader = torch.utils.data.DataLoader(
training_data,
batch_size=8,
shuffle=True,
num_workers=4,
pin_memory=True)
And then passing the train_loader for training the model.
train_epoch(i, train_loader, action_detect_model, criterion, optimizer, opt,
train_logger, train_batch_logger)
Now I want to add an additional path which will take the corresponding optical flow of the video frames. To calculate the optical flow I'm using cv2.calcOpticalFlowFarneback. But the problem is that I'm not sure how to get the images corresponding to the data in the train data loader tensor as they will be shuffled. I don't want to pre-compute the optical flow as the storage requirement will be huge (each frame takes 600 kBs).
You have to use your own data loader class to compute optical flow on the fly. The idea is that this class gets a list of filename tuples (curr image, next image) containing the current and next frame filenames of the video sequence instead of a simple filename list. This allows getting the correct image pairs after shuffling the filename list.
The following code gives you a very simple example implementation:
from torch.utils.data import Dataset
import torch
import cv2
import numpy as np
import random

class FlowDataLoader(Dataset):
    def __init__(self, filename_tuples):
        random.shuffle(filename_tuples)
        self.lines = filename_tuples

    def __len__(self):
        # needed by the DataLoader to know how many samples there are
        return len(self.lines)

    def __getitem__(self, index):
        img_filenames = self.lines[index]
        curr_img = cv2.cvtColor(cv2.imread(img_filenames[0]), cv2.COLOR_BGR2GRAY)
        next_img = cv2.cvtColor(cv2.imread(img_filenames[1]), cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(curr_img, next_img, ... [parameters])
        # code for loading the class label
        # label = ...
        #
        # this is a very simple data normalization
        curr_img = curr_img.astype(np.float32) / 255
        next_img = next_img.astype(np.float32) / 255
        # you can return the image and flow separately
        return curr_img, flow, label
        # or stacked as follows
        # return np.dstack((curr_img, flow)), label

# at this place you need a function that creates a list of training sample
# filenames that looks like this
training_filelist = [("img000.png", "img001.png"),
                     ("img001.png", "img002.png"),
                     ("img002.png", "img003.png")]
training_data = FlowDataLoader(training_filelist)
train_loader = torch.utils.data.DataLoader(
training_data,
batch_size=8,
shuffle=True,
num_workers=4,
pin_memory=True)
This is only a simple example of the FlowDataLoader. Ideally this should be extended so that the curr_img output contains normalized rgb values and the optical flow is normalized and clipped too.
| https://stackoverflow.com/questions/55651427/ |
How to iterate over a group of tensor and pass the elements from each group to a function? | Suppose you have 3 tensors of the same size:
a = torch.randn(3,3)
a = ([[ 0.1945, 0.8583, 2.6479],
[-0.1000, 1.2136, -0.3706],
[-0.0094, 0.4279, -0.6840]])
b = torch.randn(3, 3)
b = ([[-1.1155, 0.2106, -0.2183],
[ 1.6610, -0.6953, 0.0052],
[-0.8955, 0.0953, -0.7737]])
c = torch.randn(3, 3)
c = ([[-0.2303, -0.3427, -0.4990],
[-1.1254, 0.4432, 0.3999],
[ 0.2489, -0.9459, -0.5576]])
In Lua (torch7), they have this function:
[self] map2(tensor1, tensor2, function(x, xt1, xt2))
which applies the given function to all elements of self.
My questions are:
Is there any similar function in python (pytorch)?
Is there any pythonic method to iterate over the 3 tensors and get the respective elements of each tensor without using for loop and indices?
For example:
0.1945 -1.1155 -0.2303
0.8583 0.2106 -0.3427
2.6479 -0.2183 -0.4990
-0.1000 1.6610 -1.1254
...
Edit_1: I have also tried itertools.zip_longest and zip, but the results are not as I expected as mentioned above
| You can use Python's map function similar to what you have mentioned. Like this:
>>> tensor_list = [torch.tensor([i, i, i]) for i in range(3)]
>>> list(map(lambda x: x**2, tensor_list))
[tensor([0, 0, 0]), tensor([1, 1, 1]), tensor([4, 4, 4])]
>>>
EDIT: For a PyTorch only approach you can use torch.Tensor.apply_ (Note this does the changes in place and doesn't return a new tensor)
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> x.apply_(lambda y: y ** 2)
tensor([[ 1, 4, 9],
[16, 25, 36],
[49, 64, 81]])
>>>
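For the three-tensor case in your question, elementwise arithmetic broadcasts across matched elements directly, so a map2-style helper is usually unnecessary (the combining function below is just an example):
>>> a, b, c = torch.randn(3, 3), torch.randn(3, 3), torch.randn(3, 3)
>>> out = a * b + c  # applies f(x, y, z) = x*y + z to matched elements
>>> # matched triples without explicit indices (flattened order):
>>> triples = list(zip(a.flatten(), b.flatten(), c.flatten()))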
| https://stackoverflow.com/questions/55652321/ |
Keras learning rate decay in pytorch | I have a question concerning learning rate decay in Keras. I need to understand how the option decay works inside optimizers in order to translate it to an equivalent PyTorch formulation.
From the source code of SGD I see that the update is done this way after every batch update:
lr = self.lr * (1. / (1. + self.decay * self.iterations))
Does this mean that after every batch update the lr is updated starting from its value after the previous update, or from its initial value? I mean, which of the two following interpretations is the correct one?
lr = lr_0 * (1. / (1. + self.decay * self.iterations))
or
lr = lr * (1. / (1. + self.decay * self.iterations)),
where lr is the lr updated after previous iteration and lr_0 is always the initial learning rate.
If the correct answer is the first one, this would mean that, in my case, the learning rate would decay from 0.001 to just 0.0002 after 100 epochs, whereas in the second case it would decay from 0.001 at around 1e-230 after 70 epochs.
Just to give you some context, I'm working with a CNN for a regression problem from images and I just have to translate Keras code into Pytorch code. So far, with the second of the afore-mentioned interpretations, I always end up predicting the same value, regardless of batch size and input at test time.
Thanks in advance for your help!
| Based on the implementation in Keras I think your first formulation is the correct one, the one that contain the initial learning rate (note that self.lr is not being updated).
However I think your calculation is probably not correct: since the denominator is the same, and lr_0 >= lr since you are doing decay, the first formulation has to result in a bigger number.
I'm not sure if this decay is available in PyTorch, but you can easily create something similar with torch.optim.lr_scheduler.LambdaLR.
decay = .001
fcn = lambda step: 1./(1. + decay*step)
scheduler = LambdaLR(optimizer, lr_lambda=fcn)
Finally, don't forget that you will need to call .step() explicitly on the scheduler, it's not enough to step your optimizer. Also, most often learning scheduling is only done after a full epoch, not after every single batch, but I see that here you are just recreating Keras behavior.
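A minimal sketch of the per-batch stepping (model, loader and loss_fn are placeholders):
for epoch in range(epochs):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        scheduler.step()  # per batch, mirroring Keras' per-iteration decay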
| https://stackoverflow.com/questions/55663375/ |
I change the expected object of scalar type float but still got Long in Pytorch | To do binary classification, I use binary cross entropy as the loss function (nn.BCELoss()), and the last layer has a single unit.
Before I put (input, target) into the loss function, I cast target from Long to Float. The error only appears at the final step of the DataLoader, and the message is as below.
"RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'target'"
The DataLoader (I drop the last batch if the batch size doesn't match) is defined in the code; I'm not sure if there is a correlation with the error.
I have tried to print the type of the target and input (output of the neural network), and the type of both variables is float. I put the "type result" and the code below.
trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE,
shuffle=True, drop_last=True)
loss_func = nn.BCELoss()
# training
for epoch in range(EPOCH):
test_loss = 0
train_loss = 0
for step, (b_x, b_y) in enumerate(trainloader): # gives batch data
b_x = b_x.view(-1, TIME_STEP, 1) # reshape x to (batch, time_step, input_size)
print("step: ", step)
b_x = b_x.to(device)
print("BEFORE|b_y type: ",b_y.type())
b_y = b_y.to(device, dtype=torch.float)
print("AFTER|b_y type: ",b_y.type())
output = rnn(b_x) # rnn output
print("output type:", output.type())
loss = loss_func(output, b_y) # !!!error occurs when trainloader enumerate the final step!!!
train_loss = train_loss + loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
#### type result and the error message####
...
step: 6
BEFORE|b_y type: torch.LongTensor
AFTER|b_y type: torch.cuda.FloatTensor
output type: torch.cuda.FloatTensor
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-e028fcb6b840> in <module>
30 b_y = b_y.to(device)
31 output = rnn(b_x)
---> 32 loss = loss_func(output, b_y)
33 test_loss = test_loss + loss
34 rnn.train()
~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
502 @weak_script_method
503 def forward(self, input, target):
--> 504 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
505
506
~/venvs/tf1.12/lib/python3.5/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
2025
2026 return torch._C._nn.binary_cross_entropy(
-> 2027 input, target, weight, reduction_enum)
2028
2029
RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'target'
| It appears that the type is correctly being changed, as you state that you observe the change when printing the types and from Pytorch:
Returns a Tensor with the specified device and (optional) dtype. If
dtype is None it is inferred to be self.dtype. When non_blocking,
tries to convert asynchronously with respect to the host if possible,
e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor.
When copy is set, a new Tensor is created even when the Tensor already
matches the desired conversion.
and other methods like
b_y = b_y.to(device).float()
should not be measurably different since, again, .float() is equivalent to .to(..., torch.float32). Can you verify the type of b_y right before the error is thrown and edit the question? One thing worth noting: the traceback you posted comes from a different snippet than the training loop you show; at its line 30 it reads b_y = b_y.to(device) with no dtype argument, which suggests the float cast is missing in your evaluation loop. (I would have made this a comment - but I wanted to add more detail. I will try to help when that is provided)
| https://stackoverflow.com/questions/55665689/ |
What's the fastest way to copy values from one tensor to another in PyTorch? | I am experimenting with dilation in convolution where I am trying to copy data from one 2D tensor to another 2D tensor using PyTorch. I'm copying values from tensor A to tensor B such that every element of A that is copied into B is surrounded by n zeros.
I have already tried using nested for loops, which is a very naive way. The performance, obviously, is quite bad when I'm using a large number of grayscale images as input.
for i in range(A.shape[0]):
for j in range(A.shape[1]):
B[n+i][n+j] = A[i][j]
Is there anything faster that doesn't need the usage of loops?
| If I understand your question correctly, here is a faster alternative, without any loops:
# sample `n`
In [108]: n = 2
# sample tensor to work with
In [102]: A = torch.arange(start=1, end=5*4 + 1).view(5, -1)
In [103]: A
Out[103]:
tensor([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12],
[13, 14, 15, 16],
[17, 18, 19, 20]])
# our target tensor where we will copy values
# we need 2*n extra entries per axis, since we pad n on each side
In [104]: B = torch.zeros(A.shape[0] + 2*n, A.shape[1] + 2*n)
# copy the values, at the center of the grid
# leaving `n` positions on the surrounding
In [106]: B[n:-n, n:-n] = A
# check whether we did it correctly
In [107]: B
Out[107]:
tensor([[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 1., 2., 3., 4., 0., 0.],
[ 0., 0., 5., 6., 7., 8., 0., 0.],
[ 0., 0., 9., 10., 11., 12., 0., 0.],
[ 0., 0., 13., 14., 15., 16., 0., 0.],
[ 0., 0., 17., 18., 19., 20., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.]])
Another case where n=3
In [118]: n = 3
# we need 2*n extra entries per axis, since we pad n on each side
In [119]: B = torch.zeros(A.shape[0] + 2*n, A.shape[1] + 2*n)
# copy the values, at the center of the grid
# leaving `n` positions on the surrounding
In [120]: B[n:-n, n:-n] = A
In [121]: B
Out[121]:
tensor([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 2., 3., 4., 0., 0., 0.],
[ 0., 0., 0., 5., 6., 7., 8., 0., 0., 0.],
[ 0., 0., 0., 9., 10., 11., 12., 0., 0., 0.],
[ 0., 0., 0., 13., 14., 15., 16., 0., 0., 0.],
[ 0., 0., 0., 17., 18., 19., 20., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
sanity check with your loop based solution:
In [122]: n = 2
In [123]: B = torch.zeros(A.shape[0] + 2*n, A.shape[1] + 2*n)
In [124]: for i in range(A.shape[0]):
...: for j in range(A.shape[1]):
...: B[n+i][n+j] = A[i][j]
...:
In [125]: B
Out[125]:
tensor([[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 1., 2., 3., 4., 0., 0.],
[ 0., 0., 5., 6., 7., 8., 0., 0.],
[ 0., 0., 9., 10., 11., 12., 0., 0.],
[ 0., 0., 13., 14., 15., 16., 0., 0.],
[ 0., 0., 17., 18., 19., 20., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.]])
timings:
# large sized input tensor
In [126]: A = torch.arange(start=1, end=5000*4 + 1).view(5000, -1)
In [127]: n = 2
In [132]: B = torch.zeros(A.shape[0] + 2*n, A.shape[1] + 2*n)
# loopy solution
In [133]: %%timeit
...: for i in range(A.shape[0]):
...: for j in range(A.shape[1]):
...: B[n+i][n+j] = A[i][j]
...:
92.1 ms ± 434 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# clear out `B` again by reinitializing it.
In [128]: B = torch.zeros(A.shape[0] + 2*n, A.shape[1] + 2*n)
In [129]: %timeit B[n:-n, n:-n] = A
49.6 µs ± 239 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
From the above timings, we can see that the vectorized approach is roughly 1800x faster than the loop based solution (92.1 ms vs. 49.6 µs per call).
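For completeness, the same zero-padding can be produced in one call with torch.nn.functional.pad (the 4-tuple pads the last two dimensions):
In [134]: import torch.nn.functional as F
In [135]: B2 = F.pad(A, (n, n, n, n))  # (left, right, top, bottom), zero-filled by default
In [136]: B2.shape
Out[136]: torch.Size([5004, 8])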
| https://stackoverflow.com/questions/55669625/ |
Why is an OOM happening on my model init()? | A single line in my model, tr.nn.Linear(hw_flat * num_filters*8, num_fc), is causing an OOM error on initialization of the model. Commenting it out removes the memory issue.
import torch as tr
from layers import Conv2dSame, Flatten
class Discriminator(tr.nn.Module):
def __init__(self, cfg):
super(Discriminator, self).__init__()
num_filters = 64
hw_flat = int(cfg.hr_resolution[0] / 2**4)**2
num_fc = 1024
self.model = tr.nn.Sequential(
# Channels in, channels out, filter size, stride, padding
Conv2dSame(cfg.num_channels, num_filters, 3),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters, num_filters, 3, 2),
tr.nn.BatchNorm2d(num_filters),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters, num_filters*2, 3),
tr.nn.BatchNorm2d(num_filters*2),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*2, num_filters*2, 3, 2),
tr.nn.BatchNorm2d(num_filters*2),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*2, num_filters*4, 3),
tr.nn.BatchNorm2d(num_filters*4),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*4, num_filters*4, 3, 2),
tr.nn.BatchNorm2d(num_filters*4),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*4, num_filters*8, 3),
tr.nn.BatchNorm2d(num_filters*8),
tr.nn.LeakyReLU(),
Conv2dSame(num_filters*8, num_filters*8, 3, 2),
tr.nn.BatchNorm2d(num_filters*8),
tr.nn.LeakyReLU(),
Flatten(),
tr.nn.Linear(hw_flat * num_filters*8, num_fc),
tr.nn.LeakyReLU(),
tr.nn.Linear(num_fc, 1),
tr.nn.Sigmoid()
)
self.model.apply(self.init_weights)
def forward(self, x_in):
x_out = self.model(x_in)
return x_out
def init_weights(self, layer):
if type(layer) in [tr.nn.Conv2d, tr.nn.Linear]:
tr.nn.init.xavier_uniform_(layer.weight)
This is strange, as hw_flat = 96*96 = 9216, and num_filters*8 = 512, so hw_flat * num_filters = 4718592, which is the number of parameters in that layer. I have confirmed this calculation as changing the layer to tr.nn.Linear(4718592, num_fc) results in the same output.
To me this makes no sense as dtype=float32, so the expected size of this would be 32*4718592 = 150,994,944 bytes. This is equivalent to about 150mb.
Error message is:
Traceback (most recent call last):
File "main.py", line 116, in <module>
main()
File "main.py", line 112, in main
srgan = SRGAN(cfg)
File "main.py", line 25, in __init__
self.discriminator = Discriminator(cfg).to(device)
File "/home/jpatts/Documents/ECE/ECE471-SRGAN/models.py", line 87, in __init__
tr.nn.Linear(hw_flat * num_filters*8, num_fc),
File "/home/jpatts/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 51, in __init__
self.weight = Parameter(torch.Tensor(out_features, in_features))
RuntimeError: $ Torch: not enough memory: you tried to allocate 18GB. Buy new RAM! at /pytorch/aten/src/TH/THGeneral.cpp:201
I am only running batch sizes of 1 as well (not that that affects this error), with overall input shape to the network being (1, 3, 1536, 1536), and shape after flatten layer being (1, 4718592).
Why is this happening?
| Your linear layer is quite large - it does, in fact, need at least 18GB of memory. (Your estimate is off for two reasons: (1) a float32 takes 4 bytes of memory, not 32, and (2) you didn't multiply by the output size.)
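A quick check of the size: 4,718,592 inputs × 1,024 outputs × 4 bytes per float32 = 19,327,352,832 bytes, i.e. exactly 18 GiB for the weight matrix alone, before gradients are counted.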
From the PyTorch documentation FAQs:
Don’t use linear layers that are too large. A linear layer nn.Linear(m, n) uses O(n*m)
memory: that is to say, the memory requirements of the weights scales quadratically with
the number of features. It is very easy to blow through your memory this way (and
remember that you will need at least twice the size of the weights, since you also need
to store the gradients.)
| https://stackoverflow.com/questions/55670244/ |
Loss is 'nan' all the time when training the neural network in PyTorch | I assigned different weight_decayfor the parameters, and the training loss and testing loss were all nan.
I printed the prediction_train,loss_train,running_loss_train,prediction_test,loss_test,and running_loss_test ,they were all nan.
And I have checked the data with numpy.any(numpy.isnan(dataset)), it returned False.
If I use optimizer = torch.optim.Adam(wnn.parameters()) rather than assigning different weight_decay for the parameters, there would be no problem.
Could you please tell me how to fix it? Here are the codes, I defined the activation function by myself. Thank you:)
class Morlet(nn.Module):
def __init__(self):
super(Morlet,self).__init__()
def forward(self,x):
x=(torch.cos(1.75*x))*(torch.exp(-0.5*x*x))
return x
morlet=Morlet()
class WNN(nn.Module):
def __init__(self):
super(WNN,self).__init__()
self.a1=torch.nn.Parameter(torch.randn(64,requires_grad=True))
self.b1=torch.nn.Parameter(torch.randn(64,requires_grad=True))
self.layer1=nn.Linear(30,64,bias=False)
self.out=nn.Linear(64,1)
def forward(self,x):
x=self.layer1(x)
x=(x-self.b1)/self.a1
x=morlet(x)
out=self.out(x)
return out
wnn=WNN()
optimizer = torch.optim.Adam([{'params': wnn.layer1.weight, 'weight_decay':0.01},
{'params': wnn.out.weight, 'weight_decay':0.01},
{'params': wnn.out.bias, 'weight_decay':0},
{'params': wnn.a1, 'weight_decay':0.01},
{'params': wnn.b1, 'weight_decay':0.01}])
criterion = nn.MSELoss()
for epoch in range(10):
prediction_test_list=[]
running_loss_train=0
running_loss_test=0
for i,(x1,y1) in enumerate(trainloader):
prediction_train=wnn(x1)
#print(prediction_train)
loss_train=criterion(prediction_train,y1)
#print(loss_train)
optimizer.zero_grad()
loss_train.backward()
optimizer.step()
running_loss_train+=loss_train.item()
#print(running_loss_train)
tr_loss=running_loss_train/train_set_y_array.shape[0]
for i,(x2,y2) in enumerate(testloader):
prediction_test=wnn(x2)
#print(prediction_test)
loss_test=criterion(prediction_test,y2)
#print(loss_test)
running_loss_test+=loss_test.item()
print(running_loss_test)
prediction_test_list.append(prediction_test.detach().cpu())
ts_loss=running_loss_test/test_set_y_array.shape[0]
print('Epoch {} Train Loss:{}, Test Loss:{}'.format(epoch+1,tr_loss,ts_loss))
test_set_y_array_plot=test_set_y_array*(dataset.max()-dataset.min())+dataset.min()
prediction_test_np=torch.cat(prediction_test_list).numpy()
prediction_test_plot=prediction_test_np*(dataset.max()-dataset.min())+dataset.min()
plt.plot(test_set_y_array_plot.flatten(),'r-',linewidth=0.5,label='True data')
plt.plot(prediction_test_plot,'b-',linewidth=0.5,label='Predicted data')
plt.legend()
plt.show()
print('Finish training')
The output was:
Epoch 1 Train Loss:nan, Test Loss:nan
And there was only the true data on the plot, as the picture shows.
Weight decay applies L2 regularization to the learned parameters. Taking a quick glance at your code, you are using the a1 weights as denominators here: x=(x-self.b1)/self.a1, with a weight decay of 0.01. This can drive some of those a1 weights towards zero, and what is the result of a division by zero?
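A minimal sketch of one possible fix, reusing the names from the question: stop decaying the denominator parameter a1 (alternatively, keep the denominator bounded away from zero inside forward):
optimizer = torch.optim.Adam([{'params': wnn.layer1.weight, 'weight_decay': 0.01},
                              {'params': wnn.out.weight, 'weight_decay': 0.01},
                              {'params': wnn.out.bias, 'weight_decay': 0},
                              {'params': wnn.a1, 'weight_decay': 0},  # was 0.01
                              {'params': wnn.b1, 'weight_decay': 0.01}])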
| https://stackoverflow.com/questions/55671735/ |
Adam optimizer error: one of the variables needed for gradient computation has been modified by an inplace operation | I am trying to implement an Actor-Critic learning automation algorithm that is not the same as the basic actor-critic algorithm; it's a little bit changed.
Anyway, I used the Adam optimizer and implemented it with PyTorch.
When I backward the TD-error for the Critic first, there's no error.
However, when I backward the loss for the Actor, the error occurs.
--------------------------------------------------------------------------- RuntimeError Traceback (most recent call
last) in
46 # update Actor Func
47 optimizer_M.zero_grad()
---> 48 loss.backward()
49 optimizer_M.step()
50
~\Anaconda3\lib\site-packages\torch\tensor.py in backward(self,
gradient, retain_graph, create_graph)
100 products. Defaults to False.
101 """
--> 102 torch.autograd.backward(self, gradient, retain_graph, create_graph)
103
104 def register_hook(self, hook):
~\Anaconda3\lib\site-packages\torch\autograd\__init__.py in
backward(tensors, grad_tensors, retain_graph, create_graph,
grad_variables)
88 Variable._execution_engine.run_backward(
89 tensors, grad_tensors, retain_graph, create_graph,
---> 90 allow_unreachable=True) # allow_unreachable flag
91
92
RuntimeError: one of the variables needed for gradient computation has
been modified by an inplace operation
Above is the content of the error.
I tried to find the inplace operation, but I couldn't find one in my code.
I think I don't know how to handle the optimizer correctly.
Here is main code:
for cur_step in range(1):
action = M_Agent(state, flag)
next_state, r = env.step(action)
# calculate TD Error
TD_error = M_Agent.cal_td_error(r, next_state)
# calculate Target
target = torch.FloatTensor([M_Agent.cal_target(TD_error)])
logit = M_Agent.cal_logit()
loss = criterion(logit, target)
# update value Func
optimizer_M.zero_grad()
TD_error.backward()
optimizer_M.step()
# update Actor Func
loss.backward()
optimizer_M.step()
Here is the agent network
# Actor-Critic Agent
self.act_pipe = nn.Sequential(nn.Linear(state, 128),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(128, 256),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(256, num_action),
nn.Softmax()
)
self.val_pipe = nn.Sequential(nn.Linear(state, 128),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(128, 256),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(256, 1)
)
def forward(self, state, flag, test=None):
temp_action_prob = self.act_pipe(state)
self.action_prob = self.cal_prob(temp_action_prob, flag)
self.action = self.get_action(self.action_prob)
self.value = self.val_pipe(state)
return self.action
I wanna update each network respectively.
And I want to know: does the basic TD Actor-Critic method use the TD error as the loss,
or the squared error between r + V(s') and V(s)?
| I think the problem is that you zero the gradients right before calling backward, after the forward propagation. Note that for automatic differentiation you need the computation graph and the intermediate results that you produce during your forward pass.
So zero the gradients before your TD error and target calculations, not after you have finished your forward propagation!
for cur_step in range(1):
action = M_Agent(state, flag)
next_state, r = env.step(action)
optimizer_M.zero_grad() # zero your gradient here
# calculate TD Error
TD_error = M_Agent.cal_td_error(r, next_state)
# calculate Target
target = torch.FloatTensor([M_Agent.cal_target(TD_error)])
logit = M_Agent.cal_logit()
loss = criterion(logit, target)
# update value Func
TD_error.backward()
optimizer_M.step()
# update Actor Func
loss.backward()
optimizer_M.step()
To answer your second question, the DDPG algorithm for example uses the squared error (see the paper).
Another recommendation: in many cases, large parts of the value and policy networks are shared in deep actor-critic agents. You keep the same layers up to the last hidden layer, and use a single linear output for value prediction and a softmax layer for the action distribution. This is especially useful if you have high-dimensional visual inputs, as it acts as a sort of multi-task learning, but you can try it nevertheless. (As I see, you have a low-dimensional state vector.)
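For illustration, a minimal sketch of such a shared trunk, reusing the state and num_action names from the question (the class name and layer sizes are assumptions):
import torch.nn as nn

class SharedActorCritic(nn.Module):
    def __init__(self, state, num_action):
        super(SharedActorCritic, self).__init__()
        # trunk shared by both heads
        self.shared = nn.Sequential(nn.Linear(state, 128), nn.ReLU(),
                                    nn.Linear(128, 256), nn.ReLU())
        self.policy_head = nn.Sequential(nn.Linear(256, num_action),
                                         nn.Softmax(dim=-1))
        self.value_head = nn.Linear(256, 1)

    def forward(self, x):
        h = self.shared(x)
        return self.policy_head(h), self.value_head(h)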
| https://stackoverflow.com/questions/55673412/ |
Should I use softmax as output when using cross entropy loss in pytorch? | I have a problem with classifying fully connected deep neural net with 2 hidden layers for MNIST dataset in pytorch.
I want to use tanh as activations in both hidden layers, but in the end, I should use softmax.
For the loss, I am choosing nn.CrossEntropyLoss() in PyTorch, which (as I have found out) does not want to take one-hot encoded labels as true labels, but takes a LongTensor of classes instead.
My model is nn.Sequential() and when I am using softmax in the end, it gives me worse results in terms of accuracy on testing data. Why?
import torch
from torch import nn
inputs, n_hidden0, n_hidden1, out = 784, 128, 64, 10
n_epochs = 500
model = nn.Sequential(
nn.Linear(inputs, n_hidden0, bias=True),
nn.Tanh(),
nn.Linear(n_hidden0, n_hidden1, bias=True),
nn.Tanh(),
nn.Linear(n_hidden1, out, bias=True),
nn.Softmax() # SHOULD THIS BE THERE?
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.5)
for epoch in range(n_epochs):
y_pred = model(X_train)
loss = criterion(y_pred, Y_train)
print('epoch: ', epoch+1,' loss: ', loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
| As stated in the torch.nn.CrossEntropyLoss() doc:
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
Therefore, you should not use softmax before.
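A sketch of the corrected setup, reusing the names from the question: drop the final Softmax and apply softmax only when probabilities are actually needed:
model = nn.Sequential(
    nn.Linear(inputs, n_hidden0, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden0, n_hidden1, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden1, out, bias=True))  # raw logits: exactly what CrossEntropyLoss expects

# only if you need actual probabilities (e.g. at evaluation time):
probs = torch.softmax(model(X_train), dim=1)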
| https://stackoverflow.com/questions/55675345/ |
Label Smoothing in PyTorch | I'm building a ResNet-18 classification model for the Stanford Cars dataset using transfer learning. I would like to implement label smoothing to penalize overconfident predictions and improve generalization.
TensorFlow has a simple keyword argument in CrossEntropyLoss. Has anyone built a similar function for PyTorch that I could plug-and-play with?
| The generalization and learning speed of a multi-class neural network can often be significantly improved by using soft targets that are a weighted average of the hard targets and the uniform distribution over labels. Smoothing the labels in this way prevents the network from becoming over-confident and label smoothing has been used in many state-of-the-art models, including image classification, language translation, and speech recognition.
Label Smoothing is already implemented in Tensorflow within the cross-entropy loss functions BinaryCrossentropy and CategoricalCrossentropy. But currently, there is no official implementation of Label Smoothing in PyTorch. However, there is an active discussion going on about it and hopefully it will be provided in an official package. Here is that discussion thread: Issue #7455.
Here we will bring some of the best available implementations of Label Smoothing (LS) from PyTorch practitioners. Basically, there are many ways to implement LS. Please refer to the specific discussions on this; one is here, and another here. Below we present implementations done in 2 unique ways, with two versions of each; so 4 in total.
Option 1: CrossEntropyLossWithProbs
In this way, it accepts the one-hot target vector, and the user must manually smooth it. This can be done inside a with torch.no_grad() block, as that temporarily sets all of the requires_grad flags to False.
Devin Yang: Source
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.nn.modules.loss import _WeightedLoss
class LabelSmoothingLoss(nn.Module):
def __init__(self, classes, smoothing=0.0, dim=-1, weight = None):
"""if smoothing == 0, it's one-hot method
if 0 < smoothing < 1, it's smooth method
"""
super(LabelSmoothingLoss, self).__init__()
self.confidence = 1.0 - smoothing
self.smoothing = smoothing
self.weight = weight
self.cls = classes
self.dim = dim
def forward(self, pred, target):
assert 0 <= self.smoothing < 1
pred = pred.log_softmax(dim=self.dim)
if self.weight is not None:
pred = pred * self.weight.unsqueeze(0)
with torch.no_grad():
true_dist = torch.zeros_like(pred)
true_dist.fill_(self.smoothing / (self.cls - 1))
true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
return torch.mean(torch.sum(-true_dist * pred, dim=self.dim))
Additionally, we've added an assertion check on self.smoothing and added loss-weighting support to this implementation.
Shital Shah: Source
Shital already posted the answer here. We're pointing out that this implementation is similar to Devin Yang's implementation above; however, we present his code with the syntax slightly minimized.
class SmoothCrossEntropyLoss(_WeightedLoss):
def __init__(self, weight=None, reduction='mean', smoothing=0.0):
super().__init__(weight=weight, reduction=reduction)
self.smoothing = smoothing
self.weight = weight
self.reduction = reduction
def k_one_hot(self, targets:torch.Tensor, n_classes:int, smoothing=0.0):
with torch.no_grad():
targets = torch.empty(size=(targets.size(0), n_classes),
device=targets.device) \
.fill_(smoothing /(n_classes-1)) \
.scatter_(1, targets.data.unsqueeze(1), 1.-smoothing)
return targets
def reduce_loss(self, loss):
return loss.mean() if self.reduction == 'mean' else loss.sum() \
if self.reduction == 'sum' else loss
def forward(self, inputs, targets):
assert 0 <= self.smoothing < 1
targets = self.k_one_hot(targets, inputs.size(-1), self.smoothing)
log_preds = F.log_softmax(inputs, -1)
if self.weight is not None:
log_preds = log_preds * self.weight.unsqueeze(0)
return self.reduce_loss(-(targets * log_preds).sum(dim=-1))
Check
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.nn.modules.loss import _WeightedLoss
if __name__=="__main__":
# 1. Devin Yang
crit = LabelSmoothingLoss(classes=5, smoothing=0.5)
predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
[0, 0.9, 0.2, 0.2, 1],
[1, 0.2, 0.7, 0.9, 1]])
v = crit(Variable(predict),
Variable(torch.LongTensor([2, 1, 0])))
print(v)
# 2. Shital Shah
crit = SmoothCrossEntropyLoss(smoothing=0.5)
predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
[0, 0.9, 0.2, 0.2, 1],
[1, 0.2, 0.7, 0.9, 1]])
v = crit(Variable(predict),
Variable(torch.LongTensor([2, 1, 0])))
print(v)
tensor(1.4178)
tensor(1.4178)
Option 2: LabelSmoothingCrossEntropyLoss
In this way, it accepts the raw target vector, and the user doesn't manually smooth it; rather, the built-in module takes care of the label smoothing. It allows us to implement label smoothing in terms of F.nll_loss.
(a). Wangleiofficial: Source - (AFAIK), Original Poster
(b). Datasaurus: Source - Added Weighting Support
Further, we slightly minimize the coding write-up to make it more concise.
class LabelSmoothingLoss(torch.nn.Module):
def __init__(self, smoothing: float = 0.1,
reduction="mean", weight=None):
super(LabelSmoothingLoss, self).__init__()
self.smoothing = smoothing
self.reduction = reduction
self.weight = weight
def reduce_loss(self, loss):
return loss.mean() if self.reduction == 'mean' else loss.sum() \
if self.reduction == 'sum' else loss
def linear_combination(self, x, y):
return self.smoothing * x + (1 - self.smoothing) * y
def forward(self, preds, target):
assert 0 <= self.smoothing < 1
if self.weight is not None:
self.weight = self.weight.to(preds.device)
n = preds.size(-1)
log_preds = F.log_softmax(preds, dim=-1)
loss = self.reduce_loss(-log_preds.sum(dim=-1))
nll = F.nll_loss(
log_preds, target, reduction=self.reduction, weight=self.weight
)
return self.linear_combination(loss / n, nll)
NVIDIA/DeepLearningExamples: Source
class LabelSmoothing(nn.Module):
"""NLL loss with label smoothing.
"""
def __init__(self, smoothing=0.0):
"""Constructor for the LabelSmoothing module.
:param smoothing: label smoothing factor
"""
super(LabelSmoothing, self).__init__()
self.confidence = 1.0 - smoothing
self.smoothing = smoothing
def forward(self, x, target):
logprobs = torch.nn.functional.log_softmax(x, dim=-1)
nll_loss = -logprobs.gather(dim=-1, index=target.unsqueeze(1))
nll_loss = nll_loss.squeeze(1)
smooth_loss = -logprobs.mean(dim=-1)
loss = self.confidence * nll_loss + self.smoothing * smooth_loss
return loss.mean()
Check
if __name__=="__main__":
# Wangleiofficial
crit = LabelSmoothingLoss(smoothing=0.3, reduction="mean")
predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
[0, 0.9, 0.2, 0.2, 1],
[1, 0.2, 0.7, 0.9, 1]])
v = crit(Variable(predict),
Variable(torch.LongTensor([2, 1, 0])))
print(v)
# NVIDIA
crit = LabelSmoothing(smoothing=0.3)
predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
[0, 0.9, 0.2, 0.2, 1],
[1, 0.2, 0.7, 0.9, 1]])
v = crit(Variable(predict),
Variable(torch.LongTensor([2, 1, 0])))
print(v)
tensor(1.3883)
tensor(1.3883)
Update: Officially Added
torch.nn.CrossEntropyLoss(weight=None, size_average=None,
ignore_index=- 100, reduce=None,
reduction='mean', label_smoothing=0.0)
| https://stackoverflow.com/questions/55681502/ |
How does downsample work in ResNet in PyTorch code? | In this PyTorch ResNet code example they define downsample as a variable on line 44, and line 58 uses it as a function. How does this downsample work here, from a CNN point of view and from a Python code point of view?
code example : pytorch ResNet
I searched to see whether downsample is a PyTorch built-in function, but it is not.
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, norm_layer=None):
super(BasicBlock, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
if groups != 1:
raise ValueError('BasicBlock only supports groups=1')
# Both self.conv1 and self.downsample layers downsample the input when stride != 1
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = norm_layer(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = norm_layer(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
In this ResNet example, when we define the BasicBlock class we pass downsample as a constructor parameter.
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, norm_layer=None):
If we pass nothing to the class, then downsample = None, and as a result the identity will not be changed.
When we pass downsample = "some convolution layer" as a class constructor argument, it will downsample the identity via the passed convolution layer so the addition can be performed successfully. This layer downsamples the identity through the following code:
if self.downsample is not None:
identity = self.downsample(x)
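For example, in torchvision's _make_layer the downsample module is built as a strided 1x1 convolution followed by batch norm whenever the spatial size or channel count changes (a sketch following the linked torchvision source):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
    downsample = nn.Sequential(
        conv1x1(self.inplanes, planes * block.expansion, stride),
        norm_layer(planes * block.expansion),
    )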
| https://stackoverflow.com/questions/55688645/ |
Why does dim=1 return row indices in torch.argmax? | I am working on argmax function of PyTorch which is defined as:
torch.argmax(input, dim=None, keepdim=False)
Consider an example
a = torch.randn(4, 4)
print(a)
print(torch.argmax(a, dim=1))
Here when I use dim=1 instead of searching column vectors, the function searches for row vectors as shown below.
print(a) :
tensor([[-1.7739, 0.8073, 0.0472, -0.4084],
[ 0.6378, 0.6575, -1.2970, -0.0625],
[ 1.7970, -1.3463, 0.9011, -0.8704],
[ 1.5639, 0.7123, 0.0385, 1.8410]])
print(torch.argmax(a, dim=1))
tensor([1, 1, 0, 3])
As far as my assumption goes, dim = 0 represents rows and dim = 1 represents columns.
It's time to correctly understand how the axis or dim argument works in PyTorch. dim-0 runs down the rows, and dim-1 runs along each row:

        ------> dim-1 ------>
  |   [[-1.7739,  0.8073,  0.0472, -0.4084],
  |    [ 0.6378,  0.6575, -1.2970, -0.0625],
dim-0  [ 1.7970, -1.3463,  0.9011, -0.8704],
  |    [ 1.5639,  0.7123,  0.0385,  1.8410]]
  v

The following example should make sense once you comprehend this diagram:
# argmax (indices where max values are present) along dimension-1
In [215]: torch.argmax(a, dim=1)
Out[215]: tensor([1, 1, 0, 3])
Note: dim (short for 'dimension') is the torch equivalent of 'axis' in NumPy.
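For contrast, reducing along dim-0 instead gives, for each column, the row index of the maximum:
# argmax (indices where max values are present) along dimension-0
In [216]: torch.argmax(a, dim=0)
Out[216]: tensor([2, 0, 2, 3])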
| https://stackoverflow.com/questions/55691819/ |
How to create a Pytorch Dataset from .pt files? | I have transformed MNIST images saved as .pt files in a folder in Google drive. I'm writing my Pytorch code in Colab.
I would like to use these files, and create a Dataset that stores these images as Tensors. How can I do this?
Transforming images during training took too long. Hence, I transformed them beforehand and saved them all as .pt files. I just want to load them back as a dataset and use them in my model.
| The approach you are following to save images is indeed a good idea. In such a case, you can simply write your own Dataset class to load the images.
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.sampler import RandomSampler
class ReaderDataset(Dataset):
def __init__(self, filename):
# load the images from file
def __len__(self):
# return total dataset size
def __getitem__(self, index):
# write your code to return each batch element
Then you can create Dataloader as follows.
train_dataset = ReaderDataset(filepath)
train_sampler = RandomSampler(train_dataset)
train_loader = DataLoader(
train_dataset,
batch_size=args.batch_size,
sampler=train_sampler,
num_workers=args.data_workers,
collate_fn=batchify,
pin_memory=args.cuda,
drop_last=args.parallel
)
# args is a dictionary containing parameters
# batchify is a custom function that prepares each mini-batch
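For instance, a minimal concrete sketch, assuming each .pt file was written with torch.save and holds an (image_tensor, label) pair (the file layout and class name are assumptions):
import os
import torch
from torch.utils.data import Dataset

class TensorFolderDataset(Dataset):
    def __init__(self, folder):
        # collect all .pt files in the folder, in a stable order
        self.paths = sorted(os.path.join(folder, f)
                            for f in os.listdir(folder) if f.endswith('.pt'))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        image, label = torch.load(self.paths[index])  # assumed save format
        return image, label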
| https://stackoverflow.com/questions/55693363/ |
How to fix "TypeError: data type not understood" in numpy when creating transformer no peak mask | I'm trying to implement and train a transformer for NMT via a blog post, everything works except I can't create the no peaking mask as I get this error: "TypeError: data type not understood"
Code:
target_seq = batch.Python.transpose(0,1)
target_pad = PY_TEXT.vocab.stoi['<pad>']
target_msk = (target_seq != target_pad).unsqueeze(1)
size = target_seq.size(1) # get seq_len for matrix
nopeak_mask = np.triu(np.ones(1, size, size),
k=1).astype('uint8')
nopeak_mask = Variable(torch.from_numpy(nopeak_mask) == 0)
target_msk = target_msk & nopeak_mask
Error message:
TypeError Traceback (most recent call last)
<ipython-input-36-e19167b74ba0> in <module>()
4 target_msk = (target_seq != target_pad).unsqueeze(1)
5 size = target_seq.size(1) # get seq_len for matrix
----> 6 nopeak_mask = np.triu(np.ones(1, size, size),
7 k=1).astype('uint8')
8 nopeak_mask = Variable(torch.from_numpy(nopeak_mask) == 0)
~/.local/lib/python3.6/site-packages/numpy/core/numeric.py in ones(shape, dtype, order)
201
202 """
--> 203 a = empty(shape, dtype, order)
204 multiarray.copyto(a, 1, casting='unsafe')
205 return a
TypeError: data type not understood
The shape argument to np.ones should be a single tuple of the desired sizes, not separate integers. With np.ones(1, size, size), numpy interprets the second argument as a dtype, hence "TypeError: data type not understood".
Try:
np.triu(np.ones((1, size, size)), k=1).astype("uint8")
| https://stackoverflow.com/questions/55694263/ |
How can I simplify a nested loop into torch tensor operations? | I'm trying to convert some code I have written in numpy which contains a nested-loop into tensor operations found in PyTorch. However, after trying to implement my own version I'm not getting the same value on the output. I have managed to do the same with a single loop, so I'm not entirely sure what I'm doing wrong.
#(Numpy Version)
#calculate Kinetic Energy
summation = 0.0
for i in range(0,len(k_values)-1):
summation += (k_values[i]**2.0)*wavefp[i]*(((self.hbar*kp_values[i])**2.0)/(2.0*self.mu))*wavef[i]
Ek = step*(4.0*np.pi)*summation
#(Numpy Version)
#calculate Potential Energy
summation = 0.0
for i in range(0,len(k_values)-1):
for j in range(0,len(kp_values)-1):
summation+= (k_values[i]**2.0)*wavefp[i]*(kp_values[j]**2.0)*wavef[j]*self.MTV[i,j]
Ep = (step**2.0)*(4.0*np.pi)*(2.0/np.pi)*summation
#####################################################
#(PyTorch Version)
#calcualte Kinetic Energy
Ek = step*(4.0*np.pi)*torch.sum( k_values.pow(2)*wavefp.mul(wavef)*((kp_values.mul(self.hbar)).pow(2)/(2.0*self.mu)) )
#(PyTorch Version)
#calculate Potential Energy
summation = 0.0
for i in range(0,len(k_values)-1):
summation += ((k_values[i].pow(2)).mul(wavefp[i]))*torch.sum( (kp_values.pow(2)).mul(wavef).mul(self.MTV[i,:]) )
Ep = (step**2.0)*(4.0*np.pi)*(2.0/np.pi)*summation
The arrays/tensors k_values, kp_values, wavef, and wavefp have dimensions of (1000,1). The values self.hbar, and self.mu, and step are scalars. The variable self.MTV is a matrix of size (1000,1000).
I would expect both methods to give the same output, but they don't. The code for calculating the kinetic energy (in both NumPy and PyTorch) gives the same value. However, the potential energy calculations differ, and I'm not entirely sure why.
Many Thanks in advance!
The problem is in the shapes. You have kp_values and wavef with shape (1000, 1), which need to be flattened to (1000,) before the multiplications. The outcome of (kp_values.pow(2)).mul(wavef).mul(MTV[i,:]) is a matrix, but you assumed it is a vector.
So, the following should work.
summation += ((k_values[i].pow(2)).mul(wavefp[i]))*torch.sum((kp_values.squeeze(1)
.pow(2)).mul(wavef.squeeze(1)).mul(MTV[i,:]))
And a loop-free Numpy and PyTorch solution would be:
step = 1.0
k_values = np.random.randint(0, 100, size=(1000, 1)).astype("float") / 100
kp_values = np.random.randint(0, 100, size=(1000, 1)).astype("float") / 100
wavef = np.random.randint(0, 100, size=(1000, 1)).astype("float") / 100
wavefp = np.random.randint(0, 100, size=(1000, 1)).astype("float") / 100
MTV = np.random.randint(0, 100, size=(1000, 1000)).astype("float") / 100
# Numpy solution
term1 = k_values**2.0 * wavefp # 1000 x 1
temp = kp_values**2.0 * wavef # 1000 x 1
term2 = np.matmul(temp.transpose(1, 0), MTV).transpose(1, 0) # 1000 x 1
summation = np.sum(term1 * term2)
print(summation)
# PyTorch solution
term1 = k_values.pow(2).mul(wavefp) # 1000 x 1
term2 = kp_values.pow(2).mul(wavef).transpose(0, 1).matmul(MTV) # 1 x 1000
summation = torch.sum(term2.transpose(0, 1).mul(term1)) # scalar: sum over 1000 x 1
print(summation.item())
Output
12660.407492918514
12660.407492918514
| https://stackoverflow.com/questions/55694676/ |
What do * and mean stand for in this PyTorch expression? | I do not understand how to evaluate this expression:
x.view(*(x.shape[:-2]), -1).mean(-1),
if x.shape == (N, C, H, W).
What does the asterisk * stand for? And what is mean(-1)?
|
What is *?
For .view() pytorch expects the new shape to be provided by individual int arguments (represented in the doc as *shape). The asterisk (*) can be used in python to unpack a list into its individual elements, thus passing to view the correct form of input arguments it expects.
So, in your case, x.shape is (N, C, H, W), if you were to pass x.shape[:-2] without the asterisk, you would get x.view((N, C), -1) - which is not what view() expects. Unpacking (N, C) using the asterisk results with view receiving view(N, C, -1) arguments as it expects. The resulting shape is (N, C, H*W) (a 3D tensor instead of 4).
What is mean(-1)?
Simply look at the documentation of .mean(): the first argument is a dim argument. That is, x.mean(-1) applies mean along the last dimension. In your case, since keepdim=False by default, your output will be an (N, C) sized tensor where each element corresponds to the mean value over both spatial dimensions.
This is equivalent to
x.mean(-1).mean(-1)
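A quick sketch checking the equivalence on a random tensor of the stated shape:
import torch
N, C, H, W = 2, 3, 4, 5
x = torch.randn(N, C, H, W)
a = x.view(*(x.shape[:-2]), -1).mean(-1)  # (N, C): mean over all H*W positions
b = x.mean(-1).mean(-1)                   # same (N, C) result
print(torch.allclose(a, b))               # True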
| https://stackoverflow.com/questions/55718119/ |
Creating a Simple 1D CNN in PyTorch with Multiple Channels | The dimensionality of the PyTorch inputs is not what the model expects, and I am not sure why.
To my understanding...
in_channels is first the number of 1D inputs we would like to pass to the model, and is the previous out_channel for all subsequent layers.
out_channels is the desired number of kernels (filters).
kernel_size is the number of parameters per filter.
Therefore, we would expect, as data passed to forward, a dataset with 7 1D channels (i.e. a 2D input).
However, the following code throws an error that is not consistent with what I expect:
import numpy
import torch
X = numpy.random.uniform(-10, 10, 70).reshape(-1, 7)
# Y = np.random.randint(0, 9, 10).reshape(-1, 1)
class Simple1DCNN(torch.nn.Module):
def __init__(self):
super(Simple1DCNN, self).__init__()
self.layer1 = torch.nn.Conv1d(in_channels=7, out_channels=20, kernel_size=5, stride=2)
self.act1 = torch.nn.ReLU()
self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
def forward(self, x):
x = self.layer1(x)
x = self.act1(x)
x = self.layer2(x)
log_probs = torch.nn.functional.log_softmax(x, dim=1)
return log_probs
model = Simple1DCNN()
print(model(torch.tensor(X)).size)
Throws the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-eca5856a2314> in <module>()
21
22 model = Simple1DCNN()
---> 23 print(model(torch.tensor(X)).size)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
<ipython-input-5-eca5856a2314> in forward(self, x)
12 self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
13 def forward(self, x):
---> 14 x = self.layer1(x)
15 x = self.act1(x)
16 x = self.layer2(x)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
185 def forward(self, input):
186 return F.conv1d(input, self.weight, self.bias, self.stride,
--> 187 self.padding, self.dilation, self.groups)
188
189
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [20, 7, 5], but got 2-dimensional input of size [10, 7] instead
Edit: See below for solution, motivated by Shai.
import numpy
import torch
X = numpy.random.uniform(-10, 10, 70).reshape(1, 7, -1)
# Y = np.random.randint(0, 9, 10).reshape(1, 1, -1)
class Simple1DCNN(torch.nn.Module):
def __init__(self):
super(Simple1DCNN, self).__init__()
self.layer1 = torch.nn.Conv1d(in_channels=7, out_channels=20, kernel_size=5, stride=2)
self.act1 = torch.nn.ReLU()
self.layer2 = torch.nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
def forward(self, x):
x = self.layer1(x)
x = self.act1(x)
x = self.layer2(x)
log_probs = torch.nn.functional.log_softmax(x, dim=1)
return log_probs
model = Simple1DCNN().double()
print(model(torch.tensor(X)).shape)
| You are forgetting the "minibatch dimension", each "1D" sample has indeed two dimensions: the number of channels (7 in your example) and length (10 in your case). However, pytorch expects as input not a single sample, but rather a minibatch of B samples stacked together along the "minibatch dimension".
So a "1D" CNN in pytorch expects a 3D tensor as input: BxCxT. If you only have one signal, you can add a singleton dimension:
out = model(torch.tensor(X)[None, ...])
| https://stackoverflow.com/questions/55720464/ |
TypeError: can't convert np.ndarray of type numpy.object_ | How to convert a numpy array of dtype=object to torch Tensor?
array([
array([0.5, 1.0, 2.0], dtype=float16),
array([4.0, 6.0, 8.0], dtype=float16)
], dtype=object)
| It is difficult to answer properly since you do not show us how you try to do it. From your error message I can see that you try to convert a numpy array containing objects to a torch tensor. This does not work, you will need a numeric data type:
import torch
import numpy as np
# Your test array without 'dtype=object'
a = np.array([
np.array([0.5, 1.0, 2.0], dtype=np.float16),
np.array([4.0, 6.0, 8.0], dtype=np.float16),
])
b = torch.from_numpy(a)
print(a.dtype) # This should not be 'object'
print(b)
Output
float16
tensor([[0.5000, 1.0000, 2.0000],
[4.0000, 6.0000, 8.0000]], dtype=torch.float16)
| https://stackoverflow.com/questions/55724123/ |
Pytorch: multiple datasets with multiple losses | I am using multiple datasets. I have multiple losses, each of which must be evaluated on a subset of these datasets. I want to generate a batch from each dataset, and evaluate each loss on all of its appropriate batches. Some of the losses are pairwise (need to load pairs of corresponding datapoints) whereas others are computed on single datapoints. I need to design this in such a way that is open to easily adding new datasets. Is there any pytorch builtin that would help with this? What is the best way to design this in pytorch? Thanks in advance.
| It's not clear from your question what exactly your settings are.
However, you can have multiple Datasets instances, one for each of your datasets.
On top of your datasets, you can implement a "tagged dataset", a dataset that adds a "tag" for all samples:
import torch.utils.data as data

class TaggedDataset(data.Dataset):
    def __init__(self, dataset, tag):  # note: 'self' must be the first argument
        super(TaggedDataset, self).__init__()
        self.ds_ = dataset
        self.tag_ = tag

    def __len__(self):
        return len(self.ds_)

    def __getitem__(self, index):
        return self.ds_[index], self.tag_
Give a different tag to each dataset, concat all of them into a single ConcatDataset, and wrap a regular DataLoader around it.
Now, in your training code
for input, label, tag in my_tagged_loader:
# process each input according to the dataset tag it got.
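A sketch of how the pieces fit together (the dataset names and batch size are assumptions):
ds_a = TaggedDataset(dataset_a, tag=0)  # e.g. samples used by the pairwise losses
ds_b = TaggedDataset(dataset_b, tag=1)  # e.g. samples used by the pointwise losses
my_tagged_loader = data.DataLoader(data.ConcatDataset([ds_a, ds_b]),
                                   batch_size=32, shuffle=True)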
| https://stackoverflow.com/questions/55725798/ |
Which python deep learning libraries compile at runtime? | I am trying to wrap my head around C-optimized code in python. I have read a couple of times now that python achieves high-speed computing through C-extensions. In other words, whenever I work with libraries such as numpy, it basically calls a C-extension that calculates the result and returns it.
C-extensions using numpy
Say I want to add two numbers using np.add(x,y). If I understand it correctly, libraries such as numpy do not compile the python code but instead already come with executables that will simply take the values x and y and return the result. Is that correct?
Theano, Tensorflow, and PyTorch
In particular, I am wondering if this is also true for deep learning libraries. According to the official documentation of Theano, it requires g++ and gcc (at least they are highly recommended). Does this mean that Theano will compile C (or C++) code at runtime of the python script? If so, is it the same for PyTorch and Tensorflow?
I hope that someone can solve my confusion here! Thanks a lot!
| C extensions in python
numpy uses C-extensions a lot. For instance, you can take a look at the C implementation of the sort() function [1] here [2].
[1] https://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html
[2] https://github.com/numpy/numpy/blob/master/numpy/core/src/npysort/quicksort.c.src
Deep learning libraries
Deep learning libraries use C-extensions for a large part of their backend, as well as CUDA and CUDNN. Code can be compiled at runtime:
[3] http://deeplearning.net/software/theano/extending/pipeline.html#compilation-of-the-computation-graph
[4] https://www.tensorflow.org/xla/jit
[5] https://pytorch.org/blog/the-road-to-1_0/#production--pain-for-researchers
To answer your question, theano will compile C/C++ code at runtime of the python script. The graph compilation time at runtime is extremely slow for theano: I advise you to focus on pytorch or tensorflow rather than theano.
If you're new to deep learning, you may take a quick look at [6] too.
[6] https://github.com/google/jax
| https://stackoverflow.com/questions/55733736/ |
Stretch the values of a pytorch tensor | I wish to 'stretch' the last two dimensions of a pytorch tensor to increase the spatial resolution of a (batch, channels, y, x) tensor.
Minimal example (I need 'new_function')
a = torch.tensor([[1, 2], [3, 4]])
b = new_function(a, (2, 3))
print(b)
tensor([[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[3, 3, 3, 4, 4, 4],
[3, 3, 3, 4, 4, 4]])
One way of doing this (for the real problem):
a = torch.ones((2, 256, 2, 2)) # my original data.
b = torch.zeros((2, 256, 80, 96)) # The output I need
b[:, :, :40, :48] = a[:, :, 0, 0]
b[:, :, 40:, :48] = a[:, :, 1, 0]
b[:, :, :40, 48:] = a[:, :, 0, 1]
b[:, :, 40:, 48:] = a[:, :, 1, 1]
| Use torch.nn.functional.interpolate (thanks to Shai)
torch.nn.functional.interpolate(input_tensor.float(), size=(4, 6))
My original idea was to use a variety of view and repeat methods:
def stretch(e, sdims):
    od = e.shape
    out = e.view(od[0], od[1], -1, 1)        # flatten the spatial dims, add a unit axis
    out = out.repeat(1, 1, 1, sdims[-1])     # repeat every value along the width factor
    out = out.view(od[0], od[1], od[2], -1)  # fold back into rows of stretched width
    out = out.repeat(1, 1, 1, sdims[-2])     # tile each stretched row (becomes row replication after the final view)
    return out.view(od[0], od[1], od[2] * sdims[0], od[3] * sdims[1])

# example output shape, e.g. for a (2, 2, 2, 2) input stretched by (2, 3):
torch.Size([2, 2, 4, 6])
| https://stackoverflow.com/questions/55734651/ |
What is the desired behavior of average pooling with padding? | Recently I've trained a neural network using pytorch and there is an average pooling layer with padding in it. And I'm confused about the behavior of it as well as the definition of average pooling with padding.
For example, if we have a input tensor:
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
When padding is one and kernel size 3, the input to the first kernel should be:
0, 0, 0
0, 1, 2
0, 4, 5
The output from the pytorch is 12/4 = 3 (ignoring padded 0), but I think it should be 12/9 = 1.333
Can anyone explain this to me?
Much appreciated.
| It's basically up to you to decide how you want your padded pooling layer to behave.
This is why pytorch's avg pool (e.g., nn.AvgPool2d) has an optional parameter count_include_pad=True:
By default (True) Avg pool will first pad the input and then treat all elements the same. In this case the output of your example would indeed be 1.33.
On the other hand, if you set count_include_pad=False the pooling layer will ignore the padded elements and the result in your example would be 3.
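A quick sketch reproducing the numbers from the question:
import torch
import torch.nn as nn

x = torch.tensor([[[[1., 2., 3.],
                    [4., 5., 6.],
                    [7., 8., 9.]]]])  # shape (1, 1, 3, 3)

incl = nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=True)
excl = nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=False)

print(incl(x)[0, 0, 0, 0])  # tensor(1.3333) -> 12 / 9, padded zeros counted
print(excl(x)[0, 0, 0, 0])  # tensor(3.)     -> 12 / 4, padded zeros ignored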
| https://stackoverflow.com/questions/55738420/ |
Getting gradient of vectorized function in pytorch | I am brand new to PyTorch and want to do what I assume is a very simple thing but am having a lot of difficulty.
I have the function sin(x) * cos(x) + x^2 and I want to get the derivative of that function at any point.
If I do this with one point it works perfectly as
x = torch.autograd.Variable(torch.Tensor([4]),requires_grad=True)
y = torch.sin(x)*torch.cos(x)+torch.pow(x,2)
y.backward()
print(x.grad) # outputs tensor([7.8545])
However, I want to be able to pass in a vector as x and for it to evaluate the derivative element-wise. For example:
Input: [4., 4., 4.,]
Output: tensor([7.8545, 7.8545, 7.8545])
But I can't seem to get this working.
I tried simply doing
x = torch.tensor([4., 4., 4., 4.], requires_grad=True)
out = torch.sin(x)*torch.cos(x)+x.pow(2)
out.backward()
print(x.grad)
But I get the error "RuntimeError: grad can be implicitly created only for scalar outputs"
How do I adjust this code for vectors?
Thanks in advance,
| Here you can find relevant discussion about your error.
In essence, when you call backward() without arguments it is implicitly converted to backward(torch.Tensor([1])), where torch.Tensor([1]) is the output value with respect to which gradients are calculated.
If you pass 4 (or more) inputs, each needs a value with respect to which you calculate gradient. You can pass torch.ones_like explicitly to backward like this:
import torch
x = torch.tensor([4.0, 2.0, 1.5, 0.5], requires_grad=True)
out = torch.sin(x) * torch.cos(x) + x.pow(2)
# Pass tensor of ones, each for each item in x
out.backward(torch.ones_like(x))
print(x.grad)
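An equivalent alternative sketch: reduce the output to a scalar first. Since each out[i] depends only on x[i], the gradient of the sum reproduces the element-wise derivatives:
x = torch.tensor([4.0, 4.0, 4.0], requires_grad=True)
out = torch.sin(x) * torch.cos(x) + x.pow(2)
out.sum().backward()
print(x.grad)  # tensor([7.8545, 7.8545, 7.8545])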
| https://stackoverflow.com/questions/55749202/ |
Extract elements from .npy file, convert them to PyTorch tensors | I read a .npy file that contains just the labels for images. The labels are stored in dictionary format. I need to convert this to an array of tensors. But I'm unable to extract the elements one by one from the object the file returns, which is of type numpy.ndarray.
import numpy as np
data = np.load('/content/drive/My Drive/targets.npy')
print(data.item())
{0: array(5), 1: array(0), 2: array(4), 3: array(1), 4: array(9), 5: array(2), 6: array(1), 7: array(3)}
print(data[()].values())
dict_values([array(5), array(0), array(4), array(1), array(9), array(2), array(1), array(3)])
I would like to create an array of tensors instead.
Thanks in advance.
| The below worked for me, with guidance by @kmario23
import numpy as np
data = np.load('/content/drive/My Drive/targets.npy')
print(data.item())
{0: array(5), 1: array(0), 2: array(4), 3: array(1), 4: array(9), 5: array(2), 6: array(1), 7: array(3)}
# data is a 0-d numpy.ndarray that contains a dictionary.
print(list(data[()].values()))
[array(5),
array(0),
array(4),
array(1),
array(9),
array(2),
array(1),
array(3),
array(1),
array(4),
array(3)]
# Note: torch.Tensor(5) constructs an *uninitialized* float tensor of size 5,
# e.g. tensor([2.0581e-35, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]),
# while torch.tensor(5) creates a 0-dim tensor holding the value 5.
list_of_labels_array_form = list(data[()].values())
Labels = torch.stack([torch.tensor(i) for i in list_of_labels_array_form])
print(Labels)
tensor([5, 0, 4, ..., 2, 5, 0])
| https://stackoverflow.com/questions/55754400/ |
Replicate subtensors in PyTorch | I have a tensor “image_features” having shape torch.Size([100, 1024, 14, 14]). I need to replicate each subtensor (1024, 14, 14) 10 times, obtaining a tensor having shape torch.Size([1000, 1024, 14, 14]).
Basically, the first ten rows of the resulting tensor should correspond to the first row of the original one, the following ten rows of the resulting tensor should correspond to the second row of the original one, and so on. If possible, I don’t want to create a copy (each replicated subtensor can share the memory with the tensor it is replicated from), but it is ok to create a copy if there isn’t any other way.
How can I do it?
Thank you very much.
| Another approach that would solve your problem is:
orig_shape = (100, 1024, 14, 14)
new_shape = (100, 10, 1024, 14, 14)
input = torch.randn(orig_shape) # [100, 1024, 14, 14]
input = input.unsqueeze(1) # [100, 1, 1024, 14, 14]
input = input.expand(*new_shape) # [100, 10, 1024, 14, 14], still no copy
input = input.contiguous().view(-1, *orig_shape[1:]) # [1000, 1024, 14, 14]; copies of each row end up grouped together
We can verify it.
orig_shape = (2, 3, 4)
new_shape = (2, 2, 3, 4)  # replicate each sub-tensor twice, to keep the printout short
input = torch.randn(orig_shape)
print(input)
input = input.unsqueeze(1)
input = input.expand(*new_shape)
input = input.contiguous().view(-1, *orig_shape[1:])
print(input)
The code snippet results in:
tensor([[[-1.1728, 1.0421, -1.0716, 0.6456],
[-1.2214, 1.1484, -0.1436, 1.2353],
[-0.4395, -0.9473, -0.1382, -0.9357]],
[[-0.4735, -1.4329, -0.0025, -0.6384],
[ 0.5102, 0.7813, 1.2810, -0.6013],
[ 0.6152, 1.1734, -0.4591, -1.7447]]])
tensor([[[-1.1728, 1.0421, -1.0716, 0.6456],
[-1.2214, 1.1484, -0.1436, 1.2353],
[-0.4395, -0.9473, -0.1382, -0.9357]],
[[-1.1728, 1.0421, -1.0716, 0.6456],
[-1.2214, 1.1484, -0.1436, 1.2353],
[-0.4395, -0.9473, -0.1382, -0.9357]],
[[-0.4735, -1.4329, -0.0025, -0.6384],
[ 0.5102, 0.7813, 1.2810, -0.6013],
[ 0.6152, 1.1734, -0.4591, -1.7447]],
[[-0.4735, -1.4329, -0.0025, -0.6384],
[ 0.5102, 0.7813, 1.2810, -0.6013],
[ 0.6152, 1.1734, -0.4591, -1.7447]]])
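A more concise alternative sketch (requires PyTorch >= 1.1), which produces the same grouped ordering but always materializes a copy:
image_features = torch.randn(100, 1024, 14, 14)
out = image_features.repeat_interleave(10, dim=0)  # [1000, 1024, 14, 14]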
| https://stackoverflow.com/questions/55757255/ |
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at torch/csrc/cuda/Module.cpp:51 | When I try to load a pytorch checkpoint:
checkpoint = torch.load(pathname)
I see:
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at torch/csrc/cuda/Module.cpp:51
I created the checkpoint with a GPU available, but now only have CPU available.
How do I load the checkpoint?
| Load the checkpoint data to the best currently available location:
if torch.cuda.is_available():
map_location=lambda storage, loc: storage.cuda()
else:
map_location='cpu'
checkpoint = torch.load(pathname, map_location=map_location)
| https://stackoverflow.com/questions/55759311/ |
Expected object of scalar type Long but got scalar type Byte for argument #2 'target' | I am running a neural network on Colab and came across this error, which was not there when I ran the same code on my local system. I have tried with a reduced batch size too, but the error still persists.
Loading dataset
Start training
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-4-37432f9d142a> in <module>()
70 start_epoch=start_epoch, log=log_interval,
71 checkpoint_path=os.path.join(dataset_dir, "cnn_block_frame_flow"),
---> 72 validate=True, resume=False, flow=True, use_cuda=cuda)
73
74 #model = models.model()
/content/KTH-Action-Recognition/main/train_helper.py in train(model, num_epochs, train_set, dev_set, lr, batch_size, start_epoch, log, checkpoint_path, validate, resume, flow, use_cuda)
107 outputs = get_outputs(model, samples["instance"], flow=flow,
108 use_cuda=use_cuda)
--> 109 loss = criterion(outputs, labels)
110 loss.backward()
111 optimizer.step()
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
902 def forward(self, input, target):
903 return F.cross_entropy(input, target, weight=self.weight,
--> 904 ignore_index=self.ignore_index, reduction=self.reduction)
905
906
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
1968 if size_average is not None or reduce is not None:
1969 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 1970 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
1971
1972
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
1788 .format(input.size(0), target.size(0)))
1789 if dim == 2:
-> 1790 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
1791 elif dim == 4:
1792 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of scalar type Long but got scalar type Byte for argument #2 'target'
Can someone tell me what is causing this error? Thank you.
The title of your question tells you what is causing this error: the target should have type torch.LongTensor, but it is instead a torch.ByteTensor. Before calling nll_loss (i.e., before passing the target to the criterion) do:
target = target.type(torch.LongTensor)
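A device-agnostic equivalent that also works when the target already lives on the GPU:
target = target.long()  # same dtype conversion, but keeps the tensor on its current device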
| https://stackoverflow.com/questions/55762581/ |
How to load pretrained googlenet model in pytorch | I'm trying to finetune a GoogleNet network over a specific dataset but I'm having trouble loading it. What I try now is:
model = torchvision.models.googlenet(pretrained=True)
However I get an error:
AttributeError: module 'torchvision.models' has no attribute 'googlenet'
I have the latest version of torchvision but reinstalled just to be sure, the error is still there.
| You can instead use the GoogLeNet inception_v3 model ("Rethinking the Inception Architecture for Computer Vision"):
import torchvision
google_net = torchvision.models.inception_v3(pretrained=True)
| https://stackoverflow.com/questions/55762706/ |
Got Very Different Scores After Translating Simple Test Model from Keras to PyTorch | I'm trying to transition from Keras to PyTorch.
After reading tutorials and similar questions, I came up with the following simple models to test. However, the two models below gives me very different scores: Keras (0.9), PyTorch (0.03).
Could someone give me guidance?
Basically my dataset has 120 features and multilabels with 3 classes that look like below.
[
[1,1,1],
[0,1,1],
[1,0,0],
...
]
def score(true, pred):
lrl = label_ranking_loss(true, pred)
lrap = label_ranking_average_precision_score(true, pred)
print('LRL:', round(lrl), 'LRAP:', round(lrap))
#Keras:
model= Sequential()
model.add(Dense(60, activation="relu", input_shape=(120,)))
model.add(Dense(30, activation="relu"))
model.add(Dense(3, activation="sigmoid"))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=100)
pred = model.predict(x_test)
score(y_test, pred)
#PyTorch
model = torch.nn.Sequential(
torch.nn.Linear(120, 60),
torch.nn.ReLU(),
torch.nn.Linear(60, 30),
torch.nn.ReLU(),
torch.nn.Linear(30, 3),
torch.nn.Sigmoid())
loss_fn = torch.nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
epochs = 100
batch_size = 32
n_batch = int(x_train.shape[0]/batch_size)
for epoch in range(epochs):
avg_cost = 0
for i in range(n_batch):
x_batch = x_train[i*batch_size:(i+1)*batch_size]
y_batch = y_train[i*batch_size:(i+1)*batch_size]
x, y = Variable(torch.from_numpy(x_batch).float()), Variable(torch.from_numpy(y_batch).float(), requires_grad=False)
pred = model(x)
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
avg_cost += loss.item()/n_batch
print(epoch, avg_cost)
x, y = Variable(torch.from_numpy(x_test).float()), Variable(torch.from_numpy(y_test).float(), requires_grad=False)
pred = model(x)
score(y_test, pred.data.numpy())
| You need to call optimizer.zero_grad() at the start of each iteration, otherwise the gradients from different batches just keep getting accumulated.
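A minimal sketch of the corrected inner loop (only the optimizer.zero_grad() line is new; the Variable wrappers are dropped since plain tensors suffice in modern PyTorch):
for i in range(n_batch):
    x_batch = x_train[i*batch_size:(i+1)*batch_size]
    y_batch = y_train[i*batch_size:(i+1)*batch_size]
    x = torch.from_numpy(x_batch).float()
    y = torch.from_numpy(y_batch).float()
    optimizer.zero_grad()  # clear gradients left over from the previous batch
    pred = model(x)
    loss = loss_fn(pred, y)
    loss.backward()
    optimizer.step()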
| https://stackoverflow.com/questions/55763984/ |
Pytorch custom activation functions? | I'm having issues with implementing custom activation functions in Pytorch, such as Swish. How should I go about implementing and using custom activation functions in Pytorch?
| There are four possibilities depending on what you are looking for. You will need to ask yourself two questions:
Q1) Will your activation function have learnable parameters?
If yes, you have no choice but to create your activation function as an nn.Module class because you need to store those weights.
If no, you are free to simply create a normal function, or a class, depending on what is convenient for you.
Q2) Can your activation function be expressed as a combination of existing PyTorch functions?
If yes, you can simply write it as a combination of existing PyTorch function and won't need to create a backward function which defines the gradient.
If no you will need to write the gradient by hand.
Example 1: SiLU function
The SiLU function f(x) = x * sigmoid(x) does not have any learned weights and can be written entirely with existing PyTorch functions, thus you can simply define it as a function:
def silu(x):
return x * torch.sigmoid(x)
and then simply use it as you would have torch.relu or any other activation function.
Example 2: SiLU with learned slope
In this case you have one learned parameter, the slope, thus you need to make a class of it.
class LearnedSiLU(nn.Module):
    def __init__(self, slope=1):
        super().__init__()
        # wrap the initial value in a Parameter so it gets registered and learned;
        # multiplying a Parameter by a scalar would yield a plain (unregistered) tensor
        self.slope = torch.nn.Parameter(slope * torch.ones(1))

    def forward(self, x):
        return self.slope * x * torch.sigmoid(x)
Example 3: with backward
If you have something for which you need to create your own gradient function, you can look at this example: Pytorch: define custom function
| https://stackoverflow.com/questions/55765234/ |
Neural networks fail to approximate simple multiplication and division | I am trying to fit simple feedforward neural networks on simple data where my goal is to just approximate (a*b*c)/d
max_a=2
max_b = 3000
max_c=10
max_d=1
def generate_data(no_elements=10000):
a = np.random.uniform(0,max_a,no_elements)
b = np.random.uniform(1,max_b,no_elements)
c=np.random.uniform(0.001,max_c,no_elements)
d=np.random.uniform(0.00001,max_d,no_elements)
df=pd.DataFrame({"a":a,"b":b,"c":c,"d":d})
e=(df.a*df.b*df.c)/df.d
df["e"]=e
return(df)
this is how i am generating data
then I did data normalization
df = generate_data(5000)
np_df = df.iloc[:, :4].values
means = np.mean(np_df, axis=0, keepdims=True)
stds = np.std(np_df, axis=0, keepdims=True)
x_train = (np_df - means) / stds
y_train = df.iloc[:, 4].values
and I have built a simple pytorch network for regression so as it has to predict 'e'
class network_Regression(nn.Module):
def __init__(self,layers):
super(network_Regression, self).__init__()
self.linear = nn.ModuleList()
self.relu = nn.ModuleList()
self.layers = layers
for i in range(len(layers)-1):
self.linear.append(nn.Linear(layers[i],layers[i+1]))
if i+1 != len(layers)-1:
self.relu.append(nn.ReLU())
def forward(self,out):
for i in range(len(self.relu)):
out = self.linear[i](out)
out = self.relu[i](out)
out = self.linear[-1](out)
return out
model = network_Regression([4,10,10,1])
criterion= nn.MSELoss()
optimizer=optim.Adam(model.parameters())
But when I tried to train these networks, with epoch counts ranging from 1000 to 0.5M,
the model still wasn't able to find the simple formula (a*b*c)/d = e.
I tried various hidden layer configurations, but the loss stayed around 9 digits.
model.train()
num_epochs = 1000
loss_list=[]
for epoch in range(num_epochs):
for batch_idx, (data, target) in enumerate(data_loader):
#print(batch_idx)
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data.float())
loss = criterion(output, target.float())
#print(batch_idx, loss.data[0])
loss.backward()
optimizer.step()
if epoch >2:
if batch_idx % 200 == 0:
loss_list.append(loss.data.item())
if batch_idx % 400 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(data_loader.dataset),
100. * batch_idx / len(data_loader), loss.data.item()))
Looks like neural networks are bad at multiplication and division;
check out this for details.
So basically I had to log-transform my data. In the above case, to approximate (a*b*c)/d = e the neural network then only has to figure out simple addition and subtraction: the complicated multiplication and division becomes ln(a) + ln(b) + ln(c) - ln(d) = ln(e), and taking the exponential of the predicted ln(e) recovers e. This idea works well.
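A hypothetical sketch of that transform on the arrays from the question (eps and the variable names are assumptions):
eps = 1e-8  # guard against log(0); all generated inputs here are positive
x_train_log = np.log(np_df + eps)    # features become ln(a), ln(b), ln(c), ln(d)
y_train_log = np.log(y_train + eps)  # target becomes ln(e)
# train on (x_train_log, y_train_log); at prediction time recover e with
# e_pred = np.exp(predicted_log_e)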
| https://stackoverflow.com/questions/55774902/ |
Size mismatch for DNN for the MNIST dataset in pytorch | I have to find a way to create a neural network model and train it on the MNIST dataset. I need there to be 5 layers, with 100 neurons each. However, when I try to set this up I get an error that there is a size mismatch. Can you please help? I am hoping that I can train the model below:
class Mnist_DNN(nn.Module):
def __init__(self):
super().__init__()
self.layer1 = nn.Linear(784, 100)
self.layer2 = nn.Linear(100, 100)
self.layer3 = nn.Linear(100, 100)
self.layer4 = nn.Linear(100, 100)
self.layer5 = nn.Linear(100, 10)
def forward(self, xb):
xb = xb.view(-1, 1, 28, 28)
xb = F.relu(self.layer1(xb))
xb = F.relu(self.layer2(xb))
xb = F.relu(self.layer3(xb))
xb = F.relu(self.layer4(xb))
xb = F.relu(self.layer5(xb))
return self.layer5(xb)
You set up your layers to take a batch of 1D vectors of dim 784 (= 28*28). However, in your forward function you view the input as a batch of 2D matrices of size 28x28.
Try viewing the input as a batch of 1D signals:
xb = xb.view(-1, 784)
| https://stackoverflow.com/questions/55777588/ |
PyTorch: apply mapping over singleton dimension of tensor | I'm afraid the title is not very descriptive but I could not think of a better one. Essentially my problem is the following:
I have a pytorch tensor of shape (n, 1, h, w) for arbitrary integers n, h and w (in my specific case this array represents a batch of grayscale images of dimension h x w).
I also have another tensor of shape (m, 2) which maps every possible value in the first array (i.e. the first array can contain values from 0 to m - 1) to some tuple of values. I would like to "apply" this mapping to the first array so that I obtain an array of shape (n, 2, h, w).
I hope this is somewhat clear, I find this hard to express in words, here's a code example (but note that that is not super intuitive either due to the four dimensional arrays involved):
import torch
m = 18
# could also be arbitrary tensor with this shape with values between 0 and m - 1
a = torch.arange(m).reshape(2, 1, 3, 3)
# could also be arbitrary tensor with this shape
b = torch.LongTensor(
[[11, 17, 9, 6, 5, 4, 2, 10, 3, 13, 14, 12, 7, 1, 15, 16, 8, 0],
[11, 8, 4, 14, 13, 12, 16, 1, 5, 17, 0, 10, 7, 15, 9, 6, 2, 3]]).t()
# I probably have to do this and the permute/reshape, but how?
c = b.index_select(0, a.flatten())
# ...
# another approach that I think works (but I'm not really sure why, I found this
# more or less by trial and error). I would ideally like to find a 'nicer' way
# of doing this
c = torch.stack([
b.index_select(0, a_.flatten()).reshape(3, 3, 2).permute(2, 0, 1)
for a_ in a
])
# the end result should be:
#[[[[11, 17, 9],
# [ 6, 5, 4],
# [ 2, 10, 3]],
#
# [[11, 8, 4],
# [14, 13, 12],
# [16, 1, 5]]],
#
#
# [[[13, 14, 12],
# [ 7, 1, 15],
# [16, 8, 0]],
#
# [[17, 0, 10],
# [ 7, 15, 9],
# [ 6, 2, 3]]]]
How can I perform this transformation in an efficient manner? (Ideally not using any additional memory). In numpy this could easily be achieved with np.apply_along_axis but there seems to be no pytorch equivalent to that.
| Here is one way using slicing, stacking, and view-based reshape:
In [239]: half_way = b.shape[0]//2
In [240]: upper_half = torch.stack((b[:half_way, :][:, 0], b[:half_way, :][:, 1]), dim=0).view(-1, 3, 3)
In [241]: lower_half = torch.stack((b[half_way:, :][:, 0], b[half_way:, :][:, 1]), dim=0).view(-1, 3, 3)
In [242]: torch.stack((upper_half, lower_half))
Out[242]:
tensor([[[[11, 17, 9],
[ 6, 5, 4],
[ 2, 10, 3]],
[[11, 8, 4],
[14, 13, 12],
[16, 1, 5]]],
[[[13, 14, 12],
[ 7, 1, 15],
[16, 8, 0]],
[[17, 0, 10],
[ 7, 15, 9],
[ 6, 2, 3]]]])
Some caveats are that this would work only for n=2. However, this is 1.7x faster than your loop based approach, but involves more code.
Here is a more generalized approach, which scales to any positive integer n:
In [327]: %%timeit
...: block_size = b.shape[0]//a.shape[0]
...: seq_of_tensors = [b[block_size*idx:block_size*(idx+1), :].permute(1, 0).flatten().reshape(2, 3, 3).unsqueeze(0) for idx in range(a.shape[0])]
...: torch.cat(seq_of_tensors)
...:
23.5 µs ± 460 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
You can also use a view instead of reshape:
block_size = b.shape[0]//a.shape[0]
seq_of_tensors = [b[block_size*idx:block_size*(idx+1), :].permute(1, 0).flatten().view(2, 3, 3).unsqueeze(0) for idx in range(a.shape[0])]
torch.cat(seq_of_tensors)
# outputs
tensor([[[[11, 17, 9],
[ 6, 5, 4],
[ 2, 10, 3]],
[[11, 8, 4],
[14, 13, 12],
[16, 1, 5]]],
[[[13, 14, 12],
[ 7, 1, 15],
[16, 8, 0]],
[[17, 0, 10],
[ 7, 15, 9],
[ 6, 2, 3]]]])
Note: please observe that I still use a list comprehension since we have to evenly divide our tensor b to permute, flatten, reshape, unsqueeze, and then concatenate/stack along dimension 0. It's still marginally faster than my solution above.
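For completeness, plain advanced indexing also solves the general case in one shot (a hedged alternative sketch, not part of the original answer): index b with the values in a, then move the mapped pair dimension next to the batch dimension.

# a has shape (n, 1, h, w) with values in [0, m); b has shape (m, 2)
c = b[a.squeeze(1)]          # shape (n, h, w, 2): look up each pixel's mapped pair
c = c.permute(0, 3, 1, 2)    # shape (n, 2, h, w), as desired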
| https://stackoverflow.com/questions/55778000/ |
What does "RuntimeError: CUDA error: device-side assert triggered" in PyTorch mean? | I have seen a lot of specific posts to particular case-specific problems, but no fundamental motivating explanation. What does this error:
RuntimeError: CUDA error: device-side assert triggered
mean? Specifically, what is the assert that is being triggered, why is the assert there, and how do we work backwards to debug the problem?
As-is, this error message is near useless in diagnosing any problem because of the generality that it seems to say "some code somewhere that touches the GPU" has a problem. The documentation of Cuda also does not seem helpful in this regard, though I could be wrong.
https://docs.nvidia.com/cuda/cuda-gdb/index.html
| When a device-side error is detected while CUDA device code is running, that error is reported via the usual CUDA runtime API error reporting mechanism. The usual detected error in device code would be something like an illegal address (e.g. attempt to dereference an invalid pointer) but another type is a device-side assert. This type of error is generated whenever a C/C++ assert() occurs in device code, and the assert condition is false.
Such an error occurs as a result of a specific kernel. Runtime error checking in CUDA is necessarily asynchronous, but there are probably at least 3 possible methods to start to debug this.
Modify the source code to effectively convert asynchronous kernel launches to synchronous kernel launches, and do rigorous error-checking after each kernel launch. This will identify the specific kernel that has caused the error. At that point it may be sufficient simply to look at the various asserts in that kernel code, but you could also use step 2 or 3 below.
Run your code with cuda-memcheck. This is a tool something like "valgrind for device code". When you run your code with cuda-memcheck, it will tend to run much more slowly, but the runtime error reporting will be enhanced. It is also usually preferable to compile your code with -lineinfo. In that scenario, when a device-side assert is triggered, cuda-memcheck will report the source code line number where the assert is, and also the assert itself and the condition that was false. You can see here for a walkthrough of using it (albeit with an illegal address error instead of assert(), but the process with assert() will be similar).
It should also be possible to use a debugger. If you use a debugger such as cuda-gdb (e.g. on linux) then the debugger will have back-trace reports that will indicate which line the assert was, when it was hit.
Both cuda-memcheck and the debugger can be used if the CUDA code is launched from a python script.
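For PyTorch specifically, setting the environment variable CUDA_LAUNCH_BLOCKING=1 makes kernel launches synchronous, so the Python stack trace points at the offending call. A hypothetical reproduction of one common trigger (an out-of-range class index):

# run as: CUDA_LAUNCH_BLOCKING=1 python repro.py
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10, device='cuda')            # 10 classes
target = torch.tensor([0, 3, 9, 12], device='cuda')   # 12 is out of range
loss = F.nll_loss(F.log_softmax(logits, dim=1), target)  # fires a device-side assert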
At this point you have discovered what the assert is and where in the source code it is. Why it is there cannot be answered generically. This will depend on the developers intention, and if it is not commented or otherwise obvious, you will need some method to intuit that somehow. The question of "how to work backwards" is also a general debugging question, not specific to CUDA. You can use printf in CUDA kernel code, and also a debugger like cuda-gdb to assist with this (for example, set a breakpoint prior to the assert, and inspect machine state - e.g. variables - when the assert is about to be hit).
With newer GPUs, instead of cuda-memcheck you will probably want to use compute-sanitizer. It works in a similar fashion.
| https://stackoverflow.com/questions/55780923/ |
How to free gpu memory by deleting tensors? | Suppose I create a tensor and put it on the GPU and don't need it later and want to free the GPU memory allocated to it; How do I do it?
import torch
a=torch.randn(3,4).cuda() # nvidia-smi shows that some mem has been allocated.
# do something
# a does not exist and nvidia-smi shows that mem has been freed.
I have tried:
del a
del a; torch.cuda.empty_cache()
But none of them work.
| Running del tensor frees the memory from the GPU but does not return it to the device, which is why the memory is still shown as used by nvidia-smi. You can create a new tensor and it will reuse that memory.
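You can see this from PyTorch's own accounting (a minimal sketch):

import torch

a = torch.randn(3, 4, device='cuda')
print(torch.cuda.memory_allocated())  # > 0: the tensor occupies device memory

del a
print(torch.cuda.memory_allocated())  # 0: freed for reuse by PyTorch's allocator

# nvidia-smi still shows the cached block; hand it back to the driver with:
torch.cuda.empty_cache()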
Sources
https://discuss.pytorch.org/t/how-to-delete-pytorch-objects-correctly-from-memory/947
https://discuss.pytorch.org/t/about-torch-cuda-empty-cache/34232
| https://stackoverflow.com/questions/55788093/ |
In pytorch data parallel mode, how to use a global tensor? | In this example, I wish z_proto could be global across the different GPUs. However, in data parallel mode, it is split across the GPUs as well. How can I solve such a problem? Thank you.
class SequencePrototypeTokenClassification(nn.Module):
def __init__(self,seq_model, label_num):
super(SequencePrototypeTokenClassification, self).__init__()
self.seq_model = seq_model
self.label_num = label_num
def forward(self, input_ids, token_type_ids, attention_mask, labels, z_proto, n_query, target_inds):
z, _ = self.seq_model(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
z_dim = z.size(-1)
zq = z.squeeze().view(-1, z_dim)
dists = euclidean_dist(zq, z_proto)
log_p_y = F.log_softmax(-dists, dim=1).view(-1, self.label_num)
loss_val = -log_p_y.gather(1, self.target_inds).squeeze().view(-1).mean()
_, y_hat = log_p_y.max(1)
return loss_val, y_hat
| It turns out that DataParallel only replicates the nn.Parameters of the nn.Module. So I randomly initialized an nn.Parameter named z_proto in the module and copied the value of the tensor z_proto into the parameter. Then the parameter is replicated onto the 4 GPUs.
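A minimal sketch of that workaround (the z_dim argument and the z_proto_values tensor are hypothetical names for illustration):

import torch
import torch.nn as nn

class SequencePrototypeTokenClassification(nn.Module):
    def __init__(self, seq_model, label_num, z_dim):
        super(SequencePrototypeTokenClassification, self).__init__()
        self.seq_model = seq_model
        self.label_num = label_num
        # Registered as an nn.Parameter so nn.DataParallel replicates it to every GPU
        self.z_proto = nn.Parameter(torch.randn(label_num, z_dim), requires_grad=False)

# Copy the precomputed prototypes into the replicated parameter:
with torch.no_grad():
    model.z_proto.copy_(z_proto_values)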
| https://stackoverflow.com/questions/55792837/ |
TypeError: view() takes at most 2 arguments (3 given) | I tried to use view() in pytorch but I can't input 3 arguments. I don't know why it keeps giving this error. Can anyone help me with this?
def forward(self, input):
lstm_out, self.hidden = self.lstm(input.view(len(input), self.batch_size, -1))
| It looks like your input is a numpy array, not a torch tensor. You need to convert it first, like input = torch.Tensor(input).
| https://stackoverflow.com/questions/55805242/ |
Common variable name acronyms for Pytorch or Tensorflow? | I have seen variable names like ninp (num_input), nhid (num_hidden), and emsize (embedding size) in the pytorch examples github repo. What are some other common acronyms and their meaning/context?
| These are common terminologies used in Sequence Models (e.g. RNNs, LSTMs, GRUs etc.,) Here is a description of what those terms mean:
ninp (num_input) : Dimension of the vectors in the embedding matrix
emsize (embedding size): Dimension of the vectors in the embedding matrix
nhid (num_hidden): the number of "hidden" units that we want to have in each hidden layer
A pictorial description might help to understand it better. Below is a nice illustration. (Credits: Killian Levacher)
In the above figure, emsize is the embedding size (i.e. the dimensionality of the embedding vector). This depends on the model architecture, but most people would use something like 300.
In the above figure, we have five neurons in each "hidden" layer. Hence, the nhid value is 5. The output layer would have a dimensionality equal to the vocabulary size, so that a probability distribution is generated over all the tokens in the vocabulary.
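For concreteness, here is how these quantities typically appear when building a word-level language model (the values are illustrative, not prescribed):

import torch.nn as nn

ntoken = 10000   # vocabulary size
emsize = 300     # embedding dimension (a.k.a. ninp)
nhid = 200       # number of hidden units in each hidden layer

encoder = nn.Embedding(ntoken, emsize)             # token id -> emsize-dim vector
rnn = nn.LSTM(input_size=emsize, hidden_size=nhid)
decoder = nn.Linear(nhid, ntoken)                  # back to a distribution over the vocabulary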
| https://stackoverflow.com/questions/55806201/ |
How can I run pytorch with multiple graphics cards? | I have 4 graphics cards which I want to utilize with pytorch.
I have this net:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
How can I use them on this net?
| You may use torch.nn.DataParallel to distribute your model among many workers.
Just pass your network (torch.nn.Module) to it's constructor and use forward as you would normally. You may also specify on which GPUs it is supposed to run by providing device_ids with List[int] or torch.device.
Just for the sake of code:
import torch
# Your network
network = Net()
network = torch.nn.DataParallel(network)  # wrap (and keep the wrapped module)
# Use it as you normally would; calling the module runs forward under the hood
output = network(data)
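If you want to pin the replication to specific cards, a hedged variant (the device ids are assumptions for a 4-GPU box):

network = torch.nn.DataParallel(Net(), device_ids=[0, 1, 2, 3]).cuda()
output = network(data.cuda())  # the batch is scattered across the listed GPUs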
| https://stackoverflow.com/questions/55812514/ |
How to sum based off index vector | I have 3 vectors - a sum vector, a contribution vector, and a value vector. I want to sum the value vectors according to their contribution vector and place them in their corresponding index in the sum vector. An example is:
A = [0;0] (sum vector), B = [0,0,1,1] (contribution vector) C=[20,30,40,10] (value vector)
Output:
A = [20+30;40+10]
The B vector is the same length as C, and each index in B tells us which position in A the corresponding value should be added to.
I am able to achieve this by a for loop as such:
for index,value in enumerate(C):
A[B[index]]+=value
However, as this will be part of my NN model's forward loop, it will cause a significant performance issue. Specifically, I was looking for a vector/matrix-based approach that would be more efficient. In the example above, something that worked efficiently for me was:
A=torch.zeros(2,1)
C=C.reshape(2,2)
sum=torch.sum(C,1).reshape(2,1)
A += sum
However, I run into issues as it is not always the case that the indexes of A have the same contribution. For example - the case such that B = [0,0,0,1,1] and C=[20,30,40,10,50]. Is there a function or a strategic way to do this for general cases? Thanks!
| You are looking for index_add_()
A.index_add_(0, B, C)
Note that B should be of type torch.long (it is an index vector), and C should be of type torch.float, same as the type of A.
Moreover, you can use the first dim argument to do this summation along different dimensions in case A and C are multi-dimensional tensors.
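A runnable version of the example from the question:

import torch

A = torch.zeros(2)                                 # sum vector
B = torch.tensor([0, 0, 1, 1], dtype=torch.long)   # contribution indices
C = torch.tensor([20., 30., 40., 10.])             # values

A.index_add_(0, B, C)
print(A)  # tensor([50., 50.]) == [20+30, 40+10]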
| https://stackoverflow.com/questions/55819027/ |
Fixing the seed for torch random_split() | Is it possible to fix the seed for torch.utils.data.random_split() when splitting a dataset so that it is possible to reproduce the test results?
| You can use torch.manual_seed function to seed the script globally:
import torch
torch.manual_seed(0)
See reproducibility documentation for more information.
If you want to specifically seed torch.utils.data.random_split you could "reset" the seed to it's initial value afterwards. Simply use torch.initial_seed() like this:
torch.manual_seed(torch.initial_seed())
AFAIK pytorch does not provide arguments like seed or random_state (which could be seen in sklearn for example).
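For example, a minimal sketch of seeding the split (here dataset stands for any map-style dataset with 10000 items - a placeholder, not a variable from the question):

import torch
from torch.utils.data import random_split

torch.manual_seed(42)  # the split is now reproducible across runs
train_set, test_set = random_split(dataset, [8000, 2000])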
| https://stackoverflow.com/questions/55820303/ |
Log-likelihood function in NumPy | I followed this tutorial
and I was confused by the part where the author defines the negative log-likelihood loss function.
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
Here, target.shape[0] is 64 and target is a vector with length 64
tensor([5, 0, 4, 1, 9, 2, 1, 3, 1, 4, 3, 5, 3, 6, 1, 7, 2, 8, 6, 9, 4, 0, 9, 1, 1, 2, 4, 3, 2, 7, 3, 8, 6, 9, 0, 5, 6, 0, 7, 6, 1, 8, 7, 9, 3, 9, 8, 5, 9, 3, 3, 0, 7, 4, 9, 8, 0, 9, 4, 1, 4, 4, 6, 0]).
How does that numpy indexing result in the loss function? Moreover, what should the output of a numpy array be when there is a range() and another array inside the square brackets?
| In the tutorial, both input and target are torch.tensor.
The negative log likelihood loss is computed as below:
nll = -(1/B) * sum(logPi_(target_class)) # for all sample_i in the batch.
Where:
B: The batch size
C: The number of classes
Pi: of shape [num_classes,] the probability vector of prediction for sample i. It is obtained by the softmax value of logit vector for sample i.
logPi: logarithm of Pi, we can simply get it by F.log_softmax(logit_i).
Let's break it down for an easy example:
input is expected as the log_softmax values, of shape [B, C].
target is expected as the ground truth classes, of shape [B, ].
For less cluttering, let's take B = 4, and C = 3.
import torch
B, C = 4, 3
input = torch.randn(B, C)
"""
>>> input
tensor([[-0.5043, 0.9023, -0.4046],
[-0.4370, -0.8637, 0.1674],
[-0.5451, -0.5573, 0.0531],
[-0.6751, -1.0447, -1.6793]])
"""
target = torch.randint(low=0, high=C, size=(B, ))
"""
>>> target
tensor([0, 2, 2, 1])
"""
# The unrolled version
nll = 0
nll += input[0][target[0]] # add -0.5043
nll += input[1][target[1]] # add -0.1674
nll += input[2][target[2]] # add 0.0531
nll += input[3][target[3]] # add -1.0447
nll *= (-1/B)
print(nll)
# tensor(0.3321)
# The compact way using numpy indexing
_nll = -input[range(0, B), target].mean()
print(_nll)
# tensor(0.3321)
Two ways of computing are similar. Hope this helps.
| https://stackoverflow.com/questions/55820628/ |
Pytorch 1.0: what does net.to(device) do in nn.DataParallel? | The following code from the pytorch data parallelism tutorial seems strange to me:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model = nn.DataParallel(model)
model.to(device)
To the best of my knowledge, model.to(device) copies the data to the GPU.
DataParallel splits your data automatically and sends job orders to multiple models on several GPUs. After each model finishes their job, DataParallel collects and merges the results before returning it to you.
If DataParallel does the job of copying, what does to(device) do here?
| They added a few lines in the tutorial to explain nn.DataParallel.
DataParallel splits your data automatically, and send job orders to multiple models on different GPUs using the data. After each model finishes their job, DataParallel collects and merges the results for you.
The above quote can be understood as: nn.DataParallel is just a wrapper class that tells model.cuda() it should make multiple copies of the model, one per GPU.
In my case, I don't have any GPU on my laptop. I still call nn.DataParallel() without any problem.
import torch
import torchvision
model = torchvision.models.alexnet()
model = torch.nn.DataParallel(model)
# No error appears if I don't move the model to `cuda`
| https://stackoverflow.com/questions/55828687/ |
How to run particular code on the GPU using PyTorch? | I am using image processing code in Python OpenCV. Since that process takes a lot of time for, say, 30 images, I tried to process these images in parallel using multiprocessing. The multiprocessing part works well on the CPU, but I want to use that multiprocessing on the GPU (CUDA).
I use torch.multiprocessing for running tasks in parallel. So I am using torch.device('cuda') for our class to run the whole thing on this particular device. When I run the code it shows the device as "cuda", but it is not doing any GPU processing.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torch.multiprocessing import Process, Pool, Manager, set_start_method
import sys
import os
class RoadShoulderWidth(nn.Module):
def __init__(self):
super(RoadShoulderWidth, self).__init__()
pass
    # Want to run the method below in parallel for 30 images.
@staticmethod
def get_dim(image, road_shoulder_width_list):
..... code
def get_road_shoulder_width(self, _root_dir, _img_path_list):
manager = Manager()
road_shoulder_width_list = manager.list()
processes = []
for img_path in img_path_list[:30]:
img = cv2.imread(_root_dir + '/' + img_path)
img = img[72 * 5:72 * 6, 0:1280]
# Do work
p = Process(target=self.get_dim,args=(img,road_shoulder_width_list))
p.start()
processes.append(p)
for p in processes:
p.join()
return road_shoulder_width_list
Use below set of code to run your class
if __name__ == '__main__':
root_dir = '/home/nikhil_m/r'
img_path_list = os.listdir(root_dir)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
dataloader_kwargs = {'pin_memory': True}
set_start_method('fork')
obj = RoadShoulderWidth().to(device)
val = obj.get_road_shoulder_width(str(root_dir), img_path_list)
print(val)
print(torch.cuda.is_available())
Can anybody suggest me how to fix this?
| Your class RoadShoulderWidth is an nn.Module subclass, which lets you use .to(device). This only means that all nn.Module objects or nn.Parameters that are members of your RoadShoulderWidth object are moved to the device. As your example shows, there are none, so nothing happens.
In general PyTorch does not move code to GPU but data. If all data of a pytorch operation are on the GPU (e.g. a + b, a and b are on GPU) then the operation is executed on the GPU. You can move the data with a.to(device), given a is a torch.Tensor object.
PyTorch can only execute its own operations on GPU. It's not able to execute OpenCV code on GPU.
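A small illustration of that "data on GPU => operation on GPU" rule:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
a = torch.randn(1000, 1000).to(device)  # the data is moved, not the code
b = torch.randn(1000, 1000).to(device)
c = a @ b                               # both operands live on the GPU, so the matmul runs there
print(c.device)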
| https://stackoverflow.com/questions/55830960/ |
GPU out of memory when initializing network | I am trying to initialize a CNN and then put it on my GPU for training. When I put it on GPU I get the error: (CUDA error: out of memory). I have run similar networks with no such problems. This is the only thing in cuda as I have not loaded any images as of yet. Any ideas as to what is going wrong?
I am using pytorch version 0.4.1 on a GTX 1070ti 8GB.
| NVIDIA-SMI 410.104 Driver Version: 410.104 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 107... Off | 00000000:01:00.0 On | N/A |
| 0% 43C P2 39W / 180W | 8024MiB / 8111MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1129 G /usr/lib/xorg/Xorg 36MiB |
| 0 1164 G /usr/bin/gnome-shell 57MiB |
| 0 1415 G /usr/lib/xorg/Xorg 200MiB |
| 0 1548 G /usr/bin/gnome-shell 90MiB |
| 0 6323 C /usr/bin/python3 525MiB |
| 0 9521 C /usr/bin/python3 1827MiB |
| 0 18821 C /usr/bin/python3 4883MiB |
| 0 27137 G ...uest-channel-token=16389326112703159917 45MiB |
| 0 29161 C /usr/bin/python3 355MiB |
I have tried reducing the size of the linear layers with no luck.
net = piccnn()
net.to(device)
| This issue happened to me once when a GPU driver was out of date. My GPU was a 1070 4 gig. I'd recommend a reinstall of drivers and restart.
| https://stackoverflow.com/questions/55836293/ |
TypeError when adding cuda device | I'm running a simple demo of Pytorch 1.0, and got stuck when trying CUDA settings (VS Code 1.33.1, Python 3.6).
My pytorch code is as followed.
import torch
from torch import cuda
if cuda.is_available():
devic=cuda.device(0)
layer=torch.rand([5,3,2],requires_grad=True)
Everything worked fine...But when I tried to add cuda device
layer=torch.rand([5,3,2],requires_grad=True,device=devic)
There raised a TypeError
Traceback (most recent call last):
File "c:\Users\H\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "c:\Users\H\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\lib\python\ptvsd\__main__.py", line 410, in main
run()
File "c:\Users\H\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\lib\python\ptvsd\__main__.py", line 291, in run_file
runpy.run_path(target, run_name='__main__')
File "D:\ProgramData\Anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\ProgramData\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Users\H\Desktop\pth_test\tutorial1.py", line 25, in <module>
layer1=torch.rand([5,3,2],requires_grad=True,device=devic)
TypeError: rand() received an invalid combination of arguments - got (list, requires_grad=bool, device=device), but expected one of:
* (tuple of ints size, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
* (tuple of ints size, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool requires_grad)
Changing rand() to randn() affected nothing, while empty() and zeros() raised another TypeError:
Traceback (most recent call last):
File "c:\Users\H\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "c:\Users\H\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\lib\python\ptvsd\__main__.py", line 410, in main
run()
File "c:\Users\H\.vscode\extensions\ms-python.python-2019.4.11987\pythonFiles\lib\python\ptvsd\__main__.py", line 291, in run_file
runpy.run_path(target, run_name='__main__')
File "D:\ProgramData\Anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\ProgramData\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Users\H\Desktop\pth_test\tutorial1.py", line 25, in <module>
layer1=torch.empty([5,3,2],requires_grad=True,device=devic)
TypeError: empty(): argument 'device' must be torch.device, not device
Things are out of control :( Any help will be appreciated
| Just exchange devic=cuda.device(0) to devic=torch.device('cuda:0').
The - confusing - reason is that torch.device is what's used to allocate a tensor to a physical device, while torch.cuda.device is a context manager that tells torch on which GPU to compute stuff.
so if you do
torch.zeros(1, device=torch.device('cuda:0'))
all will be good; if, however, you do
torch.zeros(1, device=torch.cuda.device(0))
you'll get the same error as you did
TypeError: zeros(): argument 'device' must be torch.device, not device
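To illustrate the distinction, torch.cuda.device is meant to be used like this (a minimal sketch):

import torch

# Context manager selecting the current GPU; inside it, plain 'cuda' means GPU 0:
with torch.cuda.device(0):
    x = torch.zeros(1, device=torch.device('cuda'))
print(x.device)  # cuda:0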
| https://stackoverflow.com/questions/55852727/ |
Tensor reduction based off index vector | As an example, I have 2 tensors: A = [1;2;3;4;5;6;7] and B = [2;3;2]. The idea is that I want to reduce A based on B: B's values represent how many consecutive values of A to sum, so B = [2;3;2] means the reduced A shall be the sum of the first 2 values, the next 3, and the last 2: A' = [(1+2);(3+4+5);(6+7)]. It is apparent that the sum of B shall always equal the length of A. I'm trying to do this as efficiently as possible - preferably with specific functions or matrix operations contained within pytorch/python. Thanks!
| Here is the solution.
First, we create an array of indices B_idx with the same size as A.
Then, accumulate (add) all elements of A based on the indices B_idx using index_add_.
A = torch.arange(1, 8)
B = torch.tensor([2, 3, 2])
B_idx = [idx.repeat(times) for idx, times in zip(torch.arange(len(B)), B)]
B_idx = torch.cat(B_idx) # tensor([0, 0, 1, 1, 1, 2, 2])
A_sum = torch.zeros_like(B)
A_sum.index_add_(dim=0, index=B_idx, source=A)
print(A_sum) # tensor([ 3, 12, 13])
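As a side note, on newer PyTorch versions (1.1+, if available in your setup) the B_idx construction can be collapsed into a single call:

B_idx = torch.repeat_interleave(torch.arange(len(B)), B)
# tensor([0, 0, 1, 1, 1, 2, 2])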
| https://stackoverflow.com/questions/55854761/ |
Computational graph vs (computer algebra) symbolic expression | I was reading Baydin et al, Automatic Differentiation in Machine Learning: a Survey, 2018 (Arxiv), which differentiates between symbolic differentiation and automatic differentiation (AD). It then says:
AD Is Not Symbolic Differentiation.
Symbolic differentiation is the automatic manipulation of [symbolic] expressions.
AD can be thought of as performing a non-standard interpretation of a computer program
where this interpretation involves augmenting the standard computation with the calculation of various derivatives.
Evaluation traces form the basis of the AD techniques.
[A computational graph (Bauer, 1974) visualizes dependency relations of (input, working, output) variables in evaluation traces.]
It then goes on by describing how to compute the derivative with AD (in forward or backward mode). The description is basically transforming the evaluation trace / computational graph.
Autograd, Chainer, and PyTorch provide general-purpose reverse mode AD.
It also discusses Theano, TensorFlow, and others, but it basically compares define-and-run / static computational graph (Theano, TF) vs define-by-run / dynamic computational graph (PyTorch, TF Eager).
(This would be orthogonal in my understanding to the question of how AD is performed, or would mostly just change how AD is implemented, but not so much the concept of AD.)
Theano is a computational graph optimizer and compiler [...] and it currently handles
derivatives in a highly optimized form of symbolic differentiation. The result can be interpreted as a
hybrid of symbolic differentiation and reverse mode AD, but Theano does not use the general-purpose
reverse accumulation as we describe in this paper. (Personal communication with the authors.)
I'm not sure if the authors imply that Theano/TF do not provide general-purpose reverse mode AD (which would be wrong in my understanding).
I don't exactly understand how Theano does not use the general-purpose reverse accumulation.
Also, I don't understand how symbolic differentiation is different from AD, given this definition.
Or: How are symbolic expressions different from computational graphs?
Related is also differentiable programming
differentiable directed graphs assembled from functional blocks
where I again do not see the difference to a computational graph.
And backpropagation (BP):
The resulting algorithm is essentially equivalent to transforming the network evaluation function composed with the objective function under reverse mode AD, which, as we
shall see, actually generalizes the backpropagation idea.
I don't see how reverse mode AD is more general than backpropagation. Is it? How?
Schmidhuber, Deep Learning in Neural Networks: An Overview, 2014 (section 5.5) (also) states:
BP is also known as the reverse mode of automatic differentiation (Griewank, 2012).
| This is a nice question, which gets at some fundamental differences in AD and also some fundamental design differences between big ML libraries like PyTorch and TensorFlow. In particular, I think understanding the difference between define-by-run and define-and-run AD is confusing and takes some time to appreciate.
Backpropagation versus Reverse-Mode AD?
You can see a stack overflow question here, and my answer to it. Basically, the difference is whether you want the gradient of a scalar-valued function R^n -> R or the vector-Jacobian product of a vector-valued function R^n -> R^m. Backpropagation assumes you want the gradient of a scalar loss function, and is a term most commonly used in the machine learning community to talk about neural network training.
Reverse-mode AD is therefore more general than backpropagation.
How is symbolic differentiation different from AD?
Symbolic differentiation acts on symbols which represent inputs, while AD computes a numerical value of the derivative for a given input.
For example: suppose I have the function y = x^2. If I were to compute the symbolic derivative of y, I would get the value 2x as the symbolic derivative. Now, for any value of x, I immediate know the value of the derivative at that x. But if I were to perform automatic differentiation, I would first set the value of x, say x=5, my AD tool would tell me the derivative is 2*5, but it wouldn't know anything about the derivative at x=4 since it only computed the derivative at x=5, not a symbolic expression for the derivative.
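We can make this concrete with PyTorch's reverse-mode AD (a tiny sketch): it returns the numeric value of the derivative at the chosen x, never the expression 2x.

import torch

x = torch.tensor(5.0, requires_grad=True)
y = x ** 2
y.backward()
print(x.grad)  # tensor(10.) -- dy/dx evaluated at x=5 only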
Difference between define-and-run / static computational graph and define-by-run / dynamic computational graph?
As you point out, TF1 and Theano are define-and-run, while Pytorch, Autograd, and TF2 are define-by-run. What is the difference?
In TensorFlow 1, you told TensorFlow what you were going to do, and then TensorFlow prepared to perform those computations on some data by building the static computational graph, and then finally you received the data and performed the calculations. So step 1 was telling TensorFlow what you were going to do, and step 2 was performing that calculation once TensorFlow got some data.
In Autograd, you don't tell it what you are going to do before you do it. Autograd, unlike TF1, finds out what you are going to do to your data after it receives the data. If it receives a vector, it has no idea what computations are going to be performed on the vector, because it has no static computational graph ahead of time. It "builds the graph" by recording operations on each variable as the code executes, and then at the end of your computation you have a list of the operations which were performed which you can traverse backwards. This allows you to easily include control flow like if statements. Handling control flow in a define-and-run framework is much more difficult.
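For instance, in a define-by-run framework control flow is just ordinary Python, and the tape records whichever branch actually executed (a minimal sketch):

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 if x > 1 else x ** 2  # the branch taken depends on the data
y.backward()
print(x.grad)  # tensor(12.) == 3 * x**2 at x=2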
Why does Theano and TF1 not provide general purpose reverse-mode AD?
Theano and TF1 do not provide general-purpose AD because they don't allow for control flow. Actually, TF1 did, but it was a mess.
Difference between differentiable programming and computational graph?
From Wikipedia:
"Differentiable programming is a programming paradigm in which the programs can be differentiated throughout, usually via automatic differentiation."
So differentiable programming is a paradigm for designing programs. A computational graph, on the other hand, is an abstraction used in the AD field for understanding the computations performed by a differentiable computer program. One is a programming paradigm, one is a programming abstraction.
| https://stackoverflow.com/questions/55868135/ |
Does Google-Colab continue running the script when "Runtime disconnected"? | I am training a neural network for Neural Machine Translation on Google Colaboratory. I know that the limit before disconnection is 12 hrs, but I am frequently disconnected before that (after 4 or 6 hrs). The amount of time required for the training is more than 12 hrs, so I save checkpoints every 5000 epochs.
I don't understand: when I am disconnected from the Runtime (GPU is used), is the code still executed by Google on the VM? I ask because I can easily save the intermediate models to Drive, and so continue training even if I am disconnected.
Does anyone know it?
| Yes, for ~1.5 hours after you close the browser window.
To keep things running longer, you'll need an active tab.
| https://stackoverflow.com/questions/55874473/ |
tuple object not callable when building a CNN in Pytorch | I am new to neural networks and currently trying to build a CNN with 2 conv layers.
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 16, kernel_size = 3, stride = 1, padding = 1),
self.maxp1 = nn.MaxPool2d(2),
self.conv2 = nn.Conv2d(in_channels = 16, out_channels = 16, kernel_size = 3, stride = 1, padding = 1),
self.fc1 = nn.Linear(16, 64),
self.fc2 = nn.Linear(64, 10)
def forward(self, x):
x = nn.ReLU(self.maxp1(self.conv1(x)))
x = nn.ReLU(self.maxp2(self.conv1(x)))
x = x.view(x.size(0), -1)
x = nn.ReLu(self.fc1(x))
return self.fc2
What I was trying to do was ConvLayer- ReLu activation - Max Pooling 2x2 - ConvLayer - ReLu activation - Flatten Layer - Fully Connect - ReLu - Fully Connected
However, this gives me TypeError: 'tuple' object is not callable on x = nn.ReLU(self.maxp1(self.conv1(x)))
How can I fix this?
|
You may change nn.ReLU to F.relu.
If you want to use nn.ReLU(), you had better declare it as part of the __init__ method and call it later in forward(). Note also that the trailing commas after each layer in your __init__ turn those attributes into tuples - that is what actually raises 'tuple' object is not callable - so they must be removed:
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # no trailing commas here, otherwise each attribute becomes a tuple
        self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 16, kernel_size = 3, stride = 1, padding = 1)
        self.maxp1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(in_channels = 16, out_channels = 16, kernel_size = 3, stride = 1, padding = 1)
        self.fc1 = nn.Linear(16, 64)
        self.fc2 = nn.Linear(64, 10)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        x = self.relu(self.maxp1(self.conv1(x)))
        x = self.relu(self.conv2(x))
        x = x.view(x.size(0), -1)
        x = self.relu(self.fc1(x))
        return self.fc2(x)   # call the layer, don't return the module itself
| https://stackoverflow.com/questions/55874539/ |
How to get an output dimension for each layer of the Neural Network in Pytorch? | class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.net = nn.Sequential(
nn.Conv2d(in_channels = 3, out_channels = 16),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(in_channels = 16, out_channels = 16),
nn.ReLU(),
Flatten(),
nn.Linear(4096, 64),
nn.ReLU(),
nn.Linear(64, 10))
def forward(self, x):
return self.net(x)
I have created this model without firm knowledge of neural networks, and I just tweaked parameters until it worked in training. I am not sure how to get the output dimension for each layer (e.g. the output dimension after the first layer).
Is there an easy way to do this in Pytorch?
| A simple way is:
Pass the input to the model.
Print the size of the output after passing every layer.
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.net = nn.Sequential(
nn.Conv2d(in_channels = 3, out_channels = 16, kernel_size = 3),  # kernel_size is required; 3 assumed here
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(in_channels = 16, out_channels = 16, kernel_size = 3),
nn.ReLU(),
Flatten(),
nn.Linear(4096, 64),
nn.ReLU(),
nn.Linear(64, 10))
def forward(self, x):
for layer in self.net:
x = layer(x)
print(x.size())
return x
model = Model()
x = torch.randn(1, 3, 224, 224)
# Let's print it
model(x)
But be careful with the input size because you are using nn.Linear in your net. It would cause incompatible input size for nn.Linear if your input size is not 4096.
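As an aside (an optional third-party tool, assuming it is available in your environment), the torchsummary package prints a per-layer output-shape table directly:

from torchsummary import summary
summary(model, input_size=(3, 224, 224))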
| https://stackoverflow.com/questions/55875279/ |
Pytorch tensor indexing: How to gather rows by tensor containing indices | I have the tensors:
ids: shape (7000,1) containing indices like [[1],[0],[2],...]
x: shape(7000,3,255)
The ids tensor encodes, for each row, the index along the second (size-3) dimension of x which should be selected.
I want to gather the selected slices in a resulting vector:
result: shape (7000,255)
Background:
I have some scores (shape = (7000,3)) for each of the 3 elements and want only to select the one with the highest score. Therefore, I used the function
ids = torch.argmax(scores,1,True)
giving me the maximum ids. I already tried to do it with gather function:
result = x.gather(1,ids)
but that didn't work.
| Here is a solution you may be looking for:
ids = ids.repeat(1, 255).view(-1, 1, 255)
An example as below:
x = torch.arange(24).view(4, 3, 2)
"""
tensor([[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15],
[16, 17]],
[[18, 19],
[20, 21],
[22, 23]]])
"""
ids = torch.randint(0, 3, size=(4, 1))
"""
tensor([[0],
[2],
[0],
[2]])
"""
idx = ids.repeat(1, 2).view(4, 1, 2)
"""
tensor([[[0, 0]],
[[2, 2]],
[[0, 0]],
[[2, 2]]])
"""
torch.gather(x, 1, idx)
"""
tensor([[[ 0, 1]],
[[10, 11]],
[[12, 13]],
[[22, 23]]])
"""
| https://stackoverflow.com/questions/55881002/ |
How to generate a burst of images from a single image by adding misalignment? | I'm learning about image denoising and Pytorch. I want to get a burst of images generated from a single image. For example, I have an image, then I randomly crop a patch of a specific size from it. Then I want to add a 1- or 2-pixel shift to it to get a new image with a tiny difference. What could I do? Is it better to use some techniques from PIL or elsewhere?
| You should use the transforms to do some image augmentation for your problem.
As I read your comment, you can restrict translate = (a, b) to do some tiny random shifts in both dimensions (the fractions are relative to the image size, so small values give 1-2 pixel shifts):
import torchvision.transforms as transforms
import PIL.Image

# degrees=0 disables rotation; only the tiny random translation remains
transform = transforms.RandomAffine(degrees=0, translate=(0.01, 0.01))
img = PIL.Image.open('path/img')
new_img = transform(img)
If you want to perform more transforms like Crop as well, group all the transforms into one big transform using transforms.Compose, as sketched below. Here is your reference.
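A possible burst pipeline (the patch size and shift fractions are illustrative assumptions): crop one patch, then apply the random shift repeatedly.

import torchvision.transforms as transforms

patch = transforms.CenterCrop(64)(img)                              # fix one patch
shift = transforms.RandomAffine(degrees=0, translate=(0.02, 0.02))  # ~1 px jitter on 64 px
burst = [shift(patch) for _ in range(8)]                            # 8 slightly misaligned copies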
| https://stackoverflow.com/questions/55881517/ |
How can I optimize gradient flow in an LSTM with Pytorch? | I'm working with an LSTM on time-series data and I've observed a problem in the gradients of my network. I have one layer of 121 LSTM cells. For each cell I have one input value and I get one output value. I work with a batch size of 121 values and I define the LSTM cell with batch_first = True, so my outputs are [batch, timestep, features].
Once I have the outputs (a tensor of size [121,121,1]), I calculate the loss using MSELoss() and backpropagate it. And here the problem appears. Looking at the gradients of each cell, I notice that the gradients of the first 100 cells (more or less) are null.
In theory, if I'm not wrong, when I backpropagate the error I calculate a gradient for each output, so I have a gradient for each cell. If that is true, I can't understand why in the first cells they are zero.
Does somebody know what is happening?
Thank you!
PS.: I show you the gradient flow of the last cells:
Update:
As I tried to ask before, I still have a question about LSTM backpropagation. As you can see from the image below, in one cell, apart from the gradients that come from other cells, I think there’s also another gradient form itself.
For example, let’s look at the cell 1. I get the output y1 and I calculate the loss E1. I do the same with other cells. So, when I backpropagate in cell 1, I get dE2/dy2 * dy2/dh1 * dh1/dw1 + ... which are the gradients related to following cells in the network (BPTT) as @kmario23 and @DavidNg explained. And I also have the gradient related to E1 (dE1/dy1 * dy1/dw1). The first gradients can vanish during the flow, but this one not.
So, to sum up, although I have a long layer of LSTM cells, to my understanding each cell has a gradient related only to itself, so I don't understand why I have gradients equal to zero. What happens with the error related to E1? Why is only BPTT calculated?
| I have been dealing with these problems several times. And here is my advice:
Use smaller number of timesteps
The hidden output of the previous timestep is passed to the current steps and multiplied by the weights. When you multiply several times, the gradient will explode or vanish exponentially with the number of timesteps.
Let's say:
# it's exploding
1.01^121 = 101979 # imagine how large it is when the weight is not 1.01
# or it's vanishing
0.9^121 = 2.9063214161987074e-06 # ~ 0.0 when we init the weight smaller than 1.0
For less clutter, I take the example of a simple RNNCell - with weights W_ih and W_hh and no bias. In your case, W_hh is just a single number, but the case generalizes to any matrix W_hh. We use the identity activation as well.
If we unroll the RNN along all the time steps K=3, we get:
h_1 = W_ih * x_0 + W_hh * h_0 (1)
h_2 = W_ih * x_1 + W_hh * h_1 (2)
h_3 = W_ih * x_2 + W_hh * h_2 (3)
Therefore, when we need to update the weights W_hh, we have to accumulate all the gradients from steps (1), (2), (3).
grad(W_hh) = grad(W_hh at step 1) + grad(W_hh at step 2) + grad(W_hh at step 3)
# step 3
grad(W_hh at step3) = d_loss/d(h_3) * d(h_3)/d(W_hh)
grad(W_hh at step3) = d_loss/d(h_3) * h_2
# step 2
grad(W_hh at step2) = d_loss/d(h_2) * d(h_2)/d(W_hh)
grad(W_hh at step2) = d_loss/d(h_3) * d_(h_3)/d(h_2) * d(h_2)/d(W_hh)
grad(W_hh at step2) = d_loss/d(h_3) * d_(h_3)/d(h_2) * h_1
# step 1
grad(W_hh at step1) = d_loss/d(h_1) * d(h_1)/d(W_hh)
grad(W_hh at step1) = d_loss/d(h_3) * d(h_3)/d(h_2) * d(h_2)/d(h_1) * d(h_1)/d(W_hh)
grad(W_hh at step1) = d_loss/d(h_3) * d(h_3)/d(h_2) * d(h_2)/d(h_1) * h_0
# As we also:
d(h_i)/d(h_i-1) = W_hh
# Then:
grad(W_hh at step3) = d_loss/d(h_3) * h_2
grad(W_hh at step2) = d_loss/d(h_3) * W_hh * h_1
grad(W_hh at step1) = d_loss/d(h_3) * W_hh * W_hh * h_0
Let d_loss/d(h_3) = v
# We accumulate all gradients for W_hh
grad(W_hh) = v * h_2 + v * W_hh * h_1 + v * W_hh * W_hh * h_0
# If W_hh is initialized too big >> 1.0, grad(W_hh) explode quickly (-> infinity).
# If W_hh is initialized too small << 1.0, grad(W_hh) vanishes quickly (-> 0), since h_2, h_1 are vanishing after each forward step (exponentially)
Although the LSTM cell has different gates (e.g. the forget gate, which prunes irrelevant long dependencies across timesteps) to mitigate these problems, it is still affected by a large number of timesteps. It is still a big open question for sequential data how to design network architectures that learn long dependencies.
To avoid the problems, just split the sequence into subsequences with a smaller number of timesteps (seq_len).
bs = 121
seq_len = 121
new_seq_len = seq_len // k # k = 2, 2.5 or anything to experiment
X (of [bs,seq_len, 1]) -> [ X1[bs, new_seq_len, 1], X2[bs, new_seq_len, 1],...]
Then, you pass each small batch Xi into the model, such that the initial hidden state is h_(i-1), the hidden output of the previous chunk X(i-1):
h_i = model(Xi, h_(i-1))
So this helps the model learn long dependencies comparable to a model run over the full 121 timesteps, as in the sketch below.
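A truncated-BPTT sketch of that loop (criterion, optimizer, and the per-chunk targets Yi are hypothetical placeholders; the model is assumed to accept and return a hidden state):

h = None
for Xi, Yi in zip(X.split(new_seq_len, dim=1), Y.split(new_seq_len, dim=1)):
    out, h = model(Xi, h)                  # carry the hidden state across chunks
    h = tuple(s.detach() for s in h)       # but cut the gradient path between them
    loss = criterion(out, Yi)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()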
| https://stackoverflow.com/questions/55883197/ |
The `device` argument should be set by using `torch.device` or passing a string as an argument | My data iterator currently runs on the CPU as device=0 argument is deprecated. But I need it to run on the GPU with the rest of the model etc.
Here is my code:
pad_idx = TGT.vocab.stoi["<blank>"]
model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)
model = model.to(device)
criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion = criterion.to(device)
BATCH_SIZE = 12000
train_iter = MyIterator(train, device, batch_size=BATCH_SIZE,
repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
batch_size_fn=batch_size_fn, train=True)
valid_iter = MyIterator(val, device, batch_size=BATCH_SIZE,
repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
batch_size_fn=batch_size_fn, train=False)
#model_par = nn.DataParallel(model, device_ids=devices)
The above code gives this error:
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
I have tried passing in 'cuda' as an argument instead of device=0 but I receive this error:
<ipython-input-50-da3b1f7ed907> in <module>()
10 train_iter = MyIterator(train, 'cuda', batch_size=BATCH_SIZE,
11 repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
---> 12 batch_size_fn=batch_size_fn, train=True)
13 valid_iter = MyIterator(val, 'cuda', batch_size=BATCH_SIZE,
14 repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
TypeError: __init__() got multiple values for argument 'batch_size'
I have also tried passing in device as an argument. Device being defined as device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
But receive the same error as just above.
Any suggestions would be much appreciated, thanks.
| pad_idx = TGT.vocab.stoi["<blank>"]
model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)
model = model.to(device)
criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
criterion = criterion.to(device)
BATCH_SIZE = 12000
train_iter = MyIterator(train, batch_size=BATCH_SIZE, device = torch.device('cuda'),
repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
batch_size_fn=batch_size_fn, train=True)
valid_iter = MyIterator(val, batch_size=BATCH_SIZE, device = torch.device('cuda'),
repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
batch_size_fn=batch_size_fn, train=False)
After lots of trial and error I managed to fix it by setting device = torch.device('cuda') instead of device=0.
| https://stackoverflow.com/questions/55883389/ |
pytorch compute pairwise difference: Incorrect result in NumPy vs PyTorch and different PyTorch versions | Suppose I have two arrays, and I want to calculate row-wise differences between every two rows of two matrices of the same shape as follows. This is how the procedure looks like in numpy, and I want to replicate the same thing in pytorch.
>>> a = np.array([[1,2,3],[4,5,6]])
>>> b = np.array([[3,4,5],[5,3,2]])
>>> c = a[np.newaxis,:,:] - b[:,np.newaxis,:]
>>> print(c)
[[[-2 -2 -2]
[ 1 1 1]]
[[-4 -1 1]
[-1 2 4]]]
BTW, I tried the same thing using pytorch, but it does not work. Is there any way we could accomplish the same thing in pytorch?
>>> import torch
>>> a = torch.from_numpy(a)
>>> b = torch.from_numpy(b)
>>> c1 = a[None,:,:]
>>> c2 = b[:,None,:]
>>> diff = c1 - c2
>>> print(diff.size())
torch.Size([1, 2, 3])
I was actually looking for torch.Size([2,2,3]). (P.S. I also tried unsqueeze from pytorch, but it doesn't work).
| The issue arises because of using PyTorch 0.1. If using PyTorch 1.0.1, the same operation of NumPy generalize to PyTorch without any modifications and issues. Here is a snapshot of the run in Colab.
As we can see, we indeed get the same results.
Here is an attempt to reproduce the error you faced of getting incorrect result:
>>> t1 = torch.from_numpy(a)
>>> t2 = torch.from_numpy(b)
>>> t1[np.newaxis, ...] - t2[:, np.newaxis, ...]
(0 ,.,.) =
-2 -2 -2
-1 2 4
[torch.LongTensor of size 1x2x3]
>>> torch.__version__
'0.1.12_1'
So, please upgrade your PyTorch version to 1.0.1!
Digging more into for details:
The main reason why it didn't work in PyTorch version 0.1 is that broadcasting was not completely implemented then. Basically, the tensor promotion to 3D, followed by a subtraction can be achieved in two steps as in (in version 1.0.1):
>>> t1[:1, ] - t2
>>> tensor([[-2, -2, -2], # t1_r1
[-4, -1, 1]]) # t1_r2
>>> t1[1:, ] - t2
>>> tensor([[ 1, 1, 1], # t2_r1
[-1, 2, 4]]) # t2_r2
Stacking the rows of the above two results in the order (t1_r1, t2_r1, t1_r2, t2_r2), with each row treated as a 2D slice, would give us the shape (2, 2, 3).
Now, try doing the above two steps in version 0.1, it would throw the error:
RuntimeError: inconsistent tensor size at /opt/conda/conda-bld/pytorch_1501971235237/work/pytorch-0.1.12/torch/lib/TH/generic/THTensorMath.c:831
| https://stackoverflow.com/questions/55884299/ |
pytorch: how can I use picture as label in dataloader? | I want to do some image reconstruction using autoencoders in pytorch, however, I didn't find a way to use image as label for an input image.(the label image is different from original ones)
I've tried the image folder method, but I think that's for classfication and I am currently unable to come up with one solution. Should I create a custom dataset for this...
Thanks in advance!
| Write your custom Dataset, below is a simple example.
from torch.utils.data import Dataset, DataLoader  # note: 'import torch.utils.data.Dataset as Dataset' is invalid
class CustomDataset(Dataset):
def __init__(self, input_imgs, label_imgs, transform):
self.input_imgs = input_imgs
self.label_imgs = label_imgs
self.transform = transform
def __len__(self):
return len(self.input_imgs)
def __getitem__(self, idx):
input_img, label_img = self.input_imgs[idx], self.label_imgs[idx]
return self.transform(input_img), self.transform(label_img)
And then, pass an instance of it to a DataLoader:
dataset = CustomDataset(input_imgs, label_imgs, transform)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
| https://stackoverflow.com/questions/55886306/ |
Understanding PyTorch einsum | I'm familiar with how einsum works in NumPy. Similar functionality is also offered by PyTorch: torch.einsum(). What are the similarities and differences, either in terms of functionality or performance? The information available in the PyTorch documentation is rather scanty and doesn't provide any insights regarding this.
| Since the description of einsum is skimpy in torch documentation, I decided to write this post to document, compare and contrast how torch.einsum() behaves when compared to numpy.einsum().
Differences:
NumPy allows both small case and capitalized letters [a-zA-Z] for the "subscript string" whereas PyTorch allows only the small case letters [a-z].
NumPy accepts nd-arrays, plain Python lists (or tuples), list of lists (or tuple of tuples, list of tuples, tuple of lists) or even PyTorch tensors as operands (i.e. inputs). This is because the operands have only to be array_like and not strictly NumPy nd-arrays. On the contrary, PyTorch expects the operands (i.e. inputs) strictly to be PyTorch tensors. It will throw a TypeError if you pass either plain Python lists/tuples (or its combinations) or NumPy nd-arrays.
NumPy supports lot of keyword arguments (for e.g. optimize) in addition to nd-arrays while PyTorch doesn't offer such flexibility yet.
Here are the implementations of some examples both in PyTorch and NumPy:
# input tensors to work with
In [16]: vec
Out[16]: tensor([0, 1, 2, 3])
In [17]: aten
Out[17]:
tensor([[11, 12, 13, 14],
[21, 22, 23, 24],
[31, 32, 33, 34],
[41, 42, 43, 44]])
In [18]: bten
Out[18]:
tensor([[1, 1, 1, 1],
[2, 2, 2, 2],
[3, 3, 3, 3],
[4, 4, 4, 4]])
1) Matrix multiplication
PyTorch: torch.matmul(aten, bten) ; aten.mm(bten)
NumPy : np.einsum("ij, jk -> ik", arr1, arr2)
In [19]: torch.einsum('ij, jk -> ik', aten, bten)
Out[19]:
tensor([[130, 130, 130, 130],
[230, 230, 230, 230],
[330, 330, 330, 330],
[430, 430, 430, 430]])
2) Extract elements along the main-diagonal
PyTorch: torch.diag(aten)
NumPy : np.einsum("ii -> i", arr)
In [28]: torch.einsum('ii -> i', aten)
Out[28]: tensor([11, 22, 33, 44])
3) Hadamard product (i.e. element-wise product of two tensors)
PyTorch: aten * bten
NumPy : np.einsum("ij, ij -> ij", arr1, arr2)
In [34]: torch.einsum('ij, ij -> ij', aten, bten)
Out[34]:
tensor([[ 11, 12, 13, 14],
[ 42, 44, 46, 48],
[ 93, 96, 99, 102],
[164, 168, 172, 176]])
4) Element-wise squaring
PyTorch: aten ** 2
NumPy : np.einsum("ij, ij -> ij", arr, arr)
In [37]: torch.einsum('ij, ij -> ij', aten, aten)
Out[37]:
tensor([[ 121, 144, 169, 196],
[ 441, 484, 529, 576],
[ 961, 1024, 1089, 1156],
[1681, 1764, 1849, 1936]])
General: Element-wise nth power can be implemented by repeating the subscript string and tensor n times.
For e.g., computing element-wise 4th power of a tensor can be done using:
# NumPy: np.einsum('ij, ij, ij, ij -> ij', arr, arr, arr, arr)
In [38]: torch.einsum('ij, ij, ij, ij -> ij', aten, aten, aten, aten)
Out[38]:
tensor([[ 14641, 20736, 28561, 38416],
[ 194481, 234256, 279841, 331776],
[ 923521, 1048576, 1185921, 1336336],
[2825761, 3111696, 3418801, 3748096]])
5) Trace (i.e. sum of main-diagonal elements)
PyTorch: torch.trace(aten)
NumPy einsum: np.einsum("ii -> ", arr)
In [44]: torch.einsum('ii -> ', aten)
Out[44]: tensor(110)
6) Matrix transpose
PyTorch: torch.transpose(aten, 1, 0)
NumPy einsum: np.einsum("ij -> ji", arr)
In [58]: torch.einsum('ij -> ji', aten)
Out[58]:
tensor([[11, 21, 31, 41],
[12, 22, 32, 42],
[13, 23, 33, 43],
[14, 24, 34, 44]])
7) Outer Product (of vectors)
PyTorch: torch.ger(vec, vec)
NumPy einsum: np.einsum("i, j -> ij", vec, vec)
In [73]: torch.einsum('i, j -> ij', vec, vec)
Out[73]:
tensor([[0, 0, 0, 0],
[0, 1, 2, 3],
[0, 2, 4, 6],
[0, 3, 6, 9]])
8) Inner Product (of vectors)
PyTorch: torch.dot(vec1, vec2)
NumPy einsum: np.einsum("i, i -> ", vec1, vec2)
In [76]: torch.einsum('i, i -> ', vec, vec)
Out[76]: tensor(14)
9) Sum along axis 0
PyTorch: torch.sum(aten, 0)
NumPy einsum: np.einsum("ij -> j", arr)
In [85]: torch.einsum('ij -> j', aten)
Out[85]: tensor([104, 108, 112, 116])
10) Sum along axis 1
PyTorch: torch.sum(aten, 1)
NumPy einsum: np.einsum("ij -> i", arr)
In [86]: torch.einsum('ij -> i', aten)
Out[86]: tensor([ 50, 90, 130, 170])
11) Batch Matrix Multiplication
PyTorch: torch.bmm(batch_tensor_1, batch_tensor_2)
NumPy : np.einsum("bij, bjk -> bik", batch_tensor_1, batch_tensor_2)
# input batch tensors to work with
In [13]: batch_tensor_1 = torch.arange(2 * 4 * 3).reshape(2, 4, 3)
In [14]: batch_tensor_2 = torch.arange(2 * 3 * 4).reshape(2, 3, 4)
In [15]: torch.bmm(batch_tensor_1, batch_tensor_2)
Out[15]:
tensor([[[ 20, 23, 26, 29],
[ 56, 68, 80, 92],
[ 92, 113, 134, 155],
[ 128, 158, 188, 218]],
[[ 632, 671, 710, 749],
[ 776, 824, 872, 920],
[ 920, 977, 1034, 1091],
[1064, 1130, 1196, 1262]]])
# sanity check with the shapes
In [16]: torch.bmm(batch_tensor_1, batch_tensor_2).shape
Out[16]: torch.Size([2, 4, 4])
# batch matrix multiply using einsum
In [17]: torch.einsum("bij, bjk -> bik", batch_tensor_1, batch_tensor_2)
Out[17]:
tensor([[[ 20, 23, 26, 29],
[ 56, 68, 80, 92],
[ 92, 113, 134, 155],
[ 128, 158, 188, 218]],
[[ 632, 671, 710, 749],
[ 776, 824, 872, 920],
[ 920, 977, 1034, 1091],
[1064, 1130, 1196, 1262]]])
# sanity check with the shapes
In [18]: torch.einsum("bij, bjk -> bik", batch_tensor_1, batch_tensor_2).shape
Out[18]: torch.Size([2, 4, 4])
12) Sum along axis 2
PyTorch: torch.sum(batch_ten, 2)
NumPy einsum: np.einsum("ijk -> ij", arr3D)
In [99]: torch.einsum("ijk -> ij", batch_ten)
Out[99]:
tensor([[ 50, 90, 130, 170],
[ 4, 8, 12, 16]])
13) Sum all the elements in an nD tensor
PyTorch: torch.sum(batch_ten)
NumPy einsum: np.einsum("ijk -> ", arr3D)
In [101]: torch.einsum("ijk -> ", batch_ten)
Out[101]: tensor(480)
14) Sum over multiple axes (i.e. marginalization)
PyTorch: torch.sum(arr, dim=(dim0, dim1, dim2, dim3, dim4, dim6, dim7))
NumPy: np.einsum("ijklmnop -> n", nDarr)
# 8D tensor
In [103]: nDten = torch.randn((3,5,4,6,8,2,7,9))
In [104]: nDten.shape
Out[104]: torch.Size([3, 5, 4, 6, 8, 2, 7, 9])
# marginalize out dimension 5 (i.e. "n" here)
In [111]: esum = torch.einsum("ijklmnop -> n", nDten)
In [112]: esum
Out[112]: tensor([ 98.6921, -206.0575])
# marginalize out axis 5 (i.e. sum over rest of the axes)
In [113]: tsum = torch.sum(nDten, dim=(0, 1, 2, 3, 4, 6, 7))
In [115]: torch.allclose(tsum, esum)
Out[115]: True
15) Double Dot Products / Frobenius inner product (same as: torch.sum(hadamard-product) cf. 3)
PyTorch: torch.sum(aten * bten)
NumPy : np.einsum("ij, ij -> ", arr1, arr2)
In [120]: torch.einsum("ij, ij -> ", aten, bten)
Out[120]: tensor(1300)
| https://stackoverflow.com/questions/55894693/ |
How to load tfrecord in pytorch? | How to use tfrecord with pytorch?
I have downloaded "Youtube8M" datasets with video-level features, but it is stored in tfrecord.
I tried to read some samples from these files to convert them to numpy and then load them in pytorch. But it failed.
reader = YT8MAggregatedFeatureReader()
files = tf.gfile.Glob("/Data/youtube8m/train*.tfrecord")
filename_queue = tf.train.string_input_producer(
files, num_epochs=5, shuffle=True)
training_data = [
reader.prepare_reader(filename_queue) for _ in range(1)
]
unused_video_id, model_input_raw, labels_batch, num_frames = tf.train.shuffle_batch_join(
training_data,
batch_size=1024,
capacity=1024 * 5,
min_after_dequeue=1024,
allow_smaller_final_batch=True ,
enqueue_many=True)
with tf.Session() as sess:
label_numpy = labels_batch.eval()
print(type(label_numpy))
But this step produces no result; it just gets stuck for a long while without any response.
| Maybe this can help you: TFRecord reader for PyTorch
| https://stackoverflow.com/questions/55896083/ |
Pytorch equivalent of `tf.reverse_sequence`? | I would like to do backward-direction LSTM on a padded sequence, which requires reversing the input sequence without the padding.
For a batch like this (where _ stands for padding):
a b c _ _ _
d e f g _ _
h i j k l m
I would like to get:
c b a _ _ _
g f e d _ _
m l k j i h
TensorFlow has a function tf.reverse_sequence that takes the input tensor and lengths of the sequences in the batch and returns the reversed batch. Is there an easy way of doing it in Pytorch?
| Unfortunately, there is no direct equivalent yet, although it has been requested.
I also looked into the whole PackedSequence object, but it has no .flip() operation defined on it. Assuming you already have the necessary data to provide the lengths, as you suggested, you could implement it with this function:
def flipBatch(data, lengths):
assert data.shape[0] == len(lengths), "Dimension Mismatch!"
for i in range(data.shape[0]):
data[i,:lengths[i]] = data[i,:lengths[i]].flip(dims=[0])
return data
Unfortunately, this only works if your sequence is two-dimensional (with batch_size x sequence), but you could easily extend this for your specific input requirements. This already more or less covers the proposal in the above link, but I updated it to today's standard.
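Applying it to the padded batch from the question (with 0 standing in for the padding token):

import torch

data = torch.tensor([[1, 2, 3, 0, 0, 0],
                     [4, 5, 6, 7, 0, 0],
                     [8, 9, 10, 11, 12, 13]])
lengths = [3, 4, 6]
flipBatch(data, lengths)
# tensor([[ 3,  2,  1,  0,  0,  0],
#         [ 7,  6,  5,  4,  0,  0],
#         [13, 12, 11, 10,  9,  8]])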
| https://stackoverflow.com/questions/55904997/ |