st81368 | You should do self.threshold = nn.Parameter(torch.rand(1)).
All parameters of a nn.Module must be nn.Parameters, otherwise they won't appear when you call .parameters() and won't move when you call .cuda() (which is your problem here).
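For illustration, here is a minimal, self-contained sketch of registering such a parameter (the module name and forward pass are just assumptions for the example):
import torch
import torch.nn as nn

class ThresholdedReLU(nn.Module):
    def __init__(self):
        super(ThresholdedReLU, self).__init__()
        # registered as a parameter: appears in .parameters() and follows .cuda()/.to()
        self.threshold = nn.Parameter(torch.rand(1))

    def forward(self, x):
        return torch.relu(x + self.threshold) - self.threshold

m = ThresholdedReLU()
print(list(m.parameters()))  # contains the threshold
# m.cuda() would now move the threshold to the GPU together with the module |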
st81369 | class DDReLU(nn.Module):
    def __init__(self):
        super(DDReLU, self).__init__()
        self.threshold = nn.Parameter(torch.rand(1), requires_grad=True)
        self.register_backward_hook(lambda module, grad_i, grad_o: (grad_i[0], grad_i[1]*0.01))
        #self.threshold.data.fill_(0.1)
        self.ReLU = nn.ReLU(True)
    def forward(self, x):
        print(self.threshold.data[0])
        return self.ReLU(x + self.threshold) - self.threshold
        #return self.ReLU(x) + self.threshold
Is the code above fine to change the relative learning rate of the new parameter?
By relative learning rate, I mean: the parameter created has a learning rate that is 0.01 times the one used for the other model's parameters.
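For reference, the same 0.01x ratio can also be expressed with optimizer parameter groups instead of a backward hook — a minimal sketch (model and base_lr are assumed names, not from the code above):
import torch.optim as optim

base_lr = 0.1
threshold_params = [p for n, p in model.named_parameters() if 'threshold' in n]
other_params = [p for n, p in model.named_parameters() if 'threshold' not in n]
optimizer = optim.SGD([
    {'params': other_params},
    {'params': threshold_params, 'lr': 0.01 * base_lr},
], lr=base_lr, momentum=0.9) |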
st81370 | I’ve been experimenting with learning the threshold parameters for expressions like .clamp(min=lower) where ‘lower’ is a Module Parameter.
Here is a function for accomplishing it for clamping to zero or negative values:
def Clamp(x, minval):
    """
    Clamps Variable x to minval.
    minval <= 0.0
    """
    return x.clamp(max=0.0).sub(minval).clamp(min=0.0).add(minval) + x.clamp(min=0.0)
With some extra work the same could be done for .clamp(max=upper) . |
st81371 | While tinkering with the official code example for Variational Autoencoders 11, I experienced some unexpected behaviour with regard to the Binary Cross-Entropy loss. When I use F.binary_cross_entropy in combination with the sigmoid function, the model trains as expected on MNIST. However, when changing to the F.binary_cross_entropy_with_logits function, the loss suddenly becomes arbitrarily small during training and the model no longer produces meaningful results.
# For this loss function the loss becomes arbitrarily small
BCE = F.binary_cross_entropy_with_logits(recon_x, x.view(-1, 784), reduction='sum') / x.shape[0]
# For this loss function, the training works as expected
BCE = F.binary_cross_entropy(torch.sigmoid(recon_x), x.view(-1, 784), reduction='sum') / x.shape[0]
To my understanding, the only difference between the two approaches should be numerical stability. Am I missing something? |
st81372 | Solved by KFrank in post #6 |
st81373 | Hello Simon!
smonsays:
While tinkering with the official code example for Variational Autoencoders, I experienced some unexpected behaviour with regard to the Binary Cross-Entropy loss. When I use F.binary_cross_entropy in combination with the sigmoid function, the model trains as expected on MNIST. However, when changing to the F.binary_cross_entropy_with_logits function, the loss suddenly becomes arbitrarily small during training and the model no longer produces meaningful results.
# For this loss function the loss becomes arbitrarily small
BCE = F.binary_cross_entropy_with_logits(recon_x, x.view(-1, 784), reduction='sum') / x.shape[0]
# For this loss function, the training works as expected
BCE = F.binary_cross_entropy(torch.sigmoid(recon_x), x.view(-1, 784), reduction='sum') / x.shape[0]
To my understanding, the only difference between the two approaches should be numerical stability. Am I missing something?
I agree with your understanding, and looking at the two lines of
code you posted, I don’t see anything suspicious (although I miss
things all the time …).
If I had to guess, I would guess that you have a typo somewhere
else in your code that causes the two runs to differ.
However, we can start by checking out some basics:
Here I run the example given in the documentation for
torch.nn.functional.binary_cross_entropy. (Please note
that I am running this test, for whatever reason, with pytorch 0.3.0.)
Here is the script:
import torch
print (torch.__version__)
torch.manual_seed (2019)
input = torch.autograd.Variable (torch.randn ((3, 2)))
print (input)
target = torch.autograd.Variable (torch.rand ((3, 2)))
print (target)
loss_plain = torch.nn.functional.binary_cross_entropy (torch.sigmoid (input), target)
print (loss_plain)
loss_logits = torch.nn.functional.binary_cross_entropy_with_logits (input, target)
print (loss_logits)
print (loss_plain - loss_logits)
And here is the output:
>>> import torch
>>> print (torch.__version__)
0.3.0b0+591e73e
>>> torch.manual_seed (2019)
<torch._C.Generator object at 0x00000207759A60F0>
>>> input = torch.autograd.Variable (torch.randn ((3, 2)))
>>> print (input)
Variable containing:
-0.1187 0.2110
0.7463 -0.6136
-0.1186 1.5565
[torch.FloatTensor of size 3x2]
>>> target = torch.autograd.Variable (torch.rand ((3, 2)))
>>> print (target)
Variable containing:
0.7628 0.0721
0.2208 0.3979
0.6338 0.1922
[torch.FloatTensor of size 3x2]
>>> loss_plain = torch.nn.functional.binary_cross_entropy (torch.sigmoid (input), target)
>>> print (loss_plain)
Variable containing:
0.8868
[torch.FloatTensor of size 1]
>>> loss_logits = torch.nn.functional.binary_cross_entropy_with_logits (input, target)
>>> print (loss_logits)
Variable containing:
0.8868
[torch.FloatTensor of size 1]
>>> print (loss_plain - loss_logits)
Variable containing:
1.00000e-08 *
-5.9605
[torch.FloatTensor of size 1]
And, indeed, the two expressions are the same (up to floating-point
precision).
Is there any way you can capture a specific instance of your
recon_x and x.view() so that you can pump what you know
are the same values into your two cross-entropy expressions?
Is there any way you track your loss (your BCE) on a step-by-step
basis so you can see when they first diverge?
The thing that is noteworthy to me is that you say that the less
numerically stable version (regular bce) works, while the more
stable version (bce_with_logits) quits working at some point.
This is backwards of what one might expect.
Perhaps there is something about your model that puts it on
the edge of being poorly behaved. In such a case it could be
plausible that, by happenstance, the bce version stays in a
well-behaved region, but small differences due to using the
bce_with_logits version cause it to drift into a poorly-behaved
region.
If this were the case I would expect that other perturbations
such as starting with different initial weights, or using a different
optimization algorithm or learning rate could cause the training
to “randomly” end up being well behaved or poorly behaved.
So:
I would
proofread the code to make sure there isn’t some outright error
check that your model and data are reasonably well behaved
and stable with respect to perturbations of the details of your
training
track your bce and bce_logits step by step to find out where
they first diverge, and drill down with the values that immediately
precede the divergence.
Have fun!
K. Frank |
st81374 | Hello KFrank,
thank you for your elaborate response. I have uploaded an instance of tensors for which the two losses diverge here 1. This is the resulting output for it:
import torch
x_view = torch.load("x_view.pt")
x_recon = torch.load("x_recon.pt")
BCE1 = torch.nn.functional.binary_cross_entropy_with_logits(x_recon, x_view, reduction='sum')
BCE2 = torch.nn.functional.binary_cross_entropy(torch.sigmoid(x_recon), x_view, reduction='sum')
print("BCE loss with logits: ", BCE1) # -2.3662e+08
print("BCE loss with sigmoid: ", BCE2) # -379848.2500
print("Loss difference: ", BCE1-BCE2) # -2.3624e+08
The x_recon values are all very small (around -1.0e+05). However, I am not sure why the version without logits behaves so differently in this regime. Do you have any insights in this regard?
In the mean time I’ll try to figure out the minimal changes to the official example that are necessary to reproduce the odd behaviour and post the code here. |
st81375 | Hi Simon!
smonsays:
I have uploaded an instance of tensors for which the two losses diverge
# ...
print("BCE loss with logits: ", BCE1) # -2.3662e+08
print("BCE loss with sigmoid: ", BCE2) # -379848.2500
print("Loss difference: ", BCE1-BCE2) # -2.3624e+08
The x_recon values are all very small (around -1.0e+05). However, I am not sure why the version without logits behaves so differently in this regime. Do you have any insights in this regard?
First, unfortunately, I’m not able to load your sample data with
my creaking, wheezing, 0.3.0-version of pytorch. If they’re not
too large, could you post them as text files?
You say, “The x_recon values are all very small (around -1.0e+05).”
You give a numerical value of -10,000. I would call this “a rather
large negative number.” (“Small,” to me, has the connotation of
“close to zero.”)
Anyway, let’s go with -10,000. That is, your logits (inputs)
are rather large negative numbers. So your probabilities
(sigmoid (logit)) are all (positive) numbers quite close to zero.
But (with 32-bit floating-point) they underflow and become
exactly zero. (32-bit sigmoid underflows to zero somewhere
around sigmoid (-90.0).)
Given this, I’m surprised you’re not getting NaNs (from the
log (0.0) inside of binary_cross_entropy()).
Anyway, could you tell us the shape of x_recon and x_view,
as well as the (algebraic) minima and maxima of x_recon
and x_view?
Assuming that your x_recon really have become something
like -10,000, it’s not surprising that you’re getting weird results
(at least for plain bce) – you’ve long since passed into the
range where sigmoid() underflows. (This still doesn’t explain
why you’re getting seeming good results with plain bce, but
things break down with bce_with_logits.)
Best.
K. Frank |
st81376 | KFrank:
You say, “The x_recon values are all very small (around -1.0e+05).”
You give a numerical value of -10,000. I would call this “a rather
large negative number.” (“Small,” to me, has the connotation of
“close to zero.”)
Ah sorry about that, you are of course right, the x_recon values are large negative numbers. Only the sigmoid is very small.
KFrank:
Anyway, could you tell us the shape of x_recon and x_view ,
as well as the (algebraic) minima and maxima of x_recon
and x_view ?
Both have dimensions torch.Size([100, 784]). The first dimension is the batch size, the second is the number of pixels in a MNIST training image. The extrema are:
torch.min(x_recon) # -16971.4434
torch.max(x_recon) # 15807.5469
torch.min(x_view) # -0.4242
torch.max(x_view) # 2.8215)
So it turns out there are also very big entries in x_recon that saturate the sigmoid towards 1.
smonsays:
In the mean time I’ll try to figure out the minimal changes to the official example that are necessary to reproduce the odd behaviour and post the code here.
I created a demo with minimal changes to the original example that reproduces the odd behaviour here 4. The relevant change was the normalization of the data to zero mean and unit variance (i.e. transform = transforms.Normalize((0.1307,), (0.3081,))). |
st81377 | Hi Simon!
smonsays:
Both have dimensions torch.Size([100, 784]). The first dimension is the batch size, the second is the number of pixels in a MNIST training image. The extrema are:
torch.min(x_recon) # -16971.4434
torch.max(x_recon) # 15807.5469
torch.min(x_view) # -0.4242
torch.max(x_view) # 2.8215)
So it turns out there are also very big entries in x_recon that saturate the sigmoid towards 1.
Okay, this makes more sense.
The input and target passed to binary_cross_entropy()
are both supposed to be probabilities, that is, numbers between
0 and 1 (with singularities occurring at 0 and 1).
Your target contains negative numbers, which are not valid
probabilities.
(Because you pass your x_recon through sigmoid(), your input
will always contain valid probabilities, although they will sometimes
saturate at the singular 0 and 1.)
So my guess is that your bogus target values are causing your
training to drive your inputs to the large values that saturate
sigmoid(). (Why you’re not getting NaNs, I don’t know.)
Given your bogus target, I’m not surprised that you’re getting
weird results. (I can come up with plausible speculations about
why plain bce and bce_with_logits differ, but that’s not really the
point.)
Figure out how to fix your inputs to binary_cross_entropy(),
specifically x_view, and see if that cleans things up, or at
least improves the situation.
I created a demo with minimal changes to the original example that reproduces the odd behaviour here.
(I haven’t tried to run your demo, because, among other reasons,
it likely won’t run with my decrepit pytorch 0.3.0.)
Good luck.
K. Frank |
st81378 | Okay, so normalization to zero mean, unit variance should be removed in this case as it violates the probabilistic interpretation of the images. That solves my issue, thank you! |
st81379 | Since I started to use the pytorch dataloader, I've got runtime problems with .cuda(). My dataset consists of 250000 .npy-files each containing a numpy array with the shape 33x27. I'm using the dataloader the following way:
# list containing all file paths
train_file_paths = getPaths(self.dir_training)
trainDataSet = IterDataset(feature_path=train_file_paths)
train_loader = utils.DataLoader(dataset=trainDataSet,batch_size=32,shuffle=False,num_workers=16,pin_memory=True)
My training loop fetches a new batch (batch size: 32) each iteration and moves it to the GPU via .cuda(). The model is moved to the GPU at the beginning of the script.
for i,(feature,labels) in enumerate(train_loader):
    feature = Variable(feature.cuda(), requires_grad=True)
    labels = Variable(labels.cuda(), requires_grad=True)
    outputs = model(feature.float())
    loss = criterion(outputs,labels.long())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
My Dataset class looks like the following (the first column of the ndarray is the label):
class IterDataset(Dataset):
    def __init__(self, feature_path):
        self.feature_path = feature_path
    def __len__(self):
        return len(self.feature_path)
    def __getitem__(self, index):
        feature = np.load(self.feature_path[index])
        X = feature[:,1:]
        y = feature[0,0]
        # checking for NAN
        if np.isnan(X).any():
            print('NAN'+ self.feature_path[index])
        return X, y
The cProfile for the first 1000 batches:
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
2023 13.745 0.007 13.745 0.007 {method 'cuda' of 'torch._C._TensorBase' objects}
1000 1.931 0.002 1.931 0.002 {method 'run_backward' of 'torch._C._EngineBase' objects}
Hardware / Software I’m using:
Cuda Version: 10.1
GPU: 2x GeForce GTX 1080
So if anybody has an idea why .cuda() takes so much time, I would appreciate it. |
st81380 | Since CUDA operations are asynchronous, the host to device copy via .cuda() could create a synchronization point and thus accumulate the timing from the actual forward and backward pass.
If you would like to profile the code manually, you could add manual synchronization points via torch.cuda.synchronize().
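For example, a minimal sketch of timing just the host-to-device copy inside the loop above (variable names taken from the posted code; the timings are only meaningful because of the synchronize calls):
import time
import torch

for i, (feature, labels) in enumerate(train_loader):
    torch.cuda.synchronize()              # wait for previously queued GPU work
    t0 = time.time()
    feature = feature.cuda(non_blocking=True)
    labels = labels.cuda(non_blocking=True)
    torch.cuda.synchronize()              # wait until the copies have finished
    print('copy time:', time.time() - t0) |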
st81381 | I am trying to make a Custom Relu function that just doesn’t apply the relu to the gradient. The code is as follows:
class reluForward(torch.autograd.Function):
    def forward(self, inp):
        #option1:
        #return inp * (inp>0).float()
        #option2:
        #return F.relu(inp).data
        #option3:
        return F.relu(inp)
    def backward(self, grad_out):
        return grad_out
Option 1 and 2 both perform the correct function but are significantly slower than the built in nn.relu function (actually starting ok and then getting slower and slower as it runs) while option 3 tells me “data must be a Tensor.” Looking for any ideas for the slowdown or a different way to do this!
Edit: I have also now tried with return inp.clamp(min=0) which also works with the slowdown. |
st81382 | Got it! Don’t know why this would cause a problem, but I was declaring self.reluForwarder = reluForward() in the Module’s init. Moving it to just be reluForwarder = reluForward() in the forward function seems to make it work at the same speed as regular relu. |
st81383 | To avoid this mistake, you should keep in mind that a Function is not an nn.Module and should be used only once.
Also, for better performance, you should use the new style functions as follow:
class reluForward(torch.autograd.Function):
    @staticmethod
    def forward(self, inp):
        #option1:
        #return inp * (inp>0).float()
        #option2:
        #return F.relu(inp).data
        #option3:
        return F.relu(inp)
    @staticmethod
    def backward(self, grad_out):
        return grad_out

# To use it:
inp = Variable(torch.rand(10, 10))
out = reluForward.apply(inp) # Use the class here, not an instance of it ! |
st81384 | I am running NN on the Summit. I am using ImageNet. I am having problem with the data loader
DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
batch_sampler=None, num_workers=0, collate_fn=None,
pin_memory=False, drop_last=False, timeout=0,
worker_init_fn=None)
It works well with 0 workers. However, when I use more than zero workers, it gives me a segmentation error. I googled this problem. It seems that this works for many people, but I see many posts complaining about the same error that I am getting. |
st81385 | I am using PyTorch 1.0. Can anybody comments on this problem.
I am just testing
from torch.utils.data import Dataloader
Traceback (most recent call last):
File “”, line 1, in
ImportError: cannot import name ‘Dataloader’
Thanks |
st81386 | I was following the instructions on
https://pytorch.org/tutorials/beginner/ptcheat.html#
There is a typo. “Dataloader” should be “DataLoader”. |
st81387 | Good catch! Would you mind creating an issue so that we can fix this typo? If you don’t want to fix it yourself, could you please tag me? |
st81388 | ammalikwaterloo:
I was following the instructions on
https://pytorch.org/tutorials/beginner/ptcheat.html#
There is a typo. “Dataloader” should be “DataLoader”.
Sorry for the delay… Just submitted. https://github.com/pytorch/pytorch/issues/26278 |
st81389 | I have a custom transformer-like model which can receive sequences of varying length.
I'd like to parallelize across GPUs, so I wish to pack multiple sequences of varying length into one tensor so I can use DataParallel.
I am looking for something like PackedSequence, but one that I can conveniently unpack at the model and get the sequences after the padding was removed.
Any existing implementations or tips on how to do this? |
st81390 | In this issue 32 @ezyang references an implementation of convolutions that uses the Toeplitz matrix.
I have a state_dict (and also a nn.Module class) from a network and explicitly need these Toeplitz matrices for further calculations, but I admittedly do not have a strong grasp on the things going on in ATen and how I could use that directly in Python. Is there a way to do this? |
st81391 | Hi McLawrence,
Please refer to the nn.Unfold module. This would enable you to generate the Toeplitz matrices (column matrix).
Hope this helps!
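A small sketch of how nn.Unfold expresses a convolution as a matrix multiplication (the shapes here are just an assumed example):
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)          # N, C, H, W
unfold = nn.Unfold(kernel_size=3)
cols = unfold(x)                     # (1, 3*3*3, 6*6) = (1, 27, 36) column matrix
weight = torch.randn(5, 3, 3, 3)     # out_channels, in_channels, kH, kW
out = weight.view(5, -1) @ cols      # (1, 5, 36)
out = out.view(1, 5, 6, 6)           # same result as F.conv2d(x, weight) without padding |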
st81392 | @Mazhar_Shaikh Using unfold, I can create a matrix from the input and do a matrix multiplication with the kernel vector.
However, what I need would be a matrix from the kernel, not the input. Which seems not possible with unfold (as the kernel is smaller than the input). Do you know of an alternative? |
st81393 | I cannot get the gradient (SGD) to work in the below example. Similar posts and autograd documentation have not helped. In the below L.grad is always none. Your help is appreciated.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
def NN( w=np.random.rand(135), x=np.random.rand(4) ):
    X = torch.Tensor(x.reshape(4,1))
    M1 = torch.Tensor(np.array(w[0:36]).reshape(9,4))
    b1 = torch.Tensor(np.array(w[36:45]).reshape(9,1))
    M2 = torch.Tensor(np.array(w[45:126]).reshape(9,9))
    b2 = torch.Tensor(np.array(w[126:135]).reshape(9,1))
    nIn = 4
    nH1 = 9
    nH2 = 9
    W1 = Variable(M1, requires_grad=True)
    B1 = Variable(b1, requires_grad=True)
    Y1 = torch.mm(W1, X) + B1
    Y1 = F.relu(Y1)
    W2 = Variable(M2, requires_grad=True)
    B2 = Variable(b2, requires_grad=True)
    Y2 = torch.mm(W2, Y1) + B2
    Y2 = F.relu(Y2)
    print(Y2.shape)
    print(Y2)
    return [W1,B1,W2,B2], Y2

def loss(Y, T):
    with torch.enable_grad():
        diff = Y.reshape(-1)-T.reshape(-1)
        print('loss type', type(diff))
        return diff.dot(diff)

pars, Y = NN()
T = torch.randn(9,1)
optimizer = optim.SGD(pars, lr=0.1, momentum=0.9)
for j in range(1):
    optimizer.zero_grad()
    L = loss(Y, T)
    L.backward(retain_graph=True)
    print('grad', L.grad)
    optimizer.step() |
st81394 | The gradient in the loss won’t be retained by default.
By calling loss.backward() you are passing a gradient of 1 by default, since dL/dL = 1.
To print the gradient in L, you can use the following code:
L.retain_grad()
L.backward(retain_graph=True)
print('grad', L.grad) |
st81395 | Thank you, that works. But I was looking for the derivative with respect to the weights dY/dw_i. I have since found the answer to this from another post I believe as follows:
for f in pars[0]:
    print('data is')
    print(f.data)
    print('grad is')
    print(f.grad)
Please correct me if this is wrong… |
st81396 | I try to extract image features by InceptionA (part of GoogLeNet). When there is no optimizer.step(), it works even with the batch size 128. But when there is optimizer.step(), it will Error: CUDA out of memory.
Here is the code:
model = InceptionA(pool_features=2)
model.to(device)
optimizer = optim.Adam(model.parameters())
criterion = nn.BCELoss(reduction='mean')
for epoch in range(100):
    for i, (batch_input, label) in enumerate(data_loader):
        optimizer.zero_grad()
        output = model(batch_input)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step() # Error here |
How can I fix this error? |
st81397 | Solved by ptrblck in post #5 |
st81398 | I have already solved the problem. The code is here:
1) optimizer = optim.SGD(model.parameters(), lr=0.0001)
and
2) loss = criterion(torch.sigmoid(output), label)
the reason of 2) is
BCELoss accepts only inputs that have all elements in range [0; 1]
But I don't know why 1) has to use optim.SGD() and optim.Adam() can't |
st81399 | Adam uses internal running estimates and thus uses more memory than e.g. SGD.
If your GPU is almost full and you call step on your Adam optimizer, these running estimates will be created and might thus yield an out of memory error.
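A small sketch that makes this extra state visible (the tiny model here is just for illustration):
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 10)
optimizer = optim.Adam(model.parameters())

model(torch.randn(4, 10)).sum().backward()
optimizer.step()  # the first step allocates the running estimates

for state in optimizer.state.values():
    print(state.keys())  # 'exp_avg' and 'exp_avg_sq': one extra tensor pair per parameter |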
st81400 | Is there any solution or PyTorch function to solve the problem? Even work at a slow speed.
Or the only way to solve it is to use a better GPU or multiple GPUs, is that right? |
st81401 | The easiest way would be to lower your batch size. If that's not possible (e.g. if your batch size is already 1), you could have a look at torch.utils.checkpoint to trade compute for memory.
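A minimal sketch of checkpointing one part of a model to trade compute for memory (the module itself is just an assumed example):
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.block1 = nn.Sequential(nn.Linear(100, 100), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(100, 100), nn.ReLU())

    def forward(self, x):
        # activations of block1 are not stored; they are recomputed during backward
        x = checkpoint(self.block1, x)
        return self.block2(x)

model = Net()
out = model(torch.randn(8, 100, requires_grad=True))
out.sum().backward() |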
st81402 | While the former defines nn.Module classes, the latter uses a functional (stateless) approach.
To dig a bit deeper: nn.Modules are defined as Python classes and have attributes, e.g. a nn.Conv2d module will have some internal attributes like self.weight. F.conv2d however just defines the operation and needs all arguments to be passed (including the weights and bias). Internally the modules will usually call their functional counterpart in the forward method somewhere.
That being said, it depends also on your coding style how you would like to work with your modules/parameters etc. While modules might be good enough in most use cases, the functional API might give you additional flexibility which is needed sometimes.
We've had a similar discussion recently in this thread.
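As an illustration, a small sketch of the module calling into its functional counterpart (shapes assumed):
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

conv = nn.Conv2d(3, 5, kernel_size=3, padding=1)    # stores self.weight and self.bias
out_module = conv(x)

# the functional API needs the parameters passed in explicitly
out_functional = F.conv2d(x, conv.weight, conv.bias, padding=1)

print(torch.allclose(out_module, out_functional))   # True |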
st81403 | How do gradients flow in the case of nn.functional? I am a little confused. How do the weights get trained in the case of nn.functional? |
st81404 | Each operation is tracked by Autograd, if parameters that require gradients are involved.
The output of such operations get a .grad_fn attribute, which points to the backward function for the last operation:
x = torch.randn(1, 1)
w = nn.Parameter(torch.randn(1, 1))
output = x * w
print(output)
> tensor([[2.5096]], grad_fn=<MulBackward0>)
The backward call uses these grad_fns to calculate the gradient and store it in the .grad attribute of the parameters:
output.backward()
print(w.grad)
> tensor([[1.1757]]) |
st81405 | thanks for the reply
but what are the fundamental differences between torch.nn.Conv1d and torch.nn.functional.conv1d ?
i guess nn.Conv1d initializes the kernel weights automatically and nn.functional.conv1d needs an input kernel…
My doubts…
does gradient calculation and back-prop work in the same way for both of the above mentioned methods?
where would i want to use nn over nn.functional and vice-versa ? (what is the need for nn.functional.conv1d when you already have nn.Conv1d ? ) |
st81406 | Have a look at this post for some more information and my point of view.
TLDR: the modules (nn.Module) use internally the functional API.
There is no difference as long as you store the parameters somewhere (manually if you prefer the functional API or in an nn.Module “automatically”).
Having the nn.Module containers as an abstraction layer makes development easy and keeps the flexibility to use the functional API. |
st81407 | Hello everyone.
Today I faced something strange. When using torchvision.utils.make_grid(), I noticed that whenever I display the resulting image in matplotlib, the image is washed out! But when I save it using save_image, it turns out just fine.
This is how it looks when I display the output of make_grid() in matplotlib :
make_grid_img1.png2111×899 109 KB
and this is how it got saved to the disk :
and this is the snippet I wrote :
from torchvision.utils import save_image, make_grid
fig = plt.figure(figsize=(28, 28))
for i in range(5):
    grid_imgs = make_grid(torch.from_numpy(img_pairs[i]),
                          nrow=5,
                          normalize=True)
    save_image(grid_imgs, f'results/imgs_{i}.jpg')
    ax = fig.add_subplot(5, 1, i+1, xticks=[], yticks=[])
    ax.imshow(grid_imgs.numpy().transpose(1,2,0), cmap='Greys_r')
Normalizing or not doesn't affect anything.
What am I missing here ?
Your kind help is greatly appreciated |
st81408 | Solved by Shisho_Sama in post #3 |
st81409 | I think removing ax = … and using plt.imshow(…) instead of ax.imshow(…) could work. |
st81410 | Thanks, that's not the case; since I have several images to display I needed as many axes, so I need to use that.
Thanks to dear God, I found the cause!
This was caused by save_image() method! I moved the save_image() after showing the image and all was fine!
This seems like a bug to me. This should not happen at all! @smth |
st81411 | I was running some data from ffmpeg to torch (through pipes) and noticed that I was doing something very naive. So I profiled, with a single process, conversion from np.uint8 to float32
The difference in CPU can be almost one order of magnitude. GPU tightens everything.
If anyone is interested.
gist.github.com
https://gist.github.com/xvdp/149e8c7f532ffb58f29344e5d2a1bee0 36
npuint8_torchfloat32.py
""" I was writing a dataloader from a video stream. I ran some numbers.
# in a nutshell.
-> np.transpose() or torch.permute() is faster as uint8, no difference between torch and numpy
-> np.uint8/number results in np.float64, never do it, if anything cast as np.float32
-> convert to pytorch before converting uint8 to float32
-> contiguous() is faster in torch than numpy
-> contiguous() is faster for torch.float32 than for torch.uint8
-> convert to CUDA in the numpy to pytorch conversion, if you can.
When loading a dataset a quite typical operation is to load the data - which may come thru numpy -
This file has been truncated.
st81412 | Thanks for sharing the code!
Be a bit careful about the CUDA numbers, since you didn’t synchronize the calls.
CUDA calls will be executed asynchronously, which means the CPU can continue executing the code while the CUDA kernels are busy until a synchronization point is reached.
These points can be added manually using torch.cuda.synchronize() and are automatically reached, e.g. if a result from the CUDA operation is needed.
In your profiling script you might in fact just time the kernel launch times, when the .cuda() call is the first op on the tensor.
To properly time CUDA calls, you have to synchronize before starting and stopping the timer:
torch.cuda.synchronize()
t0 = time.time()
...
torch.cuda.synchronize()
t1 = time.time() |
st81413 | Thank you ptrblck! You are right, I didn't sync the cuda. I'll update the gist when I get to this. I also wonder whether this profiling is at all significant when using multiprocess multi-gpu. I need to update my hw setup anyway, which as you see is from the paleozoic.
I wrote this gist mainly because I had ndarray uint8/255 -> torch (float32 cuda) peppered all over my code, which is a waste of resources and time. I figured someone else will be making the same stupid mistake. |
st81414 | Sure and thanks again for posting!
I just skimmed through your code and will dig into it a bit later, as I'm interested in the results
st81415 | great - if you look at this, there are a few things I didn't post; float64 or float16 torch data, but it varies wildly. I don't have a good setup to deal with double or half floats so the comparison isn't valid.
I found some funny numbers to do with contiguity which I also didn’t post, because first it worked one way then another. But it went something like this,
tensor.permute(2,1,0).contiguous()
# vs
w,h,c = tensor.shape
out = torch.zeros([1,c,h,w])
out[0] = tensor.permute(2,1,0)
# both return a contiguous tensor
in some instances i got the latter to be 2x the speed of the ‘proper way’
although, maybe this is meaningless. Because the test wasn’t consistent.
At any rate if one is serious about the speed one should probably be working in cpp. Which I thought, maybe the way to get rid of all this uncertainty is load from buffer to a torch tensor of shape dtype device that is contiguous, directly in cpp… But thats another day. |
st81416 | I don't think these operations see any significant speedup in C++, if the tensors are "large enough" (i.e. not tiny).
Usually you would just see the Python overhead, which is negligible if you have some workload on the operations.
Both methods should yield a similar timing and I would assume the second approach to be slower.
At least, that’s the result from my profiling:
x = torch.randn(64, 128, 64, 64, device='cuda')
nb_iters = 1000
# warmup
for _ in range(100):
    y = x.permute(0, 3, 2, 1).contiguous()
torch.cuda.synchronize()
t0 = time.time()
for _ in range(nb_iters):
    y = x.permute(0, 3, 2, 1).contiguous()
torch.cuda.synchronize()
t1 = time.time()
print((t1 -t0)/nb_iters)
# warmup
for _ in range(100):
    out = torch.zeros(64, 64, 64, 128, device='cuda')
    out[:] = x.permute(0, 3, 2, 1)
torch.cuda.synchronize()
t0 = time.time()
for _ in range(nb_iters):
    out = torch.zeros(64, 64, 64, 128, device='cuda')
    out[:] = x.permute(0, 3, 2, 1)
torch.cuda.synchronize()
t1 = time.time()
print((t1 -t0)/nb_iters)
Permute + contiguous: 0.00072478 s
Permute + copy: 0.00093216 s |
st81417 | I used both the BERT_base_cased and BERT_large_cased models for multi-class text classification. With BERT_base_cased, I got satisfactory results. When I tried the BERT_large_cased model, the accuracy was the same for all the epochs
bert-error1.PNG926×636 34.9 KB
bert-error2.PNG925×636 36 KB
With BERT_base_cased, there is no such problem. But with BERT_large_cased, why is the accuracy the same in all the epochs? Any help is really appreciated… |
st81418 | This problem is not closely related to PyTorch itself. If it is not proper to ask this question here, I will delete it right away.
However, I couldn’t think of anywhere that will have so many experts who are familiar with NN structure and design.
Recently, I tried to implement VINet[1], a visual-inertial odometry system built with a neural network, and I open-sourced it on GitHub: HTLife/VINet
I have already completed the whole network structure, but the network can't converge properly during training.
c21c66fe-70d4-45a2-ba59-49f6d6e36196.jpg827×480 46.8 KB
How could I fix this problem?
Possible problems & solutions
The dataset is too challenging:
I'm using the EuRoC MAV dataset, which is more challenging than the KITTI VO dataset used by DeepVO and VINet (because the KITTI vehicle images do not shake up and down). The NN cannot learn camera movement correctly.
Loss function:
L1 loss is used, identical to the design in [1]. (I'm not very confident about whether I currently understand the loss design in [1].) Related code
Other hyperparameter problems
References
[1] Clark, Ronald, et al. “VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem.” AAAI. 2017. |
st81419 | I transformed the pytorch model to onnx :
%79 : Tensor = onnx::Unsqueeze[axes=[0]](%75)
%80 : Tensor = onnx::Unsqueeze[axes=[0]](%77)
%81 : Tensor = onnx::Concat[axis=0](%78, %79, %80)
%82 : Float(2, 1, 256) = onnx::ConstantOfShape[value={0}](%81), scope: CRNN/Sequential[rnn]/BidirectionalLSTM[0]/LSTM[rnn]
%83 : Tensor? = prim::Constant(), scope: CRNN/Sequential[rnn]/BidirectionalLSTM[0]/LSTM[rnn]
Does %83:prim::Constant() belong to onnx? I did not find that op in onnx. |
st81420 | Prim::Constant() gives you a “typed None” here, so in PyTorch terms, this is a Optional[Tensor] (aka Tensor?) that is None / not present.
One little known difference between Python and TorchScript – at least I didn’t know it until I submitted a PR that needed to be corrected to do the right thing – is that while None has its own type in Python, it is always typed (of some optional type) in TorchScript.
(For those pedantic about internals, there is a NoneType defined in jit_types.h, but it’s not for use “inside” TorchScript and a None passed in to a scripted function is converted to the corresponding typed None.)
Best regards
Thomas |
st81421 | Hi, I'm also curious about prim::Constant() in onnx, how did you solve this problem? |
st81422 | Dear friends,
How can I apply a right-to-left seq2seq model and a left-to-right seq2seq model using PyTorch for the GEC task? |
st81423 | X = (x1, x2, x3, …, xn), Y=(y1, y2, y3,…ym), ^Y=(ym, …, y3, y2, y1)
seq2seq is left-to-right when you input (X, Y);
seq2seq is right-to-left when you input (X, ^Y);
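A small sketch of building the reversed target (the tensor here is just an assumed example; with padded batches you would flip only the valid tokens of each sequence):
import torch

Y = torch.tensor([[5, 9, 2, 7]])     # (batch, seq_len) target token ids
Y_rev = torch.flip(Y, dims=[1])      # reversed target for the right-to-left model
print(Y_rev)                         # tensor([[7, 2, 9, 5]]) |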
st81424 | Hey everyone, I have Keras LSTM code that I want to port to an LSTM in PyTorch for music generation. This is my code for the Keras LSTM -
def built_model(batch_size, seq_length, unique_chars):
    model = Sequential()
    model.add(Embedding(input_dim = unique_chars, output_dim = 512, batch_input_shape = (batch_size, seq_length), name = "embd_1"))
    model.add(LSTM(512, return_sequences = True, stateful = True, name = "lstm_first"))
    model.add(Dropout(0.4, name = "drp_1"))
    model.add(LSTM(512, return_sequences = True, stateful = True))
    model.add(Dropout(0.4))
    model.add(LSTM(512, return_sequences = True, stateful = True))
    model.add(Dropout(0.2))
    model.add(TimeDistributed(Dense(unique_chars)))
    model.add(Activation("softmax"))
    return model |
And this is the one that I am trying to adapt in PyTorch from my Sentiment Analysis model -
import torch.nn as nn

class SentimentRNN(nn.Module):
    """
    The RNN model that will be used to perform Sentiment analysis.
    """
    def __init__(self, batch_size, seq_length, unique_chars, embedding_dim, output_size, hidden_dim, n_layers, drop_prob=0.5):
        """
        Initialize the model by setting up the layers.
        """
        super(SentimentRNN, self).__init__()
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        # embedding and LSTM layers
        self.embedding = nn.Embedding(unique_chars, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=drop_prob, batch_first=True)
        # dropout layer
        self.dropout = nn.Dropout(0.3)
        # linear and sigmoid layer
        self.fc = nn.Linear(hidden_dim, unique_chars)
        self.sig = nn.Softmax()

    def forward(self, x, hidden):
        """
        Perform a forward pass of our model on some input and hidden state.
        """
        batch_size = x.size(0)
        # embeddings and lstm_out
        embeds = self.embedding(x)
        lstm_out, hidden = self.lstm(embeds, hidden)
        # stack up lstm outputs
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        # dropout and fully connected layer
        out = self.dropout(lstm_out)
        out = self.fc(out)
        # sigmoid function
        sig_out = self.sig(out)
        # reshape to be batch_size first
        sig_out = sig_out.view(batch_size, -1)
        sig_out = sig_out[:, -1] # get last batch of labels
        # return last sigmoid output and hidden state
        return sig_out, hidden

    def init_hidden(self, batch_size):
        ''' Initializes hidden state '''
        # Create two new tensors with sizes n_layers x batch_size x hidden_dim,
        # initialized to zero, for hidden state and cell state of LSTM
        weight = next(self.parameters()).data
        if(train_on_gpu):
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
        else:
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
        return hidden |
I am trying to do this on my own, but I am clueless where to start. Can anyone please help? Thanks. |
st81425 | Start by trying the tutorials on pytorch.org and read the docs for further explanations |
st81426 | I would try to narrow down the problem a bit and e.g. start with a single layer.
Once you get the same outputs (up to floating point precision) for the embedding layer, I would try to match the output of the next LSTM layer and so on. |
st81427 | Is this possible?
I'm asking this because, for example, today I had a couple of models and for each of them I'd like to use a different batch_size. I initially created a dataloader with, let's say, a batch_size of 32, and now I want to increase its size to, let's say, 128, but I don't want to create a new dataloader.
Is this possible?
I would appreciate any kind of help in this regard. |
st81428 | This shouldn’t be allowed in the current version anymore and you’ll get a ValueError:
ValueError: batch_size attribute should not be set after DataLoader is initialized
Creating a new DataLoader should be cheap, so I would recommend initializing a new DataLoader.
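A minimal sketch of reusing the same Dataset object with a fresh DataLoader (the dataset name is assumed):
from torch.utils.data import DataLoader

loader_small = DataLoader(dataset, batch_size=32, shuffle=True)
# later: the Dataset is reused, only the cheap DataLoader wrapper is recreated
loader_large = DataLoader(dataset, batch_size=128, shuffle=True) |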
st81429 | I’m training a simple self-attention model and I’m obtaining some good results on the validation set (in terms of accuracy, MCC, recall and precision). I’ve done this doing a train/test split several times. The only problem is that the validation loss is extremely large compared to the training loss. I’m attaching an example but they more or less all look the same:
Screenshot from 2019-09-14 18-00-50.png1184×430 70.6 KB
If I train it for like 500 epochs, the validation loss keep decreasing nicely (while the training loss oscillates more) but it’s always much larger. Has anyone seen something similar before?
I’m also attaching the training and validation loop:
training_loss = []
val_loss = []
for epoch in range(1, num_epochs+1):
    #print(f'EPOCH: {epoch}...')
    model.train()
    avg_loss = 0.0
    for idx, batch in enumerate(train_loader):
        smiles, labels = batch[0].to(device), batch[1].to(device)
        # Fit
        optimizer.zero_grad()
        out = model(smiles)
        loss = criterion(out, labels)
        avg_loss =+ loss.item() * smiles.size(0)
        loss.backward()
        optimizer.step()
    training_loss.append(avg_loss / len(train_loader))
    # Validation
    model.eval()
    with torch.no_grad():
        avg_loss = 0.0
        y_pred = []
        y_val = []
        for idx, batch in enumerate(val_loader):
            smiles, labels = batch[0].to(device), batch[1].to(device)
            out = model(smiles)
            y_val.extend(list(labels.detach().cpu().numpy()))
            y_pred.extend(list(torch.argmax(out, dim=1).detach().cpu().numpy()))
            loss = criterion(out, labels)
            avg_loss =+ loss.item() * smiles.size(0)
        val_loss.append(avg_loss / len(val_loader)) |
st81430 | Solved by ptrblck in post #3 |
st81431 | I believe I found the error: I did =+ rather than += when computing the average loss. Now I get:
Screenshot from 2019-09-14 18-37-54.png1175×426 38.8 KB
Nonetheless, they are now both very large! The scores are always the same, obviously. So now the question is: does it matter if the loss is high (I’m using cross-entropy)? |
st81432 | It seems you are multiplying by the batch size, which would thus accumulate the loss of all samples in avg_loss.
If that's the case, you should divide by the number of samples to get the average loss, not by the length of the DataLoader, which will return the number of batches.
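A small sketch of the corrected accumulation, using the variable names from the loop above:
avg_loss = 0.0
for idx, batch in enumerate(train_loader):
    smiles, labels = batch[0].to(device), batch[1].to(device)
    optimizer.zero_grad()
    out = model(smiles)
    loss = criterion(out, labels)
    avg_loss += loss.item() * smiles.size(0)  # += (not =+): sum of per-sample losses
    loss.backward()
    optimizer.step()
training_loss.append(avg_loss / len(train_loader.dataset))  # divide by the number of samples |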
st81433 | Hi,
Sorry if a similar topic already exists. I am trying to sample the data in each batch during training so that each class represented in the batch has at least n samples when possible. So say I have 1000 classes: if the batch size is 16 and I want n to be 4, I want the batch to only contain 4 different classes in each batch |
st81434 | Could having multiple optimizers, each assigned to a different module in the main model, lead to a different learning path than having one optimizer for the full model? For example with Adam |
st81435 | It's an interesting problem, and it needs some experiments.
I think the learning path may be different, because the optimizers compute different formulas when updating the parameters |
st81436 | @falmasri @DoubtWang
If you use the Adam optimizer with identical hyperparameters and call the optimization step at the same time, you will not see any numerical difference.
Adam optimizations step is done per parameter (moving averages are per parameter/independent).
For each parameter we have one state. |
st81437 | @spanev sorry, I misread it. My understanding is that multiple optimizers in the text refers to different optimizers, e.g., SGD and Adam. |
st81438 | Sure, the question is a little bit ambiguous but this:
falmasri:
for example Adam
led me to think that @falmasri was referring to instances of the same optimizer.
Having different mathematical optimizers remains an interesting problem! And it should be addressed with empirical observations, as you said. |
st81439 | This is a good point about the Adam parameters that I didn't pay attention to.
I initialized 3 Adam optimizers and assigned each of them to a different module, and training was sequential: the first module was run first and then updated using its assigned optimizer, then the second and the third. It achieved a slight improvement on the overfitting model. |
st81440 | I want to create a custom convolution operation by overriding, if possible, the torch.nn.functional.conv2d(). For example, I want to “add” the network weights instead of “multiplying” them, just for example.
I don’t want to write the forward propagation from scratch as that won’t allow me to use the backward(). In that case I’d have to calculate the gradients and all those stuffs by myself. |
st81441 | you can write a new nn.Module like this
class CustomConv(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(3, 1))
        # initialize module
    def forward(self, input):
        # custom convolution here
and use it like this
conv = CustomConv()
features = conv(input)
and it will work with .backward()
but it will work like torch.nn.Conv2d rather than torch.nn.functional.conv2d()
and you need to use nn.Parameter() for weights |
st81442 | First of all thanks for your response.
If I do it the way you said, where should I save all the filters and the activation maps so that it could be used in backward()? |
st81443 | you save the filters like this
class CustomConv(nn.Module):
    def __init__(self):
        super().__init__()
        self.filters = nn.Parameter(...)  # filter tensor
so autograd registers the filter tensor as a parameter that needs to be optimized.
And you should return the activation map in the forward method like this
def forward(self, input):
    # custom convolutions
    return activation_map |
st81444 | This article https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255 suggests doing gradient accumulation in the following way
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i+1) % accumulation_steps == 0:             # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors |
I have two doubts.
What is the need of this step , loss=loss/accumulation_steps?
Suppose, there are 960 training instances and the memory can accommodate a maximum of 64. So, number of batches =15. So, if i choose accumulation_steps=2, the parameters are not updated for the last batch. Doesn’t it affect the performance of the model? |
st81445 | loss gradients are added (accumulated) by loss.backward(), and loss / accumulation_steps divides the loss in advance to average the accumulated loss gradients.
First, because batches that aren't accumulated are wasted, you should make sure the number of batches is divisible by accumulation_steps. Second, the last batch actually gets accumulated since the first batch gets accumulated. And I think (i + 1) should be i because of this. |
st81446 | I’m working on a task where the input is a sequence of images (5 in my case) and the output should be a set of sentences, one per image. I’m using an encoder-decoder architecture where I would like to use the sentences I already generated as input for generating the next one but this requires me to loop for each instance (for loop) inside the forward pass.
As already discussed here, each call inside the loop duplicates the computation graph therefore increasing the memory utilization.
Is there any way to get around the need to use the loop? Or alternatively, is there an explicit way to avoid duplicating the graph?
Thanks |
st81447 | How can I implement this snippet from keras that is used to generate image-morphs from a latent vector z by a VAE? (the main article is here):
# display a 2D manifold of the digits
n = 15 # figure with 15x15 digits
digit_size = 28
# linearly spaced coordinates on the unit square were transformed
# through the inverse CDF (ppf) of the Gaussian to produce values
# of the latent variables z, since the prior of the latent space
# is Gaussian
z1 = norm.ppf(np.linspace(0.01, 0.99, n))
z2 = norm.ppf(np.linspace(0.01, 0.99, n))
z_grid = np.dstack(np.meshgrid(z1, z2))
x_pred_grid = decoder.predict(z_grid.reshape(n*n, latent_dim)) \
.reshape(n, n, digit_size, digit_size)
plt.figure(figsize=(10, 10))
plt.imshow(np.block(list(map(list, x_pred_grid))), cmap='gray')
plt.show()
I came up with the following snippet, but the outcome is different!
n = 10 # figure with 10x10 digits
digit_size = 28
# linearly spaced coordinates on the unit square were transformed
# through the inverse CDF (ppf) of the Gaussian to produce values
# of the latent variables z, since the prior of the latent space
# is Gaussian
z1 = torch.linspace(0.01, 0.99, n)
z2 = torch.linspace(0.01, 0.99, n)
z_grid = np.dstack(np.meshgrid(z1, z2))
z_grid = torch.from_numpy(z_grid).to(device)
z_grid = z_grid.reshape(-1, embeddingsize)
x_pred_grid = model.decoder(z_grid)
x_pred_grid= x_pred_grid.cpu().detach().numpy().reshape(-1, 1, 28, 28).transpose(0,2,3,1)
plt.figure(figsize=(10, 10))
plt.imshow(np.block(list(map(list, x_pred_grid))), cmap='gray')
plt.show()
The problem that I have is that, first I dont know what the counter part for norm.ppf in Pytorch is, so I just ignored it for now. second, the way the line :
x_pred_grid = decoder.predict(z_grid.reshape(n*n, latent_dim)) \
.reshape(n, n, digit_size, digit_size)
reshapes the input is impossible for me! he is feeding the (nxn,latent_dim), which for n=10, and latent_dim =10, is (100,10) .
However, when I reshape like(nxn, latent_dim) I get the error :
RuntimeError : shape ‘[100, 10]’ is invalid for input of size 200
So I had to reshape like (-1, embeddingsize) and this I guess is why my output is different .
for the record the keras output is like this :
and mine is like this :
vs.png95×771 16.1 KB
So how can I closely replicate this keras code in Pytorch? where am I going off road?
Thank you all in advance |
st81448 | Solved by Shisho_Sama in post #2 |
st81449 | OK, Thank God! I finally got the hang of it! here is what I ended up doing !
# display a 2D manifold of the digits
embeddingsize = model.embedding_size
# figure with 20x20 digits
n = 20
digit_size = 28
z1 = torch.linspace(-2, 2, n)
z2 = torch.linspace(-2, 2, n)
z_grid = np.dstack(np.meshgrid(z1, z2))
z_grid = torch.from_numpy(z_grid).to(device)
z_grid = z_grid.reshape(-1, embeddingsize)
x_pred_grid = model.decoder(z_grid)
x_pred_grid= x_pred_grid.cpu().detach().view(-1, 1, 28,28)
img = make_grid(x_pred_grid,nrow=n).numpy().transpose(1,2,0)
plt.figure(figsize=(10, 10))
plt.imshow(img)
plt.show()
and the output is :
visualization_vae_pytorch.jpg789×771 149 KB |
st81450 | Hello! I have a NN with a single linear layer, no activation function, just a matrix multiplication and bias (I need to do some tests and I came across this issue). So the input is 4D and the output is 2D and the relation between them is like this: [x,y,x+vx,y+vy] -> [x+2vx,y+2vy] so for example [10,20,40,100] -> [70,180]. My training data has no noise i.e. ideally, for a given input you should get the exactly right output and it can be shown that the matrix that the NN should learn, in the perfect case is this:
[[-1, 0, 2, 0],
[ 0, -1, 0, 2.]]
with a bias of zero for both output nodes. My training data has 512 examples and after the training, this is the matrix learnt by the NN:
[[-9.9997e-01, 2.6156e-05, 2.0000e+00, -2.6044e-05],
[ 2.6031e-05, -9.9996e-01, -2.5983e-05, 2.0000e+00]]
and the bias is:
[0.0003, 0.0003].
As you can see the result is very close to the right answer, but I am not sure why it doesn't go lower than that. Given that I have a single linear layer, there should be just one minimum, so the algorithm shouldn't get stuck, and given that I have no noise, the loss should simply go to zero, but it seems to get stuck somewhere around 1e-5. I am using the Adam optimizer. I do the training by starting with a LR of 1e-3 and I train until there is no significant improvement over several epochs, then I go to 1e-4 and so on until around 1e-8. No other tricks besides this, just the linear layer. Can someone tell me if this is normal? I am not sure what prevents the NN from reaching zero loss. Thank you! |
st81451 | Since the weight and bias are almost perfectly fit, the loss value should be quite small at the end of the training. If you lower the learning rate further (to 1e-8), the parameter updates will be very small and might even get lost due to floating point precision.
You could try to increase the learning rate a bit or skip some reductions and see, if that would lower the loss. However, since your model is almost perfectly fit, I’m not sure if it’s worth a lot of effort to get the “perfect” numbers. |
st81452 | Neural net (NN) is a function, right.
What it does may be converting an image into another image (segmentation tasks).
NN: I => I
Can you provide some hints or papers how based on NN we can create inverse neural net INN? |
st81453 | Hi Dejan!
dejanbatanjac:
Neural net (NN) is a function, right.
Yes, but not necessarily an invertible function.
Can you provide some hints or papers how based on NN we can create inverse neural net INN?
In general, you cannot invert a neural network. And this is true
not just for unusual edge cases – most typical neural networks
won’t be invertible.
Consider (among others) these two points:
Rectified linear units (f (x) = max (0, x)) are not invertible.
Any linear layer with fewer outputs than inputs is not invertible.
Best.
K. Frank |
st81454 | Hi @dejanbatanjac,
You can take a look at this paper Deep Invertible Networks and its (PyTorch) official implementation.
It proposes two fully invertible architectures (section 3.1), the first one being only injective and the second one bijective. The two are actually made of two sub-models: a mapping Φ and its pseudo-inverse Φ-1, the latter being built with inverse and pseudo-inverse blocks/layers from Φ.
The authors focus on classification, so I'm not sure how (well) you could reuse this work for a segmentation task |
st81455 | I have 2 Tensors named x and list and their definitions are below:
x = torch.tensor(3)
list = torch.tensor([1,2,3,4,5])
Now I want to get the index of element x from list . The expected output is an Integer:
2
How can I do this in an easy way? |
st81456 | I want to expand a 4-GPU training scheme to 8 GPUs. I'm wondering if I can use the adjustment rules from the paper "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour". The scheme proposed in the paper is for distributed synchronous SGD. Even though the optim.SGD in PyTorch is not a distributed version, I'm assuming it is a synchronous one when it comes to multi-GPU training. I'm not sure if that's the right assumption. |
st81457 | Is lu_unpack (src) as efficient as possible? For instance let's have a look at this part:
P = torch.eye(sz, device=LU_data.device, dtype=LU_data.dtype)
final_order = list(range(sz))
for k, j, in enumerate(LU_pivots_zero_idx):
    final_order[k], final_order[j] = final_order[j], final_order[k]
P = P.index_select(1, torch.as_tensor(final_order, device=LU_pivots.device))
Why isn’t final_order allocated on LU_pivots.device from the start? Doesn’t final_order[j] forces a sync if LU_pivots is on a GPU? |
st81458 | I can’t edit the text of my question anymore. Maybe I sounded too critical. I was just wondering whether there was a particular reason the author chose to use a Python list for final_order instead of a tensor allocated on the same device as LU_pivots.
It’s my understanding that most(?) of PyTorch commands are asynchronous, that is, the command returns right away and doesn’t wait for the operation to complete. If we use the computed value on the Python-side, though, the Python code stops and waits for the value to be available before proceeding to the next statement.
So I think that if final_order were a PyTorch tensor then the whole piece of code above would be executed asynchronously without any slowdown. Also, the last line wouldn’t need to transfer data between devices. |
st81459 | Q1:
Let’s say D is a matrix with k very long columns and I want to compute D + d.ger(x). Also, I don’t need to backpropagate through it. I think the best approach is a simple for loop:
for i in range(k):
    D[:,i] = x[i]*d
Does x[i]*d create a temporary tensor?
Maybe I should use
for i in range(k):
    D[:,i].add_(d, alpha=x[i])
Is this the most memory-efficient version?
Q2:
What happens when I use python scalars as in some_tensor * 2.4? Is 2.4 moved from the cpu to the gpu as usual? Should I pre-initialize known constants (T.scalar_tensor(2.4)) as much as possible to avoid slowdowns? Even small integers such as 2 and 3? Or maybe the transfer is asynchronous and thus irrelevant as long as the GPU has still some work to complete?
Q3:
Is there a way to make absolutely sure a piece of code is not creating temporary buffers or doing cpu->gpu transfers? Maybe some context managers which cause the code to throw? |
st81460 | Q1: The function torch.addr_ is what I was looking for because it adds the outer product of two vectors completely in-place without allocating any extra memory.
As a side note, since the matrices are in row-major order in PyTorch, accessing the columns the way I do in my question above is slower than accessing the rows.
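A small sketch of the in-place outer-product update (shapes assumed):
import torch

D = torch.zeros(1000, 4)   # k = 4 long columns
d = torch.randn(1000)
x = torch.randn(4)

D.addr_(d, x)              # in-place: D += outer(d, x), no temporary matrix allocated |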
st81461 | Hi all,
I’m trying to optimize my network and have been using torch.utils.bottleneck. I’m now at the point in developing this that the profiler says my largest bottleneck is to.
I haven’t been able to find any documentation for this. What does this mean, exactly? And is this a good or bad sign in terms of optimization/GPU utilization? |
st81462 | Hi @noahtren!
The to seems to me to point towards moving tensors from CPU to GPU, i.e. a_cpu_tensor.to('cuda'). Are there a lot of device-related tensor instructions back and forth in your code? |
st81463 | Additionally to @karmus89 answer:
your code might run asynchronous CUDA operations and the .to operation might create a synchronization point, so that the actual kernel times will be accumulated in the .to call. |
st81464 | @karmus89 my code doesn’t have any device-specific instructions. All relevant tensors are stored on a single CUDA device.
If there is a slow-down in my code, it’s likely due to either using a lot of indexing, or calling clone(). Any chance cloning could show up as to in the profiler?
Thanks.
Note: I’m not explicitly calling to anywhere in my code |
st81465 | Why does triangular_solve return a copy of the matrix A? It doesn’t seem the matrix is changed in any way, so why return a copy of it?
META: is this the right forum to ask non-ML questions about PyTorch? |
st81466 | Hi,
I am trying to export Densenet121 to ONNX format, but I am stuck with
the process, getting the exception:
ONNX export failed: Couldn't export Python operator CheckpointFunction
Is there any way to get around the exception?
My code for export is:
def export_model(model):
    sample_batch_size = 1
    channel = 3
    height = 224
    width = 224
    dummy_input = torch.randn(sample_batch_size, channel, height, width)
    torch.onnx.export(model, dummy_input, "onnx_model_name.onnx", input_names=['input'], output_names=['output'])

model = torch.hub.load('pytorch/vision', 'densenet121', pretrained=True, memory_efficient=True)
export_model(model) |
st81467 | The memory_efficient=True option internally uses torch.utils.checkpoint to trade compute for memory.
This operator is not defined in ONNX, which raises this error.
However, if you don't necessarily need checkpointing, you could just set this argument to False and the export should work.
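A minimal sketch of the export without checkpointing (reusing the shapes from the question above):
import torch

model = torch.hub.load('pytorch/vision', 'densenet121', pretrained=True, memory_efficient=False)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "densenet121.onnx",
                  input_names=['input'], output_names=['output']) |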