st116168
|
I’m curious about best practices regarding PyTorch performance. When considering PyTorch on the CPU, I think I have a rough understanding of how one should write code: basically, treat it like NumPy. Operations that can use PyTorch functions are going to be faster than e.g. native Python loops. As I understand it, this uses optimised C code to do the heavy lifting.
My confusion arises when the GPU comes into play:
What is the effect of using native python functions on Variables that live on the GPU? Is it ‘doubly bad’, as the data not only is using slow python functions, but has to be shuffled between the GPU and CPU? If not, how does it work?
How can I work out when I am being inefficient with the GPU? I can use e.g. line_profiler to work out which functions are taking a long time. How do I work out why it is slow (e.g. the underlying algorithm, or data transfer)?
How do I take advantage of pinned memory? When do I need to?
If I have intermediate Variables for some network, is there a penalty to e.g. creating a new Variable each time? How can I structure my class, inheriting from nn.Module, such that I can have these Variables moved to the GPU by calling .cuda()? Or do I just have to pass in an extra flag to __init__, and manually check everything?
Sorry for all the questions!
|
st116169
|
When I run this code, something is wrong…
vgg = VGGNet()
if use_cuda:
    # vgg.cuda()
    vgg = torch.nn.DataParallel(vgg, [0, 1, 2])

for step in range(config.total_step):
    # Extract multiple (5) conv feature vectors
    target_features = vgg(target)  # error
target is on GPU 0.
How can I solve this? Thanks~
|
st116170
|
I’ve analysed the run times in my training script, and it turns out (if I’ve done everything right) that calculating the criterion (torch.nn.MSELoss()) takes most of the time (a factor of 10 compared to forward()).
Is there anything I can do to improve the runtime?
|
st116171
|
OK found the issue.
Apparently the GPU still works asynchronously in the background, and measuring your runtime only gives correct results if you call torch.cuda.synchronize():
Strange speed of tensor.sum()
Hello, all.
I’ve run into a problem where the speed of the function tensor.sum() is strange when computing on the same kind of Tensor.
I want to calculate the confusion matrix of a foreground/background segmentation model, and below is my testing code:
import time
import torch
def func(pr, gt):
    dump = 0.0
    for gt_i in range(2):
        for pr_i in range(2):
            num = (gt == gt_i) * (pr == pr_i)
            start = time.time()
            dump += num.sum()
            print("Finding Time: {} {}…
|
st116172
|
Hello everyone,
I created a C extension based on the FFI tutorial in “Custom C extensions for pytorch” (http://pytorch.org/tutorials/advanced/c_extension.html). I would like to be able to call the backward function but it returns the following error:
`RuntimeError: there are no graph nodes that require computing gradients`
Actually, even trying to call backward() in the provided example code returns an error on my machine. I tried adding the following commands at the end of main.py:
model = MyNetwork()
out = model(input1, input2)
out.backward()
Am I doing something wrong? Any idea how to make this work?
Any help would be appreciated.
Thanks,
|
st116173
|
Hi,
This error occurs because no element with requires_grad=True has been used in the computation.
This may be because your model contains no learnable parameters and your inputs do not require grad.
Or it can be because out is independent of all the elements that have requires_grad=True.
|
st116174
|
I have two kinds of Tensor,
a = torch.LongTensor([1, 2, 3])
b = [torch.LongTensor([1, 1]), torch.LongTensor([2, 2]), torch.LongTensor([3,3])]
Then I want to do an operation like nn.MixtureTable in Torch:
c = something(a, b)
print(c)
result will be like this,
LongTensor - size: 1x2
c = [14, 14]
Is there any operation that does this, or do I have to implement it myself?
|
st116175
|
Try with torch.stack(b,dim=1).mul(a).sum(dim=1). It should work both with Tensors as well as Variables.
|
st116176
|
@lantiga
Thank you for your reply.
There is an issue that two tensors of different sizes cannot be multiplied, but I see your idea.
The following is an example for a rank-2 tensor:
a = torch.LongTensor([[1,2,3],[4,5,6]])
b = [torch.LongTensor([[1,1],[1,1]]), torch.LongTensor([[2,2],[4,4]]), torch.LongTensor([[3,3],[9,9]])]
b = torch.stack(b, dim=2)
a = a.view(a.size(0),1,a.size(1)).expand_as(b)
c = b.mul(a).sum(dim=2).view(b.size(0), b.size(1))
If anyone has a better idea, please let me know.
Thanks.
|
st116177
|
Equivalent, but a bit more concise:
a = torch.LongTensor([[1,2,3],[4,5,6]])
b = [torch.LongTensor([[1,1],[1,1]]),
torch.LongTensor([[2,2],[4,4]]),
torch.LongTensor([[3,3],[9,9]])]
b = torch.stack(b, dim=2)
c = b.mul(a.unsqueeze(1)).sum(dim=2).squeeze()
|
st116178
|
Oh right! ‘squeeze’ and ‘unsqueeze’ are more suitable than ‘view’.
But there is still the problem that two tensors of different sizes cannot be multiplied element-wise (this is a size mismatch, not a rank mismatch).
(i.e. a (2x1x3 tensor) * b (2x2x3 tensor), where ‘*’ denotes element-wise multiplication)
It can be solved by using ‘expand’.
So I suggest the following for people following this discussion:
a = torch.LongTensor([[1,2,3],[4,5,6]])
b = [torch.LongTensor([[1,1],[1,1]]),
torch.LongTensor([[2,2],[4,4]]),
torch.LongTensor([[3,3],[9,9]])]
b = torch.stack(b, dim=2)
c = b.mul(a.unsqueeze(1).expand_as(b)).sum(dim=2).squeeze()
I really appreciate your helpful comments.
@lantiga
|
st116179
|
nn.LSTM(100, 100, num_layers=1, bidirectional=True): this model’s output size is 200, the forward 100 plus the reverse 100.
I don’t know which 100 is the forward result and which is the reverse result.
|
st116180
|
Does anyone know: of the 200 hidden units, is it the first 100 or the second 100 that is the reverse LSTM output?
|
st116181
|
In other words:
Do you plan to make torch.Tensor have functions and operations as rich as NumPy’s?
Do you plan to make torch.Tensor as efficient as NumPy on the CPU, and even surpass it with GPU acceleration?
If so, are there any benchmarks that show your progress on these?
|
st116182
|
I have 8 Variables of size 128x64x16x16 in a list lista, and a Variable of size 128x8 named mask.
I want to use the mask to combine lista into a Variable named output.
mask = torch.t(mask)
for i in range(8):
    for j in range(128):
        lista[i][j] = lista[i][j] * mask[i][j]
but there is a error:
TypeError: mul received an invalid combination of arguments - got (torch.FloatTensor), but expected one of:
(float value)
didn’t match because some of the arguments have invalid types: (torch.FloatTensor)
(torch.cuda.FloatTensor other)
didn’t match because some of the arguments have invalid types: (torch.FloatTensor)
How can I solve it?
Thank you.
|
st116183
|
Hi, the problem is that one of your tensors is a CUDA tensor while the other one is a CPU tensor. You should call .cuda() on the CPU one.
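For example, a hypothetical one-liner (assuming mask is the CPU tensor in the code above):
mask = mask.cuda()   # move the CPU tensor to the GPU so both operands live on the same device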
|
st116184
|
I’d like to convert some Torch code to PyTorch code. But I found that nn.gModule can return a Module with multiple inputs. I want to know how to implement the same thing in PyTorch.
|
st116185
|
In the documentation of PyTorch’s batchnorm there is a parameter called momentum for expectation/variance updates. But in the original batch normalization paper by Ioffe, there is no momentum. Which batch normalization algorithm does PyTorch implement? Can you give a reference to a paper?
|
st116186
|
Looking more thoroughly, the trick is described in Ioffe’s batchnorm paper (page 4).
… their sample variances. Using moving averages instead,
we can track the accuracy of a model as it trains.
Since the means and …
|
st116187
|
In my forward function, I want to implement something like this:
def forward(self, x):
    for i in range(...):
        do something with x.narrow(0, l(i), r(i) - l(i))  # I can ensure that l(i) <= r(i+1)
Each chunk of x could be processed in parallel, but if I implement it with a for-loop in the forward function, it can only be processed sequentially. So what’s the best practice?
|
st116188
|
Run:
import torch
n = 20
y = torch.arange(0,n).view(-1,1)
y = torch.cat([y,y],1)
print list(y.contiguous().view(-1).sort()[1] == torch.arange(0,2*n).long())
The key problem is that there seems to be no pattern to what decides the order of equal elements. I guess this is because of parallel sorting?
|
st116189
|
Hey.
So I made a neural net that is supposed to play a board game, and for every game state it outputs the probability of winning at that point.
When I’m training the net, I have it occasionally print its output, target, loss, and accuracy. Here, the output varies.
However, when I just have the net predict a singular outcome or have it play the game, it always produces the same exact output.
I don’t understand why; it’s capable of producing different outputs for different inputs from the training set, and even when I have it produce an output from just one training example (instead of a batch) it again reverts to that same output.
Why is this happening?
If you need me to explain my net more in-depth or post some code, let me know.
Any help is appreciated, thanks
EDIT
Here’s the definition of my network:
class DiceProbabilisticNN(nn.Module):
    boardState = [[0 for i in range(13)] for j in range(11271)]
    dicePairings = [[0 for i in range(5)] for j in range(8301)]
    fourDice = [[0 for i in range(5)] for j in range(3757)]

    def __init__(self, inputSize, outputSize):
        super(DiceProbabilisticNN, self).__init__()
        self.numTurns = self.readFile()
        self.lin1 = nn.Linear(inputSize, 244)
        self.leak1 = nn.LeakyReLU()
        self.relu1 = nn.ReLU()
        self.batch1 = nn.BatchNorm1d(244)
        self.lin2 = nn.Linear(244, outputSize)
        self.relu2 = nn.ReLU()
        self.batch2 = nn.BatchNorm1d(outputSize)

    def forward(self, input):
        output = self.lin1(input)
        output = self.leak1(output)
        output = self.batch1(output)
        output = self.lin2(output)
        output = self.relu2(output)
        output = self.batch2(output)
        return output
The input format is a 1x122 vector of bits, this describes three game states (current position, position at start of turn, opponent’s position) and two pairs of dice.
The actual input is a batch of these vectors over five random turns (we have a data file with something like 600 recorded turns).
Here’s the training and whatnot:
diceNN = DiceProbabilisticNN(122, 1)
diceOptimizer = optim.SGD(diceNN.parameters(), lr=0.001)
diceLossFunc = nn.MSELoss()

def train(input, target):
    diceOptimizer.zero_grad()
    output = diceNN(input)
    loss = diceLossFunc(output, target)
    loss.backward()
    diceOptimizer.step()
    return output, loss

allLosses = []
for i in range(10000):
    diceInput, diceTarget = diceNN.getRandomTrainingExample()
    diceOutput, diceLoss = train(diceInput, diceTarget)
    diceAccuracy = (1 - ((diceOutput - diceTarget)**2)) * 100
    if i % 1000 == 0:
        allLosses.append(diceLoss.data[0])
        print("Progress: %s percent" % int((i/100)))
    if i == 0:
        print("Output: %s, Target: %s, Loss: %s, Accuracy: %s" % (diceOutput.data[0][0], float(diceTarget.data[0]), float(diceLoss.data[0]), diceAccuracy.data[0][0]))
    elif i == 4999:
        print("Output: %s, Target: %s, Loss: %s, Accuracy: %s" % (diceOutput.data[0][0], float(diceTarget.data[0]), float(diceLoss.data[0]), diceAccuracy.data[0][0]))
    elif i == 9999:
        print("Output: %s, Target: %s, Loss: %s, Accuracy: %s" % (diceOutput.data[0][0], float(diceTarget.data[0]), float(diceLoss.data[0]), diceAccuracy.data[0][0]))

torch.save(diceNN.state_dict(), "./DiceNet.pth")
It’s a bit sloppy right now, I know, and I have omitted a bunch of the complicated input-constructing and file-reading methods; I am 100% certain that those work properly.
When I run the training cycle, I get this:
Progress: 0 percent
Output: -0.7317960858345032, Target: 0.2633669972419739, Loss: 0.7930728197097778, Accuracy: 0.965040922164917
Progress: 10 percent
Progress: 20 percent
Progress: 30 percent
Progress: 40 percent
Output: 0.21730580925941467, Target: 0.3980500102043152, Loss: 0.015895208343863487, Accuracy: 96.733154296875
Progress: 50 percent
Progress: 60 percent
Progress: 70 percent
Progress: 80 percent
Progress: 90 percent
Output: 0.7285764217376709, Target: 0.7772169709205627, Loss: 0.01874120719730854, Accuracy: 99.76341247558594
As you can see, the outputs are varied, and even look like what they should be. However, when I run two random examples (batch size 1) through the network, the output is always the same, namely 0.5053 (the format here is input vector, then output probability):
Columns 0 to 12
0 1 1 1 0 1 1 1 1 1 0 0 1
Columns 13 to 25
1 0 1 1 1 1 0 1 1 0 1 1 1
Columns 26 to 38
0 0 1 1 1 1 1 0 1 0 1 1 0
Columns 39 to 51
1 1 1 0 1 1 1 0 1 0 0 1 1
Columns 52 to 64
0 1 0 1 1 0 1 1 0 1 0 1 0
Columns 65 to 77
0 1 1 1 1 1 0 1 0 1 1 0 1
Columns 78 to 90
1 1 0 1 1 1 1 1 0 0 1 1 0
Columns 91 to 103
1 1 1 1 0 1 1 0 1 1 1 0 0
Columns 104 to 116
1 1 1 1 1 0 1 0 1 1 0 1 1
Columns 117 to 121
0 0 1 0 1
[torch.FloatTensor of size 1x122]
, Variable containing:
0.5053
[torch.FloatTensor of size 1x1]
)
(Variable containing:
Columns 0 to 12
0 1 1 1 0 1 1 1 1 1 0 0 1
Columns 13 to 25
1 0 1 1 1 1 0 1 1 0 1 1 1
Columns 26 to 38
0 0 1 1 1 1 1 0 1 0 1 1 0
Columns 39 to 51
1 1 1 0 1 1 1 1 1 0 0 1 1
Columns 52 to 64
0 1 1 1 1 0 1 1 0 1 1 1 0
Columns 65 to 77
0 1 1 1 1 1 0 1 0 1 1 0 1
Columns 78 to 90
1 1 0 1 1 1 1 1 0 0 1 1 0
Columns 91 to 103
1 1 1 0 1 1 1 0 1 0 1 0 0
Columns 104 to 116
1 1 1 1 1 0 1 0 1 0 1 0 0
Columns 117 to 121
0 1 0 0 1
[torch.FloatTensor of size 1x122]
, Variable containing:
0.5053
[torch.FloatTensor of size 1x1]
)
Is batch size simply the issue? This problem occurs both when I pass single training examples through the net and when I pass single organic examples through the net.
|
st116190
|
Are you perhaps mistakenly passing it the same input every time? Without looking at the code it is difficult to help you more.
|
st116191
|
That would make me look really stupid lmao but no my inputs are different… check the edit for some of my code (please)
|
st116192
|
I may have found your problem. It has to do with batch normalization. When evaluating your model, you have to switch batch norm to evaluation mode by calling diceNN.train(False) (or diceNN.eval()). Otherwise, applying batch normalization to a single sample will return a vector of zeros.
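A minimal sketch of what that looks like at prediction time (single_example is a hypothetical 1x122 input Variable, not a name from the code above):
diceNN.train(False)                  # BatchNorm now uses its running statistics
prediction = diceNN(single_example)  # a batch of size 1 now gives a meaningful output
diceNN.train(True)                   # switch back before resuming training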
|
st116193
|
This is a more general question, but when is it OK to define just one layer and use it multiple times? For example, if I want to use batchnorm in my model, can I just define
self.batchnorm = nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True)
and then just apply self.batchnorm after every layer? Or should I define a separate batchnorm layer for every time I use it, for example
self.batchnorm1 = nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True)
self.batchnorm2 = nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True)
self.batchnorm3 = nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True)
|
st116194
|
I think it would in general depend on whether the module is stateless or not; in the BatchNorm case you’d be sharing weight, bias, running_mean, and running_var across all the places you use it, which is probably not what you want.
|
st116195
|
My PyTorch model works well on the CPU, except that it is time-consuming. I would like to try the GPU version of PyTorch. I added with cuda.device(1): and declared the tensors with cuda. This raised the AssertionError: "Torch not compiled with CUDA enabled."
What should I do to fix this installation error?
|
st116196
|
It seems you installed the version without CUDA support. Reinstall it, but this time be sure to select your CUDA version (7.5 or 8) in the download page.
|
st116197
|
Error after upgrading to v0.2
RuntimeError: Expected argument self to have 1 dimension(s), but has 4 at /opt/conda/conda-bld/pytorch_1502006348621/work/torch/csrc/generic/TensorMethods.cpp:23086
Line that has issue
torch.dot(tensor_1, tensor_2)
tensor_1 size: (512L, 256L, 3L, 3L)
tensor_2 size: (512L, 256L, 3L, 3L)
Used to work prior to v0.2 (I downgraded to 0.1.12 and it worked) @Soumith_Chintala
|
st116198
|
Can you open an issue at https://github.com/pytorch/pytorch/issues/
Looks like we removed implicit flattening, and Gregory Chanan will reply with either the reason, or issue a fix.
For now you can do:
torch.dot(tensor_1.view(-1), tensor_2.view(-1))
|
st116199
|
Indeed, there was a valid reason: https://github.com/pytorch/pytorch/issues/2313
|
st116200
|
While testing a very deep convolutional network, I noticed that there is no padding='SAME' option like TensorFlow has. What I did was to set the padding inside the convolutional layer, like so:
self.conv3 = nn.Conv2d(in_channels=10, out_channels=10, kernel_size=3, stride=1, padding=(1,1))
This works in terms of preserving dimensionality, but what I am worried about is that it applies padding after the convolution, so that the last layers actually perform convolutions over an array of zeros. My network is also not training.
The dataset I am using is CIFAR-10, so without proper padding before the convolution, the height and width of the image go to zero very fast (after 3-4 layers).
How can I get around that?
EDIT: If I print out the first example in a batch, of shape [20, 16, 16], where 20 is the number of channels from the previous convolution, it looks like this:
(1 ,.,.) =
1.00000e-02 *
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
... ⋱ ...
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
(2 ,.,.) =
1.00000e-02 *
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
... ⋱ ...
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
...
(17,.,.) =
1.00000e-02 *
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
... ⋱ ...
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
(18,.,.) =
1.00000e-02 *
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
... ⋱ ...
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000
(19,.,.) =
1.00000e-02 *
6.4769 7.6986 7.5997 ... 7.5947 7.5006 6.4277
6.7377 6.7590 6.2768 ... 6.3319 6.1432 5.2836
6.6169 6.4841 6.1549 ... 6.1608 6.0279 5.2591
... ⋱ ...
6.5688 6.4459 6.0924 ... 6.1113 6.0179 5.2809
6.4056 5.7569 5.3210 ... 5.3467 5.2885 4.7401
5.2931 5.1357 4.9795 ... 4.9808 4.8801 4.4145
[torch.cuda.FloatTensor of size 20x16x16 (GPU 0)]
Basically everything is zero, except for one channel. Any idea why this is?
|
st116201
|
The padding option pads the input with zeros before the convolution, pretty much like the SAME option in TF.
If you want, you can also use F.pad with reflect or replicate mode, if you don’t want to pad the input with zeros.
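For illustration, a small sketch of that alternative (the shapes and the reflect mode are just an example):
import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(1, 10, 16, 16))
# pad the last two dimensions by 1 on each side using reflection instead of zeros,
# then apply a 3x3 convolution that does no padding of its own
x = F.pad(x, (1, 1, 1, 1), mode='reflect')
out = F.conv2d(x, Variable(torch.randn(10, 10, 3, 3)))   # output keeps the 16x16 spatial size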
|
st116202
|
There are plenty of reasons why it could be failing.
Are you using ReLU? If the biases in your convolutions are very negative, you could be in the zero region of the ReLU.
Are you using batch norm? It could help with ReLU. Or maybe you should check the initialization of your weights?
If you think that the padding is the problem, try padding with F.pad before the convolution, and remove the padding from there, but I doubt it is the cause.
|
st116203
|
But I still do not understand why all feature maps are zero except for the last one.
|
st116204
|
>>> # Assuming optimizer uses lr = 0.5 for all groups
>>> # lr = 0.05 if epoch < 30
>>> # lr = 0.005 if 30 <= epoch < 80
>>> # lr = 0.0005 if epoch >= 80
>>> scheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
>>> for epoch in range(100):
>>> scheduler.step()
>>> train(...)
>>> validate(...)
In the first stage, the lr is also scaled by 0.1. This is a little bit weird.
I think it should be like:
>>> # Assuming optimizer uses lr = 0.5 for all groups
>>> # lr = 0.5 if epoch < 30
>>> # lr = 0.05 if 30 <= epoch < 80
>>> # lr = 0.005 if epoch >= 80
When I train CIFAR-10, lr=0.1 is used at the beginning; now I have to change it to 1 so the lr_scheduler can scale it back to 0.1. Personally I think this is counter-intuitive.
|
st116205
|
I want to combine a batch of Variables into one Variable, or one Tensor (type 'list' -> type 'torch.LongTensor'). How can I deal with this problem?
|
st116206
|
You may want to use torch.stack and torch.squeeze in conjunction. The first converts a list of tensors to a single stacked tensor, and the latter removes all dimensions of size 1, thus making your data compact.
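A small illustrative sketch (the shapes are just an example):
import torch

batch = [torch.LongTensor([[1, 2, 3]]) for _ in range(4)]  # a list of four 1x3 tensors
stacked = torch.stack(batch)    # size 4x1x3
compact = stacked.squeeze()     # size 4x3 after removing the size-1 dimension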
|
st116207
|
Hi everyone,
I’m trying to apply newly added weight_norm to nn.GRU.
But it seems weight_norm only accepts a single name (default: ‘weight’) per call.
As you know, nn.GRU has (num_layers x 2) separate weight variables named weight_ih_l[k] and weight_hh_l[k].
I wonder what would be most elegant way to apply the weight_norm to all those weight variables.
Thanks!
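For illustration, one possible approach (a sketch only, assuming a 2-layer GRU; this is not necessarily the solution that was posted in reply) is to call weight_norm once per weight name:
import torch.nn as nn
from torch.nn.utils import weight_norm

gru = nn.GRU(100, 100, num_layers=2)
for layer in range(gru.num_layers):
    for prefix in ('weight_ih_l', 'weight_hh_l'):
        # each call wraps exactly one parameter, so loop over all the names
        gru = weight_norm(gru, name='{}{}'.format(prefix, layer))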
|
st116208
|
Oh, it could work, thanks.
I think it would be nicer if weight_norm could take a list of names as an argument.
|
st116209
|
I have found that if we define an nn.Parameter on the GPU in a Module, it will not appear in list(parameters()), but if it is on the CPU, it will. Why is the GPU version of nn.Parameter not supported? How can we use nn.Parameter on the GPU?
Test Code:
from torch import nn
from torch import Tensor

class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.a = nn.Parameter(Tensor(1)).cuda()

m = M()
list(m.parameters())

class G(nn.Module):
    def __init__(self):
        super(G, self).__init__()
        self.a = nn.Parameter(Tensor(1))

g = G()
list(g.parameters())
|
st116210
|
Try this instead:
self.a = nn.Parameter(Tensor(1).cuda())
Also, I think you have an issue in that the second list should be g.parameters().
|
st116211
|
Thank you. It works for me.
So nn.Parameter's API is different from that of nn.Linear and other layers.
|
st116212
|
How do I deal with feature transforms in PyTorch? It seems they are not supported right now.
|
st116213
|
I’m using the double backward capability for BatchNorm that was recently added (thanks so much!). But I’m getting an error that I’m struggling to track down. I’ve tried to write down a minimal example.
import torch
import torch.nn as nn
from torch.autograd import Variable
class Net(nn.Module):
    def __init__(self):
        super(self.__class__, self).__init__()
        self.classifier = nn.Sequential(
            nn.Linear(4, 4),
            nn.BatchNorm1d(4, momentum=0.1, affine=False),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.classifier(x)

input_var = Variable(torch.ones((4))).cuda()
net = Net()
net.cuda()
out = net.forward(input_var)
loss = sum(out)
loss.backward(create_graph=True)
loss.backward(create_graph=True)
The output of running this code is an error:
RuntimeError: missing required argument 'save_mean'
For some reason, I’m not able to trace the error any further than the torch.autograd.backward() function. And the only place save_mean seems to appear is in the legacy batch norm code.
Can anyone see what I’m doing wrong?
Thanks!
|
st116214
|
Closing this as the bug was resolved in this PR: https://github.com/pytorch/pytorch/pull/2277
|
st116215
|
Hi,
I am working on adding batchnorm to the discriminator in WGAN-GP. However, I encountered a bug where GPU memory keeps increasing when using batchnorm double backprop. This bug only occurs when using batchnorm; if I remove batchnorm from the model, the bug doesn’t occur.
Here’s the code you can experiment with.
import torch
import torch.nn as nn
from torch.autograd import Variable
# Model (partial discriminator)
D = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(64),  # if you remove this, the bug does not occur
    nn.LeakyReLU(0.2))
D.cuda()

for i in range(1000):
    # Input
    x = Variable(torch.randn(10, 3, 128, 128).cuda(), requires_grad=True)
    out = D(x)
    grad = torch.autograd.grad(outputs=out,
                               inputs=x,
                               grad_outputs=torch.ones(out.size()).cuda(),
                               retain_graph=True,
                               create_graph=True,
                               only_inputs=True)[0]
    grad_norm = grad.pow(2).sum().sqrt()
    loss = torch.mean((grad_norm - 1)**2)

    # Reset grad and backprop
    D.zero_grad()
    loss.backward()

    if (i+1) % 10 == 0:
        print (i+1)
How can i solve this problem?
Thanks,
|
st116216
|
Gregory Chanan is investigating this. He’ll follow up more on your github issue tomorrow.
|
st116217
|
@smth Ok, Thanks.
I left a link to the github issue.
github.com/pytorch/pytorch
Issue: "A bug in batchnorm double backward" (opened by yunjey on 2017-08-07, closed by gchanan on 2017-08-07): "I am working on adding batchnorm in the discriminator in WGAN-GP. However, I encountered a bug where gpu memory continues to..."
|
st116218
|
I’ve been looking into this technique: Scalable and Sustainable Deep Learning via Randomized Hashing, which allegedly “uses only 5% of the total multiplications, while keeping on average within 1% of the accuracy of the original model.” It looks pretty straightforward and appears to be suitable for PyTorch, so I’d like some guidance on how we might go about implementing this functionality, unless it would have undesirable consequences that are not immediately apparent to me.
|
st116219
|
I wanted to do SGD but I wasn’t sure I understood when one should be zeroing out gradients. There are two examples in the tutorials: one zeros before the backward+update pass and the other after the backward+update pass. Are these two the same? What is the difference? Code (http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-nn):
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(x)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.data[0])

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable weights
    # of the model)
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters
    optimizer.step()
vs (http://pytorch.org/tutorials/beginner/pytorch_with_examples.html):
for t in range(500):
    # Forward pass: compute predicted y using operations on Variables; these
    # are exactly the same operations we used to compute the forward pass using
    # Tensors, but we do not need to keep references to intermediate values since
    # we are not implementing the backward pass by hand.
    y_pred = x.mm(w1).clamp(min=0).mm(w2)

    # Compute and print loss using operations on Variables.
    # Now loss is a Variable of shape (1,) and loss.data is a Tensor of shape
    # (1,); loss.data[0] is a scalar value holding the loss.
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.data[0])

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Variables with requires_grad=True.
    # After this call w1.grad and w2.grad will be Variables holding the gradient
    # of the loss with respect to w1 and w2 respectively.
    loss.backward()

    # Update weights using gradient descent; w1.data and w2.data are Tensors,
    # w1.grad and w2.grad are Variables and w1.grad.data and w2.grad.data are
    # Tensors.
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data

    # Manually zero the gradients after updating weights
    w1.grad.data.zero_()
    w2.grad.data.zero_()
Maybe the only time it’s “wrong” is to zero out after the backward pass but before the SGD update?
|
st116220
|
Both examples are correct. The first example is more explicit, while in the second example w1.grad is None up to the first call to loss.backward(), during which it is properly initialized. After that, w1.grad.data.zero_() zeroes the gradient for the successive iterations.
You’re right, optimizer.step() needs the gradients to be there, so you don’t want to zero gradients before the step. However, you can still zero the gradients of specific variables that you don’t want the optimizer to update.
|
st116221
|
Hi,
I am trying to use the torch.distributed package on a cluster. But, I can’t find relevant docs or an example to start with.
I am aware of this link, but I think the API has changed considerably from what is mentioned there. For example, the function get_rank is now defined in torch.distributed.collectives. Another thing is that I am unable to locate the pytorch_exec script to launch my program.
Please point me in the right direction.
Thanks and regards,
Shirin
|
st116222
|
As mentioned elsewhere, distributed is not ready for usage yet; we are changing a lot of things. We’ll announce when it’s ready.
|
st116223
|
You can have a look at the imagenet example, which contains options to train in distributed mode.
|
st116224
|
I have a situation where I do:
opt = optim.Adam(model.parameters(), lr = 0.001)
out = model(input)
out.backward()
opt.step()
When I look at the model parameters, they do have non-zero gradients, but opt.step() does not update the parameters. What should I look for in this type of situation?
I should mention that I override the model.parameters() function to return an array of two specific parameters (because my model does some custom stuff that is not in any of the standard layers).
|
st116225
|
What do you mean by
pchalasani:
I should mention that I override the model.parameters() function to return an array of two specific parameters (because my model does some custom stuff that is not in any of the standard layers).
?
The optimizer will only update the parameters that are passed to its constructor, and if model.parameters() does not return all the parameters, then it won’t update the parameters that are not returned.
|
st116226
|
Sorry, I meant to say that when I defined the model class, I overrode the parameters method to return my custom params. I’m not overriding them after doing backprop.
This was a strange situation, and now it appears to be working fine.
|
st116227
|
I want to upgrade to v0.2 from source with python setup.py install --user, but I got an error:
[ 69%] Built target gloo
[ 76%] Building NVCC (Device) object gloo/CMakeFiles/gloo_cuda.dir/gloo_cuda_generated_cuda.cu.o
[ 76%] Building NVCC (Device) object gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o
[ 80%] Building NVCC (Device) object gloo/CMakeFiles/gloo_cuda.dir/gloo_cuda_generated_cuda_private.cu.o
/local_directory/pytorch/torch/lib/gloo/gloo/nccl/nccl.cu(36): error: argument of type "const int *" is incompatible with parameter of type "int *"
1 error detected in the compilation of "/tmp/tmpxft_0000360c_00000000-17_nccl.compute_61.cpp1.ii".
CMake Error at gloo_cuda_generated_nccl.cu.o.cmake:263 (message):
Error generating file
/local_directory/pytorch/torch/lib/build/gloo/gloo/CMakeFiles/gloo_cuda.dir/nccl/./gloo_cuda_generated_nccl.cu.o
gloo/CMakeFiles/gloo_cuda.dir/build.make:77: recipe for target 'gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o' failed
make[2]: *** [gloo/CMakeFiles/gloo_cuda.dir/nccl/gloo_cuda_generated_nccl.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:138: recipe for target 'gloo/CMakeFiles/gloo_cuda.dir/all' failed
make[1]: *** [gloo/CMakeFiles/gloo_cuda.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
Does anyone have any idea about this? Thanks a lot.
|
st116228
|
Say I have 2 Variables v1, v2, this is what I want to do:
v3 = torch.cat( [f1(v1), f2(v2)], 0) # f1 and f2 are functions
However, it first allocates memory for saving f1(v1) and f2(v2), and then allocates memory for v3.
A more desirable way would be to create a block of memory the size of v3, and assign the corresponding components to be f1(v1) and f2(v2), like:
v3 = create_placeholder() # create a placeholder Variable
v3[:10] = f1(v1)
v3[10:20] = f2(v2)
Is there any solution following this idea?
A more complicated example:
I want to create a Variable v3 from two Variables v1 and v2.
Suppose v3 has size (2,3,10), and v1 and v2 have size (10).
I want v3 to be:
v3[0,0] = v1
v3[1,2] = v2
0 elsewhere.
How should I construct v3?
|
st116229
|
Hi,
If input is the output of the cat operation, it should be contiguous already.
So the call to .contiguous() is already a no-op.
|
st116230
|
Ok. Bad example then. (But still, torch.cat will use additional memory.)
I will change the question.
|
st116231
|
ruotianluo:
A desirable way is we can create a memory which has size of v3, and assign the corresponding component to be f1(v1) and f3(v2), like
I don’t think that’s supported, because Variables only track the history for the entire tensor/variable, and your example requires two divergent histories.
|
st116232
|
I noticed that one can use the functional library:
import torch.nn.functional as F
...
x = F.relu(self.conv1(x))
or use the torch.nn as in:
torch.nn.ReLU
or even the clamp operator:
x.clamp(min=0)
Is there a reason why there are three ways to do the same thing?
|
st116233
|
F.relu is the functional interface to torch.nn.ReLU. Modules like torch.nn.ReLU are sometimes handy, for example when quickly creating a model using nn.Sequential.
You can’t add F.relu to an nn.Sequential, as it expects an object that inherits from nn.Module.
About clamp: it is a tensor/variable function which is more generic than ReLU, and it works for both Tensors and Variables, while F.relu only works on Variables. Also, ReLU is a common enough module to deserve its own function, instead of having to write x.clamp(min=0).
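To see the three forms side by side, here is a small sketch (the shapes are arbitrary):
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

# module form: convenient inside nn.Sequential
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))

# functional form: convenient inside a custom forward()
lin = nn.Linear(10, 20)
x = Variable(torch.randn(4, 10))
h = F.relu(lin(x))

# tensor method: works on plain tensors as well as Variables
z = torch.randn(4, 10).clamp(min=0)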
|
st116234
|
Maybe what I’m confused about is why the functional interface even exists. Do you know?
|
st116235
|
Yes. The functional interface is very handy when you want to perform some more complex operations.
For example, let’s say that you want the weights of your convolution to be the output of some other network (for example in hypernetworks).
In this case, you can’t use the Module interface, as it creates the weights during initialization, but you can easily use the functional interface for that:
weights = net1(input)
res = F.conv2d(input, weights)
|
st116236
|
How can I deploy a pytorch model in a c++ project, if I don’t mind running python from c++ in production?
I don’t want to convert the pytorch trained model to others because I have a lot of custom operations that are written in c++. Porting those custom operations to other frameworks is not easy.
|
st116237
|
I think you’ll need to have the Python interpreter running from C++. This link could be of help: https://docs.python.org/3/extending/embedding.html
|
st116238
|
Shameless plug for my project: https://github.com/bzcheeseman/pytorch-inference
Embedding the interpreter would also work, but I had trouble running network inferences that way in the past.
|
st116239
|
Hi everyone,
I’ve a very trivial question
Currently I am working with the older version (v0.1.12_2). How do I update to the new version?
Do I just execute the commands mentioned on pytorch.org, or do I need to first remove the current version that I have?
Thanks!
|
st116240
|
It might be better to remove the old version first:
pip uninstall torch
pip uninstall torch
conda uninstall pytorch
conda uninstall libtorch
|
st116241
|
I am now reading the PyTorch tutorial on name/country classification (see Classifying Names with a Character-Level RNN). After the training, in the section on evaluating the results, the evaluate() function takes a name and predicts which language it belongs to, but it passes a hidden state of all zeros as input. My question is: we already have a trained internal state from training, so should we always reuse it, or discard it every time? What is the recommended treatment of the internal state across different evaluations?
# Just return an output given a line
def evaluate(line_tensor):
    hidden = rnn.initHidden()
    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)
    return output
Thanks
|
st116242
|
Hi @koodailar,
(…) we already have a trained internal state in the training (…)
Why do you say so? Have you trained the initial state of the RNN as a model parameter?
In most applications, the internal state is reset (to zero) after each sequence. Having an internal state is useful because it may be used to store information from the “past” that might be useful in the “future”, i.e. information from the previous inputs that conditions the probability distribution of the next outputs. In your example, each input sequence (surname) is completely independent from the following inputs, so it makes sense to reset the state to zero in order to flush the memory.
|
st116243
|
Quick question: if we only wanted to penalize the weights of, say, the first layer and not the others, is there an example to refer to?
|
st116244
|
Even more specifically, it would be great to be able to penalize the gradient with respect to the weights of a certain layer.
|
st116245
|
You can take the weights of the first layer with model.layer1.parameters(), and apply your penalty only to those weights.
To penalize the gradient of a layer, have a look at the higher-order gradients section in https://github.com/pytorch/pytorch/releases/tag/v0.2.0
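For the first part, a hypothetical sketch (model.layer1, criterion, x and target are placeholders for your own model and data, and 1e-4 is an arbitrary penalty weight):
l2_penalty = 0.0
for p in model.layer1.parameters():
    l2_penalty = l2_penalty + p.pow(2).sum()

loss = criterion(model(x), target) + 1e-4 * l2_penalty
loss.backward()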
|
st116246
|
Here comes the next major release of PyTorch, just in time for ICML.
Install it today from our website http://pytorch.org
We’re introducing long-awaited features such as Broadcasting, Advanced Indexing, Higher-order gradients and finally: Distributed PyTorch.
Due to introducing Broadcasting, the code behavior for certain broadcastable situations is different from behavior in 0.1.12. This might lead to silent bugs in your existing code. We’ve provided easy ways of identifying this ambiguous code in the Important Breakages and Workarounds section.
Our detailed release notes cover the following topics.
Please read them here: https://github.com/pytorch/pytorch/releases/tag/v0.2.0
Tensor Broadcasting (numpy-style)
Advanced Indexing for Tensors and Variables
Higher-order gradients
Distributed PyTorch (multi-node training, etc.)
Neural Network layers and features: SpatialTransformers, WeightNorm, EmbeddingBag, etc.
New in torch and autograd: matmul, inverse, etc.
Easier debugging, better error messages
Bug Fixes
Important Breakages and Workarounds
Package documentation is available here: http://pytorch.org/docs/0.2.0/
Discuss the release here!
Cheers,
The PyTorch Team
|
st116247
|
The 0.x releases are all not production-ready. When we are confident of the quality, we’ll transition to 1.0, starting from which we shall declare the beta to be over.
|
st116248
|
smth:
Distributed PyTorch (multi-node training, etc.)
I am looking at http://pytorch.org/docs/master/distributed.html; I will have to delay my thesis work once again and spend all my days trying these new toys.
|
st116249
|
This could be due to something very stupid I did but I get massive loss values like
loss: 37278441275392.0
I am using LogSoftmax + NLLLoss. I have noticed that this massive loss occurs only when I have multiple decoders stacked together. The implementation is a multi-layer convolutional decoder with attention. If I skip over any attention computation, I get a more sane loss value on the order of hundreds.
Attention = Linear + bmm + Softmax
Though the network does learn and training accuracy increases over time (validation accuracy is not increasing past zero, though), the loss is either zero or exceptionally high.
Any idea what could cause such an issue? (I did not know what other information to provide. Please let me know if any other info would be helpful)
|
st116250
|
Hi, a simple question: I installed PyTorch on my Mac with the command from the website, conda install pytorch torchvision -c soumith, and I have a couple of questions:
How do I check the current PyTorch version that has been installed? Secondly, I see from the git repo that there have been some additions (https://github.com/pytorch/pytorch/commit/c7c8aaa7f040dd449dbc6aca9204b2f943aef477), so I am confused as to what to do: do I clone the pytorch repo, or re-install? If I do clone, then how do I install? I’m very confused as to what to do :-/
Thanks
|
st116251
|
If you want a bleeding-edge install, you need to build from source (see the instructions in the README). If you upgrade the package, it should update to the latest release (see the tags on GitHub).
|
st116252
|
Thank you, and how do I explicitly check what my current PyTorch installation is/was? In TensorFlow I would do tf.__version__ …
|
st116253
|
How can I check the version of pytorch if installed with pip?
I imported torch inside a python shell and ran the command torch.__version__ which displayed 0.1.12_2. Is that it?
EDIT: Can confirm that this is the correct way of finding your PyTorch version. After upgrading to 0.2.0, the same commands prints out '0.2.0_1'.
|
st116254
|
A few questions and comments about the distributed API
Although CUDA tensors are pretty much not supported for tcp, gloo, or mpi, after receiving the torch tensors, can we still convert them with .cuda() and use them locally?
Typo in the torch.distributed.get_rank() API: “Rank is a unique identifier assigned to each process withing a distributed group”
Do all machines have to be on the same network? I didn’t catch this from reading the API or the GitHub release note, but seems like I can’t train across networks e.g. one machine in Massachusetts and another in New York.
3.5. Can members of a group be on different networks?
Just to see if my understanding is correct:
You start a distributed training session with torch.distributed.init_process_group(backend), where the backend parameter specifies the network protocol. Then you wrap the network with torch.nn.parallel.DistributedDataParallel(model), set up the dataset and wrap it in torch.utils.data.distributed.DistributedSampler(train_dataset), and set up everything else as usual and train normally (see the sketch below)?
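A rough sketch of those steps (everything here is a placeholder: the backend choice, the model, train_dataset and the init defaults are only for illustration, not tested code):
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group(backend='gloo')        # or 'tcp' / 'mpi'; init_method, rank, world_size omitted here

model = nn.Linear(10, 2).cuda()
model = nn.parallel.DistributedDataParallel(model)

sampler = DistributedSampler(train_dataset)    # train_dataset: your own Dataset
loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
# ...then iterate over `loader` and train as usual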
Thanks.
|
st116255
|
Dear PyTorch users,
Most of you use our stable releases. Our current stable release is v0.1.12.
However, some of you use the master branch of PyTorch.
We wanted to give those of you who use the master branch a heads-up about some breaking changes that will be merged starting today.
These breaking changes are because we will be introducing NumPy-like Broadcasting into PyTorch (See PR#1563).
We will be releasing a comprehensive set of backward-compatibility warnings and codemod mechanisms in v0.2 to detect code that will change behavior, so that you can be aware of and fix it.
However, these warnings will only be available when we release v0.2 one or two weeks from now.
In this small window of time, our master branch will change behavior.
We hope this does not cost too much trouble to you,
Best,
The PyTorch Team
|
st116256
|
You mean that we will be freed from torch.expand_as() and will be able to directly sum two dimension-compatible tensors?
|
st116257
|
Yes, you can directly do broadcasting, instead of having to expand/expand_as all the time.
|
st116258
|
We hope this does not cost too much trouble to you,
In the long run, this’ll make life that much easier.
|
st116259
|
Note that in the vast majority of cases, it won’t be breaking at all. Previously we accepted addition of tensors with shapes (4, 1) and (1, 4), which would give a (4, 1) tensor as a result. However, after these changes the inputs are going to get broadcast and output a (4, 4) tensor. So, as long as you don’t depend on this, and all shapes in your program were matching, the new changes won’t change the behavior at all. If we detect that you depended on it, a warning will be raised.
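A tiny sketch of the behavioral difference described above:
import torch

a = torch.ones(4, 1)
b = torch.ones(1, 4)
c = a + b
# pre-broadcasting behavior: c had shape (4, 1)
# with broadcasting: c has shape (4, 4)
print(c.size())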
|
st116260
|
Hello @apaszke, can you have a look at https://discuss.pytorch.org/t/backward-error-when-using-expand/4273? Is this problem also related to this issue?
|
st116261
|
New release is out.
https://discuss.pytorch.org/t/v0-2-higher-order-gradients-distributed-pytorch-broadcasting-advanced-indexing-new-layers-and-more 49
|
st116262
|
I want to extract features from some pictures using ResNet-152 with the following code.
When it gets to picture 17 of 1000, my Ubuntu computer with 8 GB of memory crashes.
import torchvision, torch, os
from torch import nn
from torch.autograd import Variable
from dataset import DATASET

mydata = DATASET(os.getcwd())
model = torchvision.models.resnet152(pretrained=True)
new_classifier = nn.Sequential(*list(model.children())[:-1])
model.classifier = new_classifier

pos1 = model(Variable(mydata[0][0].view(1, 3, 224, 224)))
for index in range(1, 1000):
    f = model(Variable(mydata[index][0].view(1, 3, 224, 224)))
    pos1 = torch.cat((pos1, f))

with open('pos1_fea.pt', 'wb') as f:
    torch.save(pos1, f)
|
st116263
|
Hi,
just call torch.nn.init.normal with the parameters:
l = torch.nn.Linear(5,10)
torch.nn.init.normal(l.weight)
torch.nn.init.normal(l.bias)
there are extra arguments for mean and standard deviation.
If all parameters look the same to you, you could do
for p in l.parameters():
    torch.nn.init.normal(p)
though it might not necessarily be a good idea.
Note that there are methods like xavier_normal and kaiming_normal that attempt to set the standard deviation based on the number of parameters and, if you provide one, the gain of the activation function.
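For example, a small sketch of those initializers (the gain argument is optional):
l = torch.nn.Linear(5, 10)
torch.nn.init.xavier_normal(l.weight, gain=torch.nn.init.calculate_gain('relu'))
# or
torch.nn.init.kaiming_normal(l.weight)
torch.nn.init.constant(l.bias, 0)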
Best regards
Thomas
|
st116264
|
When I try
class Feedforward(nn.Module):
    def __init__(self):
        super(Feedforward, self).__init__()
        prob_drop = 0.0
        self.fc1 = nn.init.normal(nn.Linear(784, 200))
I get
File "model/network.py", line 20, in __init__
self.fc1 = torch.nn.init.normal(nn.Linear(784, 200))
AttributeError: 'module' object has no attribute 'init'
|
st116265
|
If I do torch.max(a) inside the Python shell, it returns <type 'float'>. But if I try this in my script, it returns [torch.cuda.FloatTensor of size 1 (GPU 0)]. Same issue with torch.sum().
Shell:
>>> a = torch.FloatTensor([3])
>>> a
3
[torch.FloatTensor of size 1]
>>> type(torch.max(a))
<type 'float'>
Script:
print 'losses: '+str(torch.max(loss))
outputs:
[torch.cuda.FloatTensor of size 1 (GPU 0)]
Is this a bug or am I doing something wrong?
|
st116266
|
Not a bug.
This is because you enabled CUDA in your script but not in the Python interactive shell.
In fact, CUDA is disabled by default unless you enable it explicitly,
so you must have enabled CUDA somewhere in your script code.
|
st116267
|
I am getting the following error (see the stack trace) when I run my code on a different GPU (Tesla K20, CUDA 7.5 installed, 6 GB memory). The code works fine if I run it on a GeForce 1080 or Titan X GPU.
Stacktrace:
File "code/source/main.py", line 68, in <module>
train.train_epochs(train_batches, dev_batches, args.epochs)
File "/gpfs/home/g/e/geniiexe/BigRed2/code/source/train.py", line 34, in train_epochs
losses = self.train(train_batches, dev_batches, (epoch + 1))
File "/gpfs/home/g/e/geniiexe/BigRed2/code/source/train.py", line 76, in train
self.optimizer.step()
File "/gpfs/home/g/e/geniiexe/BigRed2/anaconda3/lib/python3.5/site-packages/torch/optim/adam.py", line 70, in step
bias_correction1 = 1 - beta1 ** state['step']
OverflowError: (34, 'Numerical result out of range')
|