st98868
|
I noticed a strange behavior when using multiprocessing. My main process sends data to a queue. Two spawned processes read from the queue and create CUDA tensors. If I print the tensors, I see the process running on gpu-1 copied to gpu-0 in nvidia-smi. Hence it looks as if I had 3 processes, 2 on gpu-0 and 1 on gpu-1. Two of these 3 processes have the same PID, namely the PID of the process running on gpu-1. If I instead print the tensor size, print anything else, or don’t print anything at all, I end up having only 2 processes, each running on one GPU. The behavior is reproducible using multiprocessing as well as torch.multiprocessing. See code below for reproduction.
Does anyone have an idea why this is?
nvidia-smi output when printing tensor:
nvidia-smi output otherwise:
Code:
import torch
import torch.multiprocessing as mp
def run(q, dev):
    t = torch.tensor([1], device=dev)
    for data in iter(q.get, None):
        new_t = torch.tensor([data], device=dev)
        t = torch.cat((t, new_t), dim=0)
        # Causes the copy:
        print(t)
        # Any of the following doesn't cause the copy:
        # print(t.size())
        # print('t')
        # continue
    q.put(None)

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    devices = [
        torch.device('cuda:{}'.format(i))
        for i in range(torch.cuda.device_count())
    ]
    processes = [
        ctx.Process(target=run, args=(q, dev))
        for dev in devices
    ]
    for pr in processes:
        pr.start()
    for d in range(1, 1000000):
        q.put(d)
    q.put(None)
    for pr in processes:
        pr.join()
|
st98869
|
Solved by SimonW in post #2
This was fixed on master.
|
st98870
|
I’ve got two questions regarding performance.
Is there a performance difference between first creating a tensor and then sending it to the device with the to() function, versus specifying the device directly during creation?
When having tensors on the GPU and switching data type (e.g. from int to float), does it do this directly on the device, or does it first move the tensor back to the CPU before casting?
|
st98871
|
Solved by SimonW in post #2
|
st98872
|
bananacode:
Is there a performance difference between first creating a tensor, then sending it to the device with the “to” function, and with specifying the device directly during creation?
Yes, one creates the tensor directly on the particular device, and the other creates it on the CPU and then does a copy (if dev is not the CPU).
bananacode:
When having tensors on the GPU and switching data type (e.g from int to float), does it do this directly on the device or first moves it back to CPU before casting?
directly on the device
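A minimal sketch contrasting the two creation patterns (assuming a CUDA device is available; the sizes are made up):
import torch

dev = torch.device('cuda:0')
# Two steps: allocate on the CPU, then copy to the device.
a = torch.zeros(1000, 1000).to(dev)
# One step: allocate directly on the device, no intermediate copy.
b = torch.zeros(1000, 1000, device=dev)
# Dtype casts of GPU tensors also happen on the device.
c = b.long().float()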
|
st98873
|
Thanks for the rapid response. So to confirm, torch.zeros((…), device=dev) is the faster way?
|
st98874
|
I define this simple network:
import torch.nn as nn

class DenseModel(nn.Module):
    def __init__(self, inputSize, hiddenSize, outputSize, numLayers, p):
        super().__init__()
        self.numLayers = numLayers  # stored on self so forward can see it
        self.i2h = nn.Linear(inputSize, hiddenSize)
        self.relu = nn.ReLU()
        self.h2h = nn.Linear(hiddenSize, hiddenSize)
        self.h2o = nn.Linear(hiddenSize, outputSize)
        self.softmax = nn.Softmax(dim=1)  # Normalize output to probs.
        self.dropout = nn.Dropout(p)

    def forward(self, x):
        out = self.i2h(x)
        out = self.relu(out)
        out = self.dropout(out)
        layer = 1
        while layer <= self.numLayers:
            out = self.h2h(out)
            out = self.relu(out)
            out = self.dropout(out)
            layer += 1
        out = self.h2o(out)
        out = self.softmax(out)
        return out
Output is two-classes, loss is nn.CrossEntropyLoss, optimizing using sgd like so:
for epoch in range(epochs):
    tl = 0
    vl = 0
    batchCounter = 0
    v = 0
    for k in trainBatches:
        nn1.train()
        input_ = Variable(torch.FloatTensor(XTY[k:k+batchSize, :-1]))
        target_ = Variable(torch.FloatTensor(XTY[k:k+batchSize, -1:]))  # Last position
        # Forward
        output_ = nn1.forward(input_)
        # Backward / Optimize
        nn1.zero_grad()
        loss = lossFn(output_, torch.max(target_, 1)[0].long())
        loss.backward()  # Backprop
        optimizer.step()  # Gradient descent
        tl += loss.data.item()
    for v in valBatches:
        nn1.eval()
        input_ = Variable(torch.FloatTensor(XTY[v:v+batchSize, :-1]))
        target_ = Variable(torch.FloatTensor(XTY[v:v+batchSize, -1:]))  # Last position
        # Forward
        output_ = nn1.forward(input_)
        loss = lossFn(output_, torch.max(target_, 1)[0].long())
        vl += loss.data.item()
        batchCounter += 1
    lossT = np.append(lossT, tl/len(trainBatches))  # Log training loss
    lossV = np.append(lossV, vl/len(valBatches))  # Log validation loss
My training and validation errors converge, even after 500 epochs:
I can’t find anything helpful, any ideas?
Thanks.
|
st98875
|
nn.CrossEntropyLoss expects raw logits as the input as it internally calls F.log_softmax and nn.NLLLoss on the model output.
Just remove the self.softmax(out) in your model’s forward and run it again.
Also some minor tips for your code:
Variables are deprecated since 0.4.0. You can just use tensors instead; you don’t have to wrap them anymore.
Call the model directly as nn1(input_) instead of the forward() method, as this will make sure to properly register all hooks.
|
st98876
|
@ptrblck Sorry for the delayed answer. I followed the suggestion, but removing that line of code does not help; I’m still getting the same results. I left it running for a long time just to make sure it isn’t something silly like that; same result:
Anything else that pops out? Bit at a loss here. Thanks.
|
st98877
|
As a side note: I’m trying to overfit my model with a small dataset (10k samples) and 200 neurons (1 layer), and I can’t (same pattern as above). Is there anything wrong in the definition of the model itself?
|
st98878
|
I don’t know what kind of data you are using, but your model might just not have enough capacity.
Try to overfit using a smaller sample size, e.g. 10 samples and see how the training loss behaves.
If that looks good, you could add some more samples and see if the model is still able to learn the data.
|
st98879
|
What kind of model did you use for the 10k samples?
As your model is quite simple could you just increase the number of neurons in the hidden layer and see how the training loss behaves?
|
st98880
|
I realized that my classes are very highly imbalanced and the model was behaving as expected by classifying always as the dominant class, that’s why the errors were converging to the observed proportion of cases in the minority class. After resampling (undersample majority class, oversample minority class) and adding weights to the loss function to correct persisting imbalances my results now look like this:
Thanks for your time @ptrblck
|
st98881
|
I tested both of the following, giving an input x of size (1, 1, s, s):
nn.softmax(x[0, 0])
nn.softmax2d(x)
The sum of the array (or of the feature map) is 100 in the first case, while in the second it is 1000000.
The maximum value is 1, but it seems like it doesn’t apply softmax over all the array items together.
|
st98882
|
Which PyTorch version are you using?
You should get a warning in 0.3.1, that the implicit dimension choice for softmax has been deprecated.
In your first example, the softmax is calculated in dim=1, so that softmax(x[0, 0]).sum(1) will return ones.
The second example calculates the softmax in the channels, i.e. also dim=1.
Since you just have one channel, all values will be ones.
Change it to x = Variable(torch.randn(1, 3, 10, 10)), which could be an output of a segmentation model.
Now, the channels will sum to one.
|
st98883
|
Yes, I’m getting that warning, and I thought of this answer, but I wanted to show the different outputs of the two functions using my case.
The second function should work; why does it give ones everywhere in the case of 1 channel? I don’t have three channels; I only have this as output: (P, 1, s, s), where I want to perform softmax on the one channel I have as a feature map. P is my batch size and s is the image dimension.
|
st98884
|
You need different output channels to apply softmax on them.
For example, if you would like to output a segmentation, where each channel stores the probabilities for one class, you would have [batch_size, n_class, w, h] as the output dimension.
Now you could call softmax on it and each pixel will have the probability belonging to the class corresponding to the channel, i.e. output[0, 0, ...] will give you a probability map of class one for every pixel.
If you only have one output channel, it seems you have a binary classification task?
In this case you could use a sigmoid function.
|
st98885
|
Actually, I’m not interested in channel normalization or class probability across channels, nor is this a question about classification. I’m doing post-processing on the last feature map, where I want to calculate the softmax of the entire matrix to have a normalized heatmap of this channel.
The implementation in numpy is easy, but I would still like to have it in PyTorch running on the GPU.
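For reference, a spatial softmax over a single 2D feature map in numpy might look like this (a minimal sketch, assuming x is a 2D array):
import numpy as np

def spatial_softmax(x):
    # Softmax over all elements: shift by the max for numerical
    # stability, exponentiate, then normalize by the total sum.
    e = np.exp(x - x.max())
    return e / e.sum()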
|
st98886
|
So you would like to calculate the softmax over all logits in the Tensor, such that the sum over all pixels returns one?
In this case, you could try the following:
x = Variable(torch.randn(1, 1, 10, 10))
softmax = nn.Softmax(dim=0)
y = softmax(x.view(-1)).view(1, 1, 10, 10)
If I misunderstood your question, could you post the numpy code so that I can have a look?
|
st98887
|
If I have only one input, this way of flattening the array works perfectly, but I might have (10, 1, 10, 10).
By reversing the order again using .view(1, 1, 10, 10), are all the values returned to their original place, or might there be some displacement?
|
st98888
|
They will be returned to their original place.
Have a look at this example:
# Create input with 1s and 2s
x = Variable(torch.cat((torch.ones(1, 1, 10, 10), torch.ones(1, 1, 10, 10)*2), dim=0))
softmax = nn.Softmax(dim=0)
y = softmax(x.view(-1)).view(2, 1, 10, 10)
print(y)
y.sum()
Now, the whole Tensor is normalized, such that its sum is 1.
|
st98889
|
The sum of the whole tensor is 1. What I want is for the sum to be 1 for each P in the tensor (P, 1, s, s):
a softmax that normalizes a 2D tensor over its spatial values, not over the channels and not over the batch.
|
st98890
|
Yeah, I thought so, that’s why I mentioned it again.
I misunderstood the following statement:
falmasri:
where I want to calculate the softmax of the entire matrix to have a normalized heatmap of the this channel.
You could use the following code:
x = Variable(torch.cat((torch.ones(1, 1, 10, 10), torch.ones(1, 1, 10, 10)*2), dim=0))
softmax = nn.Softmax(dim=1)
y = softmax(x.view(2, -1)).view(2, 1, 10, 10)
Now, each batch sample will have the sum of 1:
print(y[0, 0, ...].sum())
print(y[1, 0, ...].sum())
Does this work for you?
|
st98891
|
@ptrblck Following the example you gave, is the following right? Thank you in advance.
x = torch.ones(4, 1, 4, 4)
b, c, h, w = x.size()
softmax = nn.Softmax(dim=1)
y = softmax(x.view(b, -1)).view(b, c, h, w)
|
st98892
|
This would work for the use case in this thread, i.e. normalizing each sample in the batch to visualize heatmaps.
It’s not the usual classification use case though, if that’s what you are looking for!
|
st98893
|
That’s what I need: a spatial softmax, meaning the sum of all pixels in one batch sample equals 1.
|
st98894
|
When defining forward my lizard brain always reels at code like:
def forward(self, x):
    x = nn.foo(x)
    x = nn.bar(x)
    x = nn.baz(x)
    return x
or
def forward(self, x):
    return nn.baz(nn.bar(nn.foo(x)))  # imagine deeper nesting and longer names
and wants to write something like:
from toolz.functoolz import pipe
def forward(self, x):
    return pipe(x, nn.foo, nn.bar, nn.baz)
or maybe (to structure more complicated models like ResNet or DenseNet):
from toolz.functoolz import compose
class Model(nn.Module):
    def __init__(self):
        self.block1 = compose(baz, bar, foo)
        ...  # more stuff here

    def forward(self, x):
        x = self.block1(x)
        ...  # more stuff here
Now I wonder (since PyTorch uses a dynamic computation graph) whether additional Python code (like lambdas and other pure Python code, as in modules like toolz and functools etc.) incurs some (non-negligible) overhead? This maybe touches upon how the dynamic computation graph is actually built from the Python code in forward, and I’d really be interested to learn about that.
…all the while my lizard brain hopes it can indeed write more functional code without incurring a performance penalty.
|
st98895
|
Solved by albanD in post #2
|
st98896
|
Hi,
The autograd engine only records the “basic” operations done on Tensors. So any logic that you add around them will not impact the autograd engine.
The overhead you will have is the overhead of running more Python code. Depending on the size of your net, that can be completely negligible (big CNNs) or significant (very, very small nets). But the autograd engine will not be impacted by these changes.
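As a concrete illustration (a sketch; the layer sizes are made up), nn.Sequential is the built-in composition helper, and autograd records only the tensor operations, not how the modules were composed:
import torch
import torch.nn as nn

# Functionally composed block; autograd only sees the Linear/ReLU ops.
block = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))
out = block(torch.randn(2, 10))
print(out.grad_fn)  # the recorded graph looks the same however you compose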
|
st98897
|
@albanD thank you for your answer :) Can you point me to further information / elaborate a bit more on how the PyTorch/autograd internals work (conceptually) with regard to building the computation graph from the Python code?
|
st98898
|
Basically, the idea is that the autograd engine needs to know every operation that you performed, to be able to use their backward equivalents during the backward pass.
The basic operation is an autograd.Function, for which both forward and backward are defined. They are quite hidden from the end user. For example, the torch.checkpoint method is actually implemented using a single Function here.
Assuming your Tensors require grads, when you apply a Function to a Tensor, it will record this, and the output Tensor will have a grad_fn field that says which Function was last applied to it. Similarly, the Function will look at its inputs and find out which previous Functions created its own inputs. By doing this, you obtain a directed acyclic graph of Functions.
The cpp version works exactly the same way, where each output of a Function will link to that Function’s backward.
You can explore this graph this way:
import torch
a = torch.rand(10, 10, requires_grad=True)
out = a * 2
print(out.grad_fn) # MulBackward
loss = out.sum()
print(loss.grad_fn) # SumBackward
print(loss.grad_fn.next_functions)  # ((MulBackward, 0),)
# Returns which Function corresponds to which input:
# Here only one input, which corresponds to MulBackward.
# The 0 means that it was output 0 from this Function
# (multiplication had a single output).
more_things = loss / out
print(more_things.grad_fn)  # DivBackward
print(more_things.grad_fn.next_functions)  # ((SumBackward, 0), (MulBackward, 0))
# Returns which Function corresponds to which input:
# The first input, loss, corresponds to SumBackward.
# The second input, out, corresponds to MulBackward.
As you can see, the autograd engine only kicks in at the Function level, so for your original question, having more convoluted Python logic will not change its behaviour (as long as you still perform the same operations on your Tensors).
|
st98899
|
During training, everything is OK.
However, I get the same prediction results when the inputs are different. What are the possible reasons?
When I use CUDA 9.2 instead of CUDA 9.0, the bug disappears.
|
st98900
|
Solved by fangyh in post #3
I find the reason.
import torch
print(torch.version.cuda)
8.0.61
|
st98901
|
Are you loading the same model with its state_dict and getting constant results for CUDA 9.0 and different ones for CUDA 9.2?
Could you post code to reproduce this issue?
|
st98902
|
I am trying to export an ONNX model from PyTorch.
Here is my code:
import torch
from darknet import Darknet
det_model = Darknet("../yolo/cfg/yolov3-spp.cfg")
det_model.load_weights('../models/yolo/yolov3-spp.weights')
dummy_input=torch.Tensor(1,3,608,608)
torch.onnx.export(det_model,(dummy_input,True),'darknet.onnx')
Here is the error:
Traceback (most recent call last):
File "/home/test1050/Desktop/tf-pt/playground/play.py", line 14, in <module>
torch.onnx.export(det_model,(dummy_input,),'darknet.onnx')
File "/home/test1050/miniconda3/lib/python3.6/site-packages/torch/onnx/__init__.py", line 25, in export
return utils.export(*args, **kwargs)
File "/home/test1050/miniconda3/lib/python3.6/site-packages/torch/onnx/utils.py", line 84, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names)
File "/home/test1050/miniconda3/lib/python3.6/site-packages/torch/onnx/utils.py", line 134, in _export
trace, torch_out = torch.jit.get_trace_graph(model, args)
File "/home/test1050/miniconda3/lib/python3.6/site-packages/torch/jit/__init__.py", line 255, in get_trace_graph
return LegacyTracedModule(f, nderivs=nderivs)(*args, **kwargs)
File "/home/test1050/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/test1050/miniconda3/lib/python3.6/site-packages/torch/jit/__init__.py", line 291, in forward
torch._C._tracer_exit(out_vars)
RuntimeError: /pytorch/torch/csrc/jit/tracer.h:120: getTracingState: Assertion `state` failed.
I am new to ONNX. Can anyone give me some help? Thanks.
|
st98903
|
When initializing the GRU layer, Fairseq enforces the dropout from https://github.com/pytorch/fairseq/blob/master/fairseq/models/lstm.py#L180:
self.lstm = LSTM(
    input_size=embed_dim,
    hidden_size=hidden_size,
    num_layers=num_layers,
    dropout=self.dropout_out if num_layers > 1 else 0.,
    bidirectional=bidirectional,
)
But why does it drop out again after the GRU?
From https://github.com/pytorch/fairseq/blob/master/fairseq/models/lstm.py#L227, it looks like after applying the GRU and unpacking the sequence it applies dropout again:
def forward(self, src_tokens, src_lengths):
    if self.left_pad:
        # convert left-padding to right-padding
        src_tokens = utils.convert_padding_direction(
            src_tokens,
            self.padding_idx,
            left_to_right=True,
        )
    bsz, seqlen = src_tokens.size()
    # embed tokens
    x = self.embed_tokens(src_tokens)
    x = F.dropout(x, p=self.dropout_in, training=self.training)
    # B x T x C -> T x B x C
    x = x.transpose(0, 1)
    # pack embedded source tokens into a PackedSequence
    packed_x = nn.utils.rnn.pack_padded_sequence(x, src_lengths.data.tolist())
    # apply LSTM
    if self.bidirectional:
        state_size = 2 * self.num_layers, bsz, self.hidden_size
    else:
        state_size = self.num_layers, bsz, self.hidden_size
    h0 = x.data.new(*state_size).zero_()
    c0 = x.data.new(*state_size).zero_()
    packed_outs, (final_hiddens, final_cells) = self.lstm(packed_x, (h0, c0))
    # unpack outputs and apply dropout
    x, _ = nn.utils.rnn.pad_packed_sequence(packed_outs, padding_value=self.padding_value)
    x = F.dropout(x, p=self.dropout_out, training=self.training)
    assert list(x.size()) == [seqlen, bsz, self.output_units]
|
st98904
|
As an exercise I’m rewriting a simple numpy example in PyTorch, and so far I’ve been having problems matching the results. As a PyTorch newbie it’s highly possible I have made some stupid mistake. So far the loss always seems to converge to 0.25 (in my example) and I have no idea why.
Ah, yes I’m still on 0.3.1, perhaps I should also consider an upgrade.
Thanks
import torch
import numpy as np
from torch.autograd import Variable
N, D_in, H, D_out = 4, 3, 4, 1
x_np = np.array([[0,0,1],
[0,1,1],
[1,0,1],
[1,1,1]])
x = Variable(torch.Tensor(x_np), requires_grad=True)
y = np.array([[0],
[1],
[1],
[0]])
y = Variable(torch.Tensor(y))
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduce=False)
learning_rate = 1e-4
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    print loss
    optimizer.zero_grad()
    loss.backward(torch.ones(4).view(-1, 1))
    optimizer.step()
predicted = model.forward(Variable(torch.from_numpy(x_np).float())).data.numpy()
print '\n', predicted
numpy version:
import numpy as np
import pdb
def nonlin(x, deriv=False):
    '''sigmoid'''
    if deriv == True:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))
X = np.array([[0,0,1],
[0,1,1],
[1,0,1],
[1,1,1]])
y = np.array([[0],
[1],
[1],
[0]])
np.random.seed(1)
# randomly initialize our weights with mean 0
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
for j in xrange(500):
    l0 = X
    l1 = nonlin(np.dot(l0, syn0))
    l2 = nonlin(np.dot(l1, syn1))
    l2_error = y - l2
    print l2_error
    l2_delta = l2_error * nonlin(l2, deriv=True)
    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error * nonlin(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)
print '\n', l2
|
st98905
|
I think I’ve found the issue. The problem was with the choice of optimizer. Switching from SGD to Adam seems to work. Including the changes for reference.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
import matplotlib.pyplot as plt
from tqdm import tqdm
np.set_printoptions(precision=4, linewidth=500, suppress=True)
class Example(nn.Module):
    def __init__(self, D_in, D_out, total):
        super(Example, self).__init__()
        self.syn0 = nn.Linear(D_in, total)
        self.syn1 = nn.Linear(total, D_out)

    def forward(self, x):
        x = F.sigmoid(self.syn0(x))
        x = F.sigmoid(self.syn1(x))
        return x
X_np = np.array([[0,0,1],
[0,1,1],
[1,0,1],
[1,1,1]])
X = Variable(torch.Tensor(X_np), requires_grad=True)
y = np.array([[0],
[1],
[1],
[0]])
y = Variable(torch.Tensor(y))
model = Example(3,1,4) #in, out, total
# optimizer = torch.optim.SGD(model.parameters(), lr=0.001) # !!!
optimizer = optim.Adam(model.parameters(), lr=0.001)
losses = []
for t in tqdm(range(10000)):
    y_pred = model(X)
    loss = (y_pred - y).pow(2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    losses.append(loss.data.numpy())
plt.plot(losses)
plt.show()
predicted = model.forward(Variable(torch.from_numpy(X_np).float())).data.numpy()
print '\n', predicted
|
st98906
|
In your first example you didn’t use a non-linearity between the layers, as far as I can see, so your model was basically just a matrix multiplication.
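A quick sketch of why this collapses (layer sizes made up): stacking two Linear layers without an activation is equivalent to a single linear map.
import torch
import torch.nn as nn

m = nn.Sequential(nn.Linear(3, 4), nn.Linear(4, 1))  # no non-linearity
# Equivalent single layer: W = W2 @ W1, b = W2 @ b1 + b2
W = m[1].weight @ m[0].weight
b = m[1].weight @ m[0].bias + m[1].bias
x = torch.randn(5, 3)
print(torch.allclose(m(x), x @ W.t() + b, atol=1e-6))  # True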
|
st98907
|
Thanks, indeed torch.nn.Sigmoid() helps, but it seems SGD was not a great choice here. Changing to Adam seems to give good results.
|
st98908
|
Makefile:518: recipe for target ‘.build_release/src/caffe/layers/base_data_layer.o’ failed
make: *** [.build_release/src/caffe/layers/base_data_layer.o] Error 1
In file included from ./include/caffe/common.hpp:19:0,
from src/caffe/util/math_functions.cpp:6:
./include/caffe/util/device_alternate.hpp:34:23: fatal error: cublas_v2.h: No such file or directory
#include <cublas_v2.h>
^
compilation terminated.
Makefile:518: recipe for target ‘.build_release/src/caffe/util/math_functions.o’ failed
make: *** [.build_release/src/caffe/util/math_functions.o] Error 1
In file included from ./include/caffe/common.hpp:19:0,
from src/caffe/util/benchmark.cpp:3:
./include/caffe/util/device_alternate.hpp:34:23: fatal error: cublas_v2.h: No such file or directory
#include <cublas_v2.h>
^
compilation terminated.
Makefile:518: recipe for target ‘.build_release/src/caffe/util/benchmark.o’ failed
make: *** [.build_release/src/caffe/util/benchmark.o] Error 1
|
st98909
|
It looks like you are trying to compile Caffe by BAIR. I’m not sure if this PyTorch forum is the best place to ask. Maybe their GitHub might be a better place.
|
st98910
|
Hello,
I’ve installed PyTorch according to https://pytorch.org/get-started/locally/
by running
conda install pytorch torchvision cuda92 -c pytorch
and video card is installed
lspci | grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
But still for some reason I get False
Python 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 17:14:51)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
>>>
How to fix it?
Thank you!
|
st98911
|
Since 9.2 is fairly recent, my hunch is that your driver is out of date. See here for tips: https://devtalk.nvidia.com/default/topic/1035692/ubuntu-16-04-cuda-9-2-with-driver-390-59-insufficient-driver-/
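A quick way to check which CUDA version your PyTorch binary was built with (the printed values will of course depend on your install):
import torch

print(torch.__version__)          # PyTorch build
print(torch.version.cuda)         # CUDA version the binary was compiled against
print(torch.cuda.is_available())  # False often indicates a driver/runtime mismatch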
|
st98912
|
When I run PyTorch (0.4.1) in Anaconda everything is fine, but the pip version, which some colleagues tried, always results in a segfault. The gcc version installed is 5.4, but I do not know which version of gcc those PyTorch binaries are compiled against when I hit this error.
The warning about ABI compatibility is always printed, regardless of whether I try 4.8, 4.9 or 5.4.
|
st98913
|
Solved by dashesy in post #3
|
st98914
|
This is the line where I get the segfault:
0x00007fffe6aff4a0 in construct<_object*, _object*> (__p=0xb, this=0x14d7b08) at /usr/include/c++/4.8/ext/new_allocator.h:120
120 { ::new((void *)__p) _Up(std::forward<_Args>(__args)…); }
Does this mean the pip version of PyTorch is compiled with gcc 4.8? That is too old.
|
st98915
|
For the next person hitting this issue :–)
the environment was a little odd, it was Ubuntu14, and even though
$ gcc --version
gcc (Ubuntu 5.4.1-2ubuntu1~14.04) 5.4.1 2016090
I had
$ x86_64-linux-gnu-gcc --version
x86_64-linux-gnu-gcc (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4
It took some time to see; no matter what I set $CC or $CXX to, x86_64-linux-gnu-gcc was always used for building the extension, and it pointed to the old gcc. update-alternatives also did not list the multiple gccs.
Anyways, this was the fix:
sudo update-alternatives --install /usr/bin/x86_64-linux-gnu-gcc x86_64-linux-gnu-gcc /usr/bin/x86_64-linux-gnu-gcc-5 40 --slave /usr/bin/x86_64-linux-gnu-g++ x86_64-linux-gnu-g++ /usr/bin/x86_64-linux-gnu-g++-5
sudo update-alternatives --config x86_64-linux-gnu-gcc
|
st98916
|
Hi,
This is my simplified model structure
class Encoder(nn.Module):
    def __init__(self, ...):
        ....

encoder = Encoder()

class Decoder(nn.Module):
    def __init__(self, encoder, ...):
        ...
The Decoder net has the encoder model in it.
Then, when I train this model, should I optimize both models like
encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate)
or just decoder?
|
st98917
|
Solved by InnovArul in post #2
|
st98918
|
Print the decoder.parameters() shapes to see which parameters the Decoder contains. If it contains the Encoder parameters as well, you can optimize just the Decoder.
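A quick way to inspect this (a sketch, using the decoder from the question):
# If the Encoder's weights show up among the Decoder's named parameters,
# a single optimizer over decoder.parameters() already covers them.
for name, p in decoder.named_parameters():
    print(name, tuple(p.shape))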
|
st98919
|
I have a model which consists of two sub-models, as follows:
import ModelA, ModelB

class model(nn.Module):
    def __init__(self, args):
        super(model, self).__init__()
        self.submodela = ModelA()
        self.submodelb = ModelB()

    def forward(self, x):
        x1 = self.submodela(x)
        x2 = self.submodelb(x1)
        return x1, x2

Model_A = ModelA()
Model_B = ModelB()  # (the original post had ModelA() here, presumably a typo)
Model_All = model(args)
optimizer_a = optim.SGD(Model_A.parameters(), lr=0.01)
optimizer_b = optim.SGD(Model_B.parameters(), lr=0.01)
optimizer_all = optim.SGD(Model_All.parameters(), lr=0.01)

# ...training...
# ...optimize as a whole...
optimizer_all.zero_grad()
outputs = Model_All(inputs)
loss = f(outputs)
loss.backward()
torch.nn.utils.clip_grad_norm(Model_All.parameters(), 5)
optimizer_all.step()

# ...optimize as two parts...
optimizer_a.zero_grad()
optimizer_b.zero_grad()
x1 = Model_A(inputs)
outputs = Model_B(x1)
loss = f(outputs)
loss.backward()
torch.nn.utils.clip_grad_norm(Model_A.parameters(), 5)
torch.nn.utils.clip_grad_norm(Model_B.parameters(), 5)
optimizer_a.step()
optimizer_b.step()
…
The performance of these two optimization approaches is different; can anybody tell me why?
|
st98920
|
They should give you exactly the same result.
Is it expected that in one case your function f takes both submodels’ outputs (merged case), while in the other it takes only the output of the second model (split case)?
Also try both cases with different random seeds, as it may just be that the training is not very stable.
|
st98921
|
Thanks!
The inputs for these two cases are the same, and the training strategy as well. The only difference between the two cases is as shown above. I’ve run these two cases many times; the difference does exist.
Is there any difference between the following snippets?
torch.nn.utils.clip_grad_norm(Model_All.parameters(),5)
optimizer_all.step()
and
torch.nn.utils.clip_grad_norm(Model_A.parameters(),5)
torch.nn.utils.clip_grad_norm(Model_B.parameters(),5)
optimizer_a.step()
optimizer_b.step()
If these two cases have the same model state before this epoch, and the inputs for the two models are the same as well, can I get two models that are exactly the same after this epoch?
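Worth noting: torch.nn.utils.clip_grad_norm computes a single global norm over all the parameters it is given, so clipping the combined parameter list and clipping the two submodels separately scale the gradients differently whenever clipping actually triggers. A numeric sketch with made-up gradients:
import torch

# Pretend flattened gradients of the two submodels.
ga, gb = torch.full((3,), 3.0), torch.full((3,), 4.0)
max_norm = 5.0

# Joint clipping: one scale factor from the global norm (~8.66 here).
joint_scale = max_norm / torch.cat([ga, gb]).norm()   # ~0.58 for both

# Separate clipping: each submodel scaled by its own norm.
scale_a = min(1.0, max_norm / ga.norm().item())       # ~0.96
scale_b = min(1.0, max_norm / gb.norm().item())       # ~0.72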
|
st98922
|
I guess this is the reason for the difference.
|
st98923
|
To find the cause of the problem, I did the following experiments. I found something interesting.
import torch
import torch.nn as nn
import torch.optim as optim
class ModelA(nn.Module):
    """docstring for ModelA"""
    def __init__(self):
        super(ModelA, self).__init__()
        self.fc1 = nn.Linear(3, 3)
        self.fc2 = nn.Linear(3, 3)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x

class ModelB(nn.Module):
    """docstring for ModelB"""
    def __init__(self):
        super(ModelB, self).__init__()
        self.fc1 = nn.Linear(3, 3)
        self.fc2 = nn.Linear(3, 3)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x

class Model_comb(nn.Module):
    """docstring for Model_comb"""
    def __init__(self):
        super(Model_comb, self).__init__()
        self.modelA = ModelA()
        self.modelB = ModelB()

    def forward(self, x):
        x = self.modelA(x)
        x = self.modelB(x)
        return x

class Model_ALL(nn.Module):
    """docstring for Model_ALL"""
    def __init__(self):
        super(Model_ALL, self).__init__()
        self.fc1 = nn.Linear(3, 3)
        self.fc2 = nn.Linear(3, 3)
        self.fc3 = nn.Linear(3, 3)
        self.fc4 = nn.Linear(3, 3)
        # self.fc5 = nn.Linear(3, 3)
        # self.fc6 = nn.Linear(3, 3)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        x = self.fc4(x)
        return x
model_A = ModelA()
model_B = ModelB()
model_comb = Model_comb()
model_ALL = Model_ALL()
optimizer_A = optim.SGD(model_A.parameters(), lr=0.01)
optimizer_B = optim.SGD(model_B.parameters(), lr=0.01)
optimizer_comb = optim.SGD(model_comb.parameters(), lr=0.01)
optimizer_ALL = optim.SGD(model_ALL.parameters(), lr=0.01)
criterion_A = nn.CrossEntropyLoss()
criterion_B = nn.CrossEntropyLoss()
criterion_comb = nn.CrossEntropyLoss()
criterion_ALL = nn.CrossEntropyLoss()
inputs = torch.rand(2, 3)
labels = torch.tensor([0, 1])
model_comb.modelA.fc1.weight = model_ALL.fc1.weight
model_comb.modelA.fc1.bias = model_ALL.fc1.bias
model_comb.modelA.fc2.weight = model_ALL.fc2.weight
model_comb.modelA.fc2.bias = model_ALL.fc2.bias
model_comb.modelB.fc1.weight = model_ALL.fc3.weight
model_comb.modelB.fc1.bias = model_ALL.fc3.bias
model_comb.modelB.fc2.weight = model_ALL.fc4.weight
model_comb.modelB.fc2.bias = model_ALL.fc4.bias
print('ALL:')
print('ALL parameters before zero_grad:')
for para in model_ALL.parameters():
    print(para.data)
print('Combine parameters before zero_grad:')
for para in model_comb.parameters():
    print(para.data)
optimizer_ALL.zero_grad()
print('ALL parameters after zero_grad:')
for para in model_ALL.parameters():
    print(para.data)
outputs_all = model_ALL(inputs)
print('ALL output:')
print(outputs_all)
loss_ALL = criterion_ALL(outputs_all, labels)
print('ALL loss:')
print(loss_ALL)
loss_ALL.backward()
torch.nn.utils.clip_grad_norm_(model_ALL.parameters(), 1)
print('ALL parameters after clip, before update:')
for para in model_ALL.parameters():
    print(para.data)
optimizer_ALL.step()
print('ALL parameters after update:')
for para in model_ALL.parameters():
    print(para.data)
print('Combine:')
optimizer_comb.zero_grad()
print('Combine parameters after zero_grad:')
for para in model_comb.parameters():
    print(para.data)
outputs_comb = model_comb(inputs)
print('Combine output:')
print(outputs_comb)
loss_comb = criterion_comb(outputs_comb, labels)
print('Combine loss:')
print(loss_comb)
loss_comb.backward()
torch.nn.utils.clip_grad_norm_(model_comb.parameters(), 1)
print('Combine parameters after clip, before update:')
for para in model_comb.parameters():
    print(para.data)
optimizer_comb.step()
print('Combine parameters after update:')
for para in model_comb.parameters():
    print(para.data)
The results are as follows:
ALL:
ALL parameters before zero_grad:
tensor([[-0.4071, -0.4386, -0.5222],
[-0.0855, 0.2986, -0.0911],
[-0.4010, 0.0293, 0.4651]])
tensor([ 0.2367, 0.1182, -0.5761])
tensor([[ 0.1895, 0.1700, 0.3264],
[-0.4208, 0.0316, -0.4164],
[ 0.0074, -0.1777, 0.2463]])
tensor([-0.1314, -0.2521, 0.0374])
tensor([[ 0.2153, 0.0360, -0.4593],
[-0.1261, -0.5126, -0.1848],
[-0.2757, -0.0736, 0.2502]])
tensor([-0.2666, 0.5407, -0.1402])
tensor([[-0.1796, -0.1530, 0.5736],
[-0.3743, -0.4519, 0.2864],
[-0.2824, -0.3592, 0.2727]])
tensor([ 0.2545, 0.1062, 0.5753])
Combine parameters before zero_grad:
tensor([[-0.4071, -0.4386, -0.5222],
[-0.0855, 0.2986, -0.0911],
[-0.4010, 0.0293, 0.4651]])
tensor([ 0.2367, 0.1182, -0.5761])
tensor([[ 0.1895, 0.1700, 0.3264],
[-0.4208, 0.0316, -0.4164],
[ 0.0074, -0.1777, 0.2463]])
tensor([-0.1314, -0.2521, 0.0374])
tensor([[ 0.2153, 0.0360, -0.4593],
[-0.1261, -0.5126, -0.1848],
[-0.2757, -0.0736, 0.2502]])
tensor([-0.2666, 0.5407, -0.1402])
tensor([[-0.1796, -0.1530, 0.5736],
[-0.3743, -0.4519, 0.2864],
[-0.2824, -0.3592, 0.2727]])
tensor([ 0.2545, 0.1062, 0.5753])
ALL parameters after zero_grad:
tensor([[-0.4071, -0.4386, -0.5222],
[-0.0855, 0.2986, -0.0911],
[-0.4010, 0.0293, 0.4651]])
tensor([ 0.2367, 0.1182, -0.5761])
tensor([[ 0.1895, 0.1700, 0.3264],
[-0.4208, 0.0316, -0.4164],
[ 0.0074, -0.1777, 0.2463]])
tensor([-0.1314, -0.2521, 0.0374])
tensor([[ 0.2153, 0.0360, -0.4593],
[-0.1261, -0.5126, -0.1848],
[-0.2757, -0.0736, 0.2502]])
tensor([-0.2666, 0.5407, -0.1402])
tensor([[-0.1796, -0.1530, 0.5736],
[-0.3743, -0.4519, 0.2864],
[-0.2824, -0.3592, 0.2727]])
tensor([ 0.2545, 0.1062, 0.5753])
ALL output:
tensor([[ 0.1584, -0.0804, 0.4186],
[ 0.1634, -0.0734, 0.4245]])
ALL loss:
tensor(1.2453)
ALL parameters after clip, before update:
tensor([[-0.4071, -0.4386, -0.5222],
[-0.0855, 0.2986, -0.0911],
[-0.4010, 0.0293, 0.4651]])
tensor([ 0.2367, 0.1182, -0.5761])
tensor([[ 0.1895, 0.1700, 0.3264],
[-0.4208, 0.0316, -0.4164],
[ 0.0074, -0.1777, 0.2463]])
tensor([-0.1314, -0.2521, 0.0374])
tensor([[ 0.2153, 0.0360, -0.4593],
[-0.1261, -0.5126, -0.1848],
[-0.2757, -0.0736, 0.2502]])
tensor([-0.2666, 0.5407, -0.1402])
tensor([[-0.1796, -0.1530, 0.5736],
[-0.3743, -0.4519, 0.2864],
[-0.2824, -0.3592, 0.2727]])
tensor([ 0.2545, 0.1062, 0.5753])
ALL parameters after update:
tensor([[-0.4072, -0.4386, -0.5222],
[-0.0855, 0.2985, -0.0911],
[-0.4010, 0.0293, 0.4651]])
tensor([ 0.2367, 0.1181, -0.5761])
tensor([[ 0.1896, 0.1700, 0.3264],
[-0.4208, 0.0316, -0.4164],
[ 0.0074, -0.1777, 0.2462]])
tensor([-0.1316, -0.2523, 0.0376])
tensor([[ 0.2154, 0.0360, -0.4593],
[-0.1261, -0.5126, -0.1848],
[-0.2759, -0.0736, 0.2501]])
tensor([-0.2666, 0.5408, -0.1396])
tensor([[-0.1801, -0.1520, 0.5734],
[-0.3749, -0.4506, 0.2862],
[-0.2813, -0.3616, 0.2731]])
tensor([ 0.2563, 0.1086, 0.5711])
Combine:
Combine parameters after zero_grad:
tensor([[-0.4072, -0.4386, -0.5222],
[-0.0855, 0.2985, -0.0911],
[-0.4010, 0.0293, 0.4651]])
tensor([ 0.2367, 0.1181, -0.5761])
tensor([[ 0.1896, 0.1700, 0.3264],
[-0.4208, 0.0316, -0.4164],
[ 0.0074, -0.1777, 0.2462]])
tensor([-0.1316, -0.2523, 0.0376])
tensor([[ 0.2154, 0.0360, -0.4593],
[-0.1261, -0.5126, -0.1848],
[-0.2759, -0.0736, 0.2501]])
tensor([-0.2666, 0.5408, -0.1396])
tensor([[-0.1801, -0.1520, 0.5734],
[-0.3749, -0.4506, 0.2862],
[-0.2813, -0.3616, 0.2731]])
tensor([ 0.2563, 0.1086, 0.5711])
Combine output:
tensor([[ 0.1613, -0.0768, 0.4129],
[ 0.1664, -0.0698, 0.4188]])
Combine loss:
tensor(1.2415)
Combine parameters after clip, before update:
tensor([[-0.4072, -0.4386, -0.5222],
[-0.0855, 0.2985, -0.0911],
[-0.4010, 0.0293, 0.4651]])
tensor([ 0.2367, 0.1181, -0.5761])
tensor([[ 0.1896, 0.1700, 0.3264],
[-0.4208, 0.0316, -0.4164],
[ 0.0074, -0.1777, 0.2462]])
tensor([-0.1316, -0.2523, 0.0376])
tensor([[ 0.2154, 0.0360, -0.4593],
[-0.1261, -0.5126, -0.1848],
[-0.2759, -0.0736, 0.2501]])
tensor([-0.2666, 0.5408, -0.1396])
tensor([[-0.1801, -0.1520, 0.5734],
[-0.3749, -0.4506, 0.2862],
[-0.2813, -0.3616, 0.2731]])
tensor([ 0.2563, 0.1086, 0.5711])
Combine parameters after update:
tensor([[-0.4072, -0.4386, -0.5222],
[-0.0855, 0.2985, -0.0911],
[-0.4010, 0.0293, 0.4651]])
tensor([ 0.2367, 0.1181, -0.5761])
tensor([[ 0.1896, 0.1700, 0.3264],
[-0.4208, 0.0316, -0.4164],
[ 0.0074, -0.1777, 0.2462]])
tensor([-0.1316, -0.2523, 0.0376])
tensor([[ 0.2154, 0.0360, -0.4593],
[-0.1261, -0.5126, -0.1848],
[-0.2759, -0.0736, 0.2501]])
tensor([-0.2666, 0.5408, -0.1396])
tensor([[-0.1801, -0.1520, 0.5734],
[-0.3749, -0.4506, 0.2862],
[-0.2813, -0.3616, 0.2731]])
tensor([ 0.2563, 0.1086, 0.5711])
We can see that there is a little difference between the two cases after I execute the zero_grad() function. The loss values of the two cases differ a little as well. But the final parameters of the two models are the same.
|
st98924
|
Hi all,
I’m trying to write a function that computes the Fast Walsh-Hadamard transform using ATen; at some point I have a few lines that make use of advanced indexing. I’ve only found this GitHub issue regarding advanced indexing in ATen: https://github.com/zdevito/ATen/issues/78
I have zero experience in this language, and https://pytorch.org/cppdocs/ is not of much help for the moment.
My Python code looks like this:
temp = torch.zeros((N_samples, N // 2, 2), device=x.device)  # very important: have to
# initialize the new tensors on the used device
temp[:, :, 0] = x[:, 0::2] + x[:, 1::2]
temp[:, :, 1] = x[:, 0::2] - x[:, 1::2]
res = torch.tensor(temp, device=x.device)
# Second and further stages
for nStage in range(2, int(log(N, 2)) + 1):
    temp = torch.zeros((N_samples, G // 2, M * 2), device=x.device)
    temp[:, 0:G // 2, 0:M * 2:4] = res[:, 0:G:2, 0:M:2] + res[:, 1:G:2, 0:M:2]
    temp[:, 0:G // 2, 1:M * 2:4] = res[:, 0:G:2, 0:M:2] - res[:, 1:G:2, 0:M:2]
    temp[:, 0:G // 2, 2:M * 2:4] = res[:, 0:G:2, 1:M:2] - res[:, 1:G:2, 1:M:2]
    temp[:, 0:G // 2, 3:M * 2:4] = res[:, 0:G:2, 1:M:2] + res[:, 1:G:2, 1:M:2]
    res = torch.tensor(temp, device=x.device)
    G = G // 2
    M = M * 2
res = temp[:, 0, :]
How do I handle this kind of indexing in ATen?
Thanks in advance for your help
|
st98925
|
So I came up with this code:
at::Tensor fwht_forward(at::Tensor input) {
    auto n_samples = input.size(0);
    auto n_features = input.size(1);
    auto G = n_features / 2;
    auto M = 2;
    at::Tensor temp = at::zeros({n_samples, G, 2});
    for (auto i = 0; i < n_samples; i++) {
        for (auto j = 0; j < n_features; j = j + 2) {
            temp[i][j/2][0] = input[i][j] + input[i][j+1];
            temp[i][j/2][1] = input[i][j] - input[i][j+1];
        }
    }
    at::Tensor res = at::zeros({n_samples, G, 2});
    res.copy_(temp);
    for (auto i = 2; i < std::log2(n_features) + 1; i++) {
        temp = at::zeros({n_samples, G / 2, M * 2});
        auto res_acc = res.accessor<float, 3>();
        for (auto j = 0; j < res.size(0); j++) {
            for (auto k = 0; k < G; k = k + 2) {
                for (auto l = 0; l < res.size(2); l = l + 4) {
                    temp[j][k/2][l] = res_acc[j][k][l/2] + res_acc[j][k+1][l/2];
                    temp[j][k/2][l+1] = res_acc[j][k][l/2] - res_acc[j][k+1][l/2];
                    temp[j][k/2][l+2] = res_acc[j][k][l/2+1] - res_acc[j][k+1][l/2+1];
                    temp[j][k/2][l+3] = res_acc[j][k][l/2+1] + res_acc[j][k+1][l/2+1];
                }
            }
        }
        res.copy_(temp);
        G = G / 2;
        M = M * 2;
    }
    auto temp_acc = temp.accessor<float, 3>();
    at::Tensor output = at::zeros({n_samples, n_features});
    for (auto m = 0; m < n_samples; m++) {
        for (auto n = 0; n < temp.sizes()[2]; n++) {
            output[m][n] = temp_acc[m][0][n];
        }
    }
    return output * pow(std::sqrt(n_features), -1);
}
That compiles without errors. But now when I import it and try to use it like
import torch
import fwht
fwht.forward(torch.randn(10, 1024))
I get the following error:
RuntimeError: copy_from does not support automatic differentiation; use copy_ instead (_s_copy_from at torch/csrc/autograd/generated/VariableType.cpp:459)
frame #0: at::CPUFloatType::s_copy_(at::Tensor&, at::Tensor const&, bool) const + 0x23b (0x7f009f07d3cb in /home/iacolippo/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #1: at::Type::copy_(at::Tensor&, at::Tensor const&, bool) const + 0x61 (0x7f009f1c5981 in /home/iacolippo/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #2: fwht_forward(at::Tensor) + 0x300 (0x7f009c4beed0 in /home/iacolippo/anaconda3/lib/python3.6/site-packages/fwht-0.0.0-py3.6-linux-x86_64.egg/fwht.cpython-36m-x86_64-linux-gnu.so)
frame #3: <unknown function> + 0x85c0 (0x7f009c4c15c0 in /home/iacolippo/anaconda3/lib/python3.6/site-packages/fwht-0.0.0-py3.6-linux-x86_64.egg/fwht.cpython-36m-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x100a5 (0x7f009c4c90a5 in /home/iacolippo/anaconda3/lib/python3.6/site-packages/fwht-0.0.0-py3.6-linux-x86_64.egg/fwht.cpython-36m-x86_64-linux-gnu.so)
frame #5: _PyCFunction_FastCallDict + 0x154 (0x5602be0d0364 in /home/iacolippo/anaconda3/bin/python3.6)
frame #6: <unknown function> + 0x19eebc (0x5602be162ebc in /home/iacolippo/anaconda3/bin/python3.6)
frame #7: _PyEval_EvalFrameDefault + 0x30a (0x5602be18462a in /home/iacolippo/anaconda3/bin/python3.6)
frame #8: PyEval_EvalCodeEx + 0x329 (0x5602be15d8d9 in /home/iacolippo/anaconda3/bin/python3.6)
frame #9: PyEval_EvalCode + 0x1c (0x5602be15e67c in /home/iacolippo/anaconda3/bin/python3.6)
frame #10: <unknown function> + 0x1bdf2e (0x5602be181f2e in /home/iacolippo/anaconda3/bin/python3.6)
frame #11: _PyCFunction_FastCallDict + 0x91 (0x5602be0d02a1 in /home/iacolippo/anaconda3/bin/python3.6)
frame #12: <unknown function> + 0x19eebc (0x5602be162ebc in /home/iacolippo/anaconda3/bin/python3.6)
frame #13: _PyEval_EvalFrameDefault + 0x30a (0x5602be18462a in /home/iacolippo/anaconda3/bin/python3.6)
frame #14: <unknown function> + 0x197f24 (0x5602be15bf24 in /home/iacolippo/anaconda3/bin/python3.6)
frame #15: <unknown function> + 0x198dc1 (0x5602be15cdc1 in /home/iacolippo/anaconda3/bin/python3.6)
frame #16: <unknown function> + 0x19ef95 (0x5602be162f95 in /home/iacolippo/anaconda3/bin/python3.6)
frame #17: _PyEval_EvalFrameDefault + 0x30a (0x5602be18462a in /home/iacolippo/anaconda3/bin/python3.6)
frame #18: <unknown function> + 0x197f24 (0x5602be15bf24 in /home/iacolippo/anaconda3/bin/python3.6)
frame #19: <unknown function> + 0x198dc1 (0x5602be15cdc1 in /home/iacolippo/anaconda3/bin/python3.6)
frame #20: <unknown function> + 0x19ef95 (0x5602be162f95 in /home/iacolippo/anaconda3/bin/python3.6)
frame #21: _PyEval_EvalFrameDefault + 0x10c7 (0x5602be1853e7 in /home/iacolippo/anaconda3/bin/python3.6)
frame #22: <unknown function> + 0x19828e (0x5602be15c28e in /home/iacolippo/anaconda3/bin/python3.6)
frame #23: <unknown function> + 0x198dc1 (0x5602be15cdc1 in /home/iacolippo/anaconda3/bin/python3.6)
frame #24: <unknown function> + 0x19ef95 (0x5602be162f95 in /home/iacolippo/anaconda3/bin/python3.6)
frame #25: _PyEval_EvalFrameDefault + 0x10c7 (0x5602be1853e7 in /home/iacolippo/anaconda3/bin/python3.6)
frame #26: <unknown function> + 0x198b8b (0x5602be15cb8b in /home/iacolippo/anaconda3/bin/python3.6)
frame #27: <unknown function> + 0x19ef95 (0x5602be162f95 in /home/iacolippo/anaconda3/bin/python3.6)
frame #28: _PyEval_EvalFrameDefault + 0x30a (0x5602be18462a in /home/iacolippo/anaconda3/bin/python3.6)
frame #29: <unknown function> + 0x198b8b (0x5602be15cb8b in /home/iacolippo/anaconda3/bin/python3.6)
frame #30: <unknown function> + 0x19ef95 (0x5602be162f95 in /home/iacolippo/anaconda3/bin/python3.6)
frame #31: _PyEval_EvalFrameDefault + 0x30a (0x5602be18462a in /home/iacolippo/anaconda3/bin/python3.6)
frame #32: <unknown function> + 0x197f24 (0x5602be15bf24 in /home/iacolippo/anaconda3/bin/python3.6)
frame #33: <unknown function> + 0x198dc1 (0x5602be15cdc1 in /home/iacolippo/anaconda3/bin/python3.6)
frame #34: <unknown function> + 0x19ef95 (0x5602be162f95 in /home/iacolippo/anaconda3/bin/python3.6)
frame #35: _PyEval_EvalFrameDefault + 0x30a (0x5602be18462a in /home/iacolippo/anaconda3/bin/python3.6)
frame #36: <unknown function> + 0x198b8b (0x5602be15cb8b in /home/iacolippo/anaconda3/bin/python3.6)
frame #37: <unknown function> + 0x19ef95 (0x5602be162f95 in /home/iacolippo/anaconda3/bin/python3.6)
frame #38: _PyEval_EvalFrameDefault + 0x30a (0x5602be18462a in /home/iacolippo/anaconda3/bin/python3.6)
frame #39: <unknown function> + 0x197f24 (0x5602be15bf24 in /home/iacolippo/anaconda3/bin/python3.6)
frame #40: <unknown function> + 0x198dc1 (0x5602be15cdc1 in /home/iacolippo/anaconda3/bin/python3.6)
frame #41: <unknown function> + 0x19ef95 (0x5602be162f95 in /home/iacolippo/anaconda3/bin/python3.6)
frame #42: _PyEval_EvalFrameDefault + 0x30a (0x5602be18462a in /home/iacolippo/anaconda3/bin/python3.6)
frame #43: PyEval_EvalCodeEx + 0x329 (0x5602be15d8d9 in /home/iacolippo/anaconda3/bin/python3.6)
frame #44: PyEval_EvalCode + 0x1c (0x5602be15e67c in /home/iacolippo/anaconda3/bin/python3.6)
frame #45: <unknown function> + 0x214ce4 (0x5602be1d8ce4 in /home/iacolippo/anaconda3/bin/python3.6)
frame #46: PyRun_FileExFlags + 0xa1 (0x5602be1d90e1 in /home/iacolippo/anaconda3/bin/python3.6)
frame #47: PyRun_SimpleFileExFlags + 0x1c4 (0x5602be1d92e4 in /home/iacolippo/anaconda3/bin/python3.6)
frame #48: Py_Main + 0x5ff (0x5602be1dcdaf in /home/iacolippo/anaconda3/bin/python3.6)
frame #49: main + 0xee (0x5602be0a38be in /home/iacolippo/anaconda3/bin/python3.6)
frame #50: __libc_start_main + 0xf1 (0x7f00bfae91c1 in /lib/x86_64-linux-gnu/libc.so.6)
frame #51: <unknown function> + 0x1c70da (0x5602be18b0da in /home/iacolippo/anaconda3/bin/python3.6)
So the culprit seems to be res.copy_(temp);, but I don’t understand how it should be modified. Any ideas?
|
st98926
|
So I’ve heard that the happy people use torch::Tensor, not at::Tensor. (What also works, and this is needed when you do stuff in ATen itself, is to grab some input’s t.options() and use that in the factory functions.)
I’ll admit that it’s a bit hard to find in the C++ docs at the moment, but they’ll likely soon be awesome.
Also, you don’t want to use indexing this way with tensors. As these seem to be pointwise ops, use TensorAccessors (auto t_a = temp.accessor<float, 3>()).
Note that you need to do type dispatching for this if you want it to work with various dtypes. You’ll find patterns for this in the ATen/native subdirectory of the source code (e.g. I implemented the CPU CTC loss using accessors).
Best regards
Thomas
|
st98927
|
Ahahahahah awesome answer, I’ll try to understand these things and probably come back with more questions
Thank you!
|
st98928
|
For your original problem: the function you were looking for is probably slice. For non-advanced indexing, there is also narrow.
Best regards
Thomas
|
st98929
|
Sorry for the probably stupid questions, but I’m a total newbie here:
If I try to use torch::Tensor I get
fwht.cpp:7:8: error: ‘Tensor’ in namespace ‘torch’ does not name a type
Do I get to use torch::Tensor only if I write my extension in the ATen native folder inside the PyTorch repo and then recompile?
If I use at::Tensor everything compiles correctly, but when I try to use my Python binding fwht.forward(torch.randn(10, 512)) I get
RuntimeError: copy_from does not support automatic differentiation; use copy_ instead (_s_copy_from at /home/iacolippo/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:265)
From the README and the guide on how to write C++ extensions, I understood that it’s not really a problem if autodiff breaks, provided that you define a backward function, which I’m doing here.
#include <torch/extension.h>
#include <cmath>
#include <iostream>
#include <vector>

at::Tensor fwht_forward(at::Tensor input) {
    auto n_samples = input.size(0);
    auto n_features = input.size(1);
    auto G = n_features / 2;
    auto M = 2;
    at::Tensor temp = at::zeros({n_samples, G, 2});
    for (auto i = 0; i < n_samples; i++) {
        for (auto j = 0; j < n_features; j = j + 2) {
            temp[i][j/2][0] = input[i][j] + input[i][j+1];
            temp[i][j/2][1] = input[i][j] - input[i][j+1];
        }
    }
    at::Tensor res = at::empty({n_samples, G, 2});
    res.copy_(temp);
    for (auto i = 2; i < std::log2(n_features) + 1; i++) {
        temp = at::zeros({n_samples, G / 2, M * 2});
        auto res_acc = res.accessor<float, 3>();
        auto t_a = temp.accessor<float, 3>();
        for (auto j = 0; j < res.size(0); j++) {
            for (auto k = 0; k < G; k = k + 2) {
                for (auto l = 0; l < res.size(2); l = l + 4) {
                    t_a[j][k/2][l] = res_acc[j][k][l/2] + res_acc[j][k+1][l/2];
                    t_a[j][k/2][l+1] = res_acc[j][k][l/2] - res_acc[j][k+1][l/2];
                    t_a[j][k/2][l+2] = res_acc[j][k][l/2+1] - res_acc[j][k+1][l/2+1];
                    t_a[j][k/2][l+3] = res_acc[j][k][l/2+1] + res_acc[j][k+1][l/2+1];
                }
            }
        }
        res.copy_(temp);
        G = G / 2;
        M = M * 2;
    }
    auto temp_acc = temp.accessor<float, 3>();
    at::Tensor output = at::zeros({n_samples, n_features});
    for (auto m = 0; m < n_samples; m++) {
        for (auto n = 0; n < temp.sizes()[2]; n++) {
            output[m][n] = temp_acc[m][0][n];
        }
    }
    return output * pow(std::sqrt(n_features), -1);
}

at::Tensor fwht_backward(at::Tensor input) {
    auto n_samples = input.size(0);
    auto n_features = input.size(1);
    auto G = n_features / 2;
    auto M = 2;
    at::Tensor temp = at::zeros({n_samples, G, 2});
    for (auto i = 0; i < n_samples; i++) {
        for (auto j = 0; j < n_features; j = j + 2) {
            temp[i][j/2][0] = input[i][j] + input[i][j+1];
            temp[i][j/2][1] = input[i][j] - input[i][j+1];
        }
    }
    at::Tensor res = at::empty({n_samples, G, 2});
    res.copy_(temp);
    for (auto i = 2; i < std::log2(n_features) + 1; i++) {
        temp = at::zeros({n_samples, G / 2, M * 2});
        auto res_acc = res.accessor<float, 3>();
        auto t_a = temp.accessor<float, 3>();
        for (auto j = 0; j < res.size(0); j++) {
            for (auto k = 0; k < G; k = k + 2) {
                for (auto l = 0; l < res.size(2); l = l + 4) {
                    t_a[j][k/2][l] = res_acc[j][k][l/2] + res_acc[j][k+1][l/2];
                    t_a[j][k/2][l+1] = res_acc[j][k][l/2] - res_acc[j][k+1][l/2];
                    t_a[j][k/2][l+2] = res_acc[j][k][l/2+1] - res_acc[j][k+1][l/2+1];
                    t_a[j][k/2][l+3] = res_acc[j][k][l/2+1] + res_acc[j][k+1][l/2+1];
                }
            }
        }
        res.copy_(temp);
        G = G / 2;
        M = M * 2;
    }
    auto temp_acc = temp.accessor<float, 3>();
    at::Tensor output = at::zeros({n_samples, n_features});
    for (auto m = 0; m < n_samples; m++) {
        for (auto n = 0; n < temp.sizes()[2]; n++) {
            output[m][n] = temp_acc[m][0][n];
        }
    }
    return output * pow(std::sqrt(n_features), -1);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("forward", &fwht_forward, "FWHT forward");
    m.def("backward", &fwht_backward, "FWHT backward");
}
What’s the issue with this copy_?
|
st98930
|
Hi,
I know PyTorch layers expect an additional dimension for the batch.
I was wondering how to extend this to a generic nn.Module (with no assumptions about what is inside), since applying a for loop over the batch duplicates the network (generating a siamese one).
Thanks in advance
|
st98931
|
I don’t quite understand your use case.
You usually don’t see the batch dimension as it’s used in the computations in the module.
So usually you won’t find any for loops over the batch dim. Could you post an example of what you are trying to achieve?
|
st98932
|
Hi there,
The fact is that only the default layers are ready to take a batch as input. When you process data with one of those layers, the graph is a single-input/single-output graph, and the only thing that changes is the dimensionality (image 2).
However, with a custom nn.Module you cannot handle the problem like that.
In my case I have to process videos (audio + sequence of frames)
If I want to create a nn.Module which generates a siamese network to process a video (as sequence of frames) of dimension batch x frame_i x channels x H x W. How can perform in the simplest way that operation?
Because iterating over the batch dimension inside/outside the nn.Module would replicate the branch N times, whereas by-default layers does not replicate the batch but process the whole batch
im2.jpeg1353×1600 76.5 KB
So using a for loop the efect is the case 2 meanwhile by-default layers generate the graph of case 1.
Being able to use nn.Modules as by-default layers seems natural for me (in order to scale the problem to more inputs and more nn.Modules), but here it looks like i have to use batches of frames as input, but at the time of creating bigger modules it can be a nightmare
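One common pattern (a sketch with a made-up frame encoder, not from the thread) is to fold the time dimension into the batch, so a single, non-replicated module processes all frames at once instead of looping:
import torch
import torch.nn as nn

B, T, C, H, W = 4, 8, 3, 32, 32
video = torch.randn(B, T, C, H, W)

# Stand-in per-frame encoder, chosen only for illustration.
frame_encoder = nn.Sequential(nn.Conv2d(C, 16, 3, padding=1),
                              nn.AdaptiveAvgPool2d(1))

frames = video.view(B * T, C, H, W)   # fold time into the batch
feats = frame_encoder(frames)         # (B*T, 16, 1, 1)
feats = feats.view(B, T, -1)          # unfold back to (batch, time, features)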
|
st98933
|
Hi,
What is the best way to prevent a tensor from being saved to and loaded from a checkpoint?
I want to provide the value of the tensor explicitly each time I instantiate my model.
|
st98934
|
I think the easiest way would be to just save and load the complete state_dict and implement a function that resets the specific tensor, e.g.:
model = Net()
model.load_state_dict(torch.load(PATH))
model.reset()
|
st98935
|
Would resetting work if the new value to be used has different shape than the one in the checkpoint?
|
st98936
|
In that case you would have to assign a new nn.Parameter.
Could you explain your use case a bit? Maybe there are some other and better ways.
|
st98937
|
Sure. I have a model that has an nn.Embedding, the value of which I pass explicitly. My data changes from time to time, and so do the embedding matrix and its shape. I want this tensor to not be part of the checkpoint.
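One possible approach (a sketch; the key prefix 'embedding' is assumed for illustration and depends on how the module is named in your model) is to drop that entry from the state_dict before saving and to load with strict=False:
import torch

# Save everything except the embedding weight.
state = {k: v for k, v in model.state_dict().items()
         if not k.startswith('embedding')}
torch.save(state, PATH)

# Later: load the partial state_dict; strict=False ignores the missing key,
# so the freshly assigned embedding (of any shape) is left untouched.
model = Net()
model.load_state_dict(torch.load(PATH), strict=False)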
|
st98938
|
I’m transferring a Caffe network into PyTorch. However, when I’m training the network with exactly same protocol, the training loss behaves like this:
The loss increases within each epoch and decreases when starting a new epoch, forming this sawtooth-shaped loss.
Two problems:
The increase of the loss within each epoch seems to be a problem with momentum. I set the momentum to 0 (originally 0.9 in the Caffe protocol) and the shape goes away. Is there any difference between the Caffe and PyTorch momentum settings?
Assuming the problem is with momentum, the loss should not decrease at the start of each epoch; i.e., if the momentum is too large, the loss will always increase. Is there a hidden cleanup operation at the start of each epoch?
Here’s my code:
net.train()
for epoch in range(1, args.epochs + 1):
    net.train_step(epoch, log_interval=10)
and in class Net:
def train_step(self, epoch, log_interval=100):
    for batch_idx, (data, target) in enumerate(self.train_loader):
        if self.is_cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        self.optimizer.zero_grad()
        loss = self.forward(data, y=target)
        loss.backward()
        self.optimizer.step()
        if batch_idx % log_interval == 0:
            print(
                'Train Epoch: {} [{}/{} ({:.0f}%)]\t'.format(
                    epoch, batch_idx * len(data),
                    len(self.train_loader.dataset),
                    100. * batch_idx / len(self.train_loader))
                + '\t'.join(
                    ['{}: {:.6f}'.format(key, self.loss.loss_value[key].data[0])
                     for key in self.loss.loss_value.keys()]))
Thanks!
|
st98939
|
Solved by smth in post #2
|
st98940
|
I don’t think this is because of momentum. It is probably because of the way new samples are selected from the dataset.
PyTorch selects samples from the dataset without replacement.
This means that, at the beginning of a new epoch, it is likely that you will see a sample you saw at the end of the last epoch. But over the epoch, you will never see a repeated sample.
Caffe probably samples with replacement, which means it is equally likely to see the same sample at any part of the epoch.
To verify this theory, you can write a with-replacement sampler, and see if that removes the sawtooth-shape from the loss:
from torch.utils.data.sampler import Sampler

class WithReplacementRandomSampler(Sampler):
    """Samples elements randomly, with replacement.

    Arguments:
        data_source (Dataset): dataset to sample from
    """
    def __init__(self, data_source):
        self.data_source = data_source

    def __iter__(self):
        # generate `len(data_source)` samples with values from `0` to `len(data_source)-1`
        samples = torch.LongTensor(len(self.data_source))
        samples.random_(0, len(self.data_source))
        return iter(samples)

    def __len__(self):
        return len(self.data_source)

# then change the constructor of train_loader this way
self.train_loader = torch.utils.data.DataLoader(dataset, ..., sampler=WithReplacementRandomSampler(dataset), shuffle=False)
|
st98941
|
I have observed the same loss pattern on an automatic speech recognition task using the default PyTorch dataloader (sampling without replacement). Applying instead the with-replacement random sampler suggested by @smth removes the pattern, as would be expected.
I am aware of results in the literature indicating that sampling without replacement may yield faster convergence on some problems (https://arxiv.org/abs/1603.00570 13, https://arxiv.org/abs/1202.4184v1 3). However, I feel the question remains: why would we be okay with the loss pattern induced by sampling without replacement?
Drops at the boundary between two epochs could be expected simply from revisiting samples that have already been fitted once (or twice, or three times, …). This could indicate that we are indeed learning to better classify those specific samples. However, the reason for the increasing trend in the loss during a single epoch evades me.
One line of thought is that fitting to the examples in the first batch of an epoch worsens the model's performance on the other examples in the training set, and so forth for all batches of an epoch. However, wouldn't this be expected to result in an increasing validation loss? Aren't we, in a sense, overfitting to each training example during the epoch?
Thanks in advance.
|
st98942
|
Ordinary CNN (not LSTM), batch size 1, dataset size about 40k; these curves are from around the 8th epoch. Has anyone experienced such strange curves, i.e., fluctuating dramatically over a period, especially the valid_acc? Could the batch size be the cause?
[figure: validation loss/accuracy curves with dramatic periodic fluctuation]
Here are the training accuracy curves:
[figure: training accuracy curves]
|
st98943
|
You should try increasing your batch size. A batch size of 1 can cause curves to look like this.
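For example, a minimal sketch of the change (names are placeholders):
from torch.utils.data import DataLoader

# a larger batch averages the gradient over more samples, which smooths the curves
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)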
|
st98944
|
I am struggling with this tensor.cpu() problem 116
It seems like the latest PyTorch upstream code has not solved this completely yet.
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
|
st98945
|
Hi,
This is the expected behaviour. If you want a CUDA tensor as a numpy array, you have to first send it to the CPU with .cpu(), then you can get the numpy array with .numpy().
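For example, a minimal sketch:
import torch

t = torch.randn(3, device='cuda')  # a CUDA tensor
arr = t.cpu().numpy()              # copy to host memory, then view as a numpy array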
|
st98946
|
In this case, how should I modify this affected line of code 116? It seems like .cpu() is used in the next line.
|
st98947
|
This line seems fine. First, it performs the operation on tensor v, v = v / np.sqrt(torch.sum(v * v)), and then the next line sends it to the CPU: ... = v.cpu()
|
st98948
|
But look again at the backtrace given at https://github.com/jacobgil/pytorch-pruning/issues/16#issuecomment-429922322 60
It points to the problem at line 101.
|
st98949
|
Someone told me we can't mix GPU and numpy variables in an algorithm or chunk of code.
So he told me to find a PyTorch alternative for numpy.sqrt().
|
st98950
|
numpy is CPU-only, so you can't use it to perform GPU operations. You can use torch.sqrt() to replace it.
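A sketch of the replacement, assuming v is the CUDA tensor from the linked code:
# before (mixes numpy with a CUDA tensor and fails):
#   v = v / np.sqrt(torch.sum(v * v))
# after (stays entirely on the GPU):
v = v / torch.sqrt(torch.sum(v * v))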
|
st98951
|
Hi,
I've been running a GAN model in PyTorch, available here 1, and it had been running fine for ~9000 iterations before suddenly breaking. Has anyone seen this error before?
I’m running on pytorch 0.2 on 4 NVIDIA K80 GPUs in a docker environment that has python 2.7 and NVIDIA-Linux-x86_64-375.66 as driver (started using nvidia-docker).
RuntimeError: cublas runtime error : an internal operation failed at /pytorch/torch/lib/
The full error trace is below.
Thanks!
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 156, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 91, in apply
    return self._forward_cls.backward(self, *args)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/blas.py", line 43, in backward
    grad_matrix1 = torch.mm(grad_output, matrix2.t())
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 579, in mm
    return Addmm.apply(output, self, matrix, 0, 1, True)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/blas.py", line 26, in forward
    matrix1, matrix2, out=output)
RuntimeError: cublas runtime error : an internal operation failed at /pytorch/torch/lib/THC/THCBlas.cu:246
|
st98952
|
Thanks for the reply! I did try that based on some discussions I saw on other topics/forums.
I can’t say for sure whether it helped or not; I still got the same error thrown, albeit later in the training process.
|
st98953
|
Hi,
I need to calculate the softmax of a large, high-rank tensor and the built-in method seems significantly slower than explicitly calculating it. I measured the speeds using torch.autograd.profiler and am using a single K80 for calculations.
Built-in:
X = torch.randn((1000, 10, 100, 100)).to('cuda')
with torch.autograd.profiler.profile(use_cuda=True) as prof:
S = X.softmax(-1)
Explicit:
X = torch.randn((1000, 10, 100, 100)).to('cuda')
with torch.autograd.profiler.profile(use_cuda=True) as prof:
S = X.exp()
S = S / S.sum(-1, keepdim=True)
Here’s the profiler output for both (top: built-in, bottom: explicit)
[screenshot: profiler output for both versions (top: built-in, bottom: explicit)]
The results are the same (up to some precision errors) for the same input.
Am I doing something wrong (either with the calculation or with the profiling)? Or is there some inefficiency in the built-in softmax?
System:
Ubuntu 16.04
PyTorch 0.4.1
CUDA 9.2 / cuDNN 7
1x K80
|
st98954
|
The naive implementation is numerically unstable. Our built-in involves, e.g., computing a max before the actual softmax, which might end up being slightly more expensive.
|
st98955
|
Thanks, that makes sense.
I just did the test again, subtracting the max from X:
X = X - X.max(-1, keepdim=True)[0]
X = X.exp()
X = X / X.sum(-1, keepdim=True)
and I get the same time for the explicit calculation and the builtin version.
|
st98956
|
Hi.
Because of the size of the dataset, I decided to load the images in the __init__() function of my dataset class.
def __init__(self, image_file_path, img_transform=None, loader=default_loader):
    self.ref = {}
    with open(os.path.join(os.path.dirname(image_file_path), "data.csv")) as f:
        for line in f:
            line = line.strip().split('\t')
            folder_name = line[0]
            dst = folder_name
            pic_name = '-'.join((line[1], line[4])) + '.jpg'
            imgdata = base64.b64decode(line[6])
            self.ref[os.path.join(dst, pic_name)] = imgdata
    self.img_transform = img_transform
    self.loader = loader
But when I use num_workers != 0, I find that I use double the RAM. It seems that the original dataset object maintains one copy of the data, and the worker processes share another copy.
Can I release the space the original dataset object occupies and just let the workers share the data? Or how can I keep only one copy of the data in memory instead of two?
|
st98957
|
Hello!
How can I give varying-size inputs to the neural network? I want to implement PointNet, but if my data has a different number of point-cloud points per sample and batch size = 32, how can I define my input_placeholder in PyTorch?
data = Variable(torch.ones(32,3,None))
|
st98958
|
Solved by SimonW in post #2
PyTorch uses a dynamic graph. So you can disregard all those confusing symbolic variables from TF. Just use whatever size you get from that batch of data.
|
st98959
|
PyTorch uses a dynamic graph. So you can disregard all those confusing symbolic variables from TF. Just use whatever size you get from that batch of data.
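For example, a minimal sketch (the conv layer is only illustrative):
import torch
import torch.nn as nn

conv = nn.Conv1d(3, 64, kernel_size=1)  # PointNet-style shared MLP over points

# batches with different numbers of points, no placeholder needed
for n_points in (1024, 2048, 500):
    x = torch.randn(32, 3, n_points)
    out = conv(x)
    print(out.shape)  # torch.Size([32, 64, n_points])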
|
st98960
|
Hello, can you give a brief example of it?
I am still new to PyTorch.
In my case, I want to train a neural network that outputs heatmaps (the same size as the image input). The problem is that my training images vary a lot in size (1st image: 64x67, next 400x74, and so on). I want to keep the images at their original sizes so I don't distort them with a resize procedure.
Many thanks
|
st98961
|
Many fully convolutional vision networks can do this. You don't need to know the shape beforehand.
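A minimal sketch of a fully convolutional network whose heatmap output matches the input's spatial size (the architecture is illustrative only):
import torch
import torch.nn as nn

class HeatmapNet(nn.Module):
    def __init__(self):
        super(HeatmapNet, self).__init__()
        # kernel_size=3 with padding=1 preserves height and width
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

net = HeatmapNet()
for h, w in ((64, 67), (400, 74)):
    out = net(torch.randn(1, 3, h, w))
    print(out.shape)  # spatial size matches the input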
|
st98962
|
The only difference is that one of the parameters passed to DataLoader is of type numpy.ndarray while the other is of type list, yet the DataLoader gives totally different results.
You can use the following code to reproduce it:
from torch.utils.data import DataLoader, Dataset
import numpy as np

class my_dataset(Dataset):
    def __init__(self, data, label):
        self.data = data
        self.label = label

    def __getitem__(self, index):
        return self.data[index], self.label[index]

    def __len__(self):
        return len(self.data)

train_data = [[1, 2, 3], [5, 6, 7], [11, 12, 13], [15, 16, 17]]
train_label = [-1, -2, -11, -12]

########################### Look at here:
test = DataLoader(dataset=my_dataset(np.array(train_data), train_label), batch_size=2)
for i in test:
    print("numpy data:")
    print(i)
    break

test = DataLoader(dataset=my_dataset(train_data, train_label), batch_size=2)
for i in test:
    print("list data:")
    print(i)
    break
The result is:
numpy data:
[tensor([[1, 2, 3],
[5, 6, 7]]), tensor([-1, -2])]
list data:
[[tensor([1, 5]), tensor([2, 6]), tensor([3, 7])], tensor([-1, -2])]
the original question is Here 2
|
st98963
|
Yes, it's the behavior of the default collate_fn. Basically, the elements of a list are treated as separate fields and collated individually, while a numpy array is treated like a torch tensor and stacked along a new batch dimension.
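A sketch of the usual workaround, reusing my_dataset from above: convert the lists to arrays before handing them to the Dataset so the default collate_fn stacks rows as expected:
import numpy as np
from torch.utils.data import DataLoader

loader = DataLoader(my_dataset(np.array(train_data), np.array(train_label)), batch_size=2)
for batch in loader:
    print(batch)  # [tensor([[1, 2, 3], [5, 6, 7]]), tensor([-1, -2])]
    break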
|
st98964
|
I am defining my model. However, I encounter RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation while running backward().
class Detail(nn.Module):
    def __init__(self, N, D):
        super(Detail, self).__init__()
        self.N, self.D = N, D

    def forward(self, input):
        for i in range(self.N):
            temp = input[0][i]
            # temp = temp.numpy()
            # temp = temp.tolist()
            for m in range(self.D):
                for n in range(self.D):
                    if m < self.D - 1:
                        temp[n][m] = temp[n][m] - temp[n][m + 1]
                    if m == self.D - 1:
                        temp[n][m] = temp[n][m - 1]
        return input
class View(nn.Module):
    def __init__(self, *args):
        super(View, self).__init__()
        if len(args) == 1 and isinstance(args[0], torch.Size):
            self.size = args[0]
        else:
            self.size = torch.Size(args)

    def forward(self, input):
        return input.view(self.size)
class Net(nn.Module):
    def __init__(self, nclass, backbone='resnet18'):
        super(Net, self).__init__()
        self.backbone = backbone
        # copying modules from pretrained models
        if backbone == 'resnet18':
            self.pretrained = resnet.resnet50(pretrained=True)
        self.detail = nn.Sequential(
            Detail(512, 7),
            nn.AvgPool2d(7),
            View(-1, 512),
            nn.Linear(512, 64),
            Normalize()
        )
        self.pool = nn.Sequential(
            nn.AvgPool2d(7),
            View(-1, 512),
            nn.Linear(512, 64),
            Normalize()
        )
        self.fc = nn.Sequential(
            Normalize(),
            nn.Linear(64*64, 128),
            Normalize(),
            nn.Linear(128, nclass)
        )

    def forward(self, x):
        if self.backbone == 'resnet18' or self.backbone == 'resnet101' \
                or self.backbone == 'resnet152':
            # pre-trained ResNet feature
            x = self.pretrained.conv1(x)
            x = self.pretrained.bn1(x)
            x = self.pretrained.relu(x)
            x = self.pretrained.maxpool(x)
            x = self.pretrained.layer1(x)
            x = self.pretrained.layer2(x)
            x = self.pretrained.layer3(x)
            x = self.pretrained.layer4(x)
        x1 = self.detail(x)
        print(x1.size())
        x2 = self.pool(x)
        print(x2.size())
        x1 = x1.unsqueeze(1).expand(x1.size(0), x2.size(1), x1.size(-1))
        print(x1.size())
        x = x1 * x2.unsqueeze(-1)
        print(x.size())
        x = x.view(-1, x1.size(-1) * x2.size(1))
        out = self.fc(x)
        return out
|
st98965
|
Hi,
Your Detail module modifies the temp tensor inplace.
Assuming that temp is a DxD matrix, your for loops can be replaced by:
part1 = temp.narrow(1, 0, D-1) - temp.narrow(1, 1, D-1)
part2 = part1.narrow(1, -1, 1)
out = torch.cat([part1, part2], 1)
Also, I am not sure you get the output you want; maybe you were expecting part2 = temp.narrow(1, -1, 1)?
|
st98966
|
Thank you very much! I followed your approach and the error no longer appears, although I am very curious about the reason. At the same time, I have another error to ask about:
RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.FloatTensor for argument #4 'mat1'
If you don't mind sharing your contact information, I would like to keep asking you questions.
|
st98967
|
The thing is that your original code was modifying the temp tensor inplace (when doing temp[n][m] = xxx). The original value of this tensor was needed for gradient computation, so inplace modification of it is not allowed!
Your other error is just that a tensor is not on the GPU. Make sure to only perform computations between tensors of the same type and on the same device.
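A minimal sketch of the usual fix (the names are placeholders):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)  # moves all parameters and buffers
data = data.to(device)    # the input must live on the same device
output = model(data)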
|