st98268
|
Since you are training your model, the loss should decrease. Is that the case or how does the loss change?
|
st98269
|
ptrblck:
Since you are training your model, the loss should decrease. Is that the case or how does the loss change?
I’ve learned that the loss should go down, but in my case it ends up printing the same value at every step (see epoch 2 below).
Example:
Train Epoch: 1 [128/640 (20%)]  Loss: 159.35
Train Epoch: 1 [256/640 (40%)]  Loss: 149.98
Train Epoch: 1 [384/640 (60%)]  Loss: 146.86
Train Epoch: 1 [512/640 (80%)]  Loss: 145.30
Train Epoch: 1 [640/640 (100%)] Loss: 144.37
Train Epoch: 2 [128/640 (20%)]  Loss: 140.62
Train Epoch: 2 [256/640 (40%)]  Loss: 140.62
Train Epoch: 2 [384/640 (60%)]  Loss: 140.62
Train Epoch: 2 [512/640 (80%)]  Loss: 140.62
Train Epoch: 2 [640/640 (100%)] Loss: 140.62
Also, if I decrease the batch size from 128 to 64, the loss starts lower, at around 70.
|
st98270
|
The loss is decreasing in the first epoch and gets stuck in the second.
Maybe your setup is not optimal for your data.
Try to overfit a small sample of your data, i.e. take just a few samples (up to 10 per class), and try to decrease the training error to approximately zero.
Also, your learning rate might be a bit too high, which often yields an approximately constant loss.
I also don’t like the fact that the loss changes that much with the batch size.
Are you seeing the same effect for other batch sizes, i.e. 0.25*batch_size == 0.25*loss?
This should usually not be the case, as the loss is averaged over the batch dimension using the default settings for the loss function.
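A minimal sketch of this overfitting sanity check (assuming model, criterion, and a small batch data, target of roughly 10 samples per class already exist):
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(step, loss.item())  # should approach zero if the setup can learn at all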
|
st98271
|
It is not the case that 0.25*batch_size == 0.25*loss. I will create another topic on the loss issue.
|
st98272
|
The log_abs_det_jacobian in the SigmoidTransform is implemented as
-(y.reciprocal() + (1 - y).reciprocal()).log()
but I think the correct implementation should be
-(y.reciprocal().log() + (1 - y).reciprocal().log())
or even simpler y.log() + (1-y).log(), since log(a) + log(b) does not equal log(a + b).
|
st98273
|
Hi Xiucheng, I agree it’s a bit confusing, although the log_abs_det_jacobian is correct in the current implementation.
The function log_abs_det_jacobian calculates log(|dy/dx|), where y is the output and x is the input. This is equal to -log(|dx/dy|), which is closer to the formula this implementation uses for the sigmoid transform.
For the sigmoid transform we have dy/dx = sigma(x) * (1 - sigma(x)) = y*(1-y), so the function should return log(y*(1-y)). Negating and inverting gives the equal expression -log((y*(1-y))^-1), and expanding the fraction into two terms via partial fractions, 1/(y*(1-y)) = 1/y + 1/(1-y), yields -log(1/y + 1/(1-y)), which is what is calculated in the code.
Alternatively, you could start from dx/dy and x = logit(y) to get the same result. Hope this helps!
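For the record, the identity is easy to check numerically (a quick sketch, not from the original thread):
import torch

y = torch.sigmoid(torch.randn(5))
lhs = -(y.reciprocal() + (1 - y).reciprocal()).log()  # current implementation
rhs = (y * (1 - y)).log()                             # log|dy/dx| written directly
print(torch.allclose(lhs, rhs))  # True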
|
st98274
|
Hi @stefanwebb. Thanks so much for your step-by-step analysis. But when expanding -log((y*(1-y))^-1) into two terms, shouldn’t it be -log(1/y) - log(1/(1-y))?
|
st98275
|
The fraction is expanded inside the log to give -log(1/y + 1/(1-y)), using the partial-fraction identity 1/(y*(1-y)) = 1/y + 1/(1-y). Expanding the log into two terms instead gives an equivalent expression.
|
st98276
|
Oh, I see. In this particular case, log(1/y + 1/(1-y)) is indeed equivalent to log(1/y) + log(1/(1-y)).
|
st98277
|
How can I visualize the outputs of the fully connected layers, and if possible the weights of the fully connected layers as well?
|
st98278
|
You could try to plot them with matplotlib:
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

activations = {}
def get_activation(name):
    def hook(model, input, output):
        activations[name] = output.detach()
    return hook

model = nn.Sequential(
    nn.Linear(10, 25),
    nn.ReLU(),
    nn.Linear(25, 2)
)
weights = model[0].weight.data.numpy()
model[0].register_forward_hook(get_activation('layer0'))

x = torch.randn(1, 10)
output = model(x)

plt.matshow(activations['layer0'])
plt.matshow(weights)
Would this work for you or are you looking for another type of visualization?
|
st98279
|
In this case, are you plotting the output of the fully connected layer? I have never used register_forward_hook; what does that do? Thank you for your help, btw.
|
st98280
|
I register get_activation for the first layer and in the function itself I store the output in the activations dict.
The dict stores all outputs with the corresponding keys.
|
st98281
|
Then you’re plotting the output of layer one, but how is it being plotted when it is a flat 25*1 vector? Are you plotting it as such?
|
st98282
|
OK, I see, thank you! I’m trying to visualize the weights as well as the outputs in a convolution-like manner, but I’m having difficulty doing that. That was my main goal in asking this question, but I see I did a poor job of communicating it.
|
st98283
|
What do you mean by “in a convolution like manner”?
Could you link to a visualization you’ve seen like this?
|
st98284
|
Hello there, could this be used to visualize the ReLU outputs? I mean, each layer has a ReLU; can we get a visualization of the ReLU?
Regards, thanks
|
st98285
|
Sure! If you are using an nn.Sequential model, you can just register the forward hook on the nn.ReLU module.
In my example, just call:
model[1].register_forward_hook(get_activation('layer0_relu'))
...
plt.matshow(activations['layer0_relu'])
|
st98286
|
I want to use higher-order gradients with LSTMCell. However, this is not supported, so I want to implement the cell with linear layers. I also want to initialize my LSTM cell from a well-trained LSTM cell.
But I cannot find the order of the gates. The following is my implementation. I am wondering what the order of the gates is in the PyTorch LSTMCell implementation. Thank you.
gist.github.com
https://gist.github.com/jjery2243542/a26ee50b97ad8f7f7fae486ea798fae3
MyLSTMCell.py
class MyLSTMCell(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.input_layer = torch.nn.Linear(input_size, hidden_size * 4)
        self.hidden_layer = torch.nn.Linear(hidden_size, hidden_size * 4)
        self.h = hidden_size

    def forward(self, inp, hidden_tuple):
        z_tm1, c_tm1 = hidden_tuple
        h = self.h
This file has been truncated.
|
st98287
|
Solved by tom in post #2
|
st98288
|
The order within the parameters is the same as for LSTM, so you can consult the Variables section in the LSTM documentation.
Best regards
Thomas
|
st98289
|
Regarding the tf.count_nonzero function from this page: https://www.tensorflow.org/api_docs/python/tf/count_nonzero
tf.count_nonzero(x, 1, keepdims=True) # [[1], [2]]
Is there a way to do a similar thing in PyTorch and still keep the dimension?
From the answer in the thread Count nonzeros element along an axis, it seems there is a way to count, but it doesn’t keep the dimensions. How should I do this?
Thanks
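A minimal sketch of one way to express this in PyTorch (keepdim=True preserves the reduced dimension):
import torch

x = torch.tensor([[0., 1., 0.],
                  [2., 3., 0.]])
count = (x != 0).sum(dim=1, keepdim=True)
print(count)  # tensor([[1], [2]])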
|
st98290
|
Hi all,
Does torch.nn.CTCLoss really still not exist as of today? Where can I see the latest information about the CTC update?
This code still does not work:
!pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html
import torch
from torch import nn
ctc_loss = nn.CTCLoss()
print('Done!!!!!!')
---> 15 ctc_loss = nn.CTCLoss()
     16 print('Done!!!!!!')
AttributeError: module 'torch.nn' has no attribute 'CTCLoss'
Why? How can I get CTC to work?
|
st98291
|
Are you sure that you actually installed the nightly? Could you print the version number?
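Checking the installed version is a one-liner (a nightly build should report a .dev version string):
import torch
print(torch.__version__)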
|
st98292
|
I want to measure my model's inference time with this code:
import time
import numpy as np
import torch

with torch.no_grad():
    input = np.random.rand(1, 3, 368, 368).astype(np.float32)
    input = torch.from_numpy(input)
    start = time.time()
    for i in range(100):
        t1 = time.clock()
        _, _ = net(input)
        t2 = time.clock()
        print('every_time: %04d: ' % i, t2 - t1)
    end = time.time()
    print('total time: ', end - start)
the print result is:
every_time: 0000: 0.37265799999999993
every_time: 0001: 0.32706800000000014
.
.
.
every_time: 0098: 0.32011200000000173
every_time: 0099: 0.3260919999999956
total time: 8.159255981445312
every_time is about 0.3~0.4, so the total time should be (0.3~0.4)*100 = 30~40, but the reported total time is about 8.16. Actually, in my opinion, 8.16 is the correct time. So why doesn't every_time times 100 equal the total time?
|
st98293
|
Solved by SimonW in post #2
|
st98294
|
you are not measuring the same thing. see https://stackoverflow.com/questions/85451/pythons-time-clock-vs-time-time-accuracy
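In short: on Linux, time.clock() reports CPU time (which, with multi-threaded ops, can exceed wall time), while time.time() reports wall-clock time. A sketch using time.perf_counter() to measure wall time consistently (net and input as in the snippet above):
import time

t1 = time.perf_counter()   # wall-clock, monotonic
_, _ = net(input)
t2 = time.perf_counter()
print('wall time:', t2 - t1)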
|
st98295
|
Hi, I am trying to convert my LSTM model to Torch Script to be used in a production environment. I did it according to the guidelines in "Deploying a Seq2Seq Model with the Hybrid Frontend" (https://pytorch.org/tutorials/beginner/deploy_seq2seq_hybrid_frontend_tutorial.html), but I encountered the following problem:
File "/home/ner/src/model/bilstm.py", line 76, in __init__
    self.forward_lstm = LatticeLSTM(lstm_input, lstm_hidden, gaz_dropout, gaz_alphabet_size, gaz_emb_dim, gaz_embedding, True, HP_fix_gaz_emb, False)
File "/home/.local/lib/python2.7/site-packages/torch/jit/__init__.py", line 891, in init_then_register
    _create_methods_from_stubs(self, methods)
File "/home/.local/lib/python2.7/site-packages/torch/jit/__init__.py", line 852, in _create_methods_from_stubs
    self._create_methods(defs, rcbs, defaults)
File "/home/.local/lib/python2.7/site-packages/torch/jit/__init__.py", line 603, in _try_compile_weak_script
    entry = _compiled_weak_fns.get(fn)
File "/usr/lib64/python2.7/weakref.py", line 284, in get
    return self.data.get(ref(key), default)
TypeError: cannot create weak reference to 'builtin_function_or_method' object
Here is part of my code:
class MyLSTM(torch.jit.ScriptModule):
    __constants__ = ['left2right', 'use_gpu', 'hidden_dim']

    def __init__(self, input_dim, hidden_dim, word_drop, word_alphabet_size, word_emb_dim, pretrain_word_emb=None, left2right=True, fix_word_emb=True, use_gpu=False, use_bias=True):
        super(LatticeLSTM, self).__init__()
        skip_direction = "forward" if left2right else "backward"
        print "build LatticeLSTM... ", skip_direction, ", Fix emb:", fix_word_emb, " gaz drop:", word_drop
        self.use_gpu = use_gpu
        ...

    @torch.jit.script_method
    def forward(self, input, skip_input_list):
        # type: (Tensor, Tuple[List[List[List[int]]], bool])
        skip_input = skip_input_list[0]
        ...
Any thoughts?
|
st98296
|
Solved. It was because I mistakenly deleted the forward function. If you encounter the same problem, check your forward function and all the functions it calls; make sure to add the torch.jit.script decorator to the functions and the torch.jit.script_method decorator to your module's methods.
|
st98297
|
I have a tensor output of size torch.Size([1, 6, 1]) that I want to reshape to torch.Size([6]). I tried output.view(-1, 6), but that gives me a tensor of size torch.Size([1, 6]). Could someone help me?
|
st98298
|
Solved by kaixin in post #2
output.squeeze() is exactly what you need.
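A quick demonstration:
import torch

output = torch.randn(1, 6, 1)
print(output.squeeze().shape)  # torch.Size([6])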
|
st98299
|
I have a GTX 1080 with 8G onboard.
> device = torch.device("cuda")
> Tensor = torch.cuda.HalfTensor(30*1000*1000, 128)
Here OK, it's less than 8G.
> out = torch.mv(Tensor, Tensor[0])
Still OK, used memory still less than 8G.
> print(out[0])
And here CUDA FAILS on print(!!) (actually on the access to out[0]):
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorCopy.cpp line=70 error=77 : an illegal memory access was encountered
Traceback (most recent call last):
  File "test_pytorch.py", line 112, in <module>
    main()
  File "test_pytorch.py", line 26, in main
    print(out[0])
  File "/home/integral/.local/lib/python3.5/site-packages/torch/tensor.py", line 57, in __repr__
    return torch._tensor_str._str(self)
  File "/home/integral/.local/lib/python3.5/site-packages/torch/_tensor_str.py", line 256, in _str
    formatter = _Formatter(get_summarized_data(self) if summarize else self)
  File "/home/integral/.local/lib/python3.5/site-packages/torch/_tensor_str.py", line 82, in __init__
    copy = torch.empty(tensor.size(), dtype=torch.float64).copy_(tensor).view(tensor.nelement())
RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /pytorch/aten/src/THC/generic/THCTensorCopy.cpp:70
|
st98300
|
Tried to copy the tensor out to the CPU - same error.
The error arises if I occupy more than 1/2 of the GPU memory; with a tensor of size 16M or less, no error arises:
Tensor = torch.cuda.HalfTensor(16*1000*1000, 128)
|
st98301
|
Next:
I can create 2 tensors of 15M vectors (in total about ~8G of memory, i.e. memory consumption near 100%).
Then I can run torch.mv on both and combine the results.
No errors in this case.
|
st98302
|
Hi,
I’m trying to implement the structure described in this squeeze-and-excitation paper: https://arxiv.org/pdf/1709.01507.pdf. The first layer of the structure is a global pooling layer across the height and width dimensions. I searched the Caffe2 Python API for a suitable operator to do this but didn’t find any.
There is a ReduceMean op, however, written for the C++ API. It calculates the mean value across the last dimension of the input blob.
Does anyone know how I can make this work? Thanks in advance!
|
st98303
|
You're looking for:
https://pytorch.org/docs/stable/nn.html#adaptiveavgpool2d
You can see it in this implementation of that paper:
github.com
moskomule/senet.pytorch/blob/master/se_module.py
from torch import nn

class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        return x * y
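A minimal usage sketch of the layer above (the shapes are my assumption):
import torch

se = SELayer(channel=64)
x = torch.randn(8, 64, 32, 32)
out = se(x)          # same shape as x, channels reweighted by the SE branch
print(out.shape)     # torch.Size([8, 64, 32, 32])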
|
st98304
|
I have a loss function that uses intermediate outputs at various points in a network. Currently, I just use a dummy class that holds the intermediate results, which I can inject at the points of the network I need:
class Tracker(nn.Module):
    def __init__(self):
        super(Tracker, self).__init__()

    def forward(self, x):
        self.x = x
        return x
This solution works fine for a single GPU, but it breaks when using data_parallel.
Any suggestions? Thanks.
|
st98305
|
I ended up using a list indexed by the device id of the tensor. This way I did not have to modify the structure of the network at all. Thanks.
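For reference, a minimal sketch of my reading of this approach (not the author's exact code; assumes a PyTorch version where tensors expose .device.index):
import torch.nn as nn

class Tracker(nn.Module):
    def __init__(self, num_devices):
        super(Tracker, self).__init__()
        # plain list attribute: DataParallel replicas share it by reference
        self.outputs = [None] * num_devices

    def forward(self, x):
        # store the activation in the slot of the GPU this replica runs on
        self.outputs[x.device.index] = x
        return x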
|
st98306
|
I get the following after one forward pass through a network where my loss function is calculated using torch.svd:
MAGMA gesdd : the updating process of SBDSDC did not converge. Does anyone have any idea what might be wrong?
|
st98307
|
Your matrix is probably so ill-conditioned that the underlying LAPACK operation couldn't converge. You can try to catch that error, print the input in that case, and see what the matrix looks like exactly.
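Something along these lines (a sketch; x stands for the input to svd):
try:
    u, s, v = torch.svd(x)
except RuntimeError:
    print(x)               # inspect the offending matrix
    print((x != x).any())  # non-zero if x contains NaNs (NaN != NaN)
    raise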
|
st98308
|
Thanks. Actually, when I inspect the input to svd in a try/except block, the input has a lot of NaN values. I am unable to find out what made the output of the network become NaN.
|
st98309
|
Can very similar singular values make the gradient through svd NaN?
Below are the printed singular values for a single batch.
tensor([ 12.1675, 12.0520, 10.0973, 10.0002, 9.9813, 9.8853,
         7.3679, 7.2976, 0.3496, 0.2729, 0.2605, 0.2010], device='cuda:0')
tensor([ 12.1267, 12.0520, 10.0581, 10.0000, 9.9442, 9.8853,
         7.3371, 7.2973, 0.3157, 0.2456, 0.2269, 0.1650], device='cuda:0')
tensor([ 12.2977, 12.0520, 10.1992, 10.0880, 10.0000, 9.8853,
         7.4365, 7.2974, 0.3493, 0.2985, 0.2740, 0.2152], device='cuda:0')
tensor([ 12.1671, 12.0520, 10.0945, 10.0000, 9.9820, 9.8853,
         7.3627, 7.2973, 0.3041, 0.2643, 0.2420, 0.1975], device='cuda:0')
tensor([ 12.2938, 12.0520, 10.2006, 10.0838, 10.0000, 9.8853,
         7.4468, 7.2974, 0.3078, 0.2585, 0.2148, 0.1598], device='cuda:0')
tensor([ 12.3265, 12.0520, 10.2277, 10.1063, 10.0001, 9.8853,
         7.4629, 7.2974, 0.3733, 0.3435, 0.2603, 0.2503], device='cuda:0')
tensor([ 12.5809, 12.0520, 10.4342, 10.3264, 10.0000, 9.8853,
         7.6103, 7.2973, 0.4250, 0.3824, 0.3240, 0.2292], device='cuda:0')
tensor([ 12.1654, 12.0520, 10.0932, 10.0001, 9.9825, 9.8853,
         7.3627, 7.2976, 0.4197, 0.3485, 0.3112, 0.2711], device='cuda:0')
tensor([ 12.2557, 12.0521, 10.1630, 10.0525, 10.0000, 9.8854,
         7.4115, 7.2973, 0.4303, 0.3432, 0.3237, 0.2445], device='cuda:0')
tensor([ 12.5036, 12.0520, 10.3740, 10.2545, 10.0000, 9.8853,
         7.5665, 7.2973, 0.2738, 0.2367, 0.2166, 0.1782], device='cuda:0')
tensor([ 12.0985, 12.0520, 10.0394, 10.0001, 9.9249, 9.8853,
         7.3254, 7.2974, 0.3275, 0.2966, 0.2596, 0.2185], device='cuda:0')
tensor([ 12.5316, 12.0520, 10.4002, 10.2913, 10.0000, 9.8853,
         7.5864, 7.2974, 0.3936, 0.3172, 0.2886, 0.2097], device='cuda:0')
tensor([ 12.6889, 12.0520, 10.5327, 10.4080, 10.0000, 9.8853,
         7.6836, 7.2974, 0.2977, 0.2453, 0.2348, 0.1856], device='cuda:0')
tensor([ 12.1498, 12.0520, 10.0823, 10.0000, 9.9642, 9.8854,
         7.3586, 7.2974, 0.4269, 0.3754, 0.3256, 0.2559], device='cuda:0')
tensor([ 12.1555, 12.0520, 10.0841, 10.0001, 9.9664, 9.8853,
         7.3581, 7.2974, 0.3186, 0.2729, 0.2538, 0.2028], device='cuda:0')
tensor([ 12.3647, 12.0521, 10.2616, 10.1446, 10.0001, 9.8854,
         7.4893, 7.2974, 0.3760, 0.2918, 0.2656, 0.2396], device='cuda:0')
epoch 0[0/556], loss: 26438.254, coord_loss: 26437.879, conf_objloss: 0.074, conf_noobjloss: 0.211 cls_loss: 0.375 (8.85 s/batch, rest:1:21:58)
image_size [640 640]
Traceback (most recent call last):
U, S, Vh = torch.svd(XwX)
RuntimeError: MAGMA gesdd : the updating process of SBDSDC did not converge (error: 11) at /pytorch/aten/src/THC/generic/THCTensorMathMagma.cu:364
Process finished with exit code 1
|
st98310
|
Yeah, LAPACK can't deal with NaNs, unfortunately. You can try the anomaly detection mode to find where the NaN happens: https://pytorch.org/docs/master/autograd.html#anomaly-detection
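A minimal sketch of the usage described in that doc (model, x, and the scalar loss are assumptions on my part):
import torch

with torch.autograd.detect_anomaly():
    loss = model(x).sum()
    loss.backward()  # raises and points at the forward op that produced the NaN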
|
st98311
|
I get the same error:
RuntimeError: MAGMA gesdd : the updating process of SBDSDC did not converge (error: 1) at /pytorch/aten/src/THC/generic/THCTensorMathMagma.cu:364
What can be the reason for NaN while applying SVD? Is there an example you can point to for applying anomaly detection in such a scenario?
Thanks!
|
st98312
|
I am unable to reproduce the same error in torch using the below:
a = torch.tensor([[float('NaN'), 0.0, 0.0, 9.57, -3.49, 9.84],
                  [9.93, 6.91, -7.93, 1.64, 4.02, 0.15],
                  [9.83, 5.04, 4.86, 8.83, 9.80, -8.99],
                  [5.45, -0.27, 4.85, 0.74, 10.00, -6.02],
                  [0.0, 7.98, 3.01, 5.80, 4.27, -5.31]]).t()
u, s, v = torch.svd(a)
The current error is:
Lapack Error gesvd : 4 superdiagonals failed to converge. at /pytorch/aten/src/TH/generic/THTensorLapack.c:470
It would be great, @kk1153 and @SimonW, if you could tell me how to go about reproducing the same error and fixing it.
In my original code, the error appears at the 500th epoch after 13 hours of training, so I do need to be able to replicate the error and fix it.
Thanks,
Shikha
|
st98313
|
You have a NaN in your svd input… The reason for a NaN is usually that the output of the previous function has a NaN. The anomaly detection doc has a pretty good example of how to apply it to a module. It should be helpful in finding where it happens.
|
st98314
|
I would suggest you look at the gradients at the previous step. In my case, the gradients were exploding in the previous training iteration due to a normalization step I was doing after the svd. This made the input NaN, which causes this error when the NaN goes into torch.svd.
So what you can do is check the values of the gradients of the variables which are outputs of svd, and see if they are becoming very large; you can inspect gradients using a backward hook. Also look for places where you are dividing by some variable which can become close to zero (if there are any such variables, just add an epsilon to them).
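A sketch of such a backward hook (hypothetical names; u, s are assumed to be svd outputs kept in the graph, and the 1e6 threshold is an arbitrary choice):
def check_grad(name):
    def hook(grad):
        # flag NaNs (NaN != NaN) or very large entries flowing back into svd
        if (grad != grad).any() or grad.abs().max() > 1e6:
            print('suspicious gradient flowing into', name)
    return hook

u.register_hook(check_grad('u'))
s.register_hook(check_grad('s'))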
|
st98315
|
@kk1153 @SimonW, thanks for your responses.
I reran my code and it worked this time. For some reason, the jobs had failed on the V100.
|
st98316
|
I’m noticing that when using .cuda(), the memory (not GPU memory) used by Python jumps significantly.
For example, if I run the commands:
import torch
torch.tensor([1]).cuda()
memory usage jumps from tens of MB to 1.5+ GB (memory usage on the GPU goes up by less than 500MB).
Is this expected behavior? If not, any recommendations on finding/fixing the issue? I'm using the most recent stable version of PyTorch on Ubuntu 18.04 with Python 3.6 and CUDA 9.2, installed via conda.
Thanks!
|
st98317
|
Ah thanks, I had been looking around for something like that but apparently didn’t do a very good job.
|
st98318
|
Hello All,
I recognize that 1.0.0 is not yet released. However, is there a plan for an LTS release? In particular with regard to Python 2.
NumPy will stop new feature development for Python 2 on Jan 1. I'm not sure how that will affect the roadmap for PyTorch given the dependency.
If I were to build against 2.7, how much of a trap would I be in, say, during June of next year?
|
st98319
|
We don't plan to drop Python 2.7 yet.
The last time we discussed this, we thought about dropping it when either of the two happens:
We see < 10% Python 2.7 downloads
The pain threshold for maintaining 2.7 support is so great that PyTorch as a product loses significant development time
Neither of these has happened, or is expected to happen by June of next year as far as I can tell.
My answer, as you see, is not giving you a commitment, but the thought process of the team.
We haven't committed to an LTS release at this point (we aren't even 1.0 stable until December this year).
|
st98320
|
Hello,
Thank you to the FAIR team for your work; PyTorch is a pleasure to use.
I have a few questions:
If I understand correctly, we should expect a stable 1.0 version in December?
If I start a project using the release candidate version, will the migration to the forthcoming stable version represent a lot of work?
|
st98321
|
Does the loss given by criterion(output, target) include the regularization loss? If so, is it possible to extract it? If not, how do I inspect the regularization loss?
|
st98322
|
That depends on which criterion and which regularization you are using.
In PyTorch, weight decay is usually (for SGD/Adam) applied in the optimizer directly rather than in the loss, because this term does not need to be computed explicitly.
|
st98323
|
Yup; for cases like SGD/Adam, how can I get the regularization term? Is there a way to do it?
|
st98324
|
You cannot get it, as it is never computed.
Indeed, the gradient corresponding to it is always proportional to the weights themselves before the update, so the step is implemented directly. For example, for the SGD optimizer, you can see at this line that it only adds the weights themselves to the gradients.
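If you want the value for logging anyway, you can compute the penalty that weight_decay implicitly corresponds to yourself. A sketch (for SGD, weight_decay=wd adds wd * w to the gradient, which is exactly the gradient of 0.5 * wd * ||w||^2; wd and model are assumptions):
wd = 1e-4  # your weight_decay value
reg_loss = 0.5 * wd * sum(p.pow(2).sum() for p in model.parameters())
print(reg_loss.item())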
|
st98325
|
I am trying to implement position-sensitive ROI pooling (PSROIPooling), which was proposed in the R-FCN work.
PSROIPooling is basically ROIPooling + average pooling.
I am using the roi_pooling.py that is written in PyTorch and provided here, and I am trying to change this part of the code to be completely in PyTorch (please note that the current version is in CUDA, but I need to make some modifications, which is why I'm trying to change it to pure PyTorch).
So I changed that file from:
import torch
from torch.autograd import Function
from .._ext import psroi_pooling

class PSRoIPoolingFunction(Function):
    def __init__(self, pooled_height, pooled_width, spatial_scale, group_size, output_dim):
        self.pooled_width = int(pooled_width)
        self.pooled_height = int(pooled_height)
        self.spatial_scale = float(spatial_scale)
        self.group_size = int(group_size)
        self.output_dim = int(output_dim)
        self.output = None
        self.mappingchannel = None
        self.rois = None
        self.feature_size = None

    def forward(self, features, rois):
        batch_size, num_channels, data_height, data_width = features.size()
        num_rois = rois.size()[0]
        output = torch.zeros(num_rois, self.output_dim, self.pooled_height, self.pooled_width)
        mappingchannel = torch.IntTensor(num_rois, self.output_dim, self.pooled_height, self.pooled_width).zero_()
        output = output.cuda()
        mappingchannel = mappingchannel.cuda()
        psroi_pooling.psroi_pooling_forward_cuda(self.pooled_height, self.pooled_width, self.spatial_scale, self.group_size, self.output_dim,
                                                 features, rois, output, mappingchannel)
        self.output = output
        self.mappingchannel = mappingchannel
        self.rois = rois
        self.feature_size = features.size()
        return output

    def backward(self, grad_output):
        assert(self.feature_size is not None and grad_output.is_cuda)
        batch_size, num_channels, data_height, data_width = self.feature_size
        grad_input = torch.zeros(batch_size, num_channels, data_height, data_width).cuda()
        psroi_pooling.psroi_pooling_backward_cuda(self.pooled_height, self.pooled_width, self.spatial_scale, self.output_dim,
                                                  grad_output, self.rois, grad_input, self.mappingchannel)
        return grad_input, None
to be like this:
import torch
from torch.autograd import Function
from .._ext import psroi_pooling
from .ROI_Pooling_PyTorch import *
from .ROI_Pooling_PyTorch import roi_pooling
from torch.autograd import Variable

class PSRoIPoolingFunction(Function):
    def __init__(self, pooled_height, pooled_width, spatial_scale, group_size, output_dim):
        self.pooled_width = int(pooled_width)
        self.pooled_height = int(pooled_height)
        self.spatial_scale = float(spatial_scale)
        self.group_size = int(group_size)
        self.output_dim = int(output_dim)
        self.output = None
        self.mappingchannel = None
        self.rois = None
        self.feature_size = None

    def forward(self, features, rois):
        batch_size, num_channels, data_height, data_width = features.size()
        num_rois = rois.size()[0]
        output = torch.zeros(num_rois, self.output_dim, self.pooled_height, self.pooled_width)
        # mappingchannel = torch.IntTensor(num_rois, self.output_dim, self.pooled_height, self.pooled_width).zero_()
        # ROI pooling
        out2 = roi_pooling(features, rois, size=(self.pooled_height, self.pooled_width),
                           spatial_scale=self.spatial_scale)
        # average pooling for position sensitivity
        output = Variable(output.cuda())
        chan = 0
        for i in range(0, out2.size(1), self.pooled_height * self.pooled_width):
            output[:, chan, :, :] = torch.mean(out2[:, i:i + self.pooled_height * self.pooled_width, :, :], 1, keepdim=True)
            chan += 1
        # mappingchannel = mappingchannel.cuda()
        self.output = output
        # self.mappingchannel = mappingchannel
        self.rois = rois
        self.feature_size = features.size()
        return output.data

    def backward(self, grad_output):
        # =============================================================================
        # What should I put here?????
        # =============================================================================
The forward pass seems to work, but the backward pass does not, and I have no idea what to put there at all.
Can anyone please help me?
P.S. Just for the record, I am using PyTorch version 0.3.1 and cannot switch to 0.4.1 at the moment.
|
st98326
|
I use a conda installation, but since version ~0.3 I follow the code as it appears on GitHub.
I got an error on git pull on the master branch -
maybe it's something dumb, or maybe that eigen submodule should not be password-locked.
Thanks
$ git pull
…
From https://github.com/pytorch/pytorch
89bf98a…e475d3e master -> origin/master
…
Fetching submodule third_party/eigen
Username for 'https://github.com': …
Password for …
remote: Invalid username or password.
fatal: Authentication failed for 'https://github.com/RLovelett/eigen.git/'
|
st98327
|
Hi,
The eigen repo moved so the one you try to pull does not exist anymore.
You can delete the eigen folder and re-update the submodules:
rm -r third_party/eigen
git submodule update --init
|
st98328
|
What is the difference between [None, ...] and .unsqueeze(0) when adding a new dimension to data?
In my test, both methods share the original storage and .unsqueeze(0) is slightly faster. Is [None, ...] just a wrapper of .unsqueeze()?
Thanks.
|
st98329
|
Solved by albanD in post #2
|
st98330
|
Hi,
[None, ...] is numpy notation for .unsqueeze(0). The two are equivalent. The version with advanced indexing might be a bit slower because it has more checking to do to find out exactly what you want to do.
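A quick check of the equivalence:
import torch

x = torch.arange(6)
print(torch.equal(x[None, ...], x.unsqueeze(0)))  # True
print(x[None, ...].shape)                          # torch.Size([1, 6])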
|
st98331
|
Hi,
After experimenting with different language models, I wanted to make a change to the PyTorch word_language_model example.
I'm not sure if my idea is possible in principle or a good one, but it should be possible anyway…
Basically, I want to take out the encoder (embedding)/decoder part and train the language model using an existing word embedding, which is trained beforehand with gensim (the source code for that is in my repo, link below).
I train a Word2Vec model of size 200 before training the language model. The corpus tensors will have the shape ntoken * 200, and each batch bptt (sequence length) * batch_size * 200.
The model therefore looks rather simple: the input goes straight into the RNN and after that into a Linear layer to bring it back to size 200, in order to be compared with the target.
class RNNModel(nn.Module):
    """Container module with an encoder, a recurrent module, and a decoder."""

    def __init__(self, rnn_type, nhid, nlayers, dropout=0.5, word_embedding=None):
        super(RNNModel, self).__init__()
        self.wem = word_embedding
        # self.drop = nn.Dropout(dropout)
        if rnn_type in ['LSTM', 'GRU']:
            self.rnn = getattr(nn, rnn_type)(word_embedding.vector_size, nhid, nlayers, dropout=dropout)
        else:
            try:
                nonlinearity = {'RNN_TANH': 'tanh', 'RNN_RELU': 'relu'}[rnn_type]
            except KeyError:
                raise ValueError("""An invalid option for `--model` was supplied,
                                 options are ['LSTM', 'GRU', 'RNN_TANH' or 'RNN_RELU']""")
            self.rnn = nn.RNN(word_embedding.vector_size, nhid, nlayers, nonlinearity=nonlinearity, dropout=dropout)
        self.LOut = nn.Linear(nhid, word_embedding.vector_size)
        self.rnn_type = rnn_type
        self.nhid = nhid
        self.nlayers = nlayers

    def forward(self, input, hidden):
        # self.rnn.flatten_parameters()
        output, hidden = self.rnn(input, hidden)
        # output = self.drop(output)
        # print('output', output.data.shape)
        output = self.LOut(output)
        return output, hidden
There are also some changes in the main module.
First, I had to change the loss function, since the target tensor does not contain classes (word ids) but word vectors; I used the MSELoss function.
Second, I tried to replace the manual parameter update
for p in model.parameters():
    p.data.add_(-lr, p.grad.data)
with optim.SGD, as I have seen this used in other language model examples. I tried several learning rates: the default 20 of the example, 1, and 0.005 or something in that range.
The result is that the model basically doesn't learn anything. It starts with an average loss of 1.15 and maybe gets down to 0.94. The word output is pretty random anyway :).
I played a bit with the hyperparameters without any success.
Before giving up I wanted to ask if somebody could give me a hint about what goes wrong.
My changes are on this forked repo, wem_model branch.
It includes the word2vec model, which can also be created with the embedding.py script and only takes a couple of minutes on the wiki-2 dataset.
Obviously all this needs gensim.
I also included a notebook that basically does all the steps in main.py one by one, with which I tried to understand what is going on.
What's going on?!
|
st98332
|
OK, this is quite an old topic, but I thought people might still wonder about this, so here goes.
I've tried what you proposed, and if I am not wrong, what you should do is keep an index-to-weights dict when you initially encode your strings to ids (in data.py), and later in the model, instead of initializing the encoder (an nn.Embedding module) weights dynamically, you pass in the weights for the indexes you accumulated.
So one should replace:
github.com
pytorch/examples/blob/81f47e8ea49c74494d2aa8dc1c9c4ddc6c0eca73/word_language_model/model.py#L40
        self.decoder.weight = self.encoder.weight
        self.init_weights()
        self.rnn_type = rnn_type
        self.nhid = nhid
        self.nlayers = nlayers

    def init_weights(self):
        initrange = 0.1
        self.encoder.weight.data.uniform_(-initrange, initrange)
        self.decoder.bias.data.zero_()
        self.decoder.weight.data.uniform_(-initrange, initrange)

    def forward(self, input, hidden):
        emb = self.drop(self.encoder(input))
        output, hidden = self.rnn(emb, hidden)
        output = self.drop(output)
        decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
        return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden
with something along these lines (supposing vectors is the idx-to-weights dict):
w = list(vectors.values())
weights = np.asarray(w, dtype=np.float32)
self.encoder.weight.data.copy_(torch.from_numpy(weights))
|
st98333
|
Could you tell me why s_copy_ is NOT profiled?
Even in the current code, a RecordFunction does not exist for s_copy_: https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/VariableTypeManual.cpp#L246
I think s_copy_ is a very important method for profiling.
|
st98334
|
I have a CNN acting as a regressor to predict a value between 0 and 1 for images, where 1 represents a nicely ordered disk-looking object and 0 represents an almost randomly, disorderly moving object.
I have 1,525,760 images to train on, but I'm restricting the number of samples selected per epoch to 23,840. This number was chosen because the batch size is 64, so one new epoch under this sampling method is only 1/64th of a true epoch.
To do this I'm using WeightedRandomSampler with the weights of each image set equally to 1. In this respect, I believe the CNN will draw purely random samples to make each new batch.
The reason I'm doing this is to observe how the data is fitting while the network is seeing the images for the first time, as in tests with ordinary epochs I found the validation loss to stagnate from epoch 1 onwards.
It looks like the validation loss drops and then stagnates before it hits the end of epoch one when using the new sampling system, as seen below (note that the y-axis is MSE loss).
Perhaps the model is solving part of the regression problem (maybe the k=1 end, which has lower diversity in image appearance) and then struggling after that?
My question is:
Is there a way to improve the accuracy of the CNN beyond the stagnating validation accuracy?
Things to consider:
I don't think this is classic overfitting, as the CNN hasn't seen all of the data by epoch 0.4, at which point the validation and training accuracy diverge.
I've tried using dropout in case it was overfitting, but all that does is shift the training and validation curves higher in MSE loss.
My model:
class RegressionalNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.feature_extractor = torch.nn.Sequential(
            torch.nn.Conv2d(1, 64, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2),
            torch.nn.Conv2d(128, 256, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2))
        self.classifier = torch.nn.Sequential(
            torch.nn.Linear(256*16*16, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 1))

    def forward(self, x):
        features = self.feature_extractor(x)
        output = self.classifier(features.view(int(x.size()[0]), -1))
        return output
I am using:
the Adam optimizer with an initial learning rate of 1e-3
MSELoss
Any help would be greatly appreciated!
|
st98335
|
If your hypothesis is that the CNN solves the class-1 problem because it is simpler, you can verify this by looking at the loss or error of each class individually.
|
st98336
|
That’s a good idea, thanks. Do you think it is more informative for the validation accuracy rather than the training accuracy?
|
st98337
|
Not really sure, but if I were to guess:
train falls to 0 but validation does not --> classic overfitting
train doesn't fall to 0 but validation does --> validation is a subset of training, possible data leakage
both fall to 0 --> the case you describe, where one class is solved easily while the other is not.
|
st98338
|
I’m trying to implement the binned errors, so I'll get back to you on that once it's working.
|
st98339
|
Could you explain your idea using the WeightedRandomSampler a bit?
I’m not sure I understand it properly.
It seems you are restricting the number of samples per epoch using the Dataset's __len__ method?
If you use WeightedRandomSampler with equal weights and replacement=True, you might get some duplicated samples.
If you just want to iterate your dataset, you don’t have to use a special sampler, as the default one will just use each sample once.
|
st98340
|
Of course.
I'm using WeightedRandomSampler with replacement=False and setting the weight for each image in the dataset to 1. This way the DataLoader will select all the images over one epoch, but given that I can restrict num_samples, I can observe how the CNN performs on a test set before it has seen all the data.
This was the only way I could restrict the number of samples, because I couldn't find a num_samples option in DataLoader.
Does this make sense?
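For reference, a minimal sketch of this setup (dataset stands for the full Dataset; the numbers are the ones from the posts above):
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

weights = torch.ones(len(dataset))  # equal weight for every image
sampler = WeightedRandomSampler(weights, num_samples=23840, replacement=False)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)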
|
st98341
|
I have implemented the binned validation, and it looks like the network struggles with predicting low values of k, on the face of it.
The whole thing really stagnates before 1 epoch (that's epoch 64 on the plot below), but strangely there's some good fitting at, say, k=0.3 too, so I'm not entirely sure why it under-performs on some low-k images but not others. To me this would suggest a problem with the dataset, no? Or an under-representation in the test set?
[Figure: Validation_loss_017.png]
|
st98342
|
Can you get the training curves as well? If the training is doing well on all groups, then there is an under-representation of that group in the test set. The whole idea is the difference in trends between train and test.
Also, do you have an equal number of samples from all groups?
|
st98343
|
I can do that, yes; it will take a few days though, just a fair warning. I don't have an equal number of samples, because as k approaches zero the diversity in images increases exponentially, so the number of samples with k<0.6 is heavily overpopulated, whereas the number of samples for, say, k~0.9 is smaller because the network has been shown to predict those images well on unseen data.
|
st98344
|
In retrospect, I don't know if I can do this for training, as the only way I could do it for validation was by splitting my validation set into groups and running them through the network group by group… but the train loader randomly selects the data, so isolating the groups within a batch and then re-running them through the network and cataloguing the accuracy seems very tricky.
|
st98345
|
Looking at the plot, I'm noticing some anti-correlation between predictions for classes below and above k~0.5.
It makes improving the global test accuracy difficult, as high-k-labelled images only improve in test score when the low-k-labelled images drop in test score.
This is undoubtedly because of the imbalanced dataset, but is there a way to stop this happening without collecting more data? The frequency of occurrence of low-k images is very, very low.
I wonder if anyone has advice on dealing with regression problems where the diversity of images is very high in a class whose frequency of occurrence is very low?
|
st98346
|
Hi,
I'm currently working on finetuning a large CNN for semantic segmentation, and due to GPU memory limitations I can only use a batch size of one.
The CNN I'm using has a bunch of batch normalization layers, which I want to fix during training (since batch normalization with batch size 1 does not make sense). The idea is to set the mode of the batchnorm layers to eval during training. I use the following code to do this:
net.train()
for module in net.modules():
    if isinstance(module, torch.nn.modules.BatchNorm1d):
        module.eval()
    if isinstance(module, torch.nn.modules.BatchNorm2d):
        module.eval()
    if isinstance(module, torch.nn.modules.BatchNorm3d):
        module.eval()
But I can't seem to get the training to work.
I've been debugging by doing the same on training code that works fine with a large batch size when the batch norm layers are not frozen; fixing the batch norm layers there makes the training diverge.
Am I missing something here? Does the batchnorm layers' backprop function work in eval mode? Is there anything else I need to be doing?
Thanks!
|
st98347
|
If your BatchNorm layers are set to affine=True, they have the weight and bias parameters (gamma and beta in the paper), which are initialized with some values. If you don't train these layers at all, both parameters will stay in this initial state.
If you still want to standardize your activations with the running_mean and running_var, I would suggest setting affine=False and testing it again.
Let me know if this helps!
|
st98348
|
Thank you for your reply!
They are set to affine=True. Since I'm loading a pretrained model, I believe I need these parameters to get the correct pretrained network.
What I'm trying to achieve is to keep the gamma and beta from the pretrained network, as well as the estimates of running_mean and running_var. I then want to train the network without updating running_mean and running_var (since I can't fit a large enough batch into GPU memory). This seems to break the training in some way.
|
st98349
|
@maunz
I had the same problem. It seems I was setting eval mode wrong.
This thread helped me: Freeze BatchNorm layer lead to NaN
def set_bn_to_eval(m):
    classname = m.__class__.__name__
    if classname.find('BatchNorm') != -1:
        m.eval()

net.apply(set_bn_to_eval)
If you want to set_bn_to_eval on some subnet or base network, just call .apply on that submodule.
Hope this helps.
|
st98350
|
Hi!
I noticed an error in my pretrained network unrelated to the batchnorm layers. When I fixed that, setting the batchnorm layers to .eval() worked.
|
st98351
|
I am working on some medical data with a very limited dataset.
I have MRI scans as training images and CTs as the corresponding targets that I want to generate from the MRI images (this is going to be a GAN to make fake CTs, with a U-Net-based generator).
I am coding the generator part and wanted to train my U-Net to see whether it works okay.
But I am confused about how to correctly load the images; my issue is this:
I have more MRI images than CT. I am thinking that maybe for every epoch, treating each patient as a batch, I want to load the maximum possible number of images at random from each patient and then train my U-Net. How can I do this using a PyTorch data.Dataset? Do I need 1:1 MRI and CT images to train the network?
Thank you in advance!
|
st98352
|
How imbalanced is your data and how many samples do you have?
I think you might apply some data augmentation (e.g. different level/window settings for the images) to artificially create some more samples. Affine transformations might also be a good strategy; especially for MRI images I would think you can be brave in using augmentation techniques.
However, let's dig a bit into the Dataset implementation. How is your data currently stored?
Do you have a folder for each patient with DICOM images in it? If so, do you have different sessions or just different slices?
How many patients, scans, and slices do you have?
Based on this information, I'm certain we will come up with a good approach.
|
st98353
|
Hello! Thank you for your response!
I have my data as both DICOM and Analyze right now, but I am using a MATLAB script to convert Analyze to .jpg, which I then use for the U-Net (this is so wasteful, sorry, pretty new to this). Picking up data directly from DICOM/Analyze would be very helpful, now and in the future. I have a folder for each patient with all the DICOM images in it.
Currently I am testing with a few patients (10) and only a certain part of the brain, to see whether I can make the network work properly. This is about 120 images per modality. I would say this very small dataset is not too imbalanced; I have about 10 extra MRI images compared to CT.
If this network works properly, then I can use my bigger dataset.
Thanks for your help
|
st98354
|
Hi,
I have a question about torch.gels. In the example, why does the result X have a dimension of 5x2, instead of 3x2, which would suit the dimension constraints in Ax=b?
Thanks,
|
st98355
|
From the docs on the link you mentioned:
Returned tensor X has shape (max(m,n)×k). The first n rows of X contains the solution. If m>=n, the residual sum of squares for the solution in each column is given by the sum of squares of elements in the remaining m−n rows of that column.
So the answer is only in the first 3 rows, but the errors are in the last 2 rows.
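To make the shapes concrete, a small sketch using the torch.gels API from that era (since deprecated in favor of the lstsq functions):
import torch

A = torch.randn(5, 3)   # m=5, n=3
B = torch.randn(5, 2)   # k=2
X, _ = torch.gels(B, A) # X has shape (max(m, n), k) = (5, 2)
solution = X[:3]        # the first n rows hold the least-squares solution
residuals = X[3:].pow(2).sum(dim=0)  # remaining rows: residual sum of squares per column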
|
st98356
|
Hi All,
I am trying to freeze a lot of layers except a few (in the code below, fc_cat and fc_kin_lin are two linear layers), while keeping the dropout layers on and the batch norm layers turned off. Is this the right way to do it? I know there are separate posts about each of these, but not about them together.
def set_bn_eval(m):
    classname = m.__class__.__name__
    if classname.find('BatchNorm') != -1:
        m.eval()

model.train()
if args.freeze_bn:
    model.apply(set_bn_eval)

state = model.state_dict()
# pdb.set_trace()
for name, param in state.items():
    if name.find('fc_cat') > 0 or name.find('fc_kin_lin') > 0:
        print('\n\n', name, 'layer parameters will be trained\n\n')
    else:
        print(name, 'layer parameters are being frozen')
        if isinstance(param, Parameter):
            param.requires_grad = False

pdb.set_trace()
optimizer = torch.optim.SGD(model.parameters(), args.lr,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay)
|
st98357
|
Replying to my own question:
I realized that I could use named_parameters instead of state_dict.
parameter_dict = dict(model.named_parameters())
for name, param in parameter_dict.items():
    if name.find('fc_cat') > 0 or name.find('fc_kin_lin') > 0:
        print('\n\n', name, 'layer parameters will be trained\n\n')
    else:
        if isinstance(param, Parameter):
            print(name, 'layer parameters are being frozen')
            param.requires_grad = False
That does the job for parameter freezing. BUT I am having a problem with the BN-layers-off mode: using the above solution gives me NaN.
Thanks for any suggestions!
|
st98358
|
Hi @Gurkirt,
I am also stuck on a somewhat similar issue. Could you share how you solved yours?
Here is my issue:
Use global statistics of BatchNorm in training mode
I am working on an edge detection code and my batch_size=1 (due to the GPU RAM limitation). I am using a pretrained ResNet as my backbone architecture. While training, I want to use the global statistics of the batch norm layers rather than the batch statistics (which won't make much sense with batch_size=1).
One option is to set model.eval() in my training part so the network will use the global statistics of the batch norm layers. Given my network doesn't have any dropout layers, is it the right thin…
|
st98359
|
Hey @deepak242424
You can use model.apply(function_name) here for the batch norm layers:
def set_bn_eval(m):
    classname = m.__class__.__name__
    if classname.find('BatchNorm') != -1:
        m.eval()
then call this function via .apply before starting the training:
model.train()
model.apply(set_bn_eval)
Now you can train your model in the same manner as before; the BN layers won't be updated.
After an intermediate validation where you might have used model.eval(), switch back to training via model.train() and follow that with model.apply(set_bn_eval).
I hope it helps.
|
st98360
|
I am trying to implement a two-layer bidirectional LSTM with torch.nn.LSTM.
I made a toy example: a batch of 3 tensors which are exactly the same (see my code below). I expected the outputs of the BiLSTM to be the same along the batch dimension, i.e. out[:,0,:] == out[:,1,:] == out[:, 2, :].
But that seemed not to be the case. According to my experiments, 20%~40% of the time the outputs were not the same. So I wonder where I got it wrong.
# Python 3.6.6, PyTorch 0.4.1
import torch

def test(hidden_size, in_size):
    seq_len, batch = 4, 3
    bilstm = torch.nn.LSTM(input_size=in_size, hidden_size=hidden_size,
                           num_layers=2, bidirectional=True)
    # create a batch with 3 exactly the same tensors
    a = torch.rand(seq_len, 1, in_size)  # (seq_len, 1, in_size)
    x = torch.cat((a, a, a), dim=1)
    out, _ = bilstm(x)  # (seq_len, batch, n_direction * hidden_size)
    # expect the output to be the same along the batch dimension
    assert torch.equal(out[:, 0, :], out[:, 1, :])
    assert torch.equal(out[:, 1, :], out[:, 2, :])

if __name__ == '__main__':
    count, total = 0, 0
    for h_size in range(1, 51):
        for in_size in range(1, 51):
            total += 1
            try:
                test(h_size, in_size)
            except AssertionError:
                count += 1
    print('percentage of assertion error:', count / total)
|
st98361
|
Solved by novice in post #2
Solved. The issue was due to the imprecision of floating-point arithmetic.
|
st98362
|
Hi everyone,
I went over the PyTorch tutorials and haven't been able to understand when I would want to use a LogSoftmax layer (declared in __init__) versus the log_softmax function from torch.nn.functional.
Thanks!
|
st98363
|
Solved by justusschock in post #2
|
st98364
|
They are basically the same. The functional version can’t be used in structures like torch.nn.Sequential etc. Beside that it should be your own choice whether to use the layer (initialize it and call it) or call the functional API.
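A quick sketch contrasting the two (the layer version is what you would put inside an nn.Sequential):
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(2, 5)
layer = nn.LogSoftmax(dim=1)              # module version
out_module = layer(x)
out_functional = F.log_softmax(x, dim=1)  # functional version
print(torch.allclose(out_module, out_functional))  # True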
|
st98365
|
I am facing this error with a feed-forward NN with an embedding model:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-53-0b3e51f98dc8> in <module>()
17
18 # Getting gradients w.r.t. parameters
---> 19 loss.backward()
20
21 # Updating parameters
~/miniconda3/envs/amn/lib/python3.5/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
91 products. Defaults to ``False``.
92 """
---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph)
94
95 def register_hook(self, hook):
~/miniconda3/envs/amn/lib/python3.5/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
87 Variable._execution_engine.run_backward(
88 tensors, grad_tensors, retain_graph, create_graph,
---> 89 allow_unreachable=True) # allow_unreachable flag
90
91
RuntimeError: No grad accumulator for a saved leaf!
Any idea?
class FeedforwardNeuralNetModel(nn.Module):
    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim):
        super(FeedforwardNeuralNetModel, self).__init__()
        # Embedding layer
        self.embedding = nn.Embedding(input_dim, embedding_dim)
        self.fc1 = nn.Linear(embedding_dim*embedding_dim, hidden_dim)
        self.sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(hidden_dim, output_dim)
        self.sigmoid_out = nn.Sigmoid()

    def forward(self, x):
        # Embedding
        embedded = self.embedding(x)
        embedded = embedded.view(-1, embedding_dim*embedding_dim)
        out = self.fc1(embedded)
        out = self.sigmoid(out)
        out = self.fc2(out)
        out = self.sigmoid_out(out)
        return out
for epoch in range(num_epochs):
    for i, (samples, labels) in enumerate(train_loader):
        # Load samples
        samples = samples.view(-1, max_len).requires_grad_()
        labels = labels.view(-1, 1)
        # Clear gradients w.r.t. parameters
        optimizer.zero_grad()
        # Forward pass to get output/logits
        outputs = model(samples)
        # Calculate Loss: softmax --> cross entropy loss
        loss = criterion(outputs, labels)
        # Getting gradients w.r.t. parameters
        loss.backward()
        # Updating parameters
        optimizer.step()
|
st98366
|
Does changing the sample creation to samples = samples.view(-1, max_len).clone().requires_grad_() solve the problem? (Notice the extra .clone() before requires_grad_().)
|
st98367
|
Thanks for the reply, @albanD. It still doesn't solve it. Really weird; I can't figure it out. Seems like a bug.
|