st99768
Ah OK. I guess you want to use the same data in your Dataset? If so, you could just fake a new length and use a modulo operation in __getitem__ to avoid an out of range error:

class BarDataset(Dataset):
    def __init__(self):
        self.data = list(np.random.rand(9))

    def __getitem__(self, idx):
        # some ops
        idx = idx % len(self.data)
        out = np.array([self.data[idx] * 2, self.data[idx] * 3])
        return out

    def __len__(self):
        return 2 * len(self.data)
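(For anyone reading along, a quick sanity check of the faked length, assuming the BarDataset defined above plus the usual imports; the batch size here is arbitrary:)

from torch.utils.data import DataLoader

dataset = BarDataset()
print(len(dataset))  # 18, i.e. 2 * 9
loader = DataLoader(dataset, batch_size=4, shuffle=True)
for batch in loader:
    print(batch.shape)  # torch.Size([4, 2]); the last batch only has 2 samples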
st99769
Is there a way to implement 3-D scatter_/scatter_add_ in pytorch? Specifically, I have a tensor X (shape (B, M, N)), an index tensor I of shape (B, C, D) giving indices into X, and a values tensor V of the same shape as the indices, (B, C, D). For indices (b, c, d) in I, I want to add V[b,c,d] to X[b,c,d]. Is there a way to implement this? The regular scatter_ function only indexes along a single dim.
st99770
Is I holding the indices or are the actual indices stored in b, c and d? In the latter case, would this work?

b = 10
m, n = 5, 6
c, d = 4, 2
x = torch.zeros(b, m, n)
b_idx = torch.empty(b, 1, 1, dtype=torch.long).random_(b)
c_idx = torch.empty(c, 1, dtype=torch.long).random_(c)
d_idx = torch.empty(d, dtype=torch.long).random_(d)
v = torch.randn(b, c, d)
x[b_idx, c_idx, d_idx] += v[b_idx, c_idx, d_idx]

If not, could you post a small sample of what’s stored in I?
st99771
I have a question about the LSTM model. If I initialize the hidden state vector during the forward method and proceed to train the model with training data, then if I were to evaluate model performance with some test data, would the hidden state vector be re-initialized as zeros even if I were to call with torch.no_grad()? A simple example that I have is the one from the pytorch examples time series prediction, with a slight modification as follows:

import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch.optim as optim
import torch.utils.data
import torch.nn as nn

# helper function to convert numpy array to tensor
def to_tensor(numpy_array, dtype='float'):
    if dtype == 'long':
        return torch.from_numpy(numpy_array).long()
    else:
        return torch.from_numpy(numpy_array).float()

# helper function to convert tensor to variable
def to_variable(tensor):
    if torch.cuda.is_available():
        tensor = tensor.cuda()
    return torch.autograd.Variable(tensor)

class SequenceLSTM(nn.Module):
    def __init__(self):
        super(SequenceLSTM, self).__init__()
        self.lstm1 = nn.LSTM(1, 51, 2)
        self.linear = nn.Linear(51, 1)

    def forward(self, input):
        self.h = torch.zeros(2, 1, 51).cuda()
        self.c = torch.zeros(2, 1, 51).cuda()
        output, h = self.lstm1(input.view(input.size(1), input.size(0), 1), (self.h, self.c))
        output = self.linear(output)
        return output.squeeze(2)

# Generate training data
np.random.seed(2)
T = 20
L = 1000
N = 100
x = np.empty((N, L), 'int64')
x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1)
data = np.sin(x / 1.0 / T).astype('float64')

# split training set
input = torch.from_numpy(data[3:, :500]).float()
target = torch.from_numpy(data[3:, 1:501]).float()

# split test set
test_input = torch.from_numpy(data[:3, 500:700]).float()
test_target = torch.from_numpy(data[:3, 501:701]).float()

# initialize model
criterion = nn.MSELoss().cuda()
seq = SequenceLSTM()
seq.cuda()

dataset = torch.utils.data.TensorDataset(input, target)
test_dataset = torch.utils.data.TensorDataset(test_input, test_target)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=1)
test_data_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1)

optimizer = optim.Adam(seq.parameters(), lr=1e-3)
losses = []
timer = []

for epoch in range(1):
    for i, (train, test) in enumerate(dataset):
        train = train.view(-1, train.size(0))  # tensor of size (1, 999)
        optimizer.zero_grad()
        train, test = to_variable(train), to_variable(test)
        output = seq(train)
        loss = criterion(output, test.view(test.size(0), 1))
        loss.backward()
        losses.append(loss.item())
        optimizer.step()
        print('Loss is : ', loss.item())

y_pred_train = []
ypred_test = []

with torch.no_grad():
    for test_inp, test_target in test_data_loader:
        future = 1000
        test_target = test_target.view(test_target.size(1), 1)
        test_inp = test_inp.view(-1, test_inp.size(1))
        pred = seq(test_inp.cuda())
        loss = criterion(pred, test_target.cuda())
        print('test loss:', loss.item())
        y = pred.cpu().data.numpy()
        ypred_test.append(y)

# sanity check to see predictions against actual
ypred_test = np.hstack(ypred_test)
y_df = pd.DataFrame(ypred_test.T[0, :], columns=['predicted'])
y_df['actual'] = data[0, 501:701]
y_df.plot()
plt.show()

So my question is about the details of what happens during the with torch.no_grad(): block: when I pass in the test data, does it use the hidden state parameters learned from the training period, or is my model doing something redundant?
I understand that the hidden state needs to be initialized as zeros in the very beginning, but I am struggling to wrap my head around this because the prediction output based on the test input looks as follows, and it seems like it is getting quite good results, so I am slightly confused. I am a bit new to sequence models with pytorch so I would greatly appreciate any clarifications.
st99772
I am trying to find out which PyTorch ops are supported for ONNX export in the latest master. I see that there are ops listed on this page: https://pytorch.org/docs/master/onnx.html Is this an up-to-date list or is there some other place or way I can find the updated list? Thanks.
st99773
Hi, I have a very basic question about the PyTorch graph. If I have a condition like this:

if STATEMENT:
    calculate the loss with a PyTorch API like BCELoss
else:
    loss = 0

Does such a condition separate the loss from the graph and ruin the gradients and parameter updates?
st99774
If the STATEMENT is false, no loss computation occurs (and also no gradients). So if I read it rightly, that will work!
st99775
That’s basically right. If you use loss after the condition to call loss.backward() on it, it will create an error though. I would rather zero out the loss:

loss = calculate the loss
loss = loss * 0.

Alternatively your training code would have to be in the if STATEMENT condition.
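(As a concrete illustration of the zero-out pattern, here is a minimal runnable sketch; the BCELoss, shapes, and the STATEMENT flag are just placeholders for whatever the actual code uses:)

import torch
import torch.nn as nn

criterion = nn.BCELoss()
output = torch.sigmoid(torch.randn(4, 1, requires_grad=True))
target = torch.ones(4, 1)

STATEMENT = False  # hypothetical condition
loss = criterion(output, target)
if not STATEMENT:
    # multiply by zero instead of assigning a plain 0, so loss stays a tensor
    # attached to the graph and loss.backward() still works
    loss = loss * 0.
loss.backward()  # gradients are all zero in this branch, but no error is raised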
st99776
Yes, I am using the loss after the if statement in the form of loss.backward(). Please correct me if I am wrong, you are suggesting:

loss = calculate the loss
if ~STATEMENT:
    loss = loss * 0

Am I correct?
st99777
Yes, otherwise you are assigning an int to loss, which will throw an error, if you try to call loss.backward().
st99778
Hello everybody. I am confused about a question. I’ve written a small conv net like this:

import torch as t
import torch.nn as nn

ts1 = t.tensor([[0.0,0.0,0.0,1.0],[0.0,0.0,1.0,0.0],[0.0,1.0,0.0,0.0],[1.0,0.0,0.0,0.0]]).view(1,1,4,4)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.c1 = nn.Conv2d(1, 1, kernel_size=(3,3))

    def forward(self, x):
        x = self.c1(x)
        return x

mynet = Net()
print(mynet.forward(ts1))

and it prints this answer:

tensor([[[[-0.1707, -0.4558],
          [-0.4558,  0.1861]]]], grad_fn=<ThnnConv2DBackward>)

but if I put this net onto the GPU like this:

if t.cuda.is_available():
    mynet.cuda()
    ts1.cuda()
    print(mynet.forward(ts1))

it does not work and shows:

Traceback (most recent call last):
  File "F:/learning/new.py", line 14, in <module>
    print(mynet.forward(ts1))
  File "F:/learning/new.py", line 9, in forward
    x = self.c1(x)
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'

I don’t understand. Why does self.c1 expect an object of type torch.FloatTensor after I put it onto my GPU?
st99779
While you can call mynet.cuda() in place, you have to re-assign tensors as ts1 = ts1.cuda(). As a small side note: you should call the model directly with your input instead of using forward: output = mynet(ts1) , as this will make sure all hooks are properly set.
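(Applied to the snippet above, the working version would look roughly like this; the only changes are the reassignment and calling the model directly:)

if t.cuda.is_available():
    mynet.cuda()          # modules are moved in place
    ts1 = ts1.cuda()      # tensors are NOT moved in place; re-assign the result
    output = mynet(ts1)   # call the model, not .forward(), so hooks are run
    print(output)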
st99780
I followed the tutorial here: https://pytorch.org/tutorials/beginner/deploy_seq2seq_hybrid_frontend_tutorial.html and successfully exported a Torch Script. But when I want to load the script like this:

import torch

traced_encoder = torch.jit.load('traced_encoder.pth')
print(traced_encoder)
test_seq = torch.LongTensor(10, 1).random_(0, 1000)
test_seq_length = torch.LongTensor([test_seq.size()[0]])
out, hidden = traced_encoder(test_seq, test_seq_length)
hidden = hidden[:2]

traced_decoder = torch.jit.load('traced_decoder.pth')
print(traced_decoder)
test_decoder_input = torch.LongTensor(1, 1).random_(0, 1000)
d_out, d_hidden = traced_decoder(test_decoder_input, hidden, out)
print(d_out, d_out.shape)
#print(d_hidden, d_hidden.shape)

chatbot = torch.jit.load('traced_chatbot.pth')
print(chatbot)

Output:

ScriptModule(
  (embedding): ScriptModule()
  (gru): ScriptModule()
)
ScriptModule(
  (embedding): ScriptModule()
  (gru): ScriptModule()
  (concat): ScriptModule()
  (out): ScriptModule()
)
tensor([[2.5321e-03, 7.4835e-05, 1.5321e-05, ..., 4.5948e-05, 1.4532e-05, 1.1714e-04]], grad_fn=...) torch.Size([1, 7826])
Traceback (most recent call last):
  File "load_chatbot.py", line 15, in <module>
    chatbot = torch.jit.load('traced_chatbot.pth')
  File "/home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/jit/__init__.py", line 85, in load
    torch._C.import_ir_module(module_lookup, filename)
RuntimeError: it != value_type_map.end() ASSERT FAILED at /pytorch/torch/csrc/jit/import.cpp:276, please report a bug to PyTorch. (buildIntermediateValue at /pytorch/torch/csrc/jit/import.cpp:276)
frame #0: + 0x49a8c0 (0x7f592ca738c0 in /home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #1: + 0x49ad52 (0x7f592ca73d52 in /home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #2: + 0x49b983 (0x7f592ca74983 in /home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #3: + 0x49bd32 (0x7f592ca74d32 in /home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #4: + 0x49c604 (0x7f592ca75604 in /home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #5: torch::jit::import_ir_module(std::function<std::shared_ptr<torch::jit::script::Module> (std::vector<std::string, std::allocator<std::string> > const&)>, std::string const&) + 0x2f (0x7f592ca760bf in /home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #6: + 0x4799b1 (0x7f596464a9b1 in /home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)
frame #7: + 0x1b3c0d (0x7f5964384c0d in /home/twang/anaconda3/envs/pytorch1.0/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)
frame #21: __libc_start_main + 0xf5 (0x7f5989aecf45 in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: python() [0x4009e9]

I can load the encoder and decoder, but it fails at the complete chatbot. Can anyone help me? Thank you!
st99781
Is there any official documentation on torch.backends? I only recently discovered the existence of torch.backends.cudnn.benchmark. And now I’m wondering what else I’m missing. What’s more, there doesn’t seem to be anything in the documentation on torch.backends (searching for cudnn in the documentation turns up no results), and any Google searches I’ve tried only reveal disconnected forum answers and Github issues. Is there any real documentation for backends? If not, does anyone know of an unofficial tutorial or overview of backends that exists?
st99782
I agree that we need to darken the font color in general. (especially the ones in the side bars)
st99783
Yeah, and it seems that the whole pytorch docs are migrating to this theme/fonts (judging from the master docs). It is somewhat dull and less readable in my opinion.
st99784
Hi, I am doing semantic segmentation with large class imbalances (5 classes). So I am passing a weight array into my loss as in:

loss_fn = torch.nn.CrossEntropyLoss(weight=loss_weights)

Now, since it is very hard to assign these weights, I am training for 200 epochs and then setting the loss_weights as trainable parameters. But when I do this I get the error:

RuntimeError: the derivative for 'weight' is not implemented

How can I get around this, and are there any other suggestions to deal with such class imbalance?
st99785
Hi, for semantic segmentation I wouldn’t use cross entropy. See Sudre et al. (see also Crum et al.), which has a generalized dice coefficient weighted for class imbalance. It’s a good start. There exists a TF implementation of the generalized dice coefficient that you can easily port to pytorch, here.
st99786
I am actually using weighted dice as well as CE. Using them together would be fine I guess?
st99787
I think it depends a lot on the relative weighting of the two losses: they clearly take different ranges of values, therefore one may dominate the other.
st99788
I tried fine-tuning the InceptionV3 model by following this page: https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html?highlight=fine%20tuning# but I couldn’t see an activation layer like softmax. Is this normal? Or do torchvision’s models not have the activation layer? In Keras, I always put an activation layer at the end. So, it’s strange for me. Thank you!
st99789
Using an activation function is optional, though I also consider adding an activation function to the last layer the natural way. You can add nn.Softmax() to the sequential-formed InceptionV3 model to get normalized output.
st99790
Just as a side note in case you are trying to fine tune the model: the usual loss functions for classification expect log probabilities (nn.LogSoftmax + nn.NLLLoss) or logits (no non-linearity + nn.CrossEntropyLoss). The nn.Softmax layer is still fine to get the normalized probabilities as @kenmikanmi suggested.
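(A small sketch of the two equivalent setups mentioned above, with made-up shapes just for illustration:)

import torch
import torch.nn as nn

logits = torch.randn(8, 1000, requires_grad=True)  # raw model output, no non-linearity
targets = torch.randint(0, 1000, (8,))

# option 1: logits + nn.CrossEntropyLoss
loss1 = nn.CrossEntropyLoss()(logits, targets)

# option 2: log probabilities + nn.NLLLoss (numerically the same thing)
log_probs = nn.LogSoftmax(dim=1)(logits)
loss2 = nn.NLLLoss()(log_probs, targets)

# nn.Softmax is still fine if you just want probabilities for inspection
probs = nn.Softmax(dim=1)(logits)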
st99791
Thanks! I understood how this model learns labels in pytorch. Thanks for the answer.
st99792
Dear all, I have a dataset with about 60 pictures (size 500*500) and mask labels of the same size, and I want to randomly crop each picture into 40*40 patches so that I can get more data. But data.Dataset confused me, because I have to load one image and do one crop in transform, so during one epoch I can only get 60 patches. I want to get more patches in one epoch while using the data loader. The code is as follows:

class Data(data.Dataset):
    def __init__(self, imgs_root, gt_root, mode="train"):
        imgs = sorted(glob.glob(os.path.join(imgs_root, '*.png')))
        gts = [os.path.join(gt_root, img.split("/")[-1]) for img in imgs]
        self.mode = mode
        if self.mode == "train":
            self.imgs = imgs[:int(0.7 * len(imgs))]
            self.gts = gts[:int(0.7 * len(imgs))]
        elif self.mode == "val":
            self.imgs = imgs[int(0.7 * len(imgs)):]
            self.gts = gts[int(0.7 * len(imgs)):]
        elif self.mode == "test":
            self.imgs = imgs
            self.gts = gts
        else:
            print("the mode must be train / val / test.")
            exit()

    def transform(self, image, mask):
        grayscale = T.Grayscale()
        image = grayscale(image)  # (584, 565)
        mask = grayscale(mask)
        pad = T.Pad(padding=PATCH_SIZE // 2)
        image = pad(image)  # (605, 624)
        mask = pad(mask)
        i, j, h, w = T.RandomCrop.get_params(image, output_size=(PATCH_SIZE, PATCH_SIZE))
        image = F.crop(image, i, j, h, w)  # (40, 40)
        mask = F.crop(mask, i, j, h, w)
        totensor = T.ToTensor()
        image = totensor(image)  # torch.Size([1, 40, 40])
        mask = totensor(mask)
        return image, mask

    def __getitem__(self, index):
        # get path
        img_path = self.imgs[index]
        label_path = self.gts[index]
        # get data
        data = Image.open(img_path)
        label = Image.open(label_path)
        # transforms
        data, label = self.transform(data, label)
        return data, label

    def __len__(self):
        return len(self.imgs)

Can anyone help me?
Regards, Pt
st99793
Firstly, why not run multiple epochs? As you crop randomly, over a period of a few epochs you’ll get different crops, which would make more sense in my opinion. But if you still want to pursue it, you can do this small hack:

def __getitem__(self, index):
    index = index % len(self.imgs)
    """
    Rest same as before.
    Now you'll pass through each sample 20 times; for the jth image, all indices 20i + j (0 <= i < 20) give the same image.
    Make sure you follow a proper validation protocol: do evaluation per image only once.
    It may also be better to keep shuffle on while training.
    """

def __len__(self):
    return 20 * len(self.imgs)
st99794
Dear all, DeepMind just released the symplectic gradient adjustment code in TF. This looks very promising for GAN training. Would it please be possible for someone to help and create a pytorch optimizer for this? In particular I am interested in this part of the code:

#@title Defining the SGA Optimiser

def list_divide_scalar(xs, y):
    return [x / y for x in xs]

def list_subtract(xs, ys):
    return [x - y for (x, y) in zip(xs, ys)]

def jacobian_vec(ys, xs, vs):
    return tf.contrib.kfac.utils.fwd_gradients(ys, xs, grad_xs=vs, stop_gradients=xs)

def jacobian_transpose_vec(ys, xs, vs):
    dydxs = tf.gradients(ys, xs, grad_ys=vs, stop_gradients=xs)
    dydxs = [tf.zeros_like(x) if dydx is None else dydx for x, dydx in zip(xs, dydxs)]
    return dydxs

def _dot(x, y):
    dot_list = []
    for xx, yy in zip(x, y):
        dot_list.append(tf.reduce_sum(xx * yy))
    return tf.add_n(dot_list)

class SymplecticOptimizer(tf.train.Optimizer):
    """Optimizer that corrects for rotational components in gradients."""

    def __init__(self, learning_rate, reg_params=1., use_signs=True, use_locking=False, name='symplectic_optimizer'):
        super(SymplecticOptimizer, self).__init__(use_locking=use_locking, name=name)
        self._gd = tf.train.RMSPropOptimizer(learning_rate)
        self._reg_params = reg_params
        self._use_signs = use_signs

    def compute_gradients(self, loss, var_list=None, gate_gradients=tf.train.Optimizer.GATE_OP,
                          aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None):
        return self._gd.compute_gradients(loss, var_list, gate_gradients, aggregation_method,
                                          colocate_gradients_with_ops, grad_loss)

    def apply_gradients(self, grads_and_vars, global_step=None, name=None):
        grads, vars_ = zip(*grads_and_vars)
        n = len(vars_)
        h_v = jacobian_vec(grads, vars_, grads)
        ht_v = jacobian_transpose_vec(grads, vars_, grads)
        at_v = list_divide_scalar(list_subtract(ht_v, h_v), 2.)
        if self._use_signs:
            grad_dot_h = _dot(grads, ht_v)
            at_v_dot_h = _dot(at_v, ht_v)
            mult = grad_dot_h * at_v_dot_h
            lambda_ = tf.sign(mult / n + 0.1) * self._reg_params
        else:
            lambda_ = self._reg_params
        apply_vec = [(g + lambda_ * ag, x) for (g, ag, x) in zip(grads, at_v, vars_) if at_v is not None]
        return self._gd.apply_gradients(apply_vec, global_step, name)

I am a newbie in pytorch and I have difficulty understanding how the function tf.contrib.kfac.utils.fwd_gradients can be implemented in pytorch. Is there something similar? Kind regards, Foivos
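(For anyone landing here: PyTorch of this era has no built-in fwd_gradients, but the forward-mode Jacobian-vector product can be emulated with two reverse-mode passes, the so-called "double backward" trick. The helper below is only a hedged sketch of that trick under the assumption that ys, xs, vs are lists of tensors, not a drop-in port of the optimizer above:)

import torch

def fwd_gradients(ys, xs, vs):
    """Approximate tf-style fwd_gradients: compute J @ vs, where J = d(ys)/d(xs),
    using two reverse-mode passes (double-backward trick)."""
    # dummy cotangents we can differentiate through
    us = [torch.zeros_like(y, requires_grad=True) for y in ys]
    # first pass: g(u) = J^T u, kept in the graph so it is differentiable w.r.t. u
    gs = torch.autograd.grad(ys, xs, grad_outputs=us, create_graph=True, allow_unused=True)
    gs = [torch.zeros_like(x) if g is None else g for g, x in zip(gs, xs)]
    # second pass: differentiate g w.r.t. u with cotangents vs, which yields J vs
    jvps = torch.autograd.grad(gs, us, grad_outputs=vs, allow_unused=True)
    return [torch.zeros_like(y) if j is None else j for j, y in zip(jvps, ys)]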
st99795
So, I ran this:

for epoch in range(epochs):
    epoch += 1
    inputs = Variable(torch.from_numpy(x_train))
    labels = Variable(torch.from_numpy(y_train))
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    print('epoch {}, loss {} '.format(epoch, loss.data[0]))

and got this error:

UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
Remove the CWD from sys.path while we load stuff.

Can someone be kind enough to explain the statement above? If possible, please indicate the problem in my code. Thanks!
st99796
for epoch in range(epochs):
    epoch += 1  # <-- you don't need to increase epoch by one, this is done automatically by the for statement above
    inputs = Variable(torch.from_numpy(x_train))
    labels = Variable(torch.from_numpy(y_train))
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    print('epoch {}, loss {} '.format(epoch, loss.data[0]))
    #                                        ^
    #                                        | This is the problem I guess, replace loss.data[0] with loss.data.item()
st99797
Thanks a lot! It worked but I didn’t get the underlying problem. What did changing it to .item() do? And regarding the epoch, I wanted the epoch to start from 1 instead of the usual 0. So thus the epoch += 1
st99798
Your loss.data is a “tensor” which is not a vector/matrix anymore but only contains a single value. Think of it like trying to get an index of an int in python, which makes no sense, right? E.g.:

a = 3
b = a[0]  # <-- getting an index of a single ITEM makes no sense (will crash your program)

To stick as close to the python feel as possible, they implemented this warning you were getting.
st99799
Regarding the for loop: if you want to start at a specific index, you can just use for epoch in range(1, epochs + 1).
st99800
Here is the snippet:

for ii, line in enumerate(lines):
    if line['evidence'][-1][-1][-1] != None:
        #feats, labels = tf_idf_claim(line)
        feats, ev_sent, _ = fasttext_claim(line, ft_model, db, params)
        labels = ind2indicator(ev_sent, feats.shape[-1])
        all_labels.append(labels.numpy())
        pred_labels, scores = model(feats.unsqueeze_(0).transpose_(1,2).cuda(device_id))
        all_scores[ii] = scores

I see that the GPU errors out with OOM after a while. This is running on a Tesla K80. If I change the last line to all_scores[ii] = scores.detach().cpu().numpy(), it gets fixed. This is the only process running on the GPU core.
st99801
I’m not sure how scores was calculated, but it could still hold a reference to the computation graph. If that’s the case, you are storing the whole computation graph in your list in each iteration, which eventually fills up your GPU memory. Detaching the tensor should fix this issue as you’ve already mentioned.
st99802
scores come from softmax outputs. The scores are Tensors of shape [1,10,100]. Does it mean any tensor calculated by the model holds a reference to the whole graph? I am new to pytorch, so my question might be dumb. Thanks for your help.
st99803
In this case the softmax output needs the computation graph to be able to calculate the gradients in the backward pass. You can check it by printing the grad_fn: print(scores.grad_fn) # Should return something like <SoftmaxBackward at 0x7f371721a668> If you store something from your model (for debugging purpose) and don’t need to calculate gradients with it anymore, I would recommend to call detach on it as it won’t have any effects if the tensor is already detached.
st99804
Hey all, I’m trying to implement a model someone coded up in Keras by using PyTorch. However, my implementation seems to be doing poorly compared to the Keras one. I’ll list both codes:

Keras:

BATCH_SIZE = 50
MAX_EPOCHS = 200
model = Sequential()
model.add(LSTM(256, input_shape=(64, 49), activation='tanh', return_sequences=True))
model.add(Dense(49))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer=Adam(0.01))
callbacks = []
early_stopping = EarlyStopping(monitor='loss', min_delta=0.01, patience=10)
callbacks.append(early_stopping)
model.fit(training_data, label_data, batch_size=50, epochs=200, callbacks=callbacks)

PyTorch:

class my_model(nn.Module):
    def __init__(self, hidden_sz, note_range, seq_len):
        super(my_model, self).__init__()
        self.Encoder = nn.LSTM(note_range, hidden_sz)
        self.Decoder = nn.Sequential(nn.Linear(hidden_sz, note_range), nn.Softmax(dim=-1))
        self.seq_len = seq_len
        self.hidden_sz = hidden_sz
        self.note_range = note_range

    def train(self, training_data, label_data, lr_rate=1e-2, epochs=200, batch_sz=128):
        # Set useful constant.
        seq_len = self.seq_len
        # Set optimizer and loss function.
        optimizer = optim.Adam(self.parameters(), lr=lr_rate)
        for epoch in range(epochs):
            N = training_data.shape[0]
            perm = torch.randperm(N)
            training_data = training_data[perm]
            label_data = label_data[perm]
            for i in range(N // batch_sz):
                batch_train = training_data[i*batch_sz:(i+1)*batch_sz]
                batch_label = label_data[i*batch_sz:(i+1)*batch_sz]
                encd = self.Encoder(batch_train)[0]
                encd = self.Decoder(encd)
                loss = -torch.sum(batch_label * torch.log(encd)) / (batch_sz * seq_len)  # Cross Entropy
                # Zero gradient, calculate gradient, then gradient step.
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

model = my_model(hidden_sz=256, note_range=49, seq_len=64)
model.train(training_data, label_data)

In both cases I use the same data. I used similar notation to highlight the similarities. But the first one has a cross entropy loss below 1 by the 100th epoch, while the PyTorch one never even gets there. Any ideas? Also, some help with editing the code on the forums so it doesn’t look like a mess would be great!
st99805
Hello, I am trying to re-work the pytorch time series example [Time Series Example], which uses LSTMCells, and I want to redo the example using LSTM. (https://github.com/pytorch/examples/tree/master/time_sequence_prediction) The original version using LSTMCells is:

class Sequence(nn.Module):
    def __init__(self):
        super(Sequence, self).__init__()
        self.lstm1 = nn.LSTMCell(1, 51)
        self.lstm2 = nn.LSTMCell(51, 51)
        self.linear = nn.Linear(51, 1)

    def forward(self, input, future=0):
        outputs = []
        h_t = torch.zeros(input.size(0), 51, dtype=torch.double)
        c_t = torch.zeros(input.size(0), 51, dtype=torch.double)
        h_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)
        c_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)

        for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):
            h_t, c_t = self.lstm1(input_t, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
            output = self.linear(h_t2)
            outputs += [output]
        for i in range(future):  # if we should predict the future
            h_t, c_t = self.lstm1(output, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
            output = self.linear(h_t2)
            outputs += [output]
        outputs = torch.stack(outputs, 1).squeeze(2)
        return outputs

How would one modify this to use nn.LSTM? Thanks
st99806
Even if EPOCHS is not 1, the loop only runs once. A little bit of effort showed that Python quits at the line loss = torch.tensor(0, dtype=torch.float, device=device) on the second run. Even more strangely, I tried adding del loss at the end of the loop and once that was run, Python exited immediately. Another person tried running my code and it worked fine, but his computer uses Linux and does not have a GPU. Does anyone know how to make the loop run for the full EPOCHS times?

# main training loop
for i in range(EPOCHS):
    xs, ts, vs, ws = generateTrajectory()
    # move tensors in list to GPU
    # change dtype and add batch dimension
    xs = list(map(lambda x: x.unsqueeze(0).to(device, torch.float), xs))
    vs = list(map(lambda v: v.unsqueeze(0).to(device, torch.float), vs))
    # convert to 1x1 torch tensors
    ws = list(map(lambda w: torch.tensor([[w]], dtype=torch.float, device=device), ws))
    ts = list(map(lambda t: torch.tensor([[t]], dtype=torch.float, device=device), ts))
    superlist = zip(xs, ts, vs, ws)

    loss = torch.tensor(0, dtype=torch.float, device=device)

    # initialize hidden, cell vectors
    hidden, cell = gridnet.init_hidden_cell(vs[0], ws[0])
    for x, t, v, w in superlist:
        guess, hidden, cell = gridnet(v, w, hidden, cell)
        ground_truth = torch.cat([x, t], dim=1)
        loss += loss_fn(guess, ground_truth)

    print(loss)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
st99807
Can you show us the error you are getting? I tend to just set loss = 0 and that works fine
st99808
Not sure how I solved it. After a while my code just started working correctly. It’s possible the 0.4.1 release fixed something for me.
st99809
Hi, I get the error “Trying to backward through the graph a second time, but the buffers have already been freed.” even though I use hidden = hidden.detach() after loss.backward(). I split a long sequence into many batches, where each batch is a slice of the long sequence at a time-step with a given slice-length. I then run the slice through my network and get output and hidden. However, I don’t want to BPTT at each timestep but rather at some interval, let’s say every 20 time steps. I then run through 20 batches and update output, hidden each step. At step 20 I calculate the loss and do loss.backward(). To clear the computational graph for the next 20 steps I detach the hidden states using hidden = hidden.detach(). However, at the next loss.backward() I get the error. Is there a way to completely delete the computational graph? My training loop:

def train(epoch):
    print("Training Initiated!")
    model.train()
    losses = []
    # Run through all sequences:
    for step, (data, target) in enumerate(train_set_all):
        print("Sequence #%d" % (step))
        X = data
        y = target
        # MSELoss:
        y = y[:, 0:1, :]
        y = y.view(-1, 3)

        max_seq_size = 4000
        # If track is less than max_seq_size samples, skip track
        if X.size(2) < max_seq_size:
            print("Sequence too short, skipped!")
            continue

        stride_length = 8  # with stride 8, and 3 poolings, 1 sample in output matches 8 samples in input
        batch_size = 400

        # Clip track
        X = X[:, :, :max_seq_size]  # Clip data to max_seq_size

        # Zero-pad start of sequence so first prediction only uses first 8 samples for classification
        m = nn.ConstantPad1d((batch_size - stride_length, 0), 0)
        X = m(X)

        # Generate batches from sequence:
        batches = []
        for start_pos in range(0, max_seq_size, stride_length):
            end_pos = start_pos + batch_size
            batch = np.copy(X[:, :, start_pos:end_pos])
            batches.append(torch.from_numpy(batch))

        # Initialize hidden state once for each sequence:
        hidden = Net.init_hidden(model)

        # How often to BPTT
        TBTT_step = 10

        # The following two lists hold output of the model, and the target.
        predicted, true = [], []

        # Run one batch at a time:
        for idx, nbatch in enumerate(batches):
            start_time = time.time()
            print("Batch number #%d" % (idx))
            #print(nbatch.size())
            #print(nbatch)
            output, hidden = model(nbatch, hidden, batch_size)
            predicted.append(output[0])
            true.append(y[0])

            # Update optimizer and calculate loss every TBTT_step batches: (hidden gets updated each sample)
            if idx % TBTT_step == 0:
                # NLL
                #loss = criterion(output, y[:, nbatch*batch_size+batch_size-1:nbatch*batch_size+batch_size].reshape(1).long())
                # MSE
                loss = torch.mean(criterion(torch.cat(predicted), torch.cat(true)))
                losses.append(loss.data[0].item())
                #print(loss.item())
                #print(loss)
                # Reset gradient
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                # Release hidden
                hidden = hidden.detach()
            elapsed_time = time.time() - start_time
            print(elapsed_time)

        if step > 500:  # Only run through 500 sequences (tracks)
            return losses
st99811
I solved it by also deleting predicted and true, adding the following lines after calculating the loss:

# Release predicted and true
predicted, true = [], []
st99812
You can also .detach() the tensors and append them to your lists, if you need them for further calculations.
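(For example, inside the batch loop above; predicted_history here is a hypothetical extra list used only for logging or later analysis, not for the loss:)

predicted_history = []  # kept across TBPTT windows, logging only

# ... inside the batch loop:
predicted_history.append(output[0].detach().cpu())  # detached copy, keeps no graph alive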
st99813
Hi everyone, I am trying to implement the Cosine LR Annealing paper (https://openreview.net/pdf?id=BJYwwY9ll), but changing the LR in the optimizer doesn’t seem to make any difference in the final model. The end result is the same as keeping the LR constant. I am updating the LR with this function:

optimizer = torch.optim.Rprop(MyModel.parameters(), lr=INITIAL_LR)

class CosLR():
    def UpdateLR(epoch, optimizer):
        NewLR = # Long equation goes here
        for param_group in optimizer.param_groups:
            param_group['lr'] = NewLR
        return NewLR

# train loop: every iteration calls the UpdateLR() function.

My questions are: How can I find out the current LR inside an optimizer? Am I updating the LR correctly, without disturbing the optimizer state info and recreating it from scratch?
st99814
In your training loop, read and print the lr from your optimizer:

optimizer = optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), lr=0.001, momentum=0.9, weight_decay=0.0005)
LR = StepLR([(0, 0.001), (41000, 0.0001), (51000, 0.00001), (61000, -1)])

### in your training loop ####
# learning rate scheduler -------
lr = LR.get_rate(i)
if lr < 0:
    break
adjust_learning_rate(optimizer, lr)
rate = get_learning_rate(optimizer)[0]  # read lr for debugging
print(i, rate)

where

## rates ------------------------------
def get_learning_rate(optimizer):
    lr = []
    for param_group in optimizer.param_groups:
        lr += [param_group['lr']]
    return lr

def adjust_learning_rate(optimizer, lr):
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

Examples of lr schedulers:

## simple stepping rates
class StepLR():
    def __init__(self, pairs):
        super(StepLR, self).__init__()
        N = len(pairs)
        rates = []
        steps = []
        for n in range(N):
            s, r = pairs[n]
            if r < 0:
                s = s + 1
            steps.append(s)
            rates.append(r)
        self.rates = rates
        self.steps = steps

    def get_rate(self, epoch=None):
        N = len(self.steps)
        lr = -1
        for n in range(N):
            if epoch >= self.steps[n]:
                lr = self.rates[n]
        return lr

    def __str__(self):
        string = 'Step Learning Rates\n' \
            + 'rates=' + str(['%7.4f' % i for i in self.rates]) + '\n' \
            + 'steps=' + str(['%7.0f' % i for i in self.steps]) + ''
        return string

## https://github.com/pytorch/tutorials/blob/master/beginner_source/transfer_learning_tutorial.py
class DecayLR():
    def __init__(self, base_lr, decay, step):
        super(DecayLR, self).__init__()
        self.step = step
        self.decay = decay
        self.base_lr = base_lr

    def get_rate(self, epoch=None, num_epoches=None):
        lr = self.base_lr * (self.decay**(epoch // self.step))
        return lr

    def __str__(self):
        string = '(Exp) Decay Learning Rates\n' \
            + 'base_lr=%0.3f, decay=%0.3f, step=%0.3f' % (self.base_lr, self.decay, self.step)
        return string

# 'Cyclical Learning Rates for Training Neural Networks' - Leslie N. Smith, arxiv 2017
# https://arxiv.org/abs/1506.01186
# https://github.com/bckenstler/CLR
class CyclicLR():
    def __init__(self, base_lr=0.001, max_lr=0.006, step=2000., mode='triangular',
                 gamma=1., scale_fn=None, scale_mode='cycle'):
        super(CyclicLR, self).__init__()
        self.base_lr = base_lr
        self.max_lr = max_lr
        self.step = step
        self.mode = mode
        self.gamma = gamma
        if scale_fn == None:
            if self.mode == 'triangular':
                self.scale_fn = lambda x: 1.
                self.scale_mode = 'cycle'
            elif self.mode == 'triangular2':
                self.scale_fn = lambda x: (0.5)**(x - 1)
                self.scale_mode = 'cycle'
            elif self.mode == 'exp_range':
                self.scale_fn = lambda x: gamma**(x)
                self.scale_mode = 'iterations'
        else:
            self.scale_fn = scale_fn
            self.scale_mode = scale_mode
        self.clr_iterations = 0.
        self.trn_iterations = 0.
        self.history = {}
        self._reset()

    def _reset(self, new_base_lr=None, new_max_lr=None, new_step=None):
        """Resets cycle iterations. Optional boundary/step size adjustment."""
        if new_base_lr != None:
            self.base_lr = new_base_lr
        if new_max_lr != None:
            self.max_lr = new_max_lr
        if new_step != None:
            self.step = new_step
        self.clr_iterations = 0.

    def clr(self):
        cycle = np.floor(1 + self.clr_iterations / (2 * self.step))
        x = np.abs(self.clr_iterations / self.step - 2 * cycle + 1)
        if self.scale_mode == 'cycle':
            return self.base_lr + (self.max_lr - self.base_lr) * np.maximum(0, (1 - x)) * self.scale_fn(cycle)
        else:
            return self.base_lr + (self.max_lr - self.base_lr) * np.maximum(0, (1 - x)) * self.scale_fn(self.clr_iterations)

    def get_rate(self, epoch=None, num_epoches=None):
        self.trn_iterations += 1
        self.clr_iterations += 1
        lr = self.clr()
        return lr

    def __str__(self):
        string = 'Cyclical Learning Rates\n' \
            + 'base_lr=%0.3f, max_lr=%0.3f' % (self.base_lr, self.max_lr)
        return string
st99815
I believe you have to re-instantiate a new optimizer every time you change your lr if you want to achieve this. This also means you’ll have to store the optimizer.state_dict and then load it into the new optimizer on each iteration.
st99816
A line of code: Hx.scatter(1, indices, kmatrix) in my project threw the error: “RuntimeError: invalid argument 3: Index tensor must have the same size as input tensor”. The error is invoked only on GPU, but not CPU. This is weird, especially it seems to relate to the variable size, which should not differ on different devices. What’s the reason for this?
st99817
Hi, @albanD The code below is an example.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.parameter as Parameter
from torch.autograd import Variable

BS, C = 256, 8
sn = torch.randn(1)[0]
gate_rand = Variable(torch.rand(BS, C).cuda(), requires_grad=True)
noise_rand = Variable(torch.rand(BS, C).cuda(), requires_grad=True)
gate = gate_rand + 1.0e-3
noise = noise_rand + 1.0e-4
Hx = gate + sn * noise
print(gate.requires_grad, noise.requires_grad)

topk, indices = torch.topk(Hx, C // 2)
_, neg_indices = torch.topk(-Hx, C // 2)
topk1, indices1 = torch.topk(Hx, C // 2 + 1)
thresh_k = topk.min(1, keepdim=True)[0]
thresh_k1 = topk1.min(1, keepdim=True)[0]
Hx_mask = Hx.scatter(1, neg_indices, float('-inf'))
kmatrix = thresh_k.repeat(1, C)
kmatrix1 = thresh_k1.repeat(1, C)
print(kmatrix.size(), kmatrix1.size(), Hx.size(), indices.size())
kth_excluding = kmatrix.scatter(1, indices, kmatrix1)
print('Hx: ', Hx_mask.size())
print('kth_excluding: ', kth_excluding.size())
st99818
Using the current master branch, I get the following output for your code. $ python sample.py (True, True) ((256, 8), (256, 8), (256, 8), (256, 4)) ('Hx: ', (256, 8)) ('kth_excluding: ', (256, 8)) This code crashes on your machine? What version of pytorch are you using?
st99819
I used pytorch-0.2.0_4, and it threw a RuntimeError. Now the problem is solved after upgrading to 0.4.0, as you pointed out. Thank you.
st99820
From the master branch, this still runs fine for me. What error are you getting?
st99821
The function producing the error is the following, from the Ignite library:

def to_onehot(indices, num_classes):
    onehot = torch.zeros(indices.size(0), num_classes, device=indices.device)
    return onehot.scatter_(1, indices.unsqueeze(1), 1)

The error I get is:

RuntimeError: invalid argument 3: Index tensor must have same dimensions as output tensor at c:\programdata\miniconda3\conda-bld\pytorch_1533096106539\work\aten\src\thc\generic/THCTensorScatterGather.cu:295
st99822
So this is not the same error as this one? Anyway, most certainly a problem where you have too many dimensions? indices should be of size batch, not batch x 1.
st99823
Ah yes, the error is the same (at least the same kind). Right now “indices” has shape [32x112x112]. The error doesn’t change even if I remove the indices.unsqueeze(1) and just use indices instead. Changing the index dimension from 1 to 0 doesn’t help either.
st99824
From the code that you sent, indices should be a 1D tensor and num_classes an int.
st99825
hmm the Pytorch documentation seems to suggest that it doesn’t need to be 1d (just the same shape as the tensor it operates on): doc link 8
st99826
Yes, but the to_onehot function that you sent calls scatter on a 2D tensor that it creates, and adds one dimension to indices before giving it to scatter. So the original indices tensor given to to_onehot should be 1D.
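(To make that concrete, a small sketch using the to_onehot function quoted earlier; the shapes are just illustrative, and flattening first is one way to handle a segmentation map:)

import torch

def to_onehot(indices, num_classes):
    onehot = torch.zeros(indices.size(0), num_classes, device=indices.device)
    return onehot.scatter_(1, indices.unsqueeze(1), 1)

labels = torch.tensor([0, 2, 1, 2])              # 1D tensor of class indices -> works
print(to_onehot(labels, num_classes=3).shape)    # torch.Size([4, 3])

seg_map = torch.randint(0, 3, (32, 112, 112))    # e.g. a batch of segmentation maps
onehot = to_onehot(seg_map.view(-1), 3)          # flatten to 1D first
onehot = onehot.view(32, 112, 112, 3)            # reshape back (channel-last) if needed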
st99827
Ah I see. Thanks! I will try to do something else then. This is part of the official pytorch Ignite library btw - that’s why I was quite sure it should have been working.
st99828
I got it working now - guess I ultimately was just confused by the documentation. The output measured should be a 1xN tensor for it to work. Thanks for your help!
st99829
Hi, I have a training set of 70 classes and 40 images/class (2800 in total), and a testing set of 350 images in total. What happens is that the loss becomes 0 when testing accuracy is still 58%, and everything remains constant from this point. I’m using batchsize=5, learningrate=0.001, momentum=0.9. I’ve tried changing the three parameters but results get worse (loss becoming 0 with 30% accuracy, or loss never decreasing). How can I solve this? Just by trying other values for these parameters? Thank you!

[1, 560] loss: 4.250   Accuracy of the network on the test images: 2 %
[2, 560] loss: 4.210   Accuracy of the network on the test images: 2 %
[3, 560] loss: 3.903   Accuracy of the network on the test images: 5 %
[4, 560] loss: 3.469   Accuracy of the network on the test images: 15 %
[5, 560] loss: 2.995   Accuracy of the network on the test images: 20 %
[6, 560] loss: 2.351   Accuracy of the network on the test images: 25 %
[7, 560] loss: 1.795   Accuracy of the network on the test images: 40 %
[8, 560] loss: 1.247   Accuracy of the network on the test images: 40 %
[9, 560] loss: 0.865   Accuracy of the network on the test images: 44 %
[10, 560] loss: 0.572  Accuracy of the network on the test images: 45 %
[11, 560] loss: 0.376  Accuracy of the network on the test images: 46 %
[12, 560] loss: 0.279  Accuracy of the network on the test images: 46 %
[13, 560] loss: 0.163  Accuracy of the network on the test images: 44 %
[14, 560] loss: 0.151  Accuracy of the network on the test images: 46 %
[15, 560] loss: 0.107  Accuracy of the network on the test images: 54 %
[16, 560] loss: 0.015  Accuracy of the network on the test images: 58 %
[17, 560] loss: 0.001  Accuracy of the network on the test images: 58 %
[18, 560] loss: 0.000  Accuracy of the network on the test images: 58 %
[19, 560] loss: 0.000  Accuracy of the network on the test images: 59 %
[20, 560] loss: 0.000  Accuracy of the network on the test images: 59 %
[21, 560] loss: 0.000  Accuracy of the network on the test images: 59 %
[22, 560] loss: 0.000  Accuracy of the network on the test images: 59 %
[23, 560] loss: 0.000  Accuracy of the network on the test images: 59 %
st99830
Your dataset is quite small. Are you training your model from scratch? If so, could you try to use a pretrained model and fine tune it? Since your data is so small you might need a lot of regularization, e.g. weight decay.
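(A rough sketch of the pretrained route with torchvision, just to illustrate the idea; the backbone choice and hyperparameters here are made up, not a recommendation for this specific dataset:)

import torch
import torch.nn as nn
import torchvision.models as models

net = models.resnet18(pretrained=True)        # start from ImageNet weights
net.fc = nn.Linear(net.fc.in_features, 70)    # new head for the 70 classes
# note: inputs may need to be resized to what the backbone expects (e.g. 224x224)
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9,
                            weight_decay=1e-4)  # weight decay as extra regularization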
st99831
Thanks for your answer. Yes, I’m training from scratch, using the following net definition (inputs are faces of size 96x96x3):

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.pool = nn.MaxPool2d(2, 2)
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 21 * 21, 512)
        self.fc2 = nn.Linear(512, 128)
        self.fc3 = nn.Linear(128, 70)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 21 * 21)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

The problem is that when training accuracy is 100%, testing accuracy is just 59%. Would modifying the net layers (increasing or decreasing the number of parameters) improve testing accuracy, or would just regularization or retraining work in this case?
st99832
It looks like your model is overfitting. There are not many solutions to this other than adding regularization such as weight decay, dropout layers, etc., or increasing your training data. It is a hot subject within ML and data science and thus there are many blogs and articles on the matter. I would start by adding nn.Dropout(), and if that doesn’t improve much, try using a batch normalization layer, as this layer can also work as a regularizer (nn.BatchNorm2d()).
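(For example, a lightly regularized variant of the Net posted above; this is only a sketch of where such layers could go, not a tuned architecture:)

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.pool = nn.MaxPool2d(2, 2)
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.bn1 = nn.BatchNorm2d(6)      # batch norm after the first conv
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.bn2 = nn.BatchNorm2d(16)
        self.fc1 = nn.Linear(16 * 21 * 21, 512)
        self.drop = nn.Dropout(p=0.5)     # dropout on the fully connected part
        self.fc2 = nn.Linear(512, 128)
        self.fc3 = nn.Linear(128, 70)

    def forward(self, x):
        x = self.pool(F.relu(self.bn1(self.conv1(x))))
        x = self.pool(F.relu(self.bn2(self.conv2(x))))
        x = x.view(-1, 16 * 21 * 21)
        x = self.drop(F.relu(self.fc1(x)))
        x = self.drop(F.relu(self.fc2(x)))
        x = self.fc3(x)
        return x

Weight decay can additionally be set on the optimizer, e.g. optim.SGD(net.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-4).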
st99833
Hi, I have a tensor x whose shape is 2*3*3, and I want to do fft2 on x with size 4*4. This means doing right and bottom zero padding on x[0,:,:] and x[1,:,:], then doing fft2 on each of them respectively. I wonder how I can realize the above efficiently. Thank you for your reply.
st99834
torch.nn.FractionalMaxPool2d: How does this function calculate the result? Is there a formula?
st99835
You can find all the information and the link to the original paper in the doc here.
st99836
I have a greyscale image batch (batch size = 16) of shape (16, 1, 240, 240) and a tensor of shape (16, 2, 100) with 100 points (x, y) which I want to use as indices: output[b][i][j] = image_batch(index[b][i][j]). How can I do this?
st99837
Is there a way for me to make the following transformation with only tensor ops? I.e. without writing my own for loop:

input:
tensor([[0, 0],
        [0, 1],
        [1, 1],
        [1, 1],
        [1, 1],
        [2, 1]])

output:
tensor([[0, 0],
        [0, 1],
        [1, 1],
        [2, 1]])

In other words, we have unique() to give me the unique scalar values, but can I get the unique pairs in dim 0?
st99838
For anyone reading: it turns out there is an undocumented dim arg that can be passed to the unique function, added in github.com/pytorch/pytorch "Adds `dim` argument to `torch.unique`" by ptrblck (10 Aug 18). https://pytorch.org/docs/stable/_modules/torch/functional.html#unique
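(With that argument, the transformation asked about above becomes a one-liner; this assumes a build that already includes the dim support:)

import torch

x = torch.tensor([[0, 0],
                  [0, 1],
                  [1, 1],
                  [1, 1],
                  [1, 1],
                  [2, 1]])
unique_rows = torch.unique(x, dim=0)  # unique pairs along dim 0
print(unique_rows)
# tensor([[0, 0],
#         [0, 1],
#         [1, 1],
#         [2, 1]])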
st99839
Thanks for pointing this out! I’ve probably forgotten to add the doc to the merge request. Will fix it as soon as possible.
st99840
I am currently freezing an RNN, which incorporates dropout, during the training. If I freeze the RNN, will that layer still use the dropout? If not, how do I also turn off the dropout?
st99841
You can turn off the Dropout layer by calling .eval() of the layer or the model. If you want to freeze your parameters, you would have to set .requires_grad_(False) on the parameters.
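(A small sketch combining the two, assuming the frozen part is an nn.LSTM with internal dropout; the sizes are arbitrary:)

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, dropout=0.5)

# freeze the parameters so the optimizer no longer updates them
for p in rnn.parameters():
    p.requires_grad_(False)

# disable dropout (and other train/eval-dependent behaviour) for this module only
rnn.eval()

# the rest of the model can stay in training mode; pass only the still-trainable
# parameters to the optimizer, e.g.:
# optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()))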
st99842
You can also give a dropout probability of 0.0 (a float value), if you don’t want any dropout to be applied during training (i.e. so that it won’t zero-out some of the tensors). See here: https://github.com/pytorch/examples/blob/master/word_language_model/main.py#L34
st99843
Hi, I just ran several different versions of my network and I saved my models using torch.save(). I now want to reproduce the experiment with the best accuracy, but I don’t remember which version of my network’s class was used in that experiment. When I tried to load the model with torch.load() I got the following message (which exactly describes the situation I’m in): “you can retrieve the original source code by accessing the object’s source attribute or set Module.dump_patches = True and use the patch tool to revert the changes.” The problem is I don’t see any “source attribute” in my loaded object and don’t know what the patch tool is. Does anyone know how I can access the source code of the network’s class definition that was used when the saved model was created? Thanks a lot.
st99844
For what it’s worth, if you run a Python interactive session and do

torch.nn.Module.dump_patches = True
model = torch.load("mymodel.model")

…pytorch will save one or more *.patch files which you can read in a text editor. They read like a typical file diff, showing the differences between your current code and what was loaded. You can use the command-line tool “patch” if you want to automatically change your current source files to be the same as the loaded code - or you can make any changes you feel are necessary manually based on the diff. Or just ignore the warning, if the diff looks like only superficial changes have been made.
st99845
As the title goes: my network classifies all samples as the last class it was trained on. It doesn’t matter if it trained for 12 hours, 12 minutes, using 100000 samples or 10 samples, etc. It always only predicts the last class it was trained on. So if the last training loop was on class 0, all classifications become 0. Is there a known fix for this issue? How do I find what causes this?
st99846
How are your training and validation accuracies during training? Do you set your model to eval using model.eval() before testing it?
st99847
Thanks for the reply! ptrblck: Do you set your model to eval using model.eval() before testing it? I set model.train() during training and model.eval() during testing. (I have dropout layer) I dont calculate any validation accuracies during training. I’ll look into doing that. What insight will validation accuracies give me?
st99848
If your model behaves strange during training, the training accuracies should be high, since your model seems to “learn” the last classes, while the validation set should score really low.
st99849
Hi again, I’ve finally tested it and you’re totally right. I get very high training accuracies (89%) and low validation accuracies (38%). I think my problem is that I want the network to give me a classification every 5 time samples, which means that the same class is classified many times in a row for long time series. I can see that the first 10-11 classification attempts are wrong, but then the model learns to classify the correct class, and for the remaining part of the time series the class is correct. However, it resets and starts over when a new time series is used. I calculate the loss and update the optimizer each time it gives a classification. Since it gives many classifications for each time series, the model quickly learns the single time series but forgets everything else.

loss = criterion(output, y)
# Reset gradient
optimizer.zero_grad()
loss.backward()
optimizer.step()

Anything to combat this? Does it make sense to simply call model() and from that calculate the predicted class, without updating the optimizer or calculating the loss?

output = model(nbatch, batch_size)

And then maybe calculate the loss and update the optimizer at fewer intervals (like every 100 or 1000)?
st99850
I’m running into the following error while calling torch.load() on a checkpointed model file:

>>> torch.load('model_best.pth.tar')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ubuntu/anaconda3/envs/pytorch_source/lib/python3.7/site-packages/torch/serialization.py", line 358, in load
    return _load(f, map_location, pickle_module)
  File "/home/ubuntu/anaconda3/envs/pytorch_source/lib/python3.7/site-packages/torch/serialization.py", line 549, in _load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: storage has wrong size: expected -7659745797817883467 got 512

I’m pretty sure the model file is not corrupted; it was produced as a result of a distributed run across many machines. Any pointers as to what might be wrong are greatly appreciated!
st99851
I have a variable a = Variable(torch.Tensor(5,5)), is there any way to calculate the determinant of that variable?
st99852
import torch
import numpy as np
from torch.autograd import Variable

a = Variable(torch.randn(5, 5))
np.linalg.det(a.data.numpy())
st99853
In fact, I want to get the gradient of the det w.r.t. each element of the matrix.
st99854
I believe the results would be numerically unstable for a large matrix. There was some discussion on TensorFlow github issues to that effect.
st99855
It’s sad that PyTorch does not have a determinant function. This basically makes it impossible (ok, very hard) to implement Gaussian mixture density networks with a full covariance matrix.
st99856
Hi @dpernes, the happy news is that between the last post and yours, we added documentation to make the Cholesky functions easier to find, so torch.potrf(a).diag().prod() gives you the determinant. (The functions are exactly the same as in 0.1.12, too, but be sure to use the master docs.) If you want a differentiable version, you could make a Cholesky layer by combination with inverse (in lieu of having triangular solving in autograd) and then do Cholesky.apply(v).diag().prod(). Although the notebook is not as finished as I would like, there is a Cholesky layer in my notebook doing most basic Gaussian Process regression. Best regards, Thomas
st99857
When I use torch.potrf(a).diag().prod() I get a TypeError: Type Variable doesn’t implement stateless method potrf error.
st99858
tom: torch.potrf(a).diag().prod() I cannot use this function with the new version of pytorch.
st99859
Either use the above on Tensors or Cholesky.apply with the linked autograd Function. Best regards Thomas
st99860
tom: use the above on Tensors Sure, I can use this code torch.potrf(a).diag().prod() when a is a tensor, but I need the operation to work with autograd when I call the backward() function. Would you please help me solve this problem?
st99861
Hi, the Cholesky class from the notebook linked above is a (not terribly good because it uses “inverse” instead of a triangular solver on a matrix we know to be triangular) autograd.Function that does the same as potrf does on Tensors. diag and prod should work on Variables. Note that this only works for positive definite matrices (e.g. covariance matrices). Best regards Thomas
st99862
tom: Cholesky class from the notebook Thanks for your help. I found that I got the square of the determinant with the code.
st99863
If anyone is looking at this thread: Note that potrf has gained differentiability in master/0.3. Best regards Thomas
st99864
There is torch.potrs if you have a system with the symmetric matrix, and the general solver torch.gesv if you want to solve something with the factor as matrix (a triangular solve would be nice, of course). In my candlegp Gaussian Process library I caught myself forgetting to specify upper=False to get the lower factor - that was a greater nuisance than the solver… Best regards Thomas
st99865
@tom I got many problems when using these LAPACK functions (for example). The Cholesky decomposition usually throws the "leading minor of order ... is not positive definite" error, even with a high jitter level (1e-5, sometimes even 1e-4). Your candlegp library is really nice! What do you think about using pyro with it? I made a simple version here (https://github.com/fehiepsi/pytorch-notebooks/blob/master/executable/GaussianProcess.ipynb, sorry for not putting any comments in the code).
st99866
Just for anyone who is still looking for this: after version 0.4, we can compute the determinant of a matrix using torch.det(A)
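(And it is differentiable, so the gradient question from the start of the thread is covered as well; a quick sketch:)

import torch

A = torch.randn(5, 5, requires_grad=True)
d = torch.det(A)
d.backward()
print(d)       # the determinant
print(A.grad)  # d(det)/dA, which by Jacobi's formula equals det(A) * inverse(A).t()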
st99867
Why does PyTorch not use cudnnPoolingForward/Backward? It is only used in Caffe2. Is there any problem when switching from cuda to cudnn? (There should generally be a performance improvement when you switch from cuda to cudnn.)