st102700
Hey guys, I have fixed it by providing the iterator with a way to compare the examples, e.g.

data.Iterator.splits(
    (train_data, eval_data),
    batch_size=batch_size,
    ...,
    sort_key=lambda x: len(x.text))

Here is the documentation in the source code: https://github.com/pytorch/text/blob/2ce827edd10a92de83a2578cea9cf1b0e3b3f1af/torchtext/data/iterator.py#L27
st102701
Another way to fix it is by adding the argument sort=False:

train_iter, dev_iter = twitter_data(text_field, label_field, device=-1, repeat=False, sort=False)
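For reference, a minimal sketch of how the two fixes above fit together, assuming torchtext's data.Iterator and the dataset names from this thread (adjust to your own setup):

from torchtext import data

# option 1: give the iterator a sort_key so it can compare examples
train_iter, dev_iter = data.Iterator.splits(
    (train_data, eval_data),
    batch_size=32,
    sort_key=lambda x: len(x.text),   # compare examples by text length
    repeat=False)

# option 2: skip sorting entirely
train_iter, dev_iter = data.Iterator.splits(
    (train_data, eval_data),
    batch_size=32,
    repeat=False,
    sort=False)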
st102702
Lurking for some transparency on the grand design decisions of the PyTorch and Caffe2 teams. It seems Caffe2 is faster at introducing new neural net layers into the core, and there also seem to be no systematic benchmarks on the speed of ops in the two frameworks. Is convergence planned, so that both PyTorch and Caffe2 provide the same ops running on the same backends? Is PyTorch looking to generally port all main ops to native ATen ops? What is the general idea going forward for torch.script? Will it evolve into a general compiler/optimizer working on graphs or on some industry-standardized IR (like what LLVM became; then support for future languages, say Julia or even JavaScript, is easier)? Thanks!
st102703
Road to 1.0 should answer nearly all your questions. For the benchmarks you may look at this, this, or this link. They usually are not comparing single ops, but to get a general overview they are still usable, even if they don't use the current stable release. @smth does not even compare with PyTorch but with Torch, which should be equally fast, as huge parts of the backends are/were the same (not sure what has been changed by the migration to ATen).
st102704
I did read the blog post before posting this, but thanks for the link anyway. Unfortunately, I could not understand the specifics from the blog post; I am re-phrasing them below: What does "we decided to marry PyTorch and Caffe2, which gives the production-level readiness for PyTorch" mean exactly? Does it mean that more of the backend ops from Caffe2 and PyTorch are going to converge? Yes, from the blog post it is clear that export and interchange are going to be easier, but I could not understand whether more convergence is planned (at least for NVIDIA GPU / CPU ops). For torch.script - yes, from the blog post it is clear that it helps to optimize PyTorch (or imported models), but does it aim for a larger place in the ecosystem (maybe even separated from PyTorch itself)?
st102705
vadimkantorov:
What does "we decided to marry PyTorch and Caffe2, which gives the production-level readiness for PyTorch" mean exactly?

I read on GitHub that there is a new backend called C10 in progress which combines features and backends from ATen and Caffe2. This backend should be a more generic one, which means that adding new tensor types and similar stuff will be easier (the actual discussion was about introducing complex tensors).

vadimkantorov:
For torch.script - yes, from the blog post it is clear that it helps to optimize PyTorch (or imported models), but does it aim for a larger place in the ecosystem (maybe even separated from PyTorch itself)?

For me it reads like this will become a larger ecosystem (together with everything else from the JIT part), and the traced models will be Python- (and thus also PyTorch-) independent but will still have strong backend dependencies (which will most likely be C10 at that time).
st102706
Hi, I have been working with Torch7 and just switched to PyTorch recently, so I might be missing something very basic. I wanted to port some code to Python. While I could easily get most of the network and related stuff ported without any issues, I wanted some information regarding the criterion. One of the pieces of code that I needed to port is a scale-invariant MSE criterion. The original code uses the MSE errors internally, but does some processing in the forward and backward passes of the criterion. The code is:

-- weighted MSE criterion, scale invariant, and with mask
local WeightedMSE, parent = torch.class('nn.WeightedMSE', 'nn.Criterion')

function WeightedMSE:__init(scale_invariant)
    parent.__init(self)
    -- we use a standard MSE criterion internally
    self.criterion = nn.MSECriterion()
    self.criterion.sizeAverage = false
    -- whether consider scale invarient
    self.scale_invariant = scale_invariant or false
end

-- targets should contains {target, weight}
function WeightedMSE:updateOutput(pred, targets)
    local target = targets[1]
    local weight = targets[2]
    -- scale-invariant: rescale the pred to target scale
    if self.scale_invariant then
        -- get the dimension and size
        local dim = target:dim()
        local size = target:size()
        for i = 1, dim - 2 do
            size[i] = 1
        end
        -- scale invariant
        local tensor1 = torch.cmul(pred, target)
        local tensor2 = torch.cmul(pred, pred)
        -- get the scale
        self.scale = torch.cdiv(tensor1:sum(dim):sum(dim-1), tensor2:sum(dim):sum(dim-1))
        -- patch NaN
        self.scale[self.scale:ne(self.scale)] = 1
        -- constrain the scale in [0.1, 10]
        self.scale:cmin(10)
        self.scale:cmax(0.1)
        -- expand the scale
        self.scale = self.scale:repeatTensor(size)
        -- re-scale the pred
        pred:cmul(self.scale)
    end
    -- sum for normalize
    self.alpha = torch.cmul(weight, weight):sum()
    if self.alpha ~= 0 then
        self.alpha = 1 / self.alpha
    end
    -- apply weight to pred and target, and keep a record for them so that we do not need to re-calculate
    self.weighted_pred = torch.cmul(pred, weight)
    self.weighted_target = torch.cmul(target, weight)
    return self.criterion:forward(self.weighted_pred, self.weighted_target) * self.alpha
end

function WeightedMSE:updateGradInput(input, target)
    self.grad = self.criterion:backward(self.weighted_pred, self.weighted_target)
    if self.scale then
        self.grad:cdiv(self.scale)
        -- patch NaN
        self.grad[self.grad:ne(self.grad)] = 0
    end
    return self.grad * self.alpha
end

The full source code is at https://github.com/shi-jian/shapenet-intrinsics/blob/master/train/Criterion.lua But I couldn't find any way to extend a criterion in PyTorch. Is it the same as nn.Module, only this will be treated as a loss function? I would be really grateful if someone could help me with this problem. Thanks!
st102707
The fastest thing to do is to write a new loss function in PyTorch. This can either be done by writing an autograd Function in Python, or just by defining some Python function that does the math you want it to. For example,

def mse_loss(input, target):
    return ((input - target) ** 2).mean()

would give you some MSE loss.
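If you need a hand-written backward as well (closer to the Lua criterion above), one option is a custom autograd Function. A minimal sketch, assuming the static-method forward/backward API and a hypothetical weighted MSE (not the exact criterion from the original post):

import torch

class WeightedMSE(torch.autograd.Function):
    # computes 0.5 * sum(weight * (pred - target)^2) with an explicit backward

    @staticmethod
    def forward(ctx, pred, target, weight):
        diff = pred - target
        ctx.save_for_backward(diff, weight)
        return 0.5 * (weight * diff * diff).sum()

    @staticmethod
    def backward(ctx, grad_output):
        diff, weight = ctx.saved_tensors
        grad_pred = grad_output * weight * diff
        # no gradients are needed for target and weight
        return grad_pred, None, None

# usage: loss = WeightedMSE.apply(pred, target, weight)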
st102708
Hi, thanks for your reply, but my requirement isn't about just implementing an MSE loss. I wanted to know how I could define a custom backward and forward for a criterion. In the code that I needed to port, you can see that there are some operations that need to be done in the backward pass.
st102709
Hi, I am working on deploying a pre-trained LSTM model using ONNX. I have obtained the .onnx file following the tutorial "Transferring a model from PyTorch to Caffe2 and Mobile using ONNX". But for my own model, which is a simple 1-layer LSTM, an error occurs:

Traceback (most recent call last):
  File "test.py", line 42, in <module>
    get_onnx_file()
  File "test.py", line 40, in get_onnx_file
    torch_out = torch.onnx.export(model, (x, hc), onnx_filename, verbose=True)
  File "/home/me/anaconda2/lib/python2.7/site-packages/torch/onnx/__init__.py", line 75, in export
    _export(model, args, f, export_params, verbose, training)
  File "/home/me/anaconda2/lib/python2.7/site-packages/torch/onnx/__init__.py", line 102, in _export
    torch._C._jit_pass_onnx(trace)
  File "/home/me/anaconda2/lib/python2.7/site-packages/torch/onnx/__init__.py", line 133, in _run_symbolic_method
    return symbolic_fn(*args)
  File "/home/me/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 357, in symbolic
    raise RuntimeError("hack_onnx_rnn NYI")
RuntimeError: hack_onnx_rnn NYI

The code that can reproduce my error is this:

import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F

## build a simple LSTM network
class SimpleLSTM(nn.Module):
    def __init__(self, in_size, hidden_size):
        super(SimpleLSTM, self).__init__()
        self.nlayers = 1
        self.nhid = hidden_size
        self.ninp = in_size
        self.rnn = nn.LSTM(in_size, hidden_size, self.nlayers)

    def init_hidden(self, bsz, volatile=False):
        weight = next(self.parameters()).data
        return (Variable(weight.new(self.nlayers, bsz, self.nhid).zero_(), volatile=volatile),
                Variable(weight.new(self.nlayers, bsz, self.nhid).zero_(), volatile=volatile))

    def forward(self, x, hidden):
        out, hidden_tp1 = self.rnn(x, hidden)
        return out, hidden_tp1

in_size, hidden_size = 6, 128

def get_model():
    model = SimpleLSTM(in_size, hidden_size)
    return model

model = get_model()

## get onnx file using `export`
def get_onnx_file():
    onnx_filename = './lstm.onnx'
    # input: x
    x = Variable(torch.rand(1, 1, in_size), volatile=True)
    # input: hidden
    hc = model.init_hidden(1, True)
    torch_out = torch.onnx.export(model, (x, hc), onnx_filename, verbose=True)

# try to output onnx file, where error occurs
get_onnx_file()

Could anybody give me a hint? Thanks! PS: I found this in the documentation page of torch.onnx: "We plan on expanding support to more operators; RNNs are high on our priority list." So RNNs (i.e. RNN, LSTM/GRU) have not been supported by ONNX yet?
st102710
Got the same problem here. According to the docs, v0.3 should already support the RNN operation, but I still get the same error. Do we need to install from master?
st102711
I think you can give it a try. I gave up on ONNX at that point. I exported the weights of the LSTM model to MXNet and used TVM.
st102712
How did you export the weights to MXNet? Did you re-train the net with MXNet directly?
st102713
I didn’t retrain the model using MXNet. Just fetched the weights from the LSTM model and filled them into MXNet params. It is straightforward.
st102714
You'd better first try to rebuild PyTorch from the source code of the master branch. Glad to help you.
st102715
Hey guys, I'm implementing some RL and got stuck at a, in my opinion, weird behaviour. I use DataParallel and the device tag to move my nets/data to the available device(s). Using the CPU or one CUDA device everything works fine, but if I use more than one device, I get the following error:

File "_test_script.py", line 229, in <module>
  main()
File "_test_script.py", line 177, in main
  mae = ddpg.validate(states_val, labels_val, mean_train, std_train).item()
File "RL/DDPG/ddpg_linear_baseline.py", line 415, in validate
  forecast = self.actor_target(state)
File "env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
  result = self.forward(*input, **kwargs)
File "env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 114, in forward
  outputs = self.parallel_apply(replicas, inputs, kwargs)
File "env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 124, in parallel_apply
  return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "env/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 65, in parallel_apply
  raise output
File "env/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 41, in _worker
  output = module(*input, **kwargs)
File "env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
  result = self.forward(*input, **kwargs)
File "RL/Nets/actor/actor_linear_baseline.py", line 28, in forward
  x = F.relu(self.input_layer(x))
File "env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
  result = self.forward(*input, **kwargs)
File "env/lib/python3.5/site-packages/torch/nn/modules/linear.py", line 55, in forward
  return F.linear(input, self.weight, self.bias)
File "env/lib/python3.5/site-packages/torc
RuntimeError: size mismatch, m1: [1 x 144], m2: [288 x 256] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:249

The input layer of actor_target is a simple linear layer:

self.input_layer = nn.Linear(288, 256)

I already checked the batch count, so that the data can be distributed evenly among the devices. Does anyone know what I'm doing wrong? Thanks! Bene
st102716
Solved by ptrblck in post #7.
st102717
Could you post your forward function? I guess you are using view, which might yield the wrong shape.
st102718
This is my forward function:

def forward(self, state):
    # no batch normalization if there is no batch
    no_batch = True if len(state[0].shape) == 0 else False
    x = self.dropout_layer_1(state)
    x = F.relu(self.input_layer(x))
    x = self.batch_norm_1(x) if not no_batch else x
    x = self.dropout_layer_2(x)
    x = F.relu(self.hidden_layer_1(x))
    x = self.batch_norm_2(x) if not no_batch else x
    x = self.dropout_layer_3(x)
    x = F.relu(self.hidden_layer_2(x))
    x = self.batch_norm_3(x) if not no_batch else x
    x = self.dropout_layer_4(x)
    x = self.output_layer(x)
    # TODO: clamp between useful values
    return x

Why could this lead to a problem? Especially if it works on CPU/one device? Thanks!
st102719
Apparently it was a wrong assumption. Could you print the shape of state in forward?
st102720
I inserted

def forward(self, state):
    # no batch normalization if there is no batch
    print(state.shape)
    no_batch = True if len(state[0].shape) == 0 else False
    x = self.dropout_layer_1(state)

and got (while running on 4 devices):

torch.Size([72])
torch.Size([72])
torch.Size([72])
torch.Size([72])

After this it crashed.
st102721
It looks like it is failing during validation, so the batch size is 1 - which could explain the shape of 72 (72*4=288), but I called net.eval() before… . Do I have to do some other calls before calling forward when the batch size is smaller than the number of devices? Additionally, I'm implementing DDPG, where I have to iteratively evaluate/call the net with one state/a batch of one state… .
st102722
Could you try to add a batch dimension to your data? For a batch size of 1, your input shape should be [1, in_features]. I assume 72 is your feature dimension. If so, nn.DataParallel might split on the wrong dimension.
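A minimal sketch of what that could look like, assuming state is the 1-D tensor of shape [288] from above and actor_target is the module from the traceback:

state = state.unsqueeze(0)            # [288] -> [1, 288]: add the batch dimension
forecast = self.actor_target(state)   # nn.DataParallel now splits along dim 0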
st102723
My number of features is 288. I did state.view(1, -1), such that the dimension of the state is now [1, 288], but now I get the following error:

{ValueError} Expected more than 1 value per channel when training, got input size [1, 256]

If I do state.view(-1, 1), such that the dimension of the state is [288, 1], then I'll get:

{RuntimeError} size mismatch, m1: [288 x 1], m2: [288 x 256] at c:\programdata\miniconda3\conda-bld\pytorch-cpu_1524541161962\work\aten\src\th\generic/THTensorMath.c:2033
st102724
Seems to be working - thank you very much! (It’s running on a cluster now -> need some time until I get the log…)
st102725
I am new to PyTorch and I tried creating my own custom loss. This has been really challenging. Below is what I have for my loss.

class CustomLoss(nn.Module):
    def __init__(self, size_average=True, reduce=True):
        """
        Args:
            size_average (bool, optional): By default, the losses are averaged over observations
                for each minibatch. However, if the field size_average is set to ``False``, the
                losses are instead summed for each minibatch. Only applies when reduce is ``True``.
                Default: ``True``
            reduce (bool, optional): By default, the losses are averaged over observations for each
                minibatch, or summed, depending on size_average. When reduce is ``False``, returns
                a loss per input/target element instead and ignores size_average. Default: ``True``
        """
        super(CustomLoss, self).__init__()

    def forward(self, S, N, M, type='softmax'):
        return self.loss_cal(S, N, M, type)

    ### new loss cal
    def loss_cal(self, S, N, M, type="softmax"):
        """ calculate loss with similarity matrix(S) eq.(6) (7)
        :type: "softmax" or "contrast"
        :return: loss
        """
        self.A = torch.cat([S[i * M:(i + 1) * M, i:(i + 1)] for i in range(N)], dim=0)
        self.A = torch.autograd.Variable(self.A)
        if type == "softmax":
            self.B = torch.log(torch.sum(torch.exp(S.float()), dim=1, keepdim=True) + 1e-8)
            self.B = torch.autograd.Variable(self.B)
            total = torch.abs(torch.sum(self.A - self.B))
        else:
            raise AssertionError("loss type should be softmax or contrast !")
        return total

When I run the following:

loss = CustomLoss()
(loss.loss_cal(S=S, N=N, M=M))
loss.backward()

I get the following error:

C:\Program Files\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py in run_cell_magic(self, magic_name, line, cell)
   2113         magic_arg_s = self.var_expand(line, stack_depth)
   2114         with self.builtin_trap:
-> 2115             result = fn(magic_arg_s, cell)
   2116         return result
   2117
<decorator-gen-60> in time(self, line, cell, local_ns)
C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magic.py in <lambda>(f, *a, **k)
    186     # but it's overkill for just that one bit of state.
    187     def magic_deco(arg):
--> 188         call = lambda f, *a, **k: f(*a, **k)
    189
    190     if callable(arg):
C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magics\execution.py in time(self, line, cell, local_ns)
   1178         else:
   1179             st = clock2()
-> 1180             exec(code, glob, local_ns)
   1181             end = clock2()
   1182             out = None
<timed exec> in <module>()
C:\Program Files\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __getattr__(self, name)
    530                 return modules[name]
    531         raise AttributeError("'{}' object has no attribute '{}'".format(
--> 532             type(self).__name__, name))
    533
    534     def __setattr__(self, name, value):
AttributeError: 'CustomLoss' object has no attribute 'backward'
st102726
Trying to get data from a tensor with tensor.data[0] and I get AttributeError: 'DoubleTensor' object has no attribute 'data' Any idea why this is ?
st102727
Tensors don’t have a data attribute (Variables do). Just use tensor[0]. (Variable is a wrapper around tensor that supports automatic differentiation. Variable.data is the underlying tensor)
st102728
Now I realize what the problem is. One of my objects is a Variable, the other is a tensor. <class 'torch.autograd.variable.Variable'> <class 'torch.FloatTensor'> How can I convert from the former to the latter (autograd Variable to FloatTensor)? EDIT: Oh, now I understand, to go from Variable to tensor you just use variable.data
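For completeness, the pre-0.4 conversion in both directions looks roughly like this (Variable wraps a tensor; .data unwraps it):

import torch
from torch.autograd import Variable

t = torch.FloatTensor([1.0, 2.0])
v = Variable(t)        # tensor -> Variable
t_again = v.data       # Variable -> underlying FloatTensor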
st102729
extra info, FWIW: you will get an error in 0.5 onwards if you try tensor(0) UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
st102730
I think for PyTorch 0.4 the tensor and tensor.data are the same thing now.

In [6]: type(x)
Out[6]: torch.Tensor
In [7]: type(x.data)
Out[7]: torch.Tensor
In [8]: x.__class__
Out[8]: torch.Tensor
In [9]: x.data.__class__
Out[9]: torch.Tensor
st102731
.data should be used carefully, as it detaches the tensor from the computation graph and might lead to wrong results. It still has similar semantics as in the previous versions. It’s safer to use tensor.detach() instead.
st102732
Thanks! I’d like to ask one more question. what’s the meaning of ‘might lead to…’. So .data is not exactly the same as .detach?
st102733
Yes, that's correct. Both share the underlying data of the tensor and have requires_grad=False. While using x.data is unrelated to the computation graph, x.detach will have its in-place changes reported by autograd if x is needed in backward and will raise an error if necessary. There is an example in the Migration Guide in the "What about .data?" section.
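A small sketch along the lines of the Migration Guide example (sigmoid needs its output for backward, so the in-place change matters):

import torch

a = torch.tensor([1., 2., 3.], requires_grad=True)

out = a.sigmoid()
c = out.detach()
c.zero_()                 # tracked: backward on out would now raise a RuntimeError
# out.sum().backward()    # -> error, as expected

out = a.sigmoid()
c = out.data
c.zero_()                 # not tracked: backward runs silently with wrong gradients
out.sum().backward()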
st102734
Thanks, @ptrblck! So to summarize, they are both used to detach a tensor from the computation graph and return a tensor that shares the same data; the difference is that x.detach() adds the constraint that when the data is changed in-place, the backward won't be done. So why do we still need x.data, is this just a historical reason?
st102735
It’s still used in e.g. optimizers to update the parameters. Although it’s not recommended to use it, there are still valid use cases for .data.
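As an illustration of such a use case, here is a simplified SGD-style update that deliberately bypasses autograd via .data (a sketch, not the actual torch.optim implementation):

lr = 0.01
for p in model.parameters():
    if p.grad is not None:
        # update the weights without recording the operation in the graph
        p.data.add_(-lr * p.grad.data)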
st102736
Hi, I'm trying to learn and play with PyTorch, but I encountered a stack overflow exception. Below are my code snippets. The last line of the code below throws a "Windows fatal exception: stack overflow" at some point during training, while if I change to torch.nn.RNN, things work just fine. Any help will be appreciated.

class VanillaRNNModule(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(VanillaRNNModule, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.rnn = torch.nn.RNN(input_size, hidden_size, num_layers=1, nonlinearity='tanh',
                                batch_first=True, dropout=0, bidirectional=False)
        self.output_layer = torch.nn.Linear(hidden_size, 1)

    def forward(self, input, input_lengths):
        packed_input = torch.nn.utils.rnn.pack_padded_sequence(input, input_lengths, batch_first=True)
        hidden = self.get_init_hidden()
        out, hiddens = self.rnn(packed_input)
        # unpacked_output, unpacked_lens = torch.nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
        output = self.output_layer(hiddens)
        return output

    def get_init_hidden(self):
        return torch.zeros(1, self.hidden_size)


class LSTMRNNModule(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(LSTMRNNModule, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.lstm = torch.nn.LSTM(input_size, hidden_size, num_layers=1, batch_first=True,
                                  dropout=0, bidirectional=False)
        self.output_layer = torch.nn.Linear(hidden_size, 1)

    def forward(self, input, input_lengths):
        packed_input = torch.nn.utils.rnn.pack_padded_sequence(input, input_lengths, batch_first=True)
        out, (hidden, _) = self.lstm(packed_input)
        # unpacked_output, unpacked_lens = torch.nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
        output = self.output_layer(hidden)
        return output

    def get_init_hidden(self):
        return torch.zeros(1, self.hidden_size)


class RNNBinaryClassifierTrainer(object):
    def __init__(self, data_set, hidden_size, batch_size=1):
        self.data_set = data_set
        self.batch_size = batch_size
        # self.rnn = RNNModule(data_set.n_words, hidden_size)
        # self.rnn = VanillaRNNModule(data_set.n_words, hidden_size)
        self.rnn = LSTMRNNModule(data_set.n_words, hidden_size)
        self.all_losses = []
        self.criterion = torch.nn.BCEWithLogitsLoss(size_average=True, reduce=True)
        self.optimizer = torch.optim.SGD(self.rnn.parameters(), lr=0.001, momentum=0.9)

    def train(self, max_iter, evl_every):
        current_loss = 0
        for i in range(1, max_iter + 1):
            st = time.time()
            for X, L, Y in self.data_set.next_train_batch(self.batch_size):
                current_loss += self.train_iter(X, L, Y)
            ed0 = time.time()
            if i % evl_every == 0:
                avg_loss = current_loss / evl_every
                self.all_losses.append(avg_loss)
                # correct_count, total_count = self.evaluate()
                auc = self.evaluate_auc(self.batch_size)
                ed1 = time.time()
                # print("Elapsed: {4}, Iter: {0}, Loss: {1:8.3f}, AUC: {}, Total: {2}, Correct: {3}".format(i, avg_loss, total_count, correct_count, timeSince(time_start), ))
                print("Epoch Used: {0:0.3f}, Eval Used: {1:0.3f}".format(ed0 - st, ed1 - st))
                print("Elapsed: {3}, Iter: {0}, Loss: {1:8.3f}, AUC: {2:0.3f}".format(i, avg_loss, auc, timeSince(time_start)))
                current_loss = 0

    def train_iter(self, input, input_lengths, target):
        self.optimizer.zero_grad()
        output = self.predict(input, input_lengths)
        output = output[0]
        loss = self.criterion(output, target)
        loss.backward()
        self.optimizer.step()
        # return loss.data.item()
        # return loss.item()
        return float(loss)
st102737
I am facing the same issue while using LSTM. I used the Visual Studio 2017 debugger to find this error message:

Unhandled exception at 0x00007FFF493FD7B5 (ucrtbase.dll) in python3.exe: 0xC00000FD: Stack overflow (parameters: 0x0000000000000001, 0x000000C094C13FE8). occurred

OS: Windows 10, Python: 3.6, PyTorch: 0.4.0. Is this something fixed in PR #6873?
st102738
Hey, I think I'm having the same issue. I've attached a simple example, which reproduces the error. I would appreciate any help.

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.rnn = nn.LSTM(
            input_size=input_size,
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=False,
        )

    def forward(self, x):
        outputs, _ = self.rnn(x)
        return Variable(x.view(x.size()[0], 1), requires_grad=True)

model = LSTM(
    input_size=1,
    hidden_size=5,
    num_layers=1
)
criterion = nn.MSELoss()
optimizer = optim.SGD(
    params=model.parameters(),
    lr=0.01
)

trainX = torch.randn(5000, 1, 1)
trainY = torch.randn(5000, 1)

for i in range(5):
    outTrain = model(trainX)
    loss = criterion(outTrain, trainY)
    model.zero_grad()
    loss.backward()
    optimizer.step()

Produces error: 0xc00000fd
st102739
Thanks. I have a question. I'm writing a loss function:

def forward(self, input, target):
    y = one_hot(target, input.size(-1))
    Psoft = torch.nn.functional.softmax(input).cpu()
    Loss = 0.0
    t1 = target.view(1, target.size(0)).cpu()
    for i in range(0, target.size(0) - 1):
        t2 = t1[0, i]
        for j in range(1, t2 + 1):
            P1 = Psoft[i, :j]
            y1 = y[i, :j]
            Loss += sum(P1 - y1) ** 2
    Loss = Loss / target.size(0)
    return Loss

and there will be an error in the line for j in range(1, t2+1):

TypeError: 'Variable' object cannot be interpreted as an integer

If I write it as

def forward(self, input, target):
    y = one_hot(target, input.size(-1))
    Psoft = torch.nn.functional.softmax(input).cpu()
    Loss = 0.0
    t1 = target.data.view(1, target.size(0)).cpu()
    for i in range(0, target.size(0) - 1):
        t2 = t1[0, i]
        for j in range(1, t2 + 1):
            P1 = Psoft[i, :j]
            y1 = y[i, :j]
            Loss += sum(P1 - y1) ** 2
    Loss = Loss / target.size(0)
    return Loss

then there will be another error: the type of "Loss" is float and it doesn't have a gradient. What should I do?
st102740
If t2 contains a single integer value that you want to use as the loop boundary, you can use t2.item() to get a python number from the content of the tensor. For loop boundaries, you might need to do int(t2.item()).
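Applied to the snippet above, that would look something like this:

t2 = t1[0, i]                           # 0-dim tensor holding the loop bound
for j in range(1, int(t2.item()) + 1):  # convert to a Python int for range()
    P1 = Psoft[i, :j]
    y1 = y[i, :j]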
st102741
Hi, I'm trying to export a basic recurrent model to ONNX, but it seems like I'm missing something about the dimension ordering of the inputs or so. I have no problem with a simple forward pass, but I do have one at torch.onnx.export. This is my code:

import torch
import torch.onnx

model = torch.nn.GRU(input_size=3, hidden_size=16, num_layers=1)
x = torch.randn(10, 1, 3)
h = torch.zeros(1, 1, 16)

print(model(x, h))  # produces no errors, prints outputs
torch.onnx.export(model, (x, h), 'temp.onnx', export_params=True, verbose=True)  # produces the RuntimeError below

Error when running the last line:

Traceback (most recent call last):
  File "pytorch-to-caffe2-via-onnx.py", line 64, in <module>
    run(args)
  File "pytorch-to-caffe2-via-onnx.py", line 44, in run
    torch.onnx.export(model, args=(x, h), f=onnx_proto_output)  #, export_params=True, verbose=True)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/onnx/__init__.py", line 25, in export
    return utils.export(*args, **kwargs)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/onnx/utils.py", line 84, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/onnx/utils.py", line 134, in _export
    trace, torch_out = torch.jit.get_trace_graph(model, args)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/jit/__init__.py", line 255, in get_trace_graph
    return LegacyTracedModule(f, nderivs=nderivs)(*args, **kwargs)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/jit/__init__.py", line 288, in forward
    out = self.inner(*trace_inputs)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self._slow_forward(*input, **kwargs)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/nn/modules/module.py", line 479, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 178, in forward
    self.check_forward_args(input, hx, batch_sizes)
  File "/home/lysukhin/distr/anaconda3/envs/pytorch-nightly/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 130, in check_forward_args
    self.input_size, input.size(-1)))
RuntimeError: input.size(-1) must be equal to input_size. Expected 3, got 16
st102742
Solved by ptrblck in post #2.
st102743
Your code works for my builds 0.5.0a0+4028ff6 and 0.4.0. Which PyTorch version are you using?
st102744
UPD: had an error on my side in some other piece of code, sorry. This is already a working example, thanks @ptrblck.
st102745
Good morning guys. I am trying to run a bunch of (complicated) code but I keep getting the following weird error:

INFO:root:Random state initialized with seed 28794055
INFO:root:Ani1 will be loaded...
INFO:root:create splits...
INFO:root:load data...
INFO:root:calculate statistics...
INFO:root:cached statistics was loaded...
INFO:root:training...
Traceback (most recent call last):
  File "schnetpack_ani1.py", line 296, in <module>
    train(args, model, train_loader, val_loader, device)
  File "schnetpack_ani1.py", line 156, in train
    trainer.train(device)
  File "/home/kim.a.nicoli/Projects/Schnetpack_release/src/schnetpack/train.py", line 175, in train
    raise e
  File "/home/kim.a.nicoli/Projects/Schnetpack_release/src/schnetpack/train.py", line 118, in train
    loss = self.loss_fn(train_batch, result)
  File "schnetpack_ani1.py", line 149, in loss
    diff = batch[args.property] - result[0]
TypeError: sub() received an invalid combination of arguments - got (map), but expected one of:
 * (Tensor other, float alpha)
 * (float other, float alpha)

What I don't get is that if I insert some print statements at the level where it crashes, the types of the objects are e.g. Tensors, and not map. Does any of you have any suggestion on what to look for or what might be the problem? I didn't find too much about issues with the map type. Thanks in advance for the help.
st102746
result comes from here:

y = self.atom_pool(yi, atom_mask)
result = [y]
props = ['y']
at_func = namedtuple('atomwise', props)
return at_func(*result)

As for the batch, it is just a tensor of a given batch_size created from the DataLoader. I feel, anyway, that the error might be related to how result is created.
st102747
Thanks for the update. Could you print out the type of result[0] just before the error is thrown?

print(type(result[0]))
print(result[0].type())
st102748
I did it already. That's actually why I was confused. This is the output, although it doesn't make sense to me that it raises the error, since result seems to be of type tensor!

torch.FloatTensor
torch.FloatTensor
st102749
I assume you’ve already checked the type of batch[args.property]. Could you create a small executable code snippet so that I could run it on my machine?
st102750
Bug found. It seems that there is a conflict with DataParallel. If I run the code on a CPU or on a single GPU it works. When I try to go parallel on multiple GPUs it raises the error I reported previously. I double-checked and my guess was right: the issue is related to the type returned in the code I showed you. This means that if I simply return result as a list it works (with DataParallel on multiple GPUs), whereas when I return a tuple through the namedtuple method it breaks. Nonetheless, returning a list is not that nice; I would like to keep returning the namedtuple object. Any clue on how I could overcome the issue?
st102751
Can I run PyTorch v0.3.1 with CUDA 9.0 on Windows 10? Actually, on the site I found, it is supposed to be installed with CUDA 8.0, but I just installed CUDA 9.0 on my laptop.
st102752
Hi, The current version is 0.4.0. You can use the instructions on the website to install it for cuda 9.0.
st102753
I ran the code of a training network on the GPU, and at the same time I ran another simple script that includes data.cuda(). Then my computer crashed.
st102754
It did not work, still the same error. Can I have the path for the PyTorch installation on Anaconda?
st102755
In the file CUDAGenerator.cpp, it looks like the seed() routine calls THCRandom_initialSeed:

uint64_t CUDAGenerator::seed() {
  return THCRandom_initialSeed(context->getTHCState());
}

i.e. it returns the initial seed rather than resetting as in the CPU version:

uint64_t CPUGenerator::seed() {
  return THRandom_seed(generator);
}

Thanks
st102756
It seems like there are some in-place functions in my loss functions. It would help greatly if someone could help me locate them. I can't find them at all!

import torch

class DetectionFocalLoss(torch.nn.Module):
    def __init__(self, alpha=0.25, gamma=2.0):
        super(DetectionFocalLoss, self).__init__()
        self.alpha = alpha
        self.gamma = gamma

    def forward(self, classification, target):
        torch.nn.modules.loss._assert_no_grad(target)
        # Gather anchor states from target
        # anchor state is used to check how loss should be calculated
        # -1: ignore, 0: negative, 1: positive
        anchor_state = target[:, :, -1]
        target = target[:, :, :-1]

        # Filter out ignore anchors
        indices = anchor_state != -1
        if torch.sum(indices) == 0:
            # Return 0 if ignore all
            return torch.zeros_like(classification[0, 0, 0])
        classification = classification[indices].clone()
        target = target[indices].clone()

        # compute focal loss
        bce = -(target * torch.log(classification) + (1.0 - target) * torch.log(1.0 - classification))
        alpha_factor = torch.ones_like(target)
        alpha_factor = alpha_factor * self.alpha
        alpha_factor[target != 1] = 1 - self.alpha
        focal_weight = classification
        focal_weight[target == 1] = 1 - focal_weight[target == 1].clone()
        focal_weight = alpha_factor * focal_weight ** self.gamma
        cls_loss = focal_weight * bce

        # Compute the normalizing factor: number of positive anchors
        normalizer = torch.sum(anchor_state == 1).float()
        normalizer = max(normalizer, 1)
        return torch.sum(cls_loss) / normalizer


import torch

class DetectionSmoothL1Loss(torch.nn.Module):
    def __init__(self, sigma=3.0):
        super(DetectionSmoothL1Loss, self).__init__()
        self.sigma_squared = sigma ** 2

    def forward(self, regression, target):
        torch.nn.modules.loss._assert_no_grad(target)
        regression_target = target[:, :, :4]
        # anchor state is used to check how loss should be calculated
        # -1: ignore, 0: negative, 1: positive
        anchor_state = target[:, :, 4]

        # filter out "ignore" anchors
        indices = anchor_state == 1
        if torch.sum(indices) == 0:
            # Return 0 if ignore all
            return torch.zeros_like(regression[0, 0, 0])
        regression = regression[indices].clone()
        regression_target = regression_target[indices].clone()

        # compute smooth L1 loss
        # f(x) = 0.5 * (sigma * x)^2        if |x| < 1 / sigma / sigma
        #        |x| - 0.5 / sigma / sigma  otherwise
        regression_diff = regression - regression_target
        regression_diff = torch.abs(regression_diff)
        to_smooth = regression_diff < 1.0 / self.sigma_squared
        regression_loss = torch.zeros_like(regression_diff)
        regression_loss[to_smooth] = 0.5 * self.sigma_squared * regression_diff[to_smooth].clone() ** 2
        regression_loss[to_smooth == 0] = regression_diff[to_smooth == 0].clone() - 0.5 / self.sigma_squared

        # compute the normalizer: the number of positive anchors
        normalizer = torch.sum(anchor_state == 1).float()
        normalizer = max(normalizer, 1)
        return torch.sum(regression_loss) / normalizer
st102757
What's the error message you are getting? You could check in a debugger, after each line, whether you can call backward on the line's result.
st102758
My code is as follows. When alpha is set to 0 in the first function and I train the network, I expect to get similar behavior to using the second function for training. But I get totally different results!!! Setting alpha to 0 leads to wrong results. This bothers me a lot.

def loss_fn_kd(outputs, labels, teacher_outputs, alpha, T):
    """
    Compute the knowledge-distillation (KD) loss given outputs, labels.
    "Hyperparameters": temperature and alpha
    """
    loss1 = nn.KLDivLoss(size_average=False)(F.log_softmax(outputs / T, dim=1),
                                             F.softmax(teacher_outputs / T, dim=1)) * (alpha * T * T)
    loss2 = F.cross_entropy(outputs, labels, size_average=False) * (1. - alpha)
    KD_loss = loss1 + loss2
    return KD_loss / outputs.size(0)

def loss_fn_kd(outputs, labels, teacher_outputs, alpha, T):
    """
    Compute the knowledge-distillation (KD) loss given outputs, labels.
    "Hyperparameters": temperature and alpha
    """
    KD_loss = F.cross_entropy(outputs, labels, size_average=False) * (1. - alpha)
    return KD_loss / outputs.size(0)
st102759
With alpha = 0, both loss_fn_kd are exactly equivalent. I cannot help further with the given information.
st102760
I don't really understand the proper way, if there is one, to progressively add layers to neural networks for something like a progressive autoencoder or GAN. First of all, should you create the entire network first, block access to the bigger input layers, and just grab results from the inner layers first? Or do you make a small network first and, when it hits some threshold, add a new layer to the network? Finally, how do we add new layers to the network during training? I would like to experiment with a progressive autoencoder; for the input it's easy, we can just use transforms.Resize(), but how do you add outer network layers that take new inputs on the fly? Edited: Also, if it's possible to add new layers during training, do I need to call model = model() again? Will that reset all the previous weight parameters?
st102761
Solved by Carl in post #2.
st102762
The simplest way to do it is to train certain layers while having the other layers act like the identity function. You can select which parameters you want to train when making your optimizer (https://pytorch.org/docs/stable/optim.html#per-parameter-options). To speed things up, you can avoid computing gradients for the modules that you don't train (https://pytorch.org/docs/stable/notes/autograd.html#excluding-subgraphs-from-backward). Dynamically adding/removing modules is relatively easy in Tensorflow/Keras, since a graph of the model is available on the Python side. In PyTorch, you cannot traverse the graph of your model to insert modules. What you could do is re-instantiate a model with more layers, but then you will have trouble loading the state_dict. (It's still doable; you can first get the state_dict of the new bigger model, and copy the available key/values from the state_dict of the smaller model. Then you can load the mutated state_dict into the new model. That's what I myself do for replacing modules inside a network.) Good luck!
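A rough sketch of the state_dict migration described above (SmallNet and BigNet are hypothetical module classes; only keys with matching shapes are copied over):

small = SmallNet()          # the model trained so far
big = BigNet()              # re-instantiated model with more layers

big_state = big.state_dict()
for key, value in small.state_dict().items():
    if key in big_state and big_state[key].shape == value.shape:
        big_state[key] = value          # reuse the already-trained weights

big.load_state_dict(big_state)          # the new layers keep their fresh init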
st102763
Thank you I am currently looking at a progressive gan implementation in pytorch and just like you said they would re-instantiate their network, copy and paste back the state_dict every time when they want to grow their network. It looks easy enough to implement I might just do that first.
st102764
This is the current batchnorm calculation:

y = (x - mean[x]) / sqrt(Var[x] + eps) * gamma + beta

I want to formulate it as y = k*x + b. I am wondering how I can get the value of Var[x] and mean[x]. Is using the model.state_dict().values() function a good idea? And how can I use it while training a model? Do you have any examples to show me? I know it's a weird question, but if you have some suggestions, please let me know. Thank you very much.
st102765
Do you want to use the parameters from another BatchNorm layer and just add another layer on top of it, or do you want to rewrite it completely? In the former case you could try something like this:

class MyBatchNorm(nn.Module):
    def __init__(self, num_features, eps=1e-05, momentum=0.1, affine=True):
        super(MyBatchNorm, self).__init__()
        self.bn = nn.BatchNorm1d(num_features, eps=eps, momentum=momentum, affine=affine)

    def forward(self, x):
        x = self.bn(x)
        mu = self.bn.running_mean
        var = self.bn.running_var
        gamma = self.bn.weight
        beta = self.bn.bias
        eps = self.bn.eps
        k = gamma.data / torch.sqrt(var + eps)
        x.data = k * x.data + beta.data
        return x

mybn = MyBatchNorm(10)
x = Variable(torch.randn(16, 10))
x_ = mybn(x)

Note that I calculated mu, var, ... separately just to show how to get them. Of course you can simplify the code and just call e.g. self.bn.weight for your calculations. Let me know if this meets your need or if I misunderstood your question.
st102766
when I run the code, I got this error: RuntimeError: invalid argument 3: sizes do not match at /pytorch/torch/lib/THC/generated/…/generic/THCTensorMathPointwise.cu:351 Do you know where the problem is?? Thanks a lot
st102767
Which Pytorch version are you using? Could you check it with print(torch.__version__)? Maybe your Pytorch version is a bit older and doesn’t support broadcasting yet.
st102768
I am using 0.3.0.post4. I think it is the latest PyTorch version I can get now. Besides, I have another question that quite confuses me. When I use

for child in model.named_children():
    print(child)
    layer_name = child[0]
    layer_params = {}
    for param in child[1].named_parameters():
        #print(param)
        param_name = param[0]
        param_value = param[1].data.numpy()
        layer_params[param_name] = param_value
    save_name = layer_name + '.npy'
    np.save(save_name, layer_params)

to save the parameters, I get gamma with 8 decimal places, and when using gamma = self.bn.weight, I only get gamma with 4 decimal places. How's that? My English is not so good, so let me know if you are confused by my questions. Thank you very much.
st102769
The Tensor and numpy array are using the same data, so it’s just a representation issue. Try torch.set_printoptions(precision=10) and print gamma again. Also, could you give me the line throwing the RuntimeError?
st102770
Oh, I see, that's a huge help! Thank you so much. And the line throwing the error is:

x.data = k * x.data + beta.data

I commented out the line and it seems fine.

self.BinarizedConv2d2 = BinarizeConv2d(128, 128, kernel_size=3, padding=1, bias=False)
self.MaxPool2d2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.BatchNorm2d2 = BatchNorm(128)
self.Hardtanh2 = nn.Hardtanh(inplace=True)

This is part of my model. Could you tell me how to use bn.running_mean directly in the model, please? Again, you have been a great help to me. Thank you so much.
st102771
You’re welcome Hmm, commenting out the line is not the solution, since it’s the formula you are looking for. Could you please print the shapes of all Tensors? The quoted code seems to be the __init__ function of your Module? Try to adapt my code snippet into the forward function.
st102772
Yes, you are right. The quoted code is the __init__ function of my model. I defined some of the layers in one file, and call these layers in another file. [Two screenshots of the model code omitted.] This is how I use your code; is it right? And do I need to print the shape of the tensors in all layers or just the tensors in the BatchNorm layers? Or could you please tell me the specific tensors I need to check? I am quite confused now. Appreciate your help.
st102773
I changed self.bn = nn.BatchNorm1d(...) into self.bn = nn.BatchNorm2d(...). Could this change cause the mismatch?
st102774
It may have something to do with my input?? I think I need to use the Conv2d layer’s output as the input of the BatchNorm2d layer. What do you think? Thank you for your reply.
st102775
Yes, you should use the output of your conv layers as the input to your batch norm. It should also work with BatchNorm2d, if the shapes are right. Could you print the shapes of all Tensors used in the calculation which causes the error?
st102776
Is this what you need to check? I'm afraid I misunderstand what you mean. BTW, the output of Conv2d is also (30L, 128L, 32L, 32L).
st102777
I see, it was my mistake. I hadn't checked the sizes properly. Try to change the calculation to this line:

x.data = k.view(1, -1, 1, 1).expand_as(x) * x.data + beta.view(1, -1, 1, 1).expand_as(x).data

I'm sure it's not the optimal way to calculate it, so maybe someone can propose a better way. @SimonW Yeah, you are right! The proposed calculations in the first post should work however, even though BN is nonlinear, or am I missing something?
st102778
@ptrblck Oh I’m sorry. I was just answering the question in title. No intention to say that your code is wrong
st102779
@SimonW No worries! I overlooked the title more or less and was wondering if my solution still makes any sense. @LJ_Mason You’re welcome!
st102780
I have the same question about the batchnorm; I also need to extract the parameters from the batchnorm. But looking at the MyBatchNorm method you wrote, the final calculation of x.data is incomplete: compared with the full batchnorm calculation

y = (x - mean[x]) / sqrt(Var[x] + eps) * gamma + beta

it is missing the term -mean[x] * gamma / sqrt(Var[x] + eps). I added that part to the program (screenshot omitted); the program runs without error, but the network loss becomes very bad, so I think my modification is wrong. Could you tell me how to rewrite this part of the code, and how it should be written for BatchNorm1d and BatchNorm2d respectively? I hope you can give an answer, thank you.
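For reference, a hedged sketch of the full folding, including the missing mean term, computed from a trained BatchNorm layer bn; this matches the batchnorm formula in eval mode (running statistics), and for older Variable-based versions you may need .data on weight/bias:

# fold batchnorm into y = k * x + b
k = bn.weight / torch.sqrt(bn.running_var + bn.eps)
b = bn.bias - bn.running_mean * k

# BatchNorm1d, input of shape (N, C):
y = k * x + b
# BatchNorm2d, input of shape (N, C, H, W): reshape for broadcasting over channels
y = k.view(1, -1, 1, 1) * x + b.view(1, -1, 1, 1)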
st102781
At inference time, or when doing hyperparameter search, I would like to have multiple processes each running one PyTorch model (independently from each other). I'm trying to achieve this on CPU. In the case of hyperparameter search, for instance, I wrote some code to have a pool of processes trying different hyperparameter configurations at the same time. I experimented with PyTorch and a basic scikit-learn logreg. When using the scikit-learn logistic regression, there is a clear gain in using the multiprocessing pool: the time spent with one process is about twice the time with two processes. However, when using PyTorch models, there is no gain from multiprocessing at all. In fact, the more processes I use, the slower it becomes. Therefore I was wondering: could MKL be responsible for this behavior? Since the matrix operations are multi-threaded, could it be that multiple processes running MKL compete too much with each other, reducing the efficiency of training or inference?
st102782
Solved by smth in post #2.
st102783
If you want to have PyTorch CPU models run in multiple processes and see speedups, set the environment variables

export MKL_NUM_THREADS=1
export OMP_NUM_THREADS=1

in your shell before starting Python. This will make sure that the multithreading pools in the multiple processes are disabled and hence don't fight with each other.
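If it is more convenient to set this from Python, the same thing can be done before torch is imported (a sketch; the variables must be set before the libraries initialise their thread pools):

import os
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import torch
torch.set_num_threads(1)   # also cap the intra-op thread pool explicitly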
st102784
Hey, I am working on a YOLO implementation that has similar parts to the implementation by ayooshkathuria. In one of those places he sigmoids a number of elements in a tensor by doing it 3 times for 3 different indexes. I did the same operation with advanced indexing. However, I noticed that the results were different from his, so I did some tests. I took the tensor from my implementation (x1) and the tensor from his implementation (x2) just before the step that made them have slightly different values. I saved these tensors for testing and ran the tests in a notebook. [Screenshot of the notebook comparison omitted.] As you can see, the tensors have the exact same values before the sigmoid operation is performed. I could only replicate the results with these tensors. I tried creating x1 with torch.rand and then copying it to x2, but then the results are correct. I also tried using just x1 where x2 = x1.detach(), but those results were also correct. The incorrect results only show up when using the CPU and advanced indexing. The difference is not large enough to affect the results; I am just curious as to why this happens. Am I using advanced indexing incorrectly?
st102785
The 1e-07 difference is probably just floating point error. If you want even more precise values then you should use DoubleTensor instead of FloatTensor.
st102786
Yeah, the error is no problem as it's extremely small; I was a bit curious why it happens only when using advanced indexing, but it doesn't really matter! Another question: let's say I want to sigmoid indices [0, 1, 4] and the slice [6:]. Is there a speed difference between doing it all in one operation like this:

x = ...  # vector
indices = [0, 1, 4] + list(range(6, x.size(0)))
x[indices] = F.sigmoid(x[indices])

or like this:

x = ...  # vector
indices = [0, 1, 4]
x[indices] = F.sigmoid(x[indices])
x[6:] = F.sigmoid(x[6:])

Or is the operation on advanced indexing the same as with slicing, i.e. is

x[:4] = F.sigmoid(x[:4])

the same as

x[[0, 1, 2, 3]] = F.sigmoid(x[[0, 1, 2, 3]])

I know these produce the same result, but I am wondering if they are performed differently "under the hood". If slices are more effective because the data might be contiguous, is a list of indexes such as [0, 1, 2, 3] viewed as a slice? I know it's quite a weird question and I could not find an answer for it. I don't think it makes any noticeable difference for my implementation; this is more out of curiosity.
st102787
I'm trying to build a regression network that has 16 outputs, with one of the 16 outputs weighted 3 times as high (or X times as high in the general case) as the other 15 outputs for loss purposes. I have built a network that works for the 16 outputs when they are all equally weighted, but how would I go about up-weighting one of the outputs above the others? I feel like there should be a simple way of doing this that I'm not thinking of. Thanks for the help and ideas in advance! So basically I need to create a custom loss function that takes the MSE of (predictions, targets) where the loss with respect to target_1 is X times the other targets. Is there a way to alter the top-level MSELoss function to do this, like this?

class MSELoss(_Loss):
    def __init__(self, size_average=True, reduce=True):
        super(MSELoss, self).__init__(size_average, reduce)

    def forward(self, input, target):
        _assert_no_grad(target)
        # Weight the first item in the input list and target list, or accept a variable to choose which item
        return F.mse_loss(input, target, size_average=self.size_average, reduce=self.reduce)
st102788
Would this work?

batch_size = 2
model = nn.Linear(20, 16)
x = torch.randn(batch_size, 20)
y = torch.randn(batch_size, 16)

criterion = nn.MSELoss(reduce=False)
weight = torch.ones(16)
weight[1] = 3

output = model(x)
loss = criterion(output, y)
loss = loss * weight
loss.mean().backward()
st102789
Maybe ! let me see if that works in my code. Thanks for the reply. I was hoping it was something nice and simple like a vector of weights
st102790
I figured it out. Here is the working code for how to do this in the fast.ai library, which is what I use on top of PyTorch. For reference, m.crit is set by default in fast.ai for regression to F.mse_loss.

def weighted_mse_loss(input, target):
    # alpha of 0.5 means half the weight goes to the first output, the remaining half is split by the remaining 15
    weights = Variable(torch.Tensor([0.5] + [0.5 / 15] * 15)).cuda()
    pct_var = (input - target) ** 2
    out = pct_var * weights.expand_as(target)
    loss = out.mean()
    return loss
st102791
Hi all, my question is: is it necessary to store the gradient/feature maps of the frozen (requires_grad = False) non-linear intermediate layers of a conv neural network? My question comes from pre-training a network, based on the following observations. If I fix (freeze) the low-level layers of a network and only update the weights of the higher-level layers, PyTorch frees some memory (there is no need to save the feature and gradient maps for the frozen layers). So it saves some memory by fixing the low-level layers. If I only fix the intermediate layers (neither high-level nor low-level layers), the memory usage is the same as when I update all weights (including low, intermediate and high-level layers). So my question is: would it be possible for PyTorch to free the gradient/feature maps of these frozen intermediate layers to save some memory? My initial thought is that if all the frozen intermediate layers were linear operations (though this is often not the case in a conv neural net), we wouldn't need to save the gradient/feature maps, since the whole intermediate block basically performs only a linear operation. But if there are some non-linear operations (conv, relu, etc.) in the frozen layers, is it still necessary to store the gradient/feature maps of these non-linear frozen layers? Best, Zhuotun
st102792
Solved by albanD in post #2.
st102793
Hi, to be able to compute gradients for the previous layers, a layer that is frozen still needs to compute the gradient with respect to its input. Each layer should be implemented such that it only saves what it will need to compute the backward pass for the inputs that require gradients.
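As a side note, a common pattern for the freezing itself (a sketch with a hypothetical model.features sub-module; only the unfrozen parameters are handed to the optimizer):

import torch

for p in model.features.parameters():   # hypothetical frozen sub-module
    p.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=0.01)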
st102794
Hi there, I'm trying to compile the PyTorch framework on a server that has two versions of cmake. By default, setuptools uses version 2.8.*, but you support only cmake 3.0 or higher. On this machine there is a cmake3 executable, but I don't know how to switch to it before starting the compilation. Do you have any ideas? Thank you in advance!
st102795
Could you please answer in more detail? I have cmake 3.12 in my home path, like /home/somebody/install_cmak/bin/cmake_3.12, while cmake 2.8 is provided by the system. How can I use a specific cmake path in python setup.py?

CMAKE_ROOT=/home/somebody/install_cmak/ CMAKE_VERSION=cmake3 python setup.py install

is not right, I have tested it.
st102796
Great! It's working:

export PATH=$PATH:/home/somebody/install_cmak/bin/

For Linux CentOS 7, PATH is searched from front to end, so the code below is the right one:

export PATH=/home/somebody/install_cmak/bin/:$PATH

I was inspired by @Deepali, thank you very much.
st102797
When I try to measure only CPU time consumption on MNIST, it fails with the following error on PyTorch 0.4.0. Without measurement it works fine. How can I avoid this issue?

$ python main.py --no-cuda  (it works fine)
$ python -m torch.utils.bottleneck main.py --no-cuda  (it fails with the following error)

bottleneck is a tool that can be used as an initial step for debugging bottlenecks in your program. It summarizes runs of your script with the Python profiler and PyTorch's autograd profiler. Because your script will be profiled, please ensure that it exits in a finite amount of time. For more complicated uses of the profilers, please see https://docs.python.org/3/library/profile.html and http://pytorch.org/docs/master/autograd.html#profiler for more information.

Running environment analysis…
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCTensorRandom.cu line=25 error=2 : out of memory
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/__main__.py", line 280, in <module>
    main()
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/__main__.py", line 259, in main
    torch.cuda.init()
  File "/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 143, in init
    _lazy_init()
  File "/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 161, in _lazy_init
    torch._C._cuda_init()
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/THCTensorRandom.cu:25

References:
MNIST source code: https://github.com/pytorch/examples/blob/master/mnist/main.py
Bottleneck description: https://pytorch.org/docs/stable/bottleneck.html
Related issue (source code is not attached): RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/THCTensorRandom.cu:25
st102798
This case occurs when CUDA memory cannot be allocated but a CUDA device exists. Of course, once we release the CUDA memory, it can run. But the problem occurs even when we plan to profile the CPU only (and do not plan to take a CUDA profile).
st102799
I guess it's due to the CUDA checks in bottleneck.py. torch.cuda.init() seems to be called if CUDA is available. Maybe an argument to disable CUDA profiling entirely would help. As far as I know, @richard worked on this feature. Maybe he can give his opinion on this topic, i.e. whether disabling CUDA would be a good idea.