st181900
I have a SoftmaxTree implementation in a C++ extension that I cannot convert to ONNX. I can change the implementation to use recursive JIT script, but before doing that I wanted to check whether it could then be exported to ONNX. I have tried it: a simple @torch.jit.script function with an if-statement can be exported and results in an If node in the graph, but I am not sure about anything more complicated.
st181901
It depends on your code: the TorchScript compiler always produces the same type of graph, but there are various limitations on what can be exported to ONNX. Looking at the link you posted, TorchScript currently doesn't support custom autograd Functions, but this is something we are working on supporting.
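For reference, a minimal sketch of the kind of control flow that does export (the file name and shapes are illustrative; example_outputs was required when exporting a ScriptModule to ONNX in releases of this era):

import torch

class Choose(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        # data-dependent branch: shows up as prim::If in the TorchScript
        # graph and as an If node in the exported ONNX model
        if bool(x.sum() > 0):
            return x * 2
        else:
            return x - 1

m = Choose()
x = torch.randn(3)
torch.onnx.export(m, x, "choose.onnx", example_outputs=m(x))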
st181902
Is it possible to share parameters of multiple JIT traces? I have a module that has two methods that I’d like to call from C++. Both methods touch the same hierarchy of submodules of the module, but in different ways (e.g. different order). I also need to optimize the parameters in C++, so the parameters will change gradually and both methods need to see the latest values. My understanding is that if I trace each method (in 1.1), I will obtain two ScriptModule objects, each having its own copy of the parameters. It is not clear to me if it is possible to share the parameters somehow. Looking at the docs of 1.2 (1.2.0a0+84c2c89), it seems one can trace multiple methods of one module. Will the parameters be shared correctly? Finally, since the submodules are hierarchical, will parameters() on the module return all parameters touched by the traces?
st181903
"Will the parameters be shared correctly?" Yes, parameters should be shared across a module's methods. In this test, for example, conv.weight is shared by both forward and weighted_kernel_sum. "Finally, since the submodules are hierarchical, will parameters() on the module return all parameters touched by the traces?" Yes, that should work as well, unless I've misunderstood the question.
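For example, a minimal sketch of the 1.2-style API (the module, method names, and shapes here are illustrative):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Conv2d(1, 1, 3)

    def forward(self, x):
        return self.conv(x)

    def weighted_kernel_sum(self, weight):
        return weight * self.conv.weight

n = Net()
inputs = {'forward': torch.rand(1, 1, 8, 8),
          'weighted_kernel_sum': torch.rand(1, 1, 3, 3)}
# trace both entry points of the same module; they share one
# copy of conv.weight rather than getting independent clones
m = torch.jit.trace_module(n, inputs)
assert len(list(m.parameters())) == len(list(n.parameters()))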
st181904
At this stage of my project, I need to compile my code to binaries that work under Windows and/or Ubuntu. I know that JIT is a great milestone in PyTorch, but I am not sure whether it is the correct approach for my objectives, not to mention the complexity and effort needed to use JIT or the PyTorch v1.0 C++ frontend. That said, there are some tools to convert Python to binaries, for example PyInstaller, which are not compatible with PyTorch. Are there any other alternatives for compiling PyTorch code? Thanks in advance
st181905
What difficulties are you seeing with TorchScript (aka JIT)? We are trying to make it as easy to use as possible for models defined in Python with PyTorch, so if you're having issues please let us know. It appears that PyInstaller includes a Python runtime inside the binary, whereas TorchScript and the C++ frontend both have no dependency on Python and can be run in multithreaded environments with no GIL. As for TorchScript vs. the C++ frontend, that's a more personal choice: they both rely on libtorch, and the difference mostly comes down to how you want to define your model (in Python + TorchScript vs. in C++), so it's hard to say without more information on your use case.
st181906
Thanks for your answer, which saved me a lot of scattered effort. To be honest, I have not tried JIT yet, but I needed to know which path to take before going forward with it. From your answer I gather that, since I am using PyTorch, it's better to use JIT.
st181907
@driazati, but if one needs to (1) train in C++, then (2) serialize the model, and then (3) load it in C++ in production, does one have to go through TorchScript in (2)? (OK, I know there's also ONNX.)
st181908
soldierofhell: "one needs to (1) train C++ and then (2) serialize model and then (3) load in C++ on the production, do you have to…" The C++ frontend is a high-level API that provides an experience similar to PyTorch's Python API; as such, it has its own serialization mechanisms, so you can iterate on your model all in C++ (this example shows how). TorchScript lets you go from PyTorch models coded in Python to something that you can load in C++, so it's not necessary for that case.
st181909
Thank you @driazati, but it's not clear what torch::save does. From the API doc it seems like full serialization of a Module or Tensor, but the comment in the example says "checkpoint" (state_dict)? What is the method of serialization, is it something similar to TensorFlow's ProtoBuf? Are there any limitations?
st181910
We have many serialization formats and they're all different and easy to mix up. We're working on ways to fix the UX here, but for now:
torch.save() in eager-mode Python lets you save models so they can be loaded in Python.
torch::save() in the C++ API lets you save models so they can be loaded in C++ with torch::load().
torch.jit.save() in eager-mode Python lets you save models that have been compiled with TorchScript so they can be loaded in Python with torch.jit.load() and in C++ with torch::jit::load().
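A quick sketch of the Python-side pairings (file names are illustrative):

import torch
import torch.nn as nn

model = nn.Linear(2, 2)

# eager-mode round trip: Python only, commonly via the state_dict
torch.save(model.state_dict(), "eager.pth")
model.load_state_dict(torch.load("eager.pth"))

# TorchScript round trip: loadable from Python (torch.jit.load)
# and from C++ (torch::jit::load)
scripted = torch.jit.trace(model, torch.rand(1, 2))
torch.jit.save(scripted, "scripted.pt")
reloaded = torch.jit.load("scripted.pt")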
st181911
@driazati, thanks, but from your answer it's still unclear what torch::save() saves: only the "weights" or the whole model (Module)? At the very least, the doc should be more precise. And to be honest this JIT thing is also a little confusing, because it is kind of another world. E.g. in TensorFlow, if you load a .pb you end up with a normal tf.Graph, not something like tf.pb.Graph.
st181912
Hi, I was adding some code to accept list inputs and got this error when running the script: RuntimeError: Tracing a list of arbitrary type is currently not supported! This leads me to https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/tracer.h Apparently there's also a similar error for dicts of arbitrary type. Why does this error come up in the first place, if I may ask?
st181913
Tracing can only record what happens to tensors, so a trace that involves lists of arbitrary type is highly likely to produce incorrect results. Therefore we throw rather than silently change your output.
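If you need list-typed inputs, scripting is the usual way around this, since it preserves list semantics instead of recording tensor operations. A minimal sketch (assuming a release recent enough for Python 3 annotations in TorchScript):

import torch
from typing import List

@torch.jit.script
def sum_all(xs: List[torch.Tensor]) -> torch.Tensor:
    # the loop over the list survives compilation; a trace could not
    # represent a variable-length list of tensors
    out = torch.zeros(1)
    for x in xs:
        out = out + x.sum()
    return out

print(sum_all([torch.rand(2), torch.rand(3)]))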
st181914
When I trace a model with jit.trace and the optimize=True flag, I get a tracer warning saying that the outputs don’t match and the error is also significant. But when I trace with the flag set to False, everything is fine. Any idea why this might be happening?
st181915
When I want to script a method which calls torch.nn.Dropout, I get an error message:

RuntimeError: unknown builtin op:
    """
    Chunking.
    Parameters
    ----------
    z_in : ``torch.LongTensor``, required.
        The output of the character-level lstms.
    """
    z_in = self.drop_(z_in)
           ~~~~~~~~~ <--- HERE
    out = self.chunk_layer(z_in).squeeze(1)
    return out

The code is:

self.drop = torch.nn.Dropout(p=droprate)

@torch.jit.script
def chunking(self, z_in):
    """
    Chunking.
    Parameters
    ----------
    z_in : ``torch.LongTensor``, required.
        The output of the character-level lstms.
    """
    z_in = self.drop(z_in)
    out = self.chunk_layer(z_in).squeeze(1)
    return out
st181916
Yes, dropout should be supported:

class MyModel(torch.jit.ScriptModule):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(10, 10)
        self.drop = nn.Dropout()

    @torch.jit.script_method
    def forward(self, x):
        x = self.fc1(x)
        x = self.drop(x)
        return x

model = MyModel()
x = torch.randn(1, 10)
output = model(x)
traced_model = torch.jit.trace(model, x)
traced_model(x)

Could you post a code snippet to reproduce this issue?
st181917
Thank you for your reply. I have solved it: I found it should be @torch.jit.script_method rather than @torch.jit.script. Thank you again!
st181918
I have another question: when I use the Sequential container, I get a confusing error message about a forward function (screenshot of the error attached). But that forward function doesn't appear in my code. My PyTorch version is 1.0.0.
st181919
But the point is that I didn't write this forward function. It seems like the function is in the Sequential container.
st181920
How are you scripting the module? As you can see in the linked example, the nn.Sequential module is wrapped inside a torch.jit.ScriptModule.
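For reference, a minimal sketch of that pattern on the 1.0-era API (layer sizes are illustrative; declaring the Sequential in __constants__ lets the compiler unroll the call inside a script_method):

import torch
import torch.nn as nn

class Wrap(torch.jit.ScriptModule):
    __constants__ = ['seq']

    def __init__(self):
        super(Wrap, self).__init__()
        self.seq = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))

    @torch.jit.script_method
    def forward(self, x):
        return self.seq(x)

print(Wrap()(torch.rand(1, 4)))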
st181921
I am using torch.jit.ScriptModule too. Mine is like this:

class NER(torch.jit.ScriptModule):
    __constants__ = ['rnn_outdim', 'one_direction_dim', 'add_proj']

    def __init__(self, rnn, w_num: int, w_dim: int, c_num: int, c_dim: int, y_dim: int, y_num: int, droprate: float):
        super(NER, self).__init__()
        ...
        if self.add_proj:
            ...
            self.chunk_layer = nn.Sequential(self.to_chunk, self.drop, self.to_chunk_proj, self.drop, self.chunk_weight)
            self.type_layer = nn.Sequential(self.to_type, self.drop, self.to_type_proj, self.drop, self.type_weight)
        else:
            ...
            self.chunk_layer = nn.Sequential(self.to_chunk, self.drop, self.chunk_weight)
            self.type_layer = nn.Sequential(self.to_type, self.drop, self.type_weight)
st181922
I want to convert the model but I'm running into some problems:

RuntimeError: could not export python function call <python_value>. Remove calls to python functions before export.:
    mask : ``torch.ByteTensor`` , required.
        The mask for character-level input.
    """
    w_emb = self.word_embed(w_in)
    c_emb = self.char_embed(c_in)
    emb = self.drop(torch.cat([w_emb, c_emb], 2))
    out = self.rnn(emb)
          ~~~~~~~~ <--- HERE
    mask = mask.unsqueeze(2).expand_as(out)
    out = out.masked_select(mask).view(-1, self.rnn_outdim)
    return out

Here is the code:

def __init__(self, rnn, w_num: int, w_dim: int, c_num: int, c_dim: int, y_dim: int, y_num: int, droprate: float):
    super(NER, self).__init__()
    self.rnn = rnn
    self.rnn_outdim = self.rnn.output_dim
    self.one_direction_dim = self.rnn_outdim // 2
    self.word_embed = nn.Embedding(w_num, w_dim)
    self.char_embed = nn.Embedding(c_num, c_dim)
    self.drop = nn.Dropout(p=droprate)
    self.add_proj = y_dim > 0
    self.to_chunk = highway(self.rnn_outdim)
    self.to_type = highway(self.rnn_outdim)
    if self.add_proj:
        self.to_chunk_proj = nn.Linear(self.rnn_outdim, y_dim)
        self.to_type_proj = nn.Linear(self.rnn_outdim, y_dim)
        self.chunk_weight = nn.Linear(y_dim, 1)
        self.type_weight = nn.Linear(y_dim, y_num)
        self.chunk_layer = nn.Sequential(self.to_chunk, self.drop, self.to_chunk_proj, self.drop, self.chunk_weight)
        self.type_layer = nn.Sequential(self.to_type, self.drop, self.to_type_proj, self.drop, self.type_weight)
    else:
        self.chunk_weight = nn.Linear(self.rnn_outdim, 1)
        self.type_weight = nn.Linear(self.rnn_outdim, y_num)
        self.chunk_layer = nn.Sequential(self.to_chunk, self.drop, self.chunk_weight)
        self.type_layer = nn.Sequential(self.to_type, self.drop, self.type_weight)

@torch.jit.script_method
def forward(self, w_in, c_in, mask):
    """
    Sequence labeling model.
    Parameters
    ----------
    w_in : ``torch.LongTensor``, required.
        The RNN unit.
    c_in : ``torch.LongTensor`` , required.
        The number of characters.
    mask : ``torch.ByteTensor`` , required.
        The mask for character-level input.
    """
    w_emb = self.word_embed(w_in)
    c_emb = self.char_embed(c_in)
    emb = self.drop(torch.cat([w_emb, c_emb], 2))
    out = self.rnn(emb)
    mask = mask.unsqueeze(2).expand_as(out)
    out = out.masked_select(mask).view(-1, self.rnn_outdim)
    return out

rnn_map = {'Basic': BasicRNN}
rnn_layer = rnn_map[args.rnn_layer](args.layer_num, args.rnn_unit, args.word_dim + args.char_dim, args.hid_dim, args.droprate, args.batch_norm)
ner_model = NER(rnn_layer, len(w_map), args.word_dim, len(c_map), args.char_dim, args.label_dim, len(tl_map), args.droprate)
ner_model.load_state_dict(model)
ner_model.to(device)
ner_model.eval()

def __init__(self, unit, input_dim, hid_dim, droprate, batch_norm):
    super(BasicUnit, self).__init__()
    self.unit_type = unit
    rnnunit_map = {'rnn': nn.RNN, 'lstm': nn.LSTM, 'gru': nn.GRU}
    # I trace the LSTM here
    self.layer = torch.jit.trace(nn.LSTM(input_dim, hid_dim // 2, 1, batch_first=True, bidirectional=True), torch.randn(500, 1, input_dim))
    self.droprate = droprate
    self.batch_norm = batch_norm
    if self.batch_norm:
        self.bn = nn.BatchNorm1d(hid_dim)
    self.output_dim = hid_dim
    self.init_hidden()

How can I convert it? My PyTorch version is 1.0.0.
st181923
Hi, the error is saying that if you want to export (serialize) your model to disk, you need to convert all of your models to TorchScript (either via tracing or scripting). Right now your model has only been partially converted to TorchScript. For how to convert, here are our docs: https://pytorch.org/docs/stable/jit.html I also recommend trying our new API if you are on our nightly builds.
st181924
Thank you for your reply. I am a freshman and I have read the doc, but I think I have done the conversion. So, could you please tell me what's wrong with my conversion? Thank you again!
st181925
Another question:

RuntimeError: cannot call a value:
    Returns
    ----------
    output: ``torch.FloatTensor``.
        The output of RNNs.
    """
    out, _ = self.layer(x)
    if self.batch_norm:
        output_size = out.size()
        out = self.bn(out.view(-1, self.output_dim)).view(output_size)
              ~~~~~~~ <--- HERE
    if self.droprate > 0:
        out = F.dropout(out, p=self.droprate, training=self.training)
    return out

The code is:

@torch.jit.script_method
def forward(self, x):
    """
    Calculate the output.
    Parameters
    ----------
    x : ``torch.LongTensor``, required.
        the input tensor, of shape (seq_len, batch_size, input_dim).
    Returns
    ----------
    output: ``torch.FloatTensor``.
        The output of RNNs.
    """
    out, _ = self.layer(x)
    if self.batch_norm:
        output_size = out.size()
        out = self.bn(out.view(-1, self.output_dim)).view(output_size)
    if self.droprate > 0:
        out = F.dropout(out, p=self.droprate, training=self.training)
    return out

The init function is the fourth one in the first post.
st181926
One more question, I get an error message:

RuntimeError: could not export python function call <python_value>. Remove calls to python functions before export.:
    def forward(self, input):
        for m in self:
            input = m(input)
                    ~ <--- HERE
        return input

But I can't find this forward function in my code. Why does this happen? Please help me!
st181927
class LinearInLinear(nn.Module):
    def __init__(self):
        super(LinearInLinear, self).__init__()
        self.l = nn.Linear(3, 5)
        self.l1 = nn.Linear(5, 5)

    def forward(self, x):
        return self.l1(self.l(x + x))

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.l1 = LinearInLinear()
        self.l = LinearInLinear()

    def forward(self, input):
        x1 = self.l1(input)
        x2 = self.l(input)
        return x1 + x2 + x2

if __name__ == "__main__":
    model = MyModel()
    dummy_input = (torch.rand(5, 3),)
    # with torch.onnx.set_training(model, False):
    trace = torch.jit.trace(model, dummy_input)
    # _optimize_trace(trace, torch._C._onnx.OperatorExportTypes.ONNX)
    trace.save("b.pt")
    print(trace.graph)
    for node in trace.graph.nodes():
        if node.kind() == "prim::Constant":
            continue
        print(list(node.outputs())[0].type().scalarType())
    print(type(trace))
    k = torch.jit.load("b.pt")
    print(type(k))
    print(k.graph)
    for node in k.graph.nodes():
        if node.kind() == "prim::Constant":
            continue
        print(node)
        output = list(node.outputs())[0]
        print(output.type().scalarType())

The last statement produces the error below:

RuntimeError: r ASSERT FAILED at /pytorch/aten/src/ATen/core/jit_type.h:142, please report a bug to PyTorch.
st181928
I am trying to export the Transformer model to TorchScript. When creating a module list from MultiheadAttention layers, the following error is generated (accompanying code is attached below):

File "/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 505, in build_Compare
    raise NotSupportedError(err_range, "unsupported comparison operator: " + op.__name__)
torch.jit.frontend.NotSupportedError: unsupported comparison operator: In
    kv_same = key.data_ptr() == value.data_ptr()
    tgt_len, bsz, embed_dim = query.size()
    assert embed_dim == self.embed_dim
    assert list(query.size()) == [tgt_len, bsz, embed_dim]
    assert key.size() == value.size()
    if incremental_state is not None:
        saved_state = self._get_input_buffer(incremental_state)
        if 'prev_key' in saved_state:
           ~~~~~~~~~~~~~ <--- HERE

CODE:

__constants__ = ['attentions', 'causal', 'layers_module']

def __init__(self, <parameters>):
    att_modules = []
    for _ in range(num_layers):
        att_modules.append(nn.MultiheadAttention(embed_dim, num_heads, dropout=dropout))
    self.attentions = nn.ModuleList(att_modules)

If I create an empty ModuleList and then append the MultiheadAttention layers, there is no error outside of script mode, but since a ModuleList has to be constant in script mode, that route is blocked as well. The error is generated in the constructor itself (as confirmed while debugging), not in the forward method.
st181929
Support for the "in" operator was recently added; could you try using pytorch-nightly and see if that fixes this issue?
st181930
@driazati Yup, that issue is resolved, but it now fails at this stage:

if hasattr(self, '_qkv_same_embed_dim') and self._qkv_same_embed_dim is False:
   ~~~~~~~ <--- HERE
    return F.multi_head_attention_forward(
        query, key, value, self.embed_dim, self.num_heads,
        self.in_proj_weight, self.in_proj_bias,
        self.bias_k, self.bias_v, self.add_zero_attn,
        self.dropout, self.out_proj.weight, self.out_proj.bias,
        training=self.training,
        key_padding_mask=key_padding_mask, need_weights=need_weights,
        attn_mask=attn_mask, use_separate_proj_weight=True,
        q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,

Is it because hasattr or "and" is not supported yet?
st181931
Hi, I'm a freshman to PyTorch, and recently I ran into some trouble when trying to convert a Python model to C++. HELPPPPPP!! Environment: PyTorch 1.1. When I try to trace some custom layers implemented in C++, I get an error like this:

could not export python function call <python_value>. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__.:
    @torch.jit.script_method
    def forward(self, x):
        p1_conv1 = self.p1_conv1(x)
        pool1 = self.pool1(p1_conv1)  # trouble is here
                ~~~~~~~~~~ <--- HERE

The code is here:

class corner_pool(torch.jit.ScriptModule):
    def _init_layers(self, dim, pool1, pool2):
        self.p1_conv1 = torch.jit.trace(convolution(3, dim, 128), torch.rand(1, 256, 64, 64))
        self.pool1 = TopPool()  # custom layer implemented in C++

    @torch.jit.script_method
    def forward(self, x):
        p1_conv1 = self.p1_conv1(x)
        pool1 = self.pool1(p1_conv1)  # trouble is here

The TopPool code is:

class TopPoolFunction(Function):
    @staticmethod
    def forward(ctx, input):
        output = TopPool.forward(input)[0]
        ctx.save_for_backward(input)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input = ctx.saved_variables[0]
        output = TopPool.backward(input, grad_output)[0]
        return output

class TopPool(nn.Module):
    def forward(self, x):
        result = TopPoolFunction.apply(x)
        return result
st181932
patrickpjiang: "could not export python function call <python_value>. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__." The error message says that you have a function for which you did not provide a script annotation. In your case you will need to script TopPoolFunction and TopPool so that the model can be exported.
st181933
Thanks for the reply. Yes, you are right, but when I apply scripting to TopPoolFunction, I get an error:

attribute lookup is not defined on python value of type 'FunctionMeta':
    @torch.jit.script_method
    def forward(self, x):
        result = TopPoolFunction.apply(x)
                 ~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        return result

Here is how I script TopPool:

class TopPool(torch.jit.ScriptModule):
    # here is what I added, is there any problem?
    @torch.jit.script_method
    def forward(self, x):
        result = TopPoolFunction.apply(x)
        return result

For TopPoolFunction, I actually have no idea how to script it…
st181934
It seems TopPoolFunction is an autograd Function; we don't support scripting autograd Functions, so it is not scriptable. If you want to script it, you will need to write your TopPool function in Python or TorchScript in order for TorchScript to compile it (TorchScript can only compile Python, not C++).
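For reference, a sketch of the pooling in pure TorchScript, assuming TopPool means out[:, :, i, :] = max over rows i..H-1 (a running max propagated from the bottom of the feature map upward, as in CornerNet). This trades the fused C++ kernel for something the compiler can handle, and autograd then differentiates it automatically:

import torch

@torch.jit.script
def top_pool(x: torch.Tensor) -> torch.Tensor:
    # x: (N, C, H, W)
    out = x.clone()
    H = x.size(2)
    for j in range(H - 1):
        i = H - 2 - j
        # each row takes the max of itself and the row below it
        out[:, :, i, :] = torch.max(out[:, :, i, :], out[:, :, i + 1, :])
    return out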
st181935
I'm currently writing a fairly complex library that exposes various Modules, and I have the following requirement in multiple places that I have a hard time satisfying. I need to be able to define a constant tensor in a Module that is initialized with some user-defined value and shape. I want the tensor to move to the correct target hardware when doing model.to(some_device), but I don't want the value to be part of the state_dict when the model is saved/loaded, so that if the user passes a different value at init time, that value is taken and not overwritten when a state_dict is loaded. There are possible ways around this, e.g. define a buffer, manually exclude it from the state_dict when the model is saved, and then reload with strict=False. My issue with this is that I would basically pass the problem on to the library's user, and having a requirement on strict=False would also expose me to all sorts of other weaknesses. Instead, I want to be able to manage the problem from within the library, i.e. from the Module definition. My only idea at the moment is to use a buffer and override the load_state APIs of the Module, but I have never seen it done anywhere. Any sort of pointer would be highly appreciated. The implementation should work with the JIT compiler in a DataParallel context. Alessandro
st181936
To answer myself, I partially solved the problem by splitting the value into a buffer with value 0 and a Python constant with the user-defined value, and then summing them in the forward pass. Hopefully the JIT is smart enough to see that they are both constant and propagates them when required, but I haven't verified it yet. There is still a requirement on strict=False though, since loading a pretrained model to which any of these layers have been added will complain about the missing buffer. It would be nice to see a proper API for this use case, especially given the fact that constant values could be easily optimized by the JIT. Alessandro
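For reference, newer PyTorch releases added exactly this knob: register_buffer(..., persistent=False) keeps the tensor moving with .to()/DataParallel but leaves it out of the state_dict, so no strict=False is needed. A minimal sketch, assuming a release with that flag:

import torch
import torch.nn as nn

class WithConst(nn.Module):
    def __init__(self, value):
        super(WithConst, self).__init__()
        # moves with .to(device) but is excluded from the state_dict,
        # so the user-supplied init value always wins on load
        self.register_buffer('const_val', torch.tensor(value), persistent=False)

    def forward(self, x):
        return x + self.const_val

m = WithConst(2.0)
assert 'const_val' not in m.state_dict()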
st181937
I've read the README about the JIT at https://github.com/WyldeCat/pytorch/blob/master/torch/csrc/jit/README.md There is a great explanation of how the JIT handles a module's forward method, but I can't find any explanation of how the backward pass works. So I want to ask whether the JIT is involved in the backward pass or not, and whether there is any way to print the backward pass's graph like a JIT module's forward graph. Thanks in advance.
st181938
My SiamMask network is used for object tracking. There are two input arguments to my network: one is the target object and the other is the search domain. I want to use libtorch, but I don't know how to use the jit.trace() function in PyTorch to save it. I referred to some blogs and docs but could not find a method for passing two input args to jit.trace(). Has anyone run into a similar situation?
st181939
torch.jit.trace() takes in the function to trace and a tuple of example inputs, so you can pass as many inputs as your model needs. See the Tracing docs for more details. Does this fix your issue?
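For example, a minimal two-input sketch (the module body and shapes are illustrative, not the real SiamMask computation):

import torch

class TwoInput(torch.nn.Module):
    def forward(self, target, search):
        # stand-in for the real two-branch tracker computation
        return target * search.mean()

example = (torch.rand(1, 3, 127, 127), torch.rand(1, 3, 255, 255))
traced = torch.jit.trace(TwoInput(), example)
traced.save("two_input.pt")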
st181940
Hi there! I am currently trying to make JIT optimizations work on the source code of a Tree-LSTM model. The Tree class in the model is a crucial part of it, so I need to make it a custom class type so it can be used by the core methods of the model. That's when I hit a problem:

import torch
from typing import List

@torch.jit.script
class Tree(object):
    def __init__(self):
        self.parent = None
        self.num_children = 0
        self.children = torch.jit.annotate(List[Tree], [])
    # further definitions omitted

When I try to run the code, here is the error:

RuntimeError: Unknown type name Tree:
    def __init__(self):
        self.parent = None
        self.num_children = 0
        self.children = torch.jit.annotate(List[Tree], [])
                                                ~~~~ <--- HERE

So the question is: Tree is basically a recursive structure; the children of a tree node are a list of tree nodes. Therefore, for the children variable of a Tree class, I need to define an empty list of the Tree class type. But since the Tree class type definition is still only halfway done, the interpreter cannot recognize the Tree type. I am wondering whether there is any way to solve this problem, or whether there is actually no support for custom classes like the Tree above in the current version of PyTorch and I should try other ways. It would be so nice if someone could give me a hand. Thanks a lot!
st181941
I believe this is fixed in master (your class compiles for me). Could you try your code on our nightly release and see if it works?
st181942
The code successfully runs after I switched to the nightly release, cool! Thank you! But the bad news is that new errors arise when the interpreter tries to compile other methods. For example:

@torch.jit.script
class Tree(object):
    def __init__(self):
        self.parent = torch.jit.annotate(Optional[Tree], None)

    def add_child(self, child):
        # type: (Tree) -> None
        child.parent = torch.jit.annotate(Optional[Tree], self)

When it gets to the last line, an error arises:

RuntimeError: expected an expression of type Optional[__torch__.Tree] but found __torch__.Tree:
at /Users/fere/repos/treelstm.pytorch/treelstm/tree.py:25:43
    def add_child(self, child):
        # type: (Tree) -> None
        child.parent = torch.jit.annotate(Optional[Tree], self)
                       ~~~~~~~~~~~~~~~~~~~~ <--- HERE

The interpreter regards self as Tree instead of Optional[Tree], which makes sense, but I thought torch.jit.annotate should be able to do some casting work. The reason I think so is that in the code above, the interpreter takes torch.jit.annotate(Optional[Tree], None) as an Optional[Tree]-typed value instead of None. I've gone through the TorchScript documentation and done some searching, but the only method I've found that is able to specify types for a value, other than function parameters and return values, is torch.jit.annotate. Also, I haven't found any way to cast a T-typed value to an Optional[T] value. So I am curious: is there anything I missed?

Another problem is below:

@torch.jit.script
class Tree(object):
    def __init__(self):
        self.num_children = torch.jit.annotate(int, 0)

    def add_child(self):
        self.num_children += 1

Error:

RuntimeError: left-hand side of augmented assignment to module parameters/buffers can only be tensor types:
at /Users/fere/repos/treelstm.pytorch/treelstm/tree.py:15:9
    def add_child(self):
        self.num_children += 1
        ~~~~~~~~ <--- HERE

Does the error message mean that only Tensor-typed values can be updated in a custom TorchScript class? Thankful and grateful for your help!
st181943
First of all, thank you for the feedback! Support for classes is in its early days, and reports from intrepid users are super valuable. To summarize, it seems that you are running into two problems:
1. torch.jit.annotate() is not doing implicit type promotion from T to Optional[T] for class types. This is a bug on our side, and we'll look into fixing it. Can you file a GitHub task so we can track it there?
2. Augmented assignment doesn't work on non-tensors. This is a known issue, and we have a fix coming this week for it. In the meantime, just re-assigning is an easy workaround:

def add_child(self):
    self.num_children = self.num_children + 1
st181944
Actually, no need to file an issue; https://github.com/pytorch/pytorch/pull/21593 fixes #1.
st181945
Update: we do not actually currently support recursive class definitions. I didn't remember that when I first replied. https://github.com/pytorch/pytorch/pull/21842 improves the error message in this case. We will likely support it soon (before the next release), but for now it will not work.
st181946
Thanks a lot for the update! Guess now I need to wait or look for a temporary workaround.
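One possible interim workaround, purely a sketch and not from the maintainers, is to flatten the tree into parallel lists of integer indices so the class never refers to its own type (assuming a release recent enough for annotated methods in script classes):

import torch
from typing import List

@torch.jit.script
class FlatTree(object):
    def __init__(self):
        # parent[i] is node i's parent index (-1 for the root)
        self.parent = torch.jit.annotate(List[int], [])
        self.children = torch.jit.annotate(List[List[int]], [])

    def add_node(self, parent: int) -> int:
        idx = len(self.parent)
        self.parent.append(parent)
        self.children.append(torch.jit.annotate(List[int], []))
        if parent >= 0:
            self.children[parent].append(idx)
        return idx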
st181947
Hi, following the comments in this question, I was trying to create a class that has attributes of the same class type, as shown below:

import torch
from typing import Dict, List, Optional

@torch.jit.script
class JSONValue(object):
    def __init__(self):
        self.value = torch.jit.annotate(Optional[JSONValue], None)

class TestModule(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, input):
        # type: (JSONValue) -> JSONValue
        return input

m = TestModule()
m.save("test.pt")

But this code results in a segfault with the message Segmentation fault: 11. Any thoughts on what could be wrong? Thanks!
st181949
We do not actually currently support recursive class definitions. I didn't remember that in the previous thread (will comment there as well). https://github.com/pytorch/pytorch/pull/21842 improves the error message in this case. We will likely support it soon (before the next release), but for now it will not work.
st181950
Hi all, I trained a 3D-UNet with 16 base filters, 5 layers deep. Now I am trying to run inference on a 240x240x155 volume on a CPU. I have allocated 128GB of RAM, and it still fails with an error: RuntimeError: $ Torch: not enough memory: you tried to allocate 0GB. Buy new RAM! at /opt/conda/conda-bld/pytorch I do not have more money to buy new RAM. The model should require at most 32GB of RAM for that image. Can you tell me where I may be going wrong? Thanks, Siddhesh
st181951
Can you post the code for the model? Also did this occur when executing a TorchScript function/module or a normal nn.Module?
st181952
I cannot post the model since it is an ongoing work. But I can confirm that I trained this model on a 16GB GPU.
st181953
As far as I understand your issue, the training script takes 16GB at most running on the GPU and more than 128GB on the CPU? If that’s correct, do you see an increasing memory usage during training or does your script run out of memory during the first iteration? Did you change something in your data loading pipeline, e.g. are you loading the complete dataset into RAM?
st181954
Oh I was not even talking about training, it is the cost of inference on a single example.
st181955
Hi all, I was wondering the following two things: What is the use of DataLoader? What is the use of TensorDataset? The scenario is that my current machine has 128GB of RAM. Is there a particular reason why I should use TensorDataset or DataLoader instead of converting my complete dataset into a tensor? Wouldn't that be a considerably faster operation and also cause less I/O, in turn giving me speedups? Am I missing something? Sounds like a good discussion point for PyTorch to me.
st181956
Loading the complete dataset into memory might work for you, but it certainly won't work for a lot of use cases, e.g. dealing with 10,000,000 images. Also, the initial loading of the whole dataset might slow down your iteration speed; i.e., if you are still experimenting with your code and would like to iterate quickly, waiting several minutes for data loading just to see a code error might be annoying. Of course this can be avoided by loading only a subset of the data, but that needs additional code. Other benefits, e.g. shuffling and batching, will also be missing, and you would again have to implement them manually. The idea behind the DataLoader is to load your data using multiprocessing (and pinned memory) to asynchronously push your data batch onto the GPU during training, so that you can basically hide the data loading time. This is of course the optimal use case, and if you are working with a slow HDD, you will most likely notice the data loading time. Anyway, the choice is of course yours, as PyTorch does not depend on Dataset or DataLoader usage to work properly. TensorDataset is a convenient method to wrap already-loaded tensors into a Dataset, e.g. to use a Subset or wrap it in a DataLoader.
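For completeness, the in-memory case you describe is exactly what TensorDataset covers; a minimal sketch (shapes are illustrative):

import torch
from torch.utils.data import TensorDataset, DataLoader

x = torch.randn(1000, 3, 32, 32)
y = torch.randint(0, 10, (1000,))
# batching + shuffling for free over tensors that already live in RAM
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)
for xb, yb in loader:
    pass  # training step goes here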
st181957
Yes, but let's say my training set is 7GB and I iterate over it for 200 epochs; this means I have read 1.4TB from my hard disk or SSD. Instead, if I just load it into RAM at 7~10GB (considering overheads), wouldn't I save myself that hassle? Also, batch size and shuffling can be manipulated manually. My final question then becomes: is this more energy efficient, and will it give me a speedup?
st181958
torch.jit.trace doesn't allow us to trace a module that has shared parameters. Does someone know the reason? I guess it aims to prevent the shared parameters from being concurrently updated (and destroyed). If so, I think parameter sharing is safe when we know the modules that access the parameters are NOT run concurrently. For example, modules that run in sequence never update their parameters concurrently. Could someone tell me whether I can use tracing in that case (by removing the detection of shared parameters)?
st181959
I'm new to PyTorch programming. I have a model in Python whose checkpoint file is saved in .ckpt format. I want to save that model in the .pt format (for use in C++). The model uses two images as input, so I pass two images into the model via jit::trace. I get the following errors; any help is appreciated. Thank you.

The model in Python:

left = cv2.imread(args.left)
right = cv2.imread(args.right)
pairs = {'left': left, 'right': right}
transform = T.Compose([Normalize(mean, std), ToTensor(), Pad(384, 1248)])
pairs = transform(pairs)
left = pairs['left'].to(device).unsqueeze(0)
right = pairs['right'].to(device).unsqueeze(0)
model = PSMNet(args.maxdisp).to(device)
if len(device_ids) > 1:
    model = nn.DataParallel(model, device_ids=device_ids)
state = torch.load(args.model_path)
if len(device_ids) == 1:
    from collections import OrderedDict
    new_state_dict = OrderedDict()
    for k, v in state['state_dict'].items():
        namekey = k[7:]  # remove `module.`
        new_state_dict[namekey] = v
    state['state_dict'] = new_state_dict
model.load_state_dict(state['state_dict'])
print('load model from {}'.format(args.model_path))
print('epoch: {}'.format(state['epoch']))
print('3px-error: {}%'.format(state['error']))
model.eval()
with torch.no_grad():
    _, _, disp = model(left, right)

The trace program that I tried to use to save the model file in .pt format:

leftTest = torch.randn(3, 384, 1248).to(device).unsqueeze(0)
rightTest = torch.randn(3, 384, 1248).to(device).unsqueeze(0)
with torch.no_grad():
    # error line
    _, _, dispTest = torch.jit.trace(model, (leftTest, rightTest))

Where the problem is:

def forward(self, left_img, right_img):
    original_size = [self.D, left_img.size(2), left_img.size(3)]
    left_cost = self.cost_net(left_img)    # [B, 32, 1/4H, 1/4W]
    right_cost = self.cost_net(right_img)  # [B, 32, 1/4H, 1/4W]
    # cost = torch.cat([left_cost, right_cost], dim=1)  # [B, 64, 1/4H, 1/4W]
    # B, C, H, W = cost.size()
    # print('left_cost')
    # print(left_cost[0, 0, :3, :3])
    B, C, H, W = left_cost.size()
    cost_volume = torch.zeros(B, C * 2, self.D // 4, H, W).type_as(left_cost)  # [B, 64, D, 1/4H, 1/4W]
    # for i in range(self.D // 4):
    #     cost_volume[:, :, i, :, i:] = cost[:, :, :, i:]
    for i in range(self.D // 4):
        if i > 0:
            cost_volume[:, :C, i, :, i:] = left_cost[:, :, :, i:]    # use 32
            cost_volume[:, C:, i, :, i:] = right_cost[:, :, :, :-i]  # use 32
        else:  # comes first
            cost_volume[:, :C, i, :, :] = left_cost  # use 32
            cost_volume[:, C:, i, :, :] = right_cost
    disp1, disp2, disp3 = self.stackedhourglass(cost_volume, out_size=original_size)
    return disp1, disp2, disp3

Error log that I get:

/home/ven/.local/lib/python3.5/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=trilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
save diparity map in /home/ven/Downloads/PSMNet/depth.png
shape left ----> torch.Size([1, 3, 384, 1248])
shape right ----> torch.Size([1, 3, 384, 1248])
/home/ven/Downloads/PSMNet/models/PSMnet.py:42: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  cost_volume[:, :C, i, :, :] = left_cost
/home/ven/Downloads/PSMNet/models/PSMnet.py:43: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  cost_volume[:, C:, i, :, :] = right_cost
/home/ven/Downloads/PSMNet/models/PSMnet.py:39: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  cost_volume[:, :C, i, :, i:] = left_cost[:, :, :, i:]
/home/ven/Downloads/PSMNet/models/PSMnet.py:40: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  cost_volume[:, C:, i, :, i:] = right_cost[:, :, :, :-i]
/home/ven/.local/lib/python3.5/site-packages/torch/jit/__init__.py:702: TracerWarning: Output nr 3. of the traced function does not match the corresponding output of the Python function. Detailed error: Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 282, 783] (68.55914306640625 vs. 68.55826568603516) and 42 other locations (0.00%)
  _check_trace([example_inputs], func, executor_options, traced, check_tolerance, _force_outplace)
Traceback (most recent call last):
  File "/home/ven/Downloads/PSMNet/inference.py", line 118, in <module>
    main()
  File "/home/ven/Downloads/PSMNet/inference.py", line 85, in main
    _, _, disp = torch.jit.trace(model, (leftTest, rightTest))
TypeError: 'TopLevelTracedModule' object is not iterable
st181960
Solved, by changing _, _, dispTest = torch.jit.trace(model, (leftTest, rightTest)) to dispTest = torch.jit.trace(model, (leftTest, rightTest)), with B, C, H, W assigned corresponding numerical values.
st181961
When I try to train FasterRCNN in the newly released torchvision 0.3, I run into a PTX JIT compilation failed error.

Traceback (most recent call last):
  File "train_rcnn.py", line 124, in <module>
    loss_dict = model(images, targets)
  File "/opt/anaconda3/envs/nuscenes/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/danielkang92/nuscenes/code/FasterRCNN/model.py", line 19, in forward
    return self.rcnn(images, targets)
  File "/opt/anaconda3/envs/nuscenes/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/anaconda3/envs/nuscenes/lib/python3.7/site-packages/torchvision/models/detection/generalized_rcnn.py", line 48, in forward
    features = self.backbone(images.tensors)
  File "/opt/anaconda3/envs/nuscenes/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/anaconda3/envs/nuscenes/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/opt/anaconda3/envs/nuscenes/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/anaconda3/envs/nuscenes/lib/python3.7/site-packages/torchvision/models/_utils.py", line 58, in forward
    x = module(x)
  File "/opt/anaconda3/envs/nuscenes/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
RuntimeError: /pytorch/torch/csrc/jit/fuser/cuda/fused_kernel.cpp:202: a PTX JIT compilation failed

My environment is as follows:

Collecting environment information...
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Debian GNU/Linux 9.8 (stretch)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB
Nvidia driver version: 410.72
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] intel-numpy==1.15.1
[pip3] numpy==1.16.3
[pip3] torch==1.1.0
[pip3] torchvision==0.3.0
[conda] torch 1.1.0 pypi_0 pypi
[conda] torchvision 0.3.0 pypi_0 pypi

Would appreciate any help on this!
st181962
Thanks for the report! Can you file an issue on GitHub with the same info so we can track it from there? Thanks
st181963
Just filed it here: https://github.com/pytorch/pytorch/issues/21004 Thanks!
st181964
After using @torch.jit.script or torch.jit.trace on a Module/function, is there a way to convert the traced module/function back to Python?
st181966
You can call it from Python but if you want to debug into it, you would have to use a switch in your code to make it use the original Python function. You can also use the environment variable PYTORCH_JIT to disable it for the entire run of your program.
st181967
We don’t have a way to convert it back to the original python. The .code attribute for a method will give you valid python syntax that corresponds to that method implementation though.
st181968
I am trying to create a trace file of a model without any warnings. I use the VGG_11 model at this link. I trained the model with my custom dataset, then saved it as a .pth file. Then I loaded this .pth file and created a trace file as a .pt file. That gave these warnings (screenshot of the tracer warnings attached). Could you offer a solution for how to eliminate those warnings?
st181969
The warning is telling you that there are non-deterministic nodes in your model, specifically dropouts. If you’re tracing a model that you already trained, you may have forgotten to set eval() on the module.
st181970
At the moment I am exploring the possibility of writing controllers for real robot hardware using TorchScript. One of the requirements is low latency and constant execution times for the script, because the robot controller runs in a realtime environment with hard 1 kHz (= 1 ms) time limits. Does someone know how the TorchScript JIT compiler handles memory allocations? Is there something like a "realtime" mode switch to avoid memory allocations, or does this all depend on the implementation of the TorchScript C++ operators? Best, julian
st181972
The JIT has no realtime mode. Allocation is controlled by operator implementations, which are generally (for CPU ops) tuned for server workloads.
st181973
Hi, I am trying to write a TorchScript operator to replace a custom torch.autograd.Function that influences the gradient computation, since TorchScript cannot serialize calls to custom Functions. Is there an equivalent of torch.autograd.Function in ATen or Torch? Best regards, Leopold
st181975
@leowalkling, could you elaborate on what you want to replace here, with an example? In case it's helpful, in the JIT we currently put ops into two categories:
1. Ops that have autodiff formulas in autodiff.cpp: their backward graph is replaced by https://github.com/pytorch/pytorch/blob/0cb24098c74f8ebed81ec08b83bf6cb5ab3903f5/torch/csrc/jit/graph_executor.cpp#L76
2. Ops that don't have autodiff formulas: their backwards are handled by eager-mode autograd directly.
Please let us know the context of the problem so that we can help more with your exact question. Thanks, Ailing
st181976
For reference, I've come up with a solution by imitating the generated kernels and the code generation in /tools/autograd. But it seems to me that it is a very intricate task, given that there is little documentation, so I made sure to follow the original code of builtin operations closely. The purpose I had in mind was to perform in-place modification of a variable without invalidating its former gradient, instead creating a new variable depending on the old one. I'm using this in my implementations of IAF, MADE, etc. to save RAM (by using fewer clones of the same data). The following is what wasn't obvious from the docs (to me):
Use ta::make_variable or ta::make_variable_view to create the output Variables.
Implement a new subclass of ta::TraceableFunction in your module. With PyTorch 1.0 from Conda, the headers lack generated code such as the xxxBackward classes, so there was no good way to use even the existing subclasses of ta::TraceableFunction.
Instantiate your custom xxxBackward and register it in the graph using the methods set_next_edges and add_input_metadata of TraceableFunction and set_gradient_edge of Variable. Helper functions are available in "torch/csrc/autograd/functions/utils.h".
TorchScript compatibility requires registering the op as described in the tutorials on extending TorchScript.
st181977
I am working on a Torch-to-TensorRT project. Currently the major problem is that it is impossible to get the correct weight for an op. A traced resnet18 model produces the following inputs:

node: %input.1 : Float(1, 3, 224, 224), %702 : Tensor, %703 : Tensor, %704 : Tensor, ... (parameter slots %705 through %802, all Tensor) ..., %803 : Tensor = prim::Param()

It's possible to get the correct input nodes, but for parameter nodes the only information available is the "index" of the slot; I don't know how to get the corresponding weight of a parameter node. A workaround is to use torch.jit._unique_state_dict and remove all untracked variables to get a list of params, then assign them to the param nodes in reversed order, but this doesn't work for models with unused modules such as torchvision.models.inception_v3 (it has an aux output). Thanks in advance!
st181978
Sorry, I don't quite get what you are asking for. What exactly do you mean by the weight of an op, and the correct input node? If you can provide more context, that will help us answer your exact question.
st181979
torch.jit.trace creates a graph, and graph.inputs() returns the input nodes of net.forward plus the parameter nodes; the problem is that there is no way to get the corresponding weight tensor for a parameter node. I currently use this code to get a weight-tensor-to-parameter-node mapping, but this isn't guaranteed by the PyTorch docs.
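For reference, the workaround looks roughly like this on a trace from this era of PyTorch, where parameters show up as extra graph inputs; the positional pairing is exactly the part the docs don't guarantee:

import torch
import torchvision

model = torchvision.models.resnet18().eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

# the first graph input is the real input; the rest are parameter slots
param_nodes = list(traced.graph.inputs())[1:]
# private API: same helper the ONNX exporter uses to collect params
params = list(torch.jit._unique_state_dict(traced).values())
# pairing by position (possibly reversed, as noted above) is unguaranteed
node_to_tensor = dict(zip(param_nodes, params))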
st181980
I am trying to convert a list of tuples into a tensor using torch.as_tensor, which throws the following error: unknown builtin op: aten::as_tensor. Could not find any similar ops to aten::as_tensor. This op may not exist or may not be currently supported in TorchScript. Is there any other way to accomplish this? Using torch.tensor also doesn't work. I'm using PyTorch version 1.1.0.
st181981
Hello, I have the following code:

def normalize(data: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    """Normalise the image with channel-wise mean and standard deviation.

    Args:
        data (torch.Tensor): The image tensor to be normalised.
        mean (torch.Tensor): Mean for each channel.
        std (torch.Tensor): Standard deviations for each channel.

    Returns:
        Tensor: The normalised image tensor.
    """
    if not torch.is_tensor(data):
        raise TypeError('data should be a tensor. Got {}'.format(type(data)))
    if not torch.is_tensor(mean):
        raise TypeError('mean should be a tensor. Got {}'.format(type(mean)))
    if not torch.is_tensor(std):
        raise TypeError('std should be a tensor. Got {}'.format(type(std)))
    if len(mean) != data.shape[-3] and mean.shape[:2] != data.shape[:2]:
        raise ValueError('mean lenght and number of channels do not match')
    if len(std) != data.shape[-3] and std.shape[:2] != data.shape[:2]:
        raise ValueError('std lenght and number of channels do not match')
    if std.shape != mean.shape:
        raise ValueError('std and mean must have the same shape')
    mean = mean[..., :, None, None].to(data.device)
    std = std[..., :, None, None].to(data.device)
    out = data.sub(mean).div(std)
    return out

I would like this function to be executable with and without JIT. The problem I am currently facing is that this function has control-flow statements (ifs), so when I run:

f = image.Normalize(mean, std)
jit_trace = torch.jit.trace(f, data)
jit_trace(data2)

it runs correctly, but if the input changes in such a way that the control flow takes a different branch, it will not raise the desired exception or give the desired output. How can I make this work? Thanks in advance!
st181982
If you want to use data-dependent control flow in TorchScript, you need to use script mode; see the docs here under "Scripting" for details/examples. For your model in particular, it looks like it would work to add @torch.jit.script to normalize() and remove the call to torch.jit.trace.
st181983
Hi, if I do that, is there a way to call normalize() without JIT? I would like to have the choice of running it with and without JIT, in case I want to add a non-jittable operation in the future.
st181984
There is a global environment variable PYTORCH_JIT=0 that you can use to switch off the JIT entirely, if you want to run the jitted parts with regular eager PyTorch: https://pytorch.org/docs/stable/jit.html?highlight=pytorch_jit#envvar-PYTORCH_JIT=1
st181985
Thanks for the replies. However, I cannot find the environment variable PYTORCH_JIT on my system. To search for environment variables I use the following code:

for key in os.environ.keys():
    if "py" in key.lower():
        print(key)

Which yields:

CONDA_PYTHON_EXE
PYTHONPATH

Nothing PyTorch-related is shown. On a second note, @torch.jit.script runs the jitted code and throws the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/diego/Projects/torchgeometry/torchgeometry/image/normalization.py", line 28, in forward
    return normalize(input, self.mean, self.std)
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got -3) (maybe_wrap_dim at /opt/conda/conda-bld/pytorch_1556653215914/work/c10/core/WrapDimMinimal.h:20)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7ff3515d7dc5 in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x7d4a40 (0x7ff351fc5a40 in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libcaffe2.so)
frame #2: at::native::slice(at::Tensor const&, long, long, long, long) + 0x4e (0x7ff351fc638e in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libcaffe2.so)
frame #3: at::TypeDefault::slice(at::Tensor const&, long, long, long, long) const + 0x1a (0x7ff3522255fa in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libcaffe2.so)
frame #4: torch::autograd::VariableType::slice(at::Tensor const&, long, long, long, long) const + 0x6d3 (0x7ff34a32ea23 in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #5: <unknown function> + 0x985117 (0x7ff34a4ec117 in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #6: <unknown function> + 0xa73df8 (0x7ff34a5dadf8 in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #7: torch::jit::InterpreterState::run(std::vector<c10::IValue, std::allocator<c10::IValue> >&) + 0x22 (0x7ff34a5d6372 in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #8: <unknown function> + 0xa5b2d9 (0x7ff34a5c22d9 in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #9: <unknown function> + 0x458bf3 (0x7ff3776d0bf3 in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0x12d07a (0x7ff3773a507a in /home/diego/Projects/torchgeometry/.dev_env/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #37: __libc_start_main + 0xe7 (0x7ff384563b97 in /lib/x86_64-linux-gnu/libc.so.6)
: operation failed in interpreter:
        raise ValueError('mean lenght and number of channels do not match')
    if std.shape[0] != data.shape[-3] and std.shape[:2] != data.shape[:2]:
        raise ValueError('std lenght and number of channels do not match')
    if std.shape != mean.shape:
        raise ValueError('std and mean must have the same shape')
    '''
    mean = mean[..., :, None, None].to(data.device)
           ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    std = std[..., :, None, None].to(data.device)
    out = (data - mean) / std
    return out

What am I doing wrong? I am using PyTorch 1.1 in an Anaconda environment.
st181986
You have to set it yourself, so if your code is in model.py, running

$ PYTORCH_JIT=0 python model.py

would set the variable (assuming you're using bash). As for your code in script mode, I think it may have something to do with your input shapes; the snippet below works for me (TorchScript is statically typed, so the is_tensor checks would always evaluate to true and have been removed). Its slicing of data.shape[-3] implies that the inputs need to be at least 4 dimensions.

@torch.jit.script
def normalize(data: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    if len(mean) != data.shape[-3] and mean.shape[:2] != data.shape[:2]:
        raise ValueError('mean lenght and number of channels do not match')
    if len(std) != data.shape[-3] and std.shape[:2] != data.shape[:2]:
        raise ValueError('std lenght and number of channels do not match')
    if std.shape != mean.shape:
        raise ValueError('std and mean must have the same shape')
    mean = mean[..., :, None, None].to(data.device)
    std = std[..., :, None, None].to(data.device)
    out = data.sub(mean).div(std)
    return out

print(normalize(torch.ones(2, 2, 2, 2), torch.ones(2, 2, 2, 2), torch.ones(2, 2, 2, 2)))
st181987
I tried executing the script with PYTORCH_JIT=0 as you said and got the following error (screenshot attached). As for the RuntimeError, I don't think there is anything wrong with the code, since it runs in native PyTorch. I also had to comment out the is_tensor lines, since they raise a different exception. I don't know why they are not commented in the snippet I pasted, but I did comment those lines, so even if data.shape[-3] is wrong, those lines should not run at all. P.S.: -3 requires at least 3 dimensions, since the last one is -1.
st181988
Looks like we had a bug with PYTORCH_JIT=0; it's fixed in #20120. Can you post the full code you are using to run this that generates the error?
st181989
Sure thing. It's just a simple test:

def test_normalize(self):
    # prepare input data
    data = torch.ones(1, 2, 2)
    mean = torch.tensor([0.5])
    std = torch.tensor([2.0])

    # expected output
    expected = torch.tensor([0.25]).repeat(1, 2, 2).view_as(data)

    f = image.Normalize(mean, std)
    assert_allclose(f(data), expected)

And this is the Normalize class definition:

class Normalize(nn.Module):
    """Normalize a tensor image or a batch of tensor images with mean and standard deviation.

    Input must be a tensor of shape (C, H, W) or a batch of tensors (*, C, H, W).
    Given mean: ``(M1,...,Mn)`` and std: ``(S1,..,Sn)`` for ``n`` channels,
    this transform will normalize each channel of the input ``torch.*Tensor``,
    i.e. ``input[channel] = (input[channel] - mean[channel]) / std[channel]``

    Args:
        mean (torch.Tensor): Mean for each channel.
        std (torch.Tensor): Standard deviation for each channel.
    """

    def __init__(self, mean: torch.Tensor, std: torch.Tensor) -> None:
        super(Normalize, self).__init__()
        self.mean = mean
        self.std = std

    def forward(self, input: torch.Tensor) -> torch.Tensor:  # type: ignore
        return normalize(input, self.mean, self.std)

    def __repr__(self):
        repr = '(mean={0}, std={1})'.format(self.mean, self.std)
        return self.__class__.__name__ + repr

P.S.: This and every other test pass perfectly without JIT.
st181990
class BidirectionalLSTM(torch.jit.ScriptModule):
    # inputs, hidden units, out
    def __init__(self, nIn, nHidden, nOut):
        super(BidirectionalLSTM, self).__init__()
        self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)
        self.embedding = nn.Linear(nHidden * 2, nOut)

    @torch.jit.script_method
    def forward(self, input):
        recurrent, _ = self.rnn(input)
        T, b, h = recurrent.size()
        t_rec = recurrent.view(T * b, h)
        output = self.embedding(t_rec)  # [T * b, nOut]
        output = output.view(T, b, -1)
        return output

net = BidirectionalLSTM(256, 256, 512)
net.save('model.pt')

got the error:

Traceback (most recent call last):
  File "/home/xxh/Desktop/crnn/models/test.py", line 24, in <module>
    net.save('model.pt')
RuntimeError: could not export python function call <python_value>. Remove calls to python functions before export.:
    @torch.jit.script_method
    def forward(self, input):
        recurrent, _ = self.rnn(input)
                       ~~~~~~~~ <--- HERE
        T, b, h = recurrent.size()
        t_rec = recurrent.view(T * b, h)
        output = self.embedding(t_rec)  # [T * b, nOut]
        output = output.view(T, b, -1)
        return output
st181992
I think that maybe the problem is that nn.LSTM and nn.Linear are not traced. From the docs: "To be able to save a module, it must not make any calls to native python functions. This means that all submodules must be subclasses of ScriptModules as well." So you have to trace the inner modules, as done in the example documentation for Conv2d:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.jit import ScriptModule, script_method, trace

class MyScriptModule(ScriptModule):
    def __init__(self):
        super(MyScriptModule, self).__init__()
        # trace produces ScriptModules conv1 and conv2
        self.conv1 = trace(nn.Conv2d(1, 20, 5), torch.rand(1, 1, 16, 16))
        self.conv2 = trace(nn.Conv2d(20, 20, 5), torch.rand(1, 20, 16, 16))

    @script_method
    def forward(self, input):
        input = F.relu(self.conv1(input))
        input = F.relu(self.conv2(input))
        return input
st181993
@acobobby You are correct, but we have some special infrastructure to support torch.nn modules without needing to trace them (see "builtin functions" in the master docs for details), so just assigning them as submodules without tracing them is fine (e.g. self.conv1 = nn.Conv2d(1, 20, 5)). I don't remember if these changes have made it into an official release yet, though. @XiaXuehai could you try to reproduce your issue on the PyTorch nightly build? Your code snippet runs fine for me on it.
st181994
@driazati Thanks, PyTorch nightly works. Another question: how do I load the state_dict in torch.jit.script_method? The net's layer names changed, and loading with strict=False is not correct! Solved: I changed the pretrained model's layer names one by one.
st181995
Hi folks, I'm following this official PyTorch ONNX tutorial and I would like to iterate through the torch._C.Graph that is generated so I can obtain the "layer indexes" from the trace (i.e., for the AlexNet tutorial, %17, %18 and so on). Is that possible? The reason I want the indexes is that I want to do a one-to-one layer comparison between PyTorch and the Core ML layers generated from my model. Thanks!
st181997
I think I figured it out. For those interested in getting the "layer indexes", or to be specific, the "output indexes", check out this file: https://github.com/pytorch/pytorch/blob/98e312cf96f6a5e23933cd8794097063ee3cbc8c/torch/utils/tensorboard/_pytorch_graph.py
st181998
If the size of the input image is not fixed, how do I convert the model to TorchScript?
st181999
You should use the @torch.jit.script annotation, which will recover the full semantics of your model (including any control flow that depends on input size). See the tutorial for more info: https://pytorch.org/tutorials/advanced/cpp_export.html
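For example, a minimal sketch of a scripted module whose behavior depends on the input size (the pooling branch is illustrative):

import torch
import torch.nn.functional as F

class SizeAware(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        # input-size-dependent control flow is preserved by scripting,
        # unlike tracing, which bakes in one branch
        if x.size(2) > 256:
            return F.avg_pool2d(x, 2)
        return x

SizeAware().save("size_aware.pt")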