st180800
I can’t directly answer that, but from viewing sources, I would say a complete redesign/rewrite would be needed for TorchScript conformance, unfortunately.
st180801
I am trying to TorchScript my model and it seems there is a problem when different branches produce different types. For instance, in the following scenario where x is a torch.Tensor: if a > b: y.append(x) else: y.append(None) I get the following error: RuntimeError: aten::append.t(t self, t(c → *) el) → (t): Expected a value of type ‘t’ for argument ‘el’ but instead found type ‘None’. Is this a known limitation? And does anybody know a workaround for such cases?
st180802
Hi @fermat97, You should try annotating your list element type as Optional[torch.Tensor] when you initialize it: y: List[Optional[torch.Tensor]] = []
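For context, a minimal sketch of the pattern (names are made up for illustration) that scripts cleanly once the list is annotated:

import torch
from typing import List, Optional

@torch.jit.script
def branchy_append(x: torch.Tensor, a: int, b: int) -> List[Optional[torch.Tensor]]:
    # Annotating the list as Optional[Tensor] lets both branches type-check.
    y: List[Optional[torch.Tensor]] = []
    if a > b:
        y.append(x)
    else:
        y.append(None)
    return y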
st180803
Thanks @eellison, it works for None. But since TorchScript does not support Union, is there any solution when the alternative branch produces an int or any other type?
st180804
Union is a WIP but slated for 1.9 release. A not particularly ergonomic solution is to use Any until Union is supported.
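A rough sketch of that Any workaround, assuming a PyTorch version where Any and isinstance refinement are supported in TorchScript (Any values support only a few operations, so you typically refine them before use):

import torch
from typing import Any

@torch.jit.script
def describe(value: Any) -> int:
    # isinstance refines the Any-typed value so each branch type-checks.
    if isinstance(value, torch.Tensor):
        return value.numel()
    elif isinstance(value, int):
        return value
    return 0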
st180805
Is there any way to compute the gradient of the activation map at a certain layer with respect to one of the output components with torch::jit::script::Module? It seems the hook functions only support nn::Module. Could you provide a simple example? Thank you very much in advance.
st180806
Solved by driazati in post #4 We don’t have any support for hooks so anything you do is going to be pretty hacky; that said, you might be able to get something working by adding a custom operator to TorchScript that just calls a C++ autograd::Function that does nothing except let’s you grab the gradients in the backward pass. T…
st180807
Hooks aren’t supported right now but we are working on it and it will probably be done in the next couple weeks
st180808
Thank you very much for your reply @driazati! I am looking forward to that feature; however, I have a deadline in one week, so I wonder whether there is any alternative way to compute the gradient of a hidden activation map at a certain layer with jit::script::Module? Thank you very much for your attention to this matter.
st180809
We don’t have any support for hooks so anything you do is going to be pretty hacky; that said, you might be able to get something working by adding a custom operator to TorchScript that just calls a C++ autograd::Function that does nothing except lets you grab the gradients in the backward pass. There’s an example of a custom op used in this way in torchvision, see https://github.com/pytorch/vision/blob/master/torchvision/csrc/empty_tensor_op.h and https://github.com/pytorch/vision/blob/43e94b39bcdda519c093ca11d99dfa2568aa7258/torchvision/csrc/vision.cpp#L51
st180810
driazati: Hooks aren’t supported right now but we are working on it and it will probably be done in the next couple weeks Hello @driazati, does the current LibTorch 1.5 support hooks in the JIT model? If so, could you please give an example of using a hook function? Thank you very much for your attention to this matter.
st180811
Hi @driazati, does the current LibTorch 1.7 support hooks in the JIT model? I want to check whether the output of each layer is correct, and I found that a forward_hook function plays an important role in that.
st180812
Hello I’m using 1.5.0 version of PyTorch. I’d like to skip i numbers of layers in a Torchscript model. Below code is my trial. The method ‘skip’ takes input tensor x and skip point indicator i. import torch import torch.nn as nn class Model(torch.nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(20,100) self.fc2 = nn.Linear(100, 100) self.fc3 = nn.Linear(100, 1) self.k = nn.ModuleList() self.k.append(self.fc1) self.k.append(self.fc2) self.k.append(self.fc3) @torch.jit.export def skip(self, x, i): for l in range(int(i), 3): x = self.k[l].forward(x) return x def forward(self, x): x = self.fc1(x) x = self.fc2(x) x = self.fc3(x) return x model = Model().cuda() x = torch.randn(20).cuda() x2 = torch.randn(100).cuda() # Assume that x2 is the result of self.fc1 which is computed in advance. idx = torch.tensor(1).cuda() print(model.skip(x2,idx)) # This works traced = torch.jit.trace(model, x) # Script print(traced.skip(x2,idx)) # This does not works. Error occurs. The error message is printed as below. Traceback (most recent call last): File "C:/Users/user/PycharmProjects/coin-modelgen/simple.py", line 33, in <module> traced = torch.jit.trace(model, x) # Script File "C:\Users\user\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\__init__.py", line 875, in trace check_tolerance, _force_outplace, _module_class) File "C:\Users\user\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\__init__.py", line 1021, in trace_module module = make_module(mod, _module_class, _compilation_unit) File "C:\Users\user\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\__init__.py", line 716, in make_module return torch.jit._recursive.create_script_module(mod, make_stubs_from_exported_methods, share_types=False) File "C:\Users\user\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 305, in create_script_module return create_script_module_impl(nn_module, concrete_type, stubs_fn) File "C:\Users\user\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 361, in create_script_module_impl create_methods_from_stubs(concrete_type, stubs) File "C:\Users\user\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 279, in create_methods_from_stubs concrete_type._create_methods(defs, rcbs, defaults) RuntimeError: Expected integer literal for index: File "C:/Users/user/PycharmProjects/coin-modelgen/simple.py", line 18 def skip(self, x, i): for l in range(int(i), 3): x = self.k[l].forward(x) ~~~~~~~~ <--- HERE return x Could someone can share any idea (or alternative approach) to solve this problem? Moreover, I want to know that such dynamically skipping is also possible or not in c++ implementation using libtorch. According to torch.jit.script cannot index a ModuleList with an int returned from range() · Issue #47496 · pytorch/pytorch · GitHub, I identified that the i must be selected in compile time not runtime. Thus…, is this meaning that there no way to dynamically skip the front part of the Torchscript model? Or, is such naive below code a unique way? 
import torch import torch.nn as nn class Model(torch.nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(20,100) self.fc2 = nn.Linear(100, 100) self.fc3 = nn.Linear(100, 1) self.k = nn.ModuleList() self.k.append(self.fc1) self.k.append(self.fc2) self.k.append(self.fc3) # self.k = nn.Sequential(self.fc1,self.fc2,self.fc3 ) def from1(self, x): for index, v in enumerate(self.k[1:]): x = v.forward(x) return x def from2(self, x): for index, v in enumerate(self.k[2:]): x = v.forward(x) return x @torch.jit.export def skip(self, x, i:int): if i == 1: x = self.from1(x) elif i == 2: x = self.from2(x) return x def forward(self, x): x = self.fc1(x) x = self.fc2(x) x = self.fc3(x) return x model = Model().cuda() m = torch.jit.script(Model()).cuda() x = torch.randn(20).cuda() x2 = torch.randn(100).cuda() # Assume that x is the result of self.fc1 which is computed in advance. idx = 1 print(model.skip(x2,idx)) # This works print(m.skip(x2,idx)) # This works tensor([-0.3310], device='cuda:0', grad_fn=<AddBackward0>) tensor([-0.2432], device='cuda:0', grad_fn=<AddBackward0>) If this way is the unique way, why the two result are different? Thanks a lot.
st180813
Jungmo_Ahn: Could someone can share any idea (or alternative approach) to solve this problem? One thing you can do is iterate over the entire module list with enumerate and only call each module if i is greater than some limit k. Performance will be worse than if you skipped the first k layers completely. Jungmo_Ahn: According to torch.jit.script cannot index a ModuleList with an int returned from range() · Issue #47496 · pytorch/pytorch · GitHub, I identified that the i must be selected in compile time not runtime. Thus…, is this meaning that there no way to dynamically skip the front part of the Torchscript model? You can dynamically skip the front part but only by annotating the LHS of your indexing statement with a module interface type. See the issue you linked for more details. I am currently working on extending this for ModuleList, it currently only works for ModuleDict. Jungmo_Ahn: If this way is the unique way, why the two result are different? I think it’s because from1 and from2 perform different computations?
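A minimal sketch of the first suggestion (iterate over the whole ModuleList and gate each call on i); the module layout mirrors the example above but is otherwise illustrative:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.k = nn.ModuleList([nn.Linear(20, 100),
                                nn.Linear(100, 100),
                                nn.Linear(100, 1)])

    @torch.jit.export
    def skip(self, x: torch.Tensor, i: int) -> torch.Tensor:
        # Walk the entire list; only apply layers whose index is >= i.
        for index, layer in enumerate(self.k):
            if index >= i:
                x = layer(x)
        return x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.skip(x, 0)

m = torch.jit.script(Model())
out = m.skip(torch.randn(100), 1)  # skips fc1, runs fc2 and fc3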
st180814
Hey guys, I would like to trace/build a PyTorch IR for a backward pass/backprop using Torch’s JIT, but for some reason every time I try to trace, it goes through the program 3 times. The first time, it shows that the loss tensor has a grad_fn, but on the passes after that it shows that the loss tensor does not have a grad_fn. Toy example: import torch def simple(x): loss = x.sum() print(loss) loss.backward() print("hello") return loss if __name__ == "__main__": example = torch.rand(1, 3, 224, 224, requires_grad=True) traced_script_loss = torch.jit.trace(simple, example) error: in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn Any guidance would be very appreciated
st180815
I am having the same issue where tracing through backward() gives me the same error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn. The stack trace reports loss.backward() as being the issue, but loss requires a grad. It looks like tracing backward() calls should be supported…
st180816
I traced one of the detectron2 models and noticed that the resulting graph has in-place copy_ operators. Is it possible to somehow remove/replace these copy_ operators in the graph? Maybe the JIT has an option to produce a graph without in-place operators (such as copy_)?
st180817
Probably it is better to rewrite the following 4 lines in the model itself. Detectron2 - box2box_transform.apply_deltas:111
st180818
Changing these lines of code might remove the copy_ ops and you could try it by locally creating this model with the changes. However, just out of interest, why would you like to remove these operations from the graph?
st180819
I think the background is moving the model to TVM; see this discussion on TVM and copy_. @apivovarov You probably increase your chance of getting good answers if you give more context. Personally, I think the two main ways of doing this are: As you suggest, the immediate way is to change the model to use torch.cat + view instead. This is probably wildly preferable because any translation of index assignment looks funny. Implement view tracking in TVM. I commented a bit on this in the TVM copy_ PR before yours. It takes some effort, but after that it should be reasonably possible to express index assignments. There will likely be corner cases that you miss, as there are operations in PyTorch (e.g. reshape) that may or may not return views. Best regards Thomas
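To make the first option concrete, here is a rough sketch (hypothetical names and shapes, not detectron2's actual code) of replacing strided index assignment with a purely functional stack + view:

import torch

def assemble_boxes(x1: torch.Tensor, y1: torch.Tensor,
                   x2: torch.Tensor, y2: torch.Tensor) -> torch.Tensor:
    # Instead of pre-allocating an output and writing columns in place
    # (out[:, 0::4] = x1, ... which traces to aten::copy_), build the
    # columns out of place and interleave them.
    boxes = torch.stack((x1, y1, x2, y2), dim=-1)  # (N, k, 4)
    return boxes.view(boxes.size(0), -1)           # (N, 4 * k), no in-place ops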
st180820
I think probably the best way to go about this is to prove that the input is linearly typed and replace copy_ with a functional op that creates a copy of the tensor and then writes to it. This is basically how other functional tensor languages handle writes - like JAX and numlin. You need to prove that the input is linearly typed (there are no other aliases to it) so that you don’t change the semantics of the program. I maintain a pass that does this logic for many in-place ops but does not yet handle copy_. I would accept a PR that adds a new operator similar to jax.ops.index_update and replaces copy_ with it when that can be proved safe. Looking at apply_deltas, we initialize the tensor and then write to it, so we would successfully be able to remove copy_ from this function.
st180821
Another place which probably should be rewritten in Detectron2 is boxes.clip() - detectron2/boxes.py at master · facebookresearch/detectron2 · GitHub
st180822
I am new to JIT scripting and don’t know what this error means, and have not found this zero_grad error online nor any related errors whose solutions seem to make sense here. What am I doing wrong here? The error is this (see gist for full error): Tried to access nonexistent attribute or method 'zero_grad' of type 'Tensor (inferred)'. The full repro is here: https://gist.github.com/alannnna/008106cdee4ced848780ed1d829f57d2 bench.py import time import torch from mnist import Net, DEFAULTS, train_example def make_net(): torch.manual_seed(1234) model = Net() error.txt $ python bench.py 7.002076864242554 Traceback (most recent call last): File "bench.py", line 50, in <module> try_jit() File "bench.py", line 31, in try_jit scripted_train_example = torch.jit.script(train_example) File "/Users/me/blah/blah/venv/lib/python3.7/site-packages/torch/jit/_script.py", line 940, in script qualified_name, ast, _rcb, get_default_args(obj) mnist.py from __future__ import print_function import argparse import time import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms
st180823
Solved by ptrblck in post #2 I think the error is raised, because scripting assumes tensors as the input, if no annotations are used as described here. I don’t know, if it’s possible to script the complete optimization, but you should be able to script the model at least.
st180824
I think the error is raised because scripting assumes tensors as the input if no annotations are used, as described here. I don’t know if it’s possible to script the complete optimization, but you should be able to script the model at least.
st180825
I would like to train a model in Pytorch which has been already torchscripted. I am curious if it is possible at all to do so. I think it is possible with the torch.jit.trace but I am not sure if it is possible with the torch.jit.script. Any idea?
st180826
Solved by ptrblck in post #2 Yes, you should be able to train a scripted model (as long as you are able to script it).
st180827
Yes, you should be able to train a scripted model (as long as you are able to script it).
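For illustration, a minimal sketch of training a scripted model with an ordinary eager-mode loop (the module and data here are made up):

import torch
import torch.nn as nn

model = torch.jit.script(nn.Linear(10, 1))   # a scripted module still exposes parameters()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 10), torch.randn(32, 1)
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()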
st180828
Hi I try to use torch.jit.script on my custom Concat class: class Concat(nn.Module): def __init__(self, dimension=1): super(Concat, self).__init__() self.dim = dimension def forward(self, x): return torch.cat(x, dim=self.dim) But I got the following error: RuntimeError: Arguments for call are not valid. The following variants are available: aten::cat(Tensor[] tensors, int dim=0) → (Tensor): Expected a value of type ‘List[Tensor]’ for argument ‘tensors’ but instead found type ‘Tensor (inferred)’. Inferred the value for argument ‘tensors’ to be of type ‘Tensor’ because it was not annotated with an explicit type. aten::cat.names(Tensor[] tensors, str dim) → (Tensor): Expected a value of type ‘List[Tensor]’ for argument ‘tensors’ but instead found type ‘Tensor (inferred)’. Inferred the value for argument ‘tensors’ to be of type ‘Tensor’ because it was not annotated with an explicit type. aten::cat.names_out(Tensor[] tensors, str dim, *, Tensor(a!) out) → (Tensor(a!)): Expected a value of type ‘List[Tensor]’ for argument ‘tensors’ but instead found type ‘Tensor (inferred)’. Inferred the value for argument ‘tensors’ to be of type ‘Tensor’ because it was not annotated with an explicit type. aten::cat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) → (Tensor(a!)): Expected a value of type ‘List[Tensor]’ for argument ‘tensors’ but instead found type ‘Tensor (inferred)’. Inferred the value for argument ‘tensors’ to be of type ‘Tensor’ because it was not annotated with an explicit type. I am using PyTorch version 1.7.1+cu92, and Python 3.7.6. Is it related to this bug 2?
st180829
Solved by ansley in post #2 TorchScript is a statically-typed language, and, in most cases, we’re forced to infer untyped variables to be instances of torch.Tensor. (One of our projects this half is improving our type inference.) An easy fix is to simply annotate the relevant variables with their correct type. def forward(sel…
st180830
TorchScript is a statically-typed language, and, in most cases, we’re forced to infer untyped variables to be instances of torch.Tensor. (One of our projects this half is improving our type inference.) An easy fix is to simply annotate the relevant variables with their correct type. def forward(self, x: List[torch.Tensor]): return torch.cat(x, dim=self.dim)
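Putting the fix into the module from the question, roughly:

import torch
import torch.nn as nn
from typing import List

class Concat(nn.Module):
    def __init__(self, dimension: int = 1):
        super().__init__()
        self.dim = dimension

    def forward(self, x: List[torch.Tensor]) -> torch.Tensor:
        # The explicit List[torch.Tensor] annotation stops TorchScript from
        # inferring x as a single Tensor.
        return torch.cat(x, dim=self.dim)

scripted = torch.jit.script(Concat())
out = scripted([torch.randn(2, 3), torch.randn(2, 5)])  # concatenated along dim 1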
st180831
Hi, I have problem that requires applying function objects (like torch.sin) to some columns of a matrix X. Right now I solve this by having a list of tuples like this: [(function, col)] where function is the function object and col is the column this function acts on. In each step I stack the resulting columns and return the matrix. Now I’m not sure this is the most performant way because it’s doing a list comprehension over all columns for every evaluation. Doing something like X[:,i] = torch.sin(X[:,i]) is also not possible because it breaks autograd. I was thinking the new torch.vmap 2 could somehow help here? Another problem with my approach is that due to the list mentioned above I can’t jit the module because the functor datatype is not supported. Do you have any tips on how to improve the implementation?
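One possible direction (a rough sketch, not a drop-in replacement for the (function, col) list, and assuming a fixed, known set of column transforms): hard-code the per-column transforms in a scriptable function and build the result out of place with torch.stack, which avoids the in-place write that breaks autograd:

import torch
from typing import List

@torch.jit.script
def apply_columnwise(X: torch.Tensor) -> torch.Tensor:
    # Build each transformed column out of place, then stack; autograd
    # flows through sin/cos and the untouched columns alike.
    cols: List[torch.Tensor] = [
        torch.sin(X[:, 0]),
        torch.cos(X[:, 1]),
        X[:, 2],
    ]
    return torch.stack(cols, dim=1)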
st180832
Hi, I tried to build my custom operation and the compilation worked. When I load my ops, I got the below error. The error even happened when I load the example in documentation. Could you please help me out? Thank you so much! Load and get error print(torch.ops.my_ops.SpikeFunction) Traceback (most recent call last): File “”, line 1, in File “/calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/_ops.py”, line 61, in getattr op = torch._C._jit_get_operation(qualified_op_name) RuntimeError: No such operator my_ops::SpikeFunction The code I used to load the module: torch.utils.cpp_extension.load( … name=“SpikeFunction”, … sources=[“spikefunction.cpp”], … is_python_module=False, … verbose=True … ) Using /home/guozhang/.cache/torch_extensions as PyTorch extensions root… Emitting ninja build file /home/guozhang/.cache/torch_extensions/SpikeFunction/build.ninja… Building extension module SpikeFunction… Allowing ninja to set a default number of workers… (overridable by setting the environment variable MAX_JOBS=N) [1/2] c++ -MMD -MF spikefunction.o.d -DTORCH_EXTENSION_NAME=SpikeFunction -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/TH -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/THC -isystem /calc/guozhang/anaconda3/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/guozhang/multiplicative_rule/eprop/spikefunction.cpp -o spikefunction.o [2/2] c++ spikefunction.o -shared -L/calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o SpikeFunction.so Loading extension module SpikeFunction… The C++ code I wrote: #include <torch/all.h> #include <torch/python.h> class SpikeFunction : public torch::autograd::Function { public: static torch::autograd::variable_list forward( torch::autograd::AutogradContext* ctx, torch::autograd::Variable v_scaled, torch::autograd::Variable dampening_factor) { // forward calculation ctx->save_for_backward({v_scaled,dampening_factor}); return {torch::greater(v_scaled, 0.)}; } static torch::autograd::variable_list backward( torch::autograd::AutogradContext* ctx, torch::autograd::variable_list grad_output) { // backward calculation auto saved = ctx->get_saved_variables(); auto v_scaled = saved[0]; auto dampening_factor = saved[1]; auto dE_dz = grad_output[0]; auto dz_dv_scaled = torch::maximum(1 - torch::abs(v_scaled), torch::zeros_like(v_scaled)) * dampening_factor; auto dE_dv_scaled = dE_dz * dz_dv_scaled; return {dE_dv_scaled, torch::zeros_like(dampening_factor)}; } }; torch::autograd::variable_list SpikeFunction(const torch::Tensor& v_scaled, const torch::Tensor& dampening_factor) { return SpikeFunction::apply(v_scaled,dampening_factor); }
st180833
Solved by googlebot in post #2 seems that you’re not registering the operator: static auto registry = torch::RegisterOperators().op("myops::SpikeFunction", &SpikeFunction);
st180834
seems that you’re not registering the operator: static auto registry = torch::RegisterOperators().op("myops::SpikeFunction", &SpikeFunction);
st180835
Thank you for your suggestion. Although I don’t know why it didn’t work in my case, I checked the keyword “register” and found I should add the code below at the end. It works. TORCH_LIBRARY(my_ops, m) { m.def("SpikeFunction", &SpikeFunction); } But I have a follow-up question: the returned tensor’s requires_grad is false and I cannot backward through it properly. Initially, I thought it might be due to the boolean output format, but then I tried a square operation and the issue remained. a=torch.rand(5,requires_grad=True) b=torch.ops.my_ops.SpikeFunction(a,torch.tensor(0.3)) b[0] tensor([True, True, True, True, True]) b[0].requires_grad False By the way, may I ask how to cast a tensor from bool to float in C++? I checked the docs but I don’t know how to use it. I tried torch::greater(v_scaled, 0.)._cast_Float(), torch::greater(v_scaled, 0.).at::_cast_Float(), at::_cast_Float(torch::greater(v_scaled, 0.)) and many other patterns; none of them work. Thank you!
st180836
I think that’s another registration type, you need operator registry for autograd to handle a function, as in my snippet. By the way, may I ask how to cast tensor from logic to float in c++? python ops are normally duplicated in c++, in this case at::Tensor::to(ScalarType) overload would do that.
st180837
Thank you. I did what you suggested but it still does not have grad_fun in the output tensor. Could you please try the below c++ code on your side? class SpikeFunction : public torch::autograd::Function<SpikeFunction> { public: static torch::Tensor forward( torch::autograd::AutogradContext* ctx, torch::autograd::Variable v_scaled, torch::Tensor dampening_factor) { // forward calculation ctx->save_for_backward({v_scaled,dampening_factor}); return (v_scaled > 0).type_as(v_scaled); } static torch::autograd::variable_list backward( torch::autograd::AutogradContext* ctx, torch::autograd::variable_list grad_output) { // backward calculation auto saved = ctx->get_saved_variables(); auto v_scaled = saved[0]; auto dampening_factor = saved[1]; auto dE_dz = grad_output[0]; auto dz_dv_scaled = torch::maximum(1 - torch::abs(v_scaled), torch::zeros_like(v_scaled)) * dampening_factor; auto dE_dv_scaled = dE_dz * dz_dv_scaled; return {dE_dv_scaled, torch::Tensor()}; } }; torch::Tensor SpikeFunction(torch::Tensor& v_scaled, torch::Tensor& dampening_factor) { return SpikeFunction::apply(v_scaled,dampening_factor); } // TORCH_LIBRARY(my_ops, m) { // m.def("SpikeFunction", &SpikeFunction); // } static auto registry = torch::RegisterOperators().op("myops::SpikeFunction", &SpikeFunction);
st180838
worked for me on 1.8 build: torch.ops.myops.SpikeFunction(torch.ones(1).requires_grad_(), torch.ones(1).requires_grad_()) Out[10]: tensor(cpu,(1,)[1.], grad_fn=<CppNode>) Note that at least one of tensors must have requires_grad, otherwise backward pass is omitted and grad_fn is not set.
st180839
I tried to compile a module that contains a custom op defined by torch.autograd.Function: import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Function class mul2(Function): @staticmethod def forward(ctx, x): return x * 2 @staticmethod def backward(ctx, dx): return dx * 2 def f(a, b): c = a + b d = mul2.apply(c) e = torch.tanh(d * c) return d + (e + e) print(torch.jit.script(f).code) and I received Traceback (most recent call last): File "revisble.py", line 21, in <module> print(torch.jit.script(f).code) File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 1226, in script fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj)) File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 1075, in _compile_and_register_class ast = get_jit_class_def(obj, obj.__name__) File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 148, in get_jit_class_def self_name=self_name) for method in methods] File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 148, in <listcomp> self_name=self_name) for method in methods] File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 169, in get_jit_def return build_def(ctx, py_ast.body[0], type_line, self_name) File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 198, in build_def param_list = build_param_list(ctx, py_def.args, self_name) File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 224, in build_param_list raise NotSupportedError(ctx_range, _vararg_kwarg_err) torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults: at /Users/***/anaconda3/lib/python3.7/site-packages/torch/autograd/function.py:26:25 def mark_dirty(self, *args): ~~~~~ <--- HERE r"""Marks given tensors as modified in an in-place operation. **This should be called at most once, only from inside the** :func:`forward` **method, and all arguments should be inputs.** Every tensor that's been modified in-place in a call to :func:`forward` should be given to this function, to ensure correctness of our checks. It doesn't matter whether the function is called before or after modification. 'mul2' is being compiled since it was called from 'f' at revisble.py:17:4 def f(a, b): c = a + b d = mul2.apply(c) ~~~~~~~~~~~~~~~~ <--- HERE e = torch.tanh(d * c) return d + (e + e) torch.jit.trace does work either. May I know do script mode support custom ops? If so, what is the correct way to handle custom op? Thanks!
st180840
We currently don’t support autograd.Function in Python. Right now the workaround is to define the function in C++ and bind it to TorchScript as a custom op. This was done in pytorch/vision to support the Mask R-CNN model; you can see the specific implementation here.
st180841
@driazati thank you for your timely answer. It seems I can only write forward function in current torchscript custom ops, right? What if users would like to customize backward functions like what we did in torch.autograd.Function?
st180842
If your C++ op calls a C++ autograd op (i.e. a class that extends public torch::autograd::Function), it will act the same as torch.autograd.Function. For the code below, if you bind my_cool_op and call it from TorchScript, it will use the backward you defined in MyCoolOp #include <torch/all.h> #include <torch/python.h> class MyCoolOp : public torch::autograd::Function<MyCoolOp> { public: static torch::autograd::variable_list forward( torch::autograd::AutogradContext* ctx, torch::autograd::Variable input) { // forward calculation } static torch::autograd::variable_list backward( torch::autograd::AutogradContext* ctx, torch::autograd::variable_list grad_output) { // backward calculation } }; torch::Tensor my_cool_op(const torch::Tensor& input) { return MyCoolOp::apply(input); }
st180843
Hi Driazati, I tried to build my custom operation and the compilation worked. When I load my ops, I got below error. The error even happened when I load the example in documentation. Could you please help me out? Thank you so much! print(torch.ops.my_ops.SpikeFunction) Traceback (most recent call last): File “”, line 1, in File “/calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/_ops.py”, line 61, in getattr op = torch._C._jit_get_operation(qualified_op_name) RuntimeError: No such operator my_ops::SpikeFunction The code I used to load the module: torch.utils.cpp_extension.load( … name=“SpikeFunction”, … sources=[“spikefunction.cpp”], … is_python_module=False, … verbose=True … ) Using /home/guozhang/.cache/torch_extensions as PyTorch extensions root… Emitting ninja build file /home/guozhang/.cache/torch_extensions/SpikeFunction/build.ninja… Building extension module SpikeFunction… Allowing ninja to set a default number of workers… (overridable by setting the environment variable MAX_JOBS=N) [1/2] c++ -MMD -MF spikefunction.o.d -DTORCH_EXTENSION_NAME=SpikeFunction -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/TH -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/THC -isystem /calc/guozhang/anaconda3/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/guozhang/multiplicative_rule/eprop/spikefunction.cpp -o spikefunction.o [2/2] c++ spikefunction.o -shared -L/calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o SpikeFunction.so Loading extension module SpikeFunction… The C++ code I wrote: #include <torch/all.h> #include <torch/python.h> class SpikeFunction : public torch::autograd::Function { public: static torch::autograd::variable_list forward( torch::autograd::AutogradContext* ctx, torch::autograd::Variable v_scaled, torch::autograd::Variable dampening_factor) { // forward calculation ctx->save_for_backward({v_scaled,dampening_factor}); return {torch::greater(v_scaled, 0.)}; } static torch::autograd::variable_list backward( torch::autograd::AutogradContext* ctx, torch::autograd::variable_list grad_output) { // backward calculation auto saved = ctx->get_saved_variables(); auto v_scaled = saved[0]; auto dampening_factor = saved[1]; auto dE_dz = grad_output[0]; auto dz_dv_scaled = torch::maximum(1 - torch::abs(v_scaled), torch::zeros_like(v_scaled)) * dampening_factor; auto dE_dv_scaled = dE_dz * dz_dv_scaled; return {dE_dv_scaled, torch::zeros_like(dampening_factor)}; } }; torch::autograd::variable_list SpikeFunction(const torch::Tensor& v_scaled, const torch::Tensor& dampening_factor) { return SpikeFunction::apply(v_scaled,dampening_factor); }
st180844
I’m experimenting with the idea of generating Torchscript IR as part of a project I’m working on, and would like to better understand how it’s compiled and optimized. I’ve followed the tutorial here: Loading a TorchScript Model in C++ — PyTorch Tutorials 1.7.1 documentation and successfully loaded a Torchscript model into C++ and ran it on some example data. I’ve also found the overview here: pytorch/OVERVIEW.md at master · pytorch/pytorch · GitHub helpful. What I’d like to know is: When I load a module in in C++, is there any compilation / optimization that can performed before the first time I call the forward method? Or do optimizations only happen with a JIT that runs after data is first passed in? If there are JIT optimizations to be done when I call the forward method, do the optimizations persist so that next time I call the forward method on the same module object, they won’t have to be performed again? If a module/model gets optimized by the JIT, do any of those optimizations change the output I’d get if I now wrote the model back out to a torchscript .pt file? Where does kernel fusion happen? Is that part of the JIT, or is that something that has to be figured out when generating the IR in python? Does Pytorch have the ability to dynamically decide whether an operation should be run on CPU or GPU based on things like data size? If there’s some resource I haven’t yet seen that could help me better understand this stuff, a pointer towards it would be appreciated. Thanks for your help.
st180845
Solved by eellison in post #3 Hi johnc1231, Not unless you invoke it. Currently the only AOT optimization is torch.jit.freeze — PyTorch master documentation. It does a limited set of optimizations on 1.7, a little more on master, and I’m working on adding more AOT stuff. I will add a C++ api equivalent by 1.8. Yes, they do…
st180846
I’m happy to break these questions up into different posts if that would be helpful
st180847
Hi johnc1231, Not unless you invoke it. Currently the only AOT optimization is torch.jit.freeze — PyTorch master documentation. It does a limited set of optimizations on 1.7, a little more on master, and I’m working on adding more AOT stuff. I will add a C++ API equivalent by 1.8. Yes, they do persist; however, if you pass in tensors of a different type we may respecialize. JIT optimizations will not affect saving your model. That happens as part of the JIT. It happens after we profile the tensors that are seen at runtime. No; one of the main design points of PyTorch is that it does not automatically decide what parts of the module to run on GPU vs CPU, for both eager & JIT. That is controlled by the user. To better understand optimizations I would suggest running a simple file like: import torch @torch.jit.script def foo(x): return x + x + x foo(torch.rand([4, 4], device='cuda')) foo(torch.rand([4, 4], device='cuda')) with PYTORCH_JIT_LOG_LEVEL='profiling_graph_executor_impl' python foo.py
st180848
Thanks for a very helpful response, and for planning to add freeze to the C++ API. I’ll experiment with the JIT log level printouts.
st180849
@eellison I did what you suggested, and I noticed the final optimized output looks like: [DUMP profiling_graph_executor_impl.cpp:620] Optimized Graph: [DUMP profiling_graph_executor_impl.cpp:620] graph(%x.1 : Tensor): [DUMP profiling_graph_executor_impl.cpp:620] %1 : int = prim::Constant[value=1]() [DUMP profiling_graph_executor_impl.cpp:620] %4 : Tensor = aten::add(%x.1, %x.1, %1) # triple_add.py:5:11 [DUMP profiling_graph_executor_impl.cpp:620] %7 : Tensor = aten::add(%4, %x.1, %1) # triple_add.py:5:11 [DUMP profiling_graph_executor_impl.cpp:620] return (%7) It still makes two separate calls to add. I would think part of kernel fusion is that it could somehow make one call to add that takes in all three inputs. Am I misinterpreting the graph, or am I misunderstanding what kernel fusion does?
st180850
Were you using CUDA or CPU ? Could you post a more complete repro? CPU fusion is in progress & targeting 1.9
st180851
Ah yeah, I was using CPU, that would explain it. I didn’t realize that GPU fusion was working but CPU fusion was in progress. Thank you.
st180852
Are there any flags / environment variables I can set to log the time spent in JIT compilation vs actually computing a result?
st180853
Hi, I use type annotations in my code. Is there any type annotation guideline for PyTorch? I want to do something like this: class MyModule(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(10, 4) def forward(self, x: torch.Tensor[torch.float, [B, 10]]) -> torch.Tensor[torch.float, [B, 4]]: return self.linear(x) I guess TorchScript or some other approach, e.g. View(?), could support this, but I couldn’t find an official or near-official document. If anyone knows about this, please help me. Thanks,
st180854
Hi @sh0416 here is a sample of code i cameby while browsing Allen Nlp 5 guides and source code. I hope this points you to the right direction. Sorry I just pasted there example codes from typing import Dict, Iterable, List from allennlp.data import DatasetReader, Instance from allennlp.data.fields import LabelField, TextField from allennlp.data.token_indexers import TokenIndexer, SingleIdTokenIndexer from allennlp.data.tokenizers import Token, Tokenizer, WhitespaceTokenizer class ClassificationTsvReader(DatasetReader): def __init__(self, lazy: bool = False, tokenizer: Tokenizer = None, token_indexers: Dict[str, TokenIndexer] = None, max_tokens: int = None): super().__init__(lazy) self.tokenizer = tokenizer or WhitespaceTokenizer() self.token_indexers = token_indexers or {'tokens': SingleIdTokenIndexer()} self.max_tokens = max_tokens def _read(self, file_path: str) -> Iterable[Instance]: with open(file_path, 'r') as lines: for line in lines: text, sentiment = line.strip().split('\t') tokens = self.tokenizer.tokenize(text) if self.max_tokens: tokens = tokens[:self.max_tokens] text_field = TextField(tokens, self.token_indexers) label_field = LabelField(sentiment) fields = {'text': text_field, 'label': label_field} yield Instance(fields) dataset_reader = ClassificationTsvReader(max_tokens=64) instances = dataset_reader.read("quick_start/data/movie_review/train.tsv") for instance in instances[:10]: print(instance) or this class SimpleClassifier(Model): def __init__(self, vocab: Vocabulary, embedder: TextFieldEmbedder, encoder: Seq2VecEncoder): super().__init__(vocab) self.embedder = embedder self.encoder = encoder num_labels = vocab.get_vocab_size("labels") self.classifier = torch.nn.Linear(encoder.get_output_dim(), num_labels) def forward(self, text: Dict[str, torch.Tensor], label: torch.Tensor) -> Dict[str, torch.Tensor]: # Shape: (batch_size, num_tokens, embedding_dim) embedded_text = self.embedder(text) # Shape: (batch_size, num_tokens) mask = util.get_text_field_mask(text) # Shape: (batch_size, encoding_dim) encoded_text = self.encoder(embedded_text, mask) # Shape: (batch_size, num_labels) logits = self.classifier(encoded_text) # Shape: (batch_size, num_labels) probs = torch.nn.functional.softmax(logits, dim=-1) # Shape: (1,) loss = torch.nn.functional.cross_entropy(logits, label) return {'loss': loss, 'probs': probs}
st180855
Thanks for the suggestion! The second one is closer to the one I want. (Check the shape of tensor)
st180856
I have a PyTorch model (a torch.jit.ScriptModule) and have successfully converted it to ONNX format. The problem is all the ONNX nodes are named with sequential numbers. E.g., in the attached image below, the circled conv’s inputs and outputs are named with numbers (visualized with Netron), which is inconvenient for subsequent analysis if the network is large. How can I add more meaningful names to these intermediate ONNX nodes? Thanks. [attached screenshot: Netron visualization of the exported graph]
st180857
Hi @Liming did you end up finding a solution to annotate nodes with more meaningful names? I would be interested in your approach to help with debugging an invalid graph that PyTorch produced
st180858
I need a means of connecting model.modules and exported jit graph. in torch 1.3 I used to be able to do import torchvision import torch from torch.onnx import utils model = torchvision.models.resnet18() tensor = torch.randn([1,3,224,224]) trace, out = torch.jit.get_trace_graph(model, tensor) graph = trace.graph() graph = utils._optimize_graph(graph, operator_export_type=torch._C._onnx.OperatorExportTypes.ONNX) for i, node in enumerate(graph.nodes()): scopename = ".".join([x.split('[')[-1] for x in [s for s in node.scopeName().split("]")] if len(x.split('[')) > 1]) print("node.scopeName() <%s> named_module:"%scopename, dict(model.named_modules())[scopename]) Which gives me a nice way to be able to inspect the latents with _hooks at the same time that I know connectivity, hence can see properties like receptive filed and so on. Since 1.4 scopeName() is empty. Looks like the pointer gets wiped out instead of stored in jit.trace I just tried in 1.8 and it is still empty, I do realize that code should change to something like from torch.onnx import TrainingMode from torch.onnx import utils graph, dic, out = utils._model_to_graph(model, tensor, training=TrainingMode.TRAINING, _retain_param_name=True) for i, node in enumerate(graph.nodes()): print(node.scopeName()) I know, I can “retain_names” and then i can grab the common prefix of the inputs that is not numeric, which will give me the scopeName of nodes with inputs. Conv yes, but not other nodes like ReLUs… I also know that instead of _model_to_graph() I could do trace = torch.jit.trace(model, tensor) graph = trace.inlined_graph which keeps the dirtier version out of which I can get the scopename, anyway, hacks. Is there a reason scopeName() gets cleared by jit? Anyone know if there is an alternative to scopeName such that I can connect a graph to the trace of the graph, node by node? Or do I have to fix the code, probably around <pytorch/torch/csrc/jit/ir/ir.h> thanks/
st180859
where is my mind, i logged this as a bug one year ago… i guess, last time i looked at this.
st180860
The answer to the previous query works for this one… torch._C.Node.scopeName() missing in pytorch 1.4 · Issue #33463 · pytorch/pytorch · GitHub
st180861
github.com/pytorch/pytorch: “A strange phenomenon in speed when using jit/torchscript. What's the difference between 1.6 and 1.7 in jit/torchscript?” (issue opened Feb 1, 2021 by ccilery, label: oncall: jit): “When I run code in PyTorch 1.6 and PyTorch 1.7.1, I found a strange phenomenon in speed. The speed is stable...”
st180862
pytorch version : 1.7.1 os : win10 64 Trying to export the video classification model with the following script import os import numpy as np import torch import torchvision model = torchvision.models.video.r3d_18(pretrained=True, progress=True) model.eval() img = torch.zeros((16, 3, 112, 112)) _ = model(img) # dry run traced_script_module = torch.jit.trace(model, img) traced_script_module.save("video_r3d_18.pt") The script gives me the error message RuntimeError: Expected 5-dimensional input for 5-dimensional weight [64, 3, 3, 7, 7], but got 4-dimensional input of size [3, 16, 112, 112] instead After I change it to [64, 3, 3, 7, 7], I can export the model, but the training code and the docs both use [3,16,112,112]; this is weird, why does this happen? Thanks
st180863
The error is raised in the “dry run” section, not the tracing, since you are passing a 4-dimensional input, while 3D models expect a 5-dimensional input. You could unsqueeze the “depth” dimension and the code should work: img = torch.zeros((16, 3, 1, 112, 112)) _ = model(img) # dry run
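For reference, a small sketch of the full export flow using the (batch, channels, frames, height, width) layout that the torchvision video models expect; treating the 16 in the original snippet as the frame count (rather than the batch size) is an assumption about the intended input:

import torch
import torchvision

model = torchvision.models.video.r3d_18(pretrained=True, progress=True)
model.eval()

# One clip: batch of 1, 3 channels, 16 frames, 112x112 spatial size.
img = torch.zeros(1, 3, 16, 112, 112)
_ = model(img)  # dry run
traced = torch.jit.trace(model, img)
traced.save("video_r3d_18.pt")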
st180864
The torch.jit documentation states that one of the limitations of tracing is that calls which differ based on whether the model is in train or eval mode will only ever use whatever mode the module was in at trace time. Specifically: In the returned ScriptModule, operations that have different behaviors in training and eval modes will always behave as if it is in the mode it was in during tracing, no matter which mode the ScriptModule is in. I know of some commonly used layer types have train / eval behavioral differences; BatchNorm2d and Dropout come to mind. Does this mean that tracing is A Bad Idea for modules making use of these two layers (and others like it?).
st180865
Solved by ptrblck in post #2 If might be a bad idea, if you would like to switch the behavior of the traced model. In that case you could try to script your model.
st180866
It might be a bad idea if you would like to switch the behavior of the traced model. In that case you could try to script your model.
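To illustrate the difference (a minimal sketch with a made-up model):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
model.eval()
x = torch.randn(2, 4)

traced = torch.jit.trace(model, x)    # dropout is baked in as a no-op (eval at trace time)
scripted = torch.jit.script(model)

traced.train()    # does not bring dropout back in the traced graph
scripted.train()  # dropout is active again, as in eager mode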
st180867
I have been puzzled by the inner workings of torch.jit.script and how it compiles the model code. My issue is to use a Python nn.Module class that calls C++/cuda functions during forward. For detailed overview I have created this Github repository: GitHub - maksad/torchscript-debug 1. To clarify here is a simple code: import torch.nn as nn import torch from carafe import CARAFEPack class MyClass(nn.Module): def __init__(self): super(MyClass, self).__init__() self.conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2) self.upsampling = CARAFEPack(channels=1, scale_factor=2) def forward(self, x): x = self.conv(x) x = self.upsampling(x) return x and we can compile it like: with torch.no_grad(): my_nn = MyClass() my_nn = torch.jit.script(my_nn) The CARAFEPack is simply nn.Module class, that calls carafe_ext.forward, which is a compiled C++/cuda code. carafe_ext is compiled to .so during installation with a setuptool. My expectation would be torch.jit.script wouldn’t have any problem to compile it, but in fact it raises RuntimeError: RuntimeError: Python builtin <built-in method forward of PyCapsule object at 0x7fa48f8c4ed0> is currently not supported in Torchscript: File "/home/ubuntu/.virtualenvs/temp/lib/python3.6/site-packages/carafe/carafe.py", line 109 rmasks = masks.new_zeros(masks.size()) if features.is_cuda: carafe_ext.forward(features, rfeatures, masks, rmasks, self.up_kernel, ~~~~~~~~~~~~~~~~~~ <--- HERE self.up_group, self.scale_factor, routput, output) else: 'CARAFEPack.forward_carafe' is being compiled since it was called from 'CARAFEPack.feature_reassemble' File "/home/ubuntu/.virtualenvs/temp/lib/python3.6/site-packages/carafe/carafe.py", line 92 def feature_reassemble(self, x: Tensor, mask: Tensor): x = self.forward_carafe(x, mask, self.up_kernel, self.up_group, self.scale_factor) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE return x 'CARAFEPack.feature_reassemble' is being compiled since it was called from 'CARAFEPack.forward' File "/home/ubuntu/.virtualenvs/temp/lib/python3.6/site-packages/carafe/carafe.py", line 121 mask = self.kernel_normalizer(mask) x = self.feature_reassemble(x, mask) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE return x Am I missing specifics of torch.jit.script?
st180868
Hi, take a look at Extending TorchScript with Custom C++ Operators — PyTorch Tutorials 1.7.1 documentation 42, that should answer your question.
st180869
Hello all! I’m trying to convert a pytorch model(https://github.com/ZTao-z/resnet-ssd#use-a-pre-trained-ssd-network-for-detection 8) to onnx. This is the code I’m trying to use: from ssd import build_ssd import os import sys import time import torch import torch.onnx import torch.nn as nn import torch.backends.cudnn as cudnn import numpy as np import argparse torch.set_default_tensor_type('torch.cuda.FloatTensor') if __name__ == '__main__': ssd_net = build_ssd('train', 300, 21) if args.resume: print('Resuming training, loading {}...'.format(args.resume)) ssd_net.load_weights(args.resume) ssd_net = ssd_net.cuda().eval() dummy_input = torch.randn(1, 3, 300, 300, device='cuda', requires_grad=True) for param in ssd_net.parameters(): param.requires_grad = False # for name, param in ssd_net.named_parameters(): # print(name, param) # break torch.onnx.export(ssd_net, dummy_input, "onnx_model_name.onnx", verbose=True, export_params=True) This is the error: Resuming training, loading ssd300_mAP_77.43_v2.pth... Loading weights into state dict... Finished! Traceback (most recent call last): File "onnx_conversion.py", line 38, in <module> torch.onnx.export(ssd_net, dummy_input, "onnx_model_name.onnx", verbose=True, export_params=True) File "/home/user_name/venv/lib/python3.6/site-packages/torch/onnx/__init__.py", line 208, in export custom_opsets, enable_onnx_checker, use_external_data_format) File "/home/user_name/venv/lib/python3.6/site-packages/torch/onnx/utils.py", line 92, in export use_external_data_format=use_external_data_format) File "/home/user_name/venv/lib/python3.6/site-packages/torch/onnx/utils.py", line 530, in _export fixed_batch_size=fixed_batch_size) File "/home/user_name/venv/lib/python3.6/site-packages/torch/onnx/utils.py", line 366, in _model_to_graph graph, torch_out = _trace_and_get_graph_from_model(model, args) File "/home/user_name/venv/lib/python3.6/site-packages/torch/onnx/utils.py", line 319, in _trace_and_get_graph_from_model torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True) File "/home/user_name/venv/lib/python3.6/site-packages/torch/jit/__init__.py", line 338, in _get_trace_graph outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs) File "/home/user_name/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/user_name/venv/lib/python3.6/site-packages/torch/jit/__init__.py", line 426, in forward self._force_outplace, RuntimeError: output 1 ( 0.0133 0.0133 0.0700 0.0700 0.0133 0.0133 0.1025 0.1025 0.0133 0.0133 0.0990 0.0495 0.0133 0.0133 0.0495 0.0990 0.0400 0.0133 0.0700 0.0700 ------------------ 8723 more lines ------------------ 0.5000 0.5000 0.8700 0.8700 0.5000 0.5000 0.9558 0.9558 0.5000 0.5000 1.0000 0.6152 0.5000 0.5000 0.6152 1.0000 [ CUDAFloatType{8732,4} ]) of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer. There is no problem with invoking the pytorch model & loading state_dict. The problem comes with conversion to onnx model. Any help is greatly appreciated! Thanks!
st180870
Hi, I recently tried to load a trained PyTorch model which does “image style transfer” into a C++ model. The style transfer needs the gradient of the loss w.r.t. the input tensor, but it seems that the jit::script::Module can’t pass the gradients to the input tensor. So I’d like to know: is there any way to convert a jit::script::Module to an nn::Module conveniently? Many thanks!
st180871
Hi, there is no way to convert a jit:script::Module to an nn::Module. However, we should work on making it so that jit::script::Module can be used interchangeably. Can you comment on C++ API: Training and inference of torchscript modules on multiple GPU · Issue #46460 · pytorch/pytorch · GitHub 6 with your specific use case ? Thanks.
st180872
Is there a mechanism to define a custom JIT operator which returns variable number of outputs? (Tuple of any types) I tried defining ns::customOp(...) -> (...), but this doesn’t seem to work. When I save a model with a custom operator which returns 2 outputs Tuple[Tensor, Tensor] and load the saved model, I see error saying the number of outputs mismatch . Return value was annotated as having type Tuple[Tensor, Tensor] but is actually of type Tuple[]: My current operator registration is based off of some of the prim op registrations RegisterOperators r({Operator( "ns::customOp(…) -> (…)", [](const Node* node) -> Operation { return [node](Stack& stack) { myLogic(stack, node); return 0; }; }, AliasAnalysisKind::FROM_SCHEMA)}); How do we define an operator which is dynamic and can take variable number of inputs and return variable number of outputs? [Similar to This discussion 1]
st180873
Are there instructions for properly including CUDA kernels in custom TorchScript ops built using CMake? I haven’t been able to find any, and I’ve hit an odd nvcc-related compilation error when I attempt to do so (See the log file 2 and the repository which I am using to test these concepts) For my use-case, I need to serialize a model into TorchScript which references custom CUDA ops. Said model must also be executable from C++/Rust. The instructions for compiling a C++ operator for TorchScript here work well for pure C++ operators, and the op is both traceable in python and useable from C++ and Rust. However, I must also be able to include C++ operators that execute CUDA kernels (in fact, I’m actually porting a neural net that used the JIT method for compiling these - but in C++/Rust I need to link against a DLL or shared object so I cannot use this afaik). Thanks!
st180874
I want to take a deeper look at what my model is doing. Where can I find documentation or implementation on how different operators work? I see a lot of inplace operators (like aten::copy_) in my model after converting to TorchScript.
st180875
Hi, I am trying to load a tensor from PyTorch into C++, following the suggestions in this GitHub issue. However, it looks like torch::jit::script::Module no longer has get_attribute, and I am having a tough time understanding how to get named parameters using the API. Can you please suggest how to get named_parameters with the current API?
st180876
Hey, sorry for the confusion here. The TorchScript C++ API is still experimental and we made some changes to the API; we will update the docs in the next release. To answer your question, we are trying to mimic the module API in Python, so you can get the named parameters via the named_parameters() call in C++; see a list of methods here
st180877
Thank you for your response. i was able to get the values using attr but I still could not understand how to extract values from slot_iterator_impl. If its not too much trouble, can I please request sample code to extract for the same example (reproduced below)? It can help me better understand the codebase! Thank you again! #include <torch/script.h> #include <iostream> #include <memory> int main(int argc, const char *argv[]) { torch::jit::script::Module container = torch::jit::load("container.pt"); // Load values by name torch::Tensor a = container.get_attribute("a").toTensor(); std::cout << a << "\n"; torch::Tensor b = container.get_attribute("b").toTensor(); std::cout << b << "\n"; std::string c = container.get_attribute("c").toStringRef(); std::cout << c << "\n"; int64_t d = container.get_attribute("d").toInt(); std::cout << d << "\n"; return 0; }
st180878
If you are just trying to move values between Python and C++, the API in this comment 33 is now the blessed way to do that. But to answer your question, for a model like import torch class Model(torch.nn.Module): def __init__(self): super().__init__() self.w1 = torch.nn.Parameter(torch.ones(2, 2)) self.w2 = torch.nn.Parameter(torch.ones(2, 2)) def forward(self): return self.w1 + self.w2 m = torch.jit.script(Model()) torch.jit.save(m, 'model.pt') You can iterate over the parameters like: #include <torch/script.h> int main() { auto m = torch::jit::load("model.pt"); for (torch::jit::script::Named<at::Tensor> p : m.named_parameters(/*recurse=*/true)) { std::cout << p.name << ": " << p.value << "\n"; } return 0; }
st180879
I tried to load it using the mentioned link. I’m getting an error on the line torch::IValue x = torch::pickle_load(f); I tried using the libtorch nightly as well as libtorch 1.7.1. error: ‘pickle_load’ is not a member of ‘torch’
st180880
Hello everyone, hope you are all having a great day. I was wondering if its possible to convert the following module into torch script: class PriorBox(torch.nn.Module): def __init__(self): super(PriorBox, self).__init__() # @torch.jit.script def forward(self, minSizes, steps, clip, image_size): anchors = [] feature_maps = [[ceil(image_size[0]/step), ceil(image_size[1]/step)] for step in steps] for k, f in enumerate(feature_maps): min_sizes = minSizes[k] for i, j in product(range(f[0]), range(f[1])): for min_size in min_sizes: s_kx = min_size / image_size[1] s_ky = min_size / image_size[0] dense_cx = [x * steps[k] / image_size[1] for x in [j + 0.5]] dense_cy = [y * steps[k] / image_size[0] for y in [i + 0.5]] for cy, cx in product(dense_cy, dense_cx): anchors += [cx, cy, s_kx, s_ky] # back to torch land output = torch.Tensor(anchors).view(-1, 4) if clip: output.clamp_(max=1, min=0) return output I have reimplemented this in libtorch but its slow as snail! here it is in case its needed: struct PriorBox : torch::nn::Module { PriorBox(const std::vector<std::pair<torch::Tensor, torch::Tensor>>& min_sizes, const std::vector<int>& steps, const bool& clip, const std::pair<int, int>& image_size, const std::string& phase = "train") { this->min_sizes = min_sizes; this->steps = steps; this->clip = clip; this->image_size = image_size; this->phase = phase; this->name = name; for (auto& step : this->steps) { auto height = torch::tensor(float(this->image_size.first) / step); auto width = torch::tensor(float(this->image_size.second) / step); this->feature_maps.emplace_back(std::make_pair(torch::ceil(height), torch::ceil(width))); } } torch::Tensor forward() { std::vector<torch::Tensor> anchors; int i = -1; for (auto& fmap : this->feature_maps) { auto min_sizes = this->min_sizes[++i]; auto result = torch::cartesian_prod({ torch::arange(c10::Scalar(0), c10::Scalar(fmap.first.item().toInt()), c10::Scalar(1)), torch::arange(c10::Scalar(0), c10::Scalar(fmap.second.item().toInt()), c10::Scalar(1)) }); for (int idx = 0; idx <= result.sizes()[0] - 1; idx++) { //takes around 0.006 ms auto i_ = result[idx][0]; auto j_ = result[idx][1]; //takes around 0.20 ms to 0.30 ms for (auto& min_size : { min_sizes.first, min_sizes.second }) { auto s_kx = min_size / float(this->image_size.second); auto s_ky = min_size / float(this->image_size.first); // takes around 0.037 ms torch::Tensor dense_cx = (j_ + 0.5) * this->steps[i] / this->image_size.second; torch::Tensor dense_cy = (i_ + 0.5) * this->steps[i] / this->image_size.first; //takes around 0.02ms auto result_cy_cx = torch::cartesian_prod({ dense_cy.unsqueeze(0), dense_cx.unsqueeze(0) }); //this takes around 0.010ms for (int l = 0; l <= result_cy_cx.sizes()[0] - 1; l++) { auto cy = result_cy_cx[l][0].unsqueeze(0); auto cx = result_cy_cx[l][1].unsqueeze(0); anchors.emplace_back(torch::cat({ cx, cy, s_kx, s_ky })); } } } } //takes around 5ms! auto output = torch::stack(anchors).view({ -1,4 }); //takes around 0 ms! if (this->clip) output.clamp_(0, 1); return output; } std::vector<std::pair<torch::Tensor, torch::Tensor>> min_sizes; std::vector<int> steps; bool clip; std::pair<int, int> image_size; std::string phase; std::string name; std::vector<std::pair<torch::Tensor, torch::Tensor>> feature_maps; }; So I thought maybe converting this into a Torchscript and loading it in C++ would be a better idea. Since we are dealing with loops, etc we cant simply jit trace the module. So I guess I’m stuck with scripting. 
When I tried to do scripting I kept getting different errors, such as range cannot be used as a value:

RuntimeError: range cannot be used as a value:
  File "P:\ligen\layers\functions\prior_box.py", line 19
        for k, f in enumerate(feature_maps):
            min_sizes = minSizes[k]
            for i, j in product(range(int(f[0])), range(int(f[1]))):
                                ~~~~~~~~~~~~~~ <--- HERE
                for min_size in min_sizes:
                    s_kx = min_size / image_size[1]

So my question is, considering this, is it portable to TorchScript? If so, what am I missing here? How should I go about this? If not, what are my other options? Thanks a lot in advance
st180881
In general, scripting requires you to modify your input program so that it only uses the features that torch.jit.script supports. Take a look at the TorchScript Language Reference for more information.
st180882
Thanks, I appreciate your kind help, but I already knew about that. The issue is that I'm not sure how to go about it: how can I substitute the range call? I can't find it in the supported list, and I'm at a complete loss for ideas on how to proceed.
st180883
TorchScript allows range. I think it's just your product(range, range) that's the problem. Maybe just nest the for loops.
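To make that concrete, here is a minimal sketch of what a scriptable version of that loop structure could look like. This is only an illustration, not the original repository's code; the argument types and the exact anchor ordering are assumptions you would want to verify against the loop version.

import math
from typing import List

import torch


class ScriptablePriorBox(torch.nn.Module):
    def forward(self, min_sizes: List[List[float]], steps: List[int],
                clip: bool, image_size: List[int]) -> torch.Tensor:
        anchors: List[float] = []
        for k in range(len(steps)):
            f_h = int(math.ceil(image_size[0] / steps[k]))
            f_w = int(math.ceil(image_size[1] / steps[k]))
            # nested loops instead of product(range(...), range(...))
            for i in range(f_h):
                for j in range(f_w):
                    for min_size in min_sizes[k]:
                        anchors.append((j + 0.5) * steps[k] / image_size[1])  # cx
                        anchors.append((i + 0.5) * steps[k] / image_size[0])  # cy
                        anchors.append(min_size / image_size[1])              # s_kx
                        anchors.append(min_size / image_size[0])              # s_ky
        output = torch.tensor(anchors).view(-1, 4)
        if clip:
            output.clamp_(max=1.0, min=0.0)
        return output


prior_box = torch.jit.script(ScriptablePriorBox())

Note that torch.tensor (lowercase) is used instead of torch.Tensor(anchors), since the latter is not scriptable.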
st180884
Thanks a lot. I did try range outside of the product, and it failed with the same error nevertheless. I also substituted the product with PyTorch's built-in cartesian_prod, if I remember correctly, but that doesn't seem to work either.
st180885
TorchScript code won't be much faster; TorchScript doesn't speed up your code by a factor of 2. If you reimplemented it in libtorch and the code is slow, it will also be slow with TorchScript. Finally, I think you could avoid some of the list comprehensions, like the ones for dense_cx and dense_cy.
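To illustrate that last point: each dense_cx/dense_cy comprehension only ever iterates over a single element, and the anchors for one feature-map level can be built in bulk with tensor ops instead of per-element loops. A rough Python sketch follows; the function name and signature are made up, and the anchor ordering (cell-major, size-minor) is my attempt to match the loop version, so it should be double-checked.

from typing import List

import torch


def level_priors(f_h: int, f_w: int, step: int, min_sizes: List[float],
                 image_h: int, image_w: int) -> torch.Tensor:
    # one (cy, cx) centre per feature-map cell, computed without inner loops
    ys, xs = torch.meshgrid(torch.arange(f_h, dtype=torch.float32),
                            torch.arange(f_w, dtype=torch.float32))
    cx = (xs.reshape(-1) + 0.5) * step / image_w
    cy = (ys.reshape(-1) + 0.5) * step / image_h
    per_size = []
    for m in min_sizes:  # typically only 2-3 sizes per level
        s_kx = torch.full_like(cx, m / image_w)
        s_ky = torch.full_like(cy, m / image_h)
        per_size.append(torch.stack([cx, cy, s_kx, s_ky], dim=1))
    # [cells, sizes, 4] -> [cells * sizes, 4]: for each cell, for each min_size,
    # which mirrors the nested-loop order
    return torch.stack(per_size, dim=1).reshape(-1, 4)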
st180886
Thanks a lot. I already stripped out much of the libtorch machinery and went back to plain old for loops, which got me around a 3x speed-up! Having that part jit-scripted should, in theory, still help performance, as I believe (and expect) the instructions are optimized by the engine as well; at least that is what I hoped for. I guess I'll be able to squeeze out a bit more performance (making it around 4x faster), but that is still nowhere near the performance I get in Python! I'm not sure if this is because something is wrong with libtorch's cartesian_prod implementation that makes it take so much longer than its counterpart (Python's product), or something else I'm missing completely. I know the implementation posted above is not efficient by any means, but I really didn't expect it to be this slow! To give you an impression, the Python code runs in around 20 to 30 ms while this takes around 350 to 450 ms! (After my last changes it now runs at 70 to 100 ms, but that is still way slower than Python.)
st180887
Hello, I am using a PyTorch model (GitHub - MhLiao/MaskTextSpotterV3: The code of "Mask TextSpotter v3: Segmentation Proposal Network for Robust Scene Text Spotting") and I want to export it to ONNX. I already asked on the ONNX GitHub forum, and they suggested asking the question here.
The issue is that we have some post-processing between some layers. For example, we perform post-processing on the outputs of the RPN boxes before feeding the proposals to the heads. I then get a lot of warnings. To give one example, we convert tensors to NumPy arrays to do image computation with OpenCV, which triggers: TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect. That case can be found in the class SEGPostProcessor(torch.nn.Module) in the repository cited above. As another example, sometimes we have to iterate over a tensor, which triggers RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect.
The conversion to ONNX therefore produces many warnings, and eventually the ONNX runtime fails, unsurprisingly. Here's the runtime failure log:

2021-01-19 13:41:54.381600130 [E:onnxruntime:, sequential_executor.cc:333 Execute] Non-zero status code returned while running Gather node. Name:'' Status Message: indices element out of data bounds, idx=94375546868464 must be within the inclusive range [-1,0]
Traceback (most recent call last):
  File "tools_onnx/test_net_onnx.py", line 263, in <module>
    main()
  File "tools_onnx/test_net_onnx.py", line 233, in main
    ort_outs = ort_session.run(None, ort_inputs)
  File "/home/ubuntu/anaconda3/envs/masktextspotter37/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 124, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'' Status Message: indices element out of data bounds, idx=94375546868464 must be within the inclusive range [-1,0]

One idea was to split the model into several sub-models and do the post-processing between them, but that is very tedious, and I am convinced there is a better solution. If you have any information about the right way to deal with these kinds of steps, it would be really helpful. Thanks in advance.
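For context, my current understanding is that anything leaving tensor land (NumPy, OpenCV, Python values from .item()) is invisible to both the tracer and the ONNX graph, so it either has to be rewritten with pure tensor operations (possibly as a scripted function, so data-dependent control flow is kept) or left outside the exported graph, which is basically the sub-model idea above. A toy sketch of the first option, with made-up names rather than the repository's actual post-processing:

import torch


@torch.jit.script
def keep_topk(scores: torch.Tensor, boxes: torch.Tensor, k: int):
    # data-dependent selection done with tensor ops only: no NumPy round-trip,
    # so tracing / ONNX export can see (and keep) this logic
    k = min(k, int(scores.numel()))
    vals, idx = scores.topk(k)
    return vals, boxes[idx]

Whether the resulting scripted graph then exports cleanly still depends on the ops involved and on the opset version.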
st180888
I trained a model for an experiment and tested it successfully in PyTorch, but the following error occurs when converting and saving it through torch.jit.trace.

Full error message:

Traceback (most recent call last):
  File "D:/Landmark/pytorch/convert2C_merge_model.py", line 190, in <module>
    traced_script_module.save("./model/output.pt")
  File "C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\__init__.py", line 1205, in save
    return self._c.save(*args, **kwargs)
RuntimeError: could not export python function call Scatter. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\scatter_gather.py(13): scatter_map
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\scatter_gather.py(15): scatter_map
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\scatter_gather.py(28): scatter
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\scatter_gather.py(35): scatter_kwargs
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\data_parallel.py(159): scatter
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\data_parallel.py(148): forward
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py(481): _slow_forward
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py(491): __call__
D:/Landmark/pytorch/convert2C_merge_model.py(74): forward
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py(481): _slow_forward
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py(491): __call__
C:\Users\kcw\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\__init__.py(688): trace
D:/Landmark/pytorch/convert2C_merge_model.py(187): <module>

Please help!
st180889
@xsacha seemed to have run into a similar message (also about Scatter) and debugged it to be accidentally converting a DDP model.
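The scatter_gather.py / data_parallel.py frames in the traceback above do suggest the object being traced is still wrapped in a parallel wrapper. If that is the case, a sketch of the usual workaround is to trace the underlying module instead; model and example_input below are placeholders, not names from the original script.

import torch

# unwrap nn.DataParallel / DistributedDataParallel before tracing
net = model.module if hasattr(model, "module") else model
net = net.eval()

traced = torch.jit.trace(net, example_input)
traced.save("./model/output.pt")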
st180890
Hi, I'm trying to export a model via ONNX. The export is failing due to an operator not being supported: RuntimeError: Exporting the operator __derive_index to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator. Implementing the missing operator is beyond my means, so I want to rewrite this part of the model. I'm now trying to figure out which line of my model is causing the issue; the ONNX traceback has no information on this at all. I tried putting print statements in between, as recommended elsewhere, but they all get printed until the end of the forward pass, and then the error appears. Is there any way to find the problematic line? I can't make sense of the operator name itself.
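For context, my current understanding (which may well be wrong) is that __derive_index, together with __range_length, is what TorchScript emits for a for ... in range(...) loop whose bound is only known at runtime, so loops like that in the model or its pre/post-processing are the first thing I plan to check. Dumping the scripted graph before exporting at least shows which source lines the nodes come from; model and example_input below are placeholders:

import torch

scripted = torch.jit.script(model)
print(scripted.graph)  # search the output for __derive_index / __range_length;
                       # the trailing "# file.py:line" comments point at the source

# torch.onnx.export(scripted, example_input, "model.onnx",
#                   opset_version=11, verbose=True)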
st180891
My network contains:

# A memory-efficient implementation of the Swish function
class SwishImplementation(torch.autograd.Function):
    @staticmethod
    def forward(ctx, i):
        result = i * torch.sigmoid(i)
        ctx.save_for_backward(i)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        i = ctx.saved_tensors[0]
        sigmoid_i = torch.sigmoid(i)
        return grad_output * (sigmoid_i * (1 + i * (1 - sigmoid_i)))


class MemoryEfficientSwish(nn.Module):
    def forward(self, x):
        return SwishImplementation.apply(x)

After jit.trace, the graph contains PythonOp nodes. I know that the custom autograd Function introduces the PythonOp, but why and how, exactly?
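From what I can tell so far, the tracer cannot look inside a custom torch.autograd.Function, so the call to SwishImplementation.apply is recorded as an opaque PythonOp (it seems to be the Function itself, not the gradient, that causes it). If the trace is only needed for inference or export, a workaround sketch is to swap in a plain tensor-op Swish first; swap_swish below is a made-up helper, not part of any library.

import torch
import torch.nn as nn


class Swish(nn.Module):
    def forward(self, x):
        # same forward maths as SwishImplementation, but no custom autograd.Function,
        # so tracing records ordinary aten::sigmoid / aten::mul nodes
        return x * torch.sigmoid(x)


def swap_swish(module: nn.Module) -> None:
    # recursively replace every MemoryEfficientSwish (defined above) with Swish
    for name, child in module.named_children():
        if isinstance(child, MemoryEfficientSwish):
            setattr(module, name, Swish())
        else:
            swap_swish(child)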
st180892
When converting my model to TorchScript, I am using the decorator @torch.jit.export to mark some functions besides forward() to be exported by torch.jit.script(). When loading the TorchScript model in Python, I can indeed access these functions. My question is regarding C++: since these functions are not included in the standard module interface, I need them to appear in a header file somewhere. Is such a header generated, and if not, how does one call the exported functions from C++? Thank you!
st180893
You can see in torch/csrc/jit/api/module.h that torch::jit::Module::forward is implemented like so:

IValue forward(std::vector<IValue> inputs) {
  return get_method("forward")(std::move(inputs));
}

You should be able to call exported functions like so:

auto mod = torch::jit::load(...);
auto exported_method = mod.get_method(<exported_method_name>);
auto result = exported_method(...);
st180894
Thanks. Yes, I just found an earlier question that was asked on this (it seems to be missing from docs at the moment). My function does not take any arguments, but it looks like you still need to pass in an empty vector of IValues. For future reference, here's what I ended up doing:

std::vector<torch::jit::IValue> stack;
module.get_method("my_function_name")(stack);
st180895
Hey, I'm trying to check my TorchScript model from the Python REPL. When I load a JIT model with torch.jit.load(), that model doesn't seem to have a get_method function. If I'm using Python, what is the alternative to get_method?
st180896
I seem to have answered my own question. I’m supposed to use model.__getattr__(func_name)(args)
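For what it's worth, exported methods on a module returned by torch.jit.load also seem to be reachable as plain attributes, so the explicit __getattr__ may not even be needed; "model.pt" and my_function_name are placeholders here:

import torch

loaded = torch.jit.load("model.pt")
out = loaded.my_function_name()           # exported methods appear as attributes
# equivalent to: loaded.__getattr__("my_function_name")()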
st180897
I am facing a weird issue. I am doing the following:

1. Freeze a torchvision model on a CPU host, which internalizes parameters/weight tensors as constants.
2. Save this frozen module.
3. Try to load and run the model on a GPU host.

I see that the weight tensor constants are not getting loaded onto the GPU despite invoking model.to("cuda:0") on the loaded model. How do we ensure that the tensor constants of the saved model get loaded onto the GPU?

Error stack trace:

Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/torchvision/models/segmentation/fcn/___torch_mangle_17301.py", line 9, in forward
    input_shape = torch.slice(torch.size(x), -2, 9223372036854775807, 1)
    features = torch.dict()
    x0 = torch.conv2d(x, CONSTANTS.c0, None, [2, 2], [3, 3], [1, 1], 1)
         ~~~~~~~~~~~~ <--- HERE
    x1 = torch.batch_norm(x0, CONSTANTS.c1, CONSTANTS.c2, CONSTANTS.c3, CONSTANTS.c4, False, 0.10000000000000001, 1.0000000000000001e-05, True)
    x2 = torch.relu_(x1)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
st180898
Solved by tom in post #2
st180899
torch.jit.load(...., map_location="cuda") will work on master.

scripted_module = torch.jit.script(torch.nn.Linear(2, 3).eval())
frozen_module = torch.jit.freeze(scripted_module)
assert len(list(frozen_module.named_parameters())) == 0
print(frozen_module.code)
frozen_module.save('/tmp/x.pt')

and then

torch.jit.load('/tmp/x.pt').graph
Out[16]:
graph(%self : __torch__.torch.nn.modules.linear.___torch_mangle_0.Linear,
      %input.1 : Tensor):
  %8 : Tensor = prim::Constant[value=-0.6421  0.1414 -0.4491 -0.5532 -0.0829  0.0914 [ CPUFloatType{2,3} ]]() # :0:0
  %6 : Tensor = prim::Constant[value= 0.01 * -9.0403 -27.0162   3.8648 [ CPUFloatType{3} ]]() # :0:0
  %4 : int = prim::Constant[value=2]() # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1663:22
  %9 : int = prim::Constant[value=1]() # :0:0
  %3 : int = aten::dim(%input.1) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1663:7
  %5 : bool = aten::eq(%3, %4) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1663:7
  %ret : Tensor = prim::If(%5) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1663:4
    block0():
      %ret0.1 : Tensor = aten::addmm(%6, %input.1, %8, %9, %9) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1665:14
      -> (%ret0.1)
    block1():
      %output.1 : Tensor = aten::matmul(%input.1, %8) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1667:17
      %output0.1 : Tensor = aten::add_(%output.1, %6, %9) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1669:12
      -> (%output0.1)
  return (%ret)

In [17]: torch.jit.load('/tmp/x.pt', map_location="cuda").graph
Out[17]:
graph(%self : __torch__.torch.nn.modules.linear.___torch_mangle_0.Linear,
      %input.1 : Tensor):
  %8 : Tensor = prim::Constant[value=-0.6421  0.1414 -0.4491 -0.5532 -0.0829  0.0914 [ CUDAFloatType{2,3} ]]() # :0:0
  %6 : Tensor = prim::Constant[value= 0.01 * -9.0403 -27.0162   3.8648 [ CUDAFloatType{3} ]]() # :0:0
  %4 : int = prim::Constant[value=2]() # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1663:22
  %9 : int = prim::Constant[value=1]() # :0:0
  %3 : int = aten::dim(%input.1) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1663:7
  %5 : bool = aten::eq(%3, %4) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1663:7
  %ret : Tensor = prim::If(%5) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1663:4
    block0():
      %ret0.1 : Tensor = aten::addmm(%6, %input.1, %8, %9, %9) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1665:14
      -> (%ret0.1)
    block1():
      %output.1 : Tensor = aten::matmul(%input.1, %8) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1667:17
      %output0.1 : Tensor = aten::add_(%output.1, %6, %9) # /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py:1669:12
      -> (%output0.1)
  return (%ret)

Note that the constants move from CPU to CUDA in the second example.

Best regards

Thomas