id | text |
---|---|
st181000 | Hi there, I am trying to migrate the Faster R-CNN implementation of torchvision to C++. To do that, I simply run these lines of code to obtain a '.pt' file.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
script_model = torch.jit.script(model)
This part works as intended, but afterwards, when I try to load the produced model in C++, I get the "Microsoft C++ exception: torch::jit::ErrorReport at memory location 0x0000…" error. Any ideas about what I should do? |
st181001 | Hi, here is the whole message:
Unhandled exception at 0x00007FF9013A3C58 in TorchDeneme2.exe: Microsoft C++ exception: torch::jit::ErrorReport at memory location 0x000000A7B4199900. |
st181002 | Thanks for the update. I’m not familiar enough with Windows, unfortunately, and the error message doesn’t give a lot of information, so we might need to wait for a Windows expert. |
st181003 | Hi,
I have a simple PyTorch model that has a few layers, and I want to trace its graph and export it as a set of nodes and edges (as a traditional graph). Furthermore, I want to export the hyper-parameters of each layer (such as a Conv2D layer's number of input channels and output channels) as well as the parameters (such as a Conv2D layer's kernels and bias).
Is it possible to do this using torch.jit.trace (or torch.jit._get_trace_graph) ?
Is there any other way I can trace the graph other than torch.jit (for more complex models such as ResNet or DenseNet) ?
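For concreteness, a minimal sketch of the kind of traversal I have in mind (a stock torchvision model, purely for illustration):
import torch
import torchvision
model = torchvision.models.resnet18(pretrained=False).eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
# Every node in the traced graph is an op; its inputs/outputs give the edges.
for node in traced.graph.nodes():
    print(node.kind(),
          [i.debugName() for i in node.inputs()],
          [o.debugName() for o in node.outputs()])
# Learned parameters (kernels, biases) stay accessible by name on the traced module.
for name, p in traced.named_parameters():
    print(name, tuple(p.shape))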
Thank you ! |
st181004 | Hi,
I have a preprocess class that has two methods other than forward. When I try to script the model, this RuntimeError appears:
Traceback (most recent call last):
File "/projects/src/preprocess_script.py", line 99, in <module>
scripted_prep = torch.jit.script(prep)
File "/home/.conda/envs/GC/lib/python3.6/site-packages/torch/jit/__init__.py", line 1261, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File "/home/.conda/envs/GC/lib/python3.6/site-packages/torch/jit/_recursive.py", line 305, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/.conda/envs/GC/lib/python3.6/site-packages/torch/jit/_recursive.py", line 361, in create_script_module_impl
create_methods_from_stubs(concrete_type, stubs)
File "/home/.conda/envs/GC/lib/python3.6/site-packages/torch/jit/_recursive.py", line 279, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
File "/home/.conda/envs/GC/lib/python3.6/site-packages/torch/jit/_recursive.py", line 583, in compile_unbound_method
stub = make_stub(fn)
File "/home/.conda/envs/GC/lib/python3.6/site-packages/torch/jit/_recursive.py", line 34, in make_stub
ast = torch.jit.get_jit_def(func, self_name="RecursiveScriptModule")
File "/home/.conda/envs/GC/lib/python3.6/site-packages/torch/jit/frontend.py", line 171, in get_jit_def
type_line = torch.jit.annotations.get_type_line(source)
File "/home/.conda/envs/GC/lib/python3.6/site-packages/torch/jit/annotations.py", line 202, in get_type_line
raise RuntimeError("Return type line '# type: (...) -> ...' not found on multiline "
RuntimeError: Return type line '# type: (...) -> ...' not found on multiline type annotation
(See PEP 484 https://www.python.org/dev/peps/pep-0484/#suggested-syntax-for-python-2-7-and-straddling-code)
Process finished with exit code 1
I found out that one of the methods causes this error, but even after I commented out the whole method body and left just:
def from_tensors(self):
pass
I still get the same error.
this is my forward code:
def forward(self, im_in):
images = im_in.permute(2, 0, 1).to('cpu')
images = self.normalizer(images)
images = self.from_tensors([images], 3) #this line raises error
return images |
st181005 | Solved by aryan_461 in post #3
thanks for responding.
I found that there is '# type' in one of my comments, and even though I commented out the whole thing (code and comments), the JIT could still see it and treated it as a type annotation. After removing that comment, the error was gone. |
st181006 | If from_tensors is doing “pass” then calling:
images = self.from_tensors([images], 3)
is setting images to None
But nonetheless, can you post the original implementation of from_tensors? (and normalizer)
Roy. |
st181007 | thanks for responding.
I found that there is '# type' in one of my comments, and even though I commented out the whole thing (code and comments), the JIT could still see it and treated it as a type annotation. After removing that comment, the error was gone. |
st181008 | I want to load a model saved by torch.jit.save and then convert the model to another deep learning framework.
To do this, it is necessary to extract the graph as well as the weight/bias and other tensors from the graph.
I only know that the weight/bias tensors are stored in the state_dict.
However, in PyTorch 1.5.0, the order of torch._C.Value in graph.inputs() and the order of the state_dict keys are not the same.
For example, for the mobilenet_v2 model in torchvision, the first few lines of its graph generated by torch.jit are:
graph(%input.10 : Float(10, 3, 224, 224),
%1 : Float(1280, 320, 1, 1),
%2 : Float(1280),
%3 : Float(1280),
%4 : Float(1280),
%5 : Float(1280),
%6 : Float(160, 960, 1, 1),
%7 : Float(160),
%8 : Float(160),
%9 : Float(160)
but the first few keys of the state_dict are:
['features.0.0.weight', 'features.0.1.weight', 'features.0.1.bias', 'features.0.1.running_mean', 'features.0.1.running_var']
and I have
model.state_dict()[list(model.state_dict().keys())[0]].shape
torch.Size([32, 3, 3, 3])
model.state_dict()[list(model.state_dict().keys())[1]].shape
torch.Size([32])
model.state_dict()[list(model.state_dict().keys())[2]].shape
torch.Size([32])
It is obvious that the order in graph.inputs() and order in state_dict are not the same.
So, is there any method to correctly extract the Tensor/weight/bias values from a torch._C.Graph? |
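For reference, the loaded ScriptModule itself still exposes the tensors by name (a minimal sketch; the filename is hypothetical), although this alone does not tell me which graph input each one maps to:
import torch
loaded = torch.jit.load("mobilenet_v2_traced.pt")  # hypothetical file
for name, p in loaded.named_parameters():
    print(name, tuple(p.shape))
for name, b in loaded.named_buffers():
    print(name, tuple(b.shape))  # running_mean / running_var live here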
st181009 | Is there any way to convert a torch::nn::Module to a torch::jit::Module in C++? There are nice docs and tutorials available for doing the same in PyTorch, but my query is specific to a libtorch-only configuration. |
st181010 | Heya, I've been doing some googling and it looks like the first execution of a TorchScript model includes a heavy warmup period. In my case, it's almost 5 minutes.
This is pretty painful and I’m wondering if I can directly save the optimized model in the first place so I don’t have to go through this warmup period each time. |
st181011 | Solved by eellison in post #2
Hi, could you try running it on master or the nightly build? For the 1.7 release compilation times were improved. |
st181012 | Hi, could you try running it on master or the nightly build? For the 1.7 release compilation times were improved. |
st181013 | Sorry to hear that. Unfortunately, we can’t save the warmed-up model today due to a number of optimizations relying on in-memory data structures that can’t be serialized. We have some longer-term infrastructural work planned to improve compile times but it hasn’t landed yet.
Two things that would help us improve the situation:
We recently made a number of improvements to compilation and optimization time, can you try the nightly or the 1.7 RC and see if you get improvements (try running the model a few times).
If possible, can you provide the .pt file for the model that is taking a really long time to warm up, as well as some example inputs? It will help us understand what sort of models are causing long compilation times. |
st181014 | Thanks to both you and eellison for the replies.
Using 1.7 seems to have done the trick! Compile times went from 200s -> 2s. When do you think it will make it to stable? |
st181015 | import torch
import torch.nn as nn
class CustomFunction(torch.autograd.Function, nn.Module):
def __call__(self, input):
return self.apply(input)
@staticmethod
def forward(ctx, forward_in):
forward_out = forward_in.clamp(min=0)
ctx.save_for_backward(forward_in, forward_out)
return forward_out
@staticmethod
def backward(ctx, grad_output):
forward_in, forward_out = ctx.saved_tensors
relu_gradients = torch.ones_like(forward_out)
relu_gradients[forward_in < 0] = 0
relu_gradients = relu_gradients.mul(grad_output)
# Some extra functions here
return relu_gradients
class CustomModel(nn.Module):
def __init__(self):
super(CustomModel, self).__init__()
self.cr = CustomFunction()
def forward(self, x):
x.requires_grad_(True)
self.cr(x)
criterion = x**2
criterion.backward()
return x.grad
model = CustomModel()
traced_script_module = torch.jit.script(model)
traced_script_module.save("traced_jit_model.pt")
jit_model = torch.jit.load('traced_jit_model.pt')
This is the minimal version of the code that causes the following error:
RuntimeError: You attempted to access the anomaly metadata of a custom autograd function but the underlying PyNode has already been deallocated. The most likely reason this occurred is because you assigned x.grad_fn to a local variable and then let the original variable get deallocated. Don't do that! If you really have no way of restructuring your code so this is the case, please file an issue reporting that you are affected by this.
Is there a way to reconstruct the code so that this error isn’t thrown? I don’t understand what the error means. |
st181016 | I solved this as follows:
#include <torch/script.h>
#include <torch/all.h>
#include <iostream>
#include <memory>
class CustomReluOp : public torch::autograd::Function<CustomReluOp>{
public:
static torch::autograd::variable_list forward(torch::autograd::AutogradContext* ctx, torch::autograd::Variable forward_in){
auto forward_out = torch::clamp(forward_in, 0.0);
ctx->save_for_backward({forward_in, forward_out});
return {forward_out};
}
static torch::autograd::variable_list backward(torch::autograd::AutogradContext* ctx, torch::autograd::variable_list grad_output){
auto list_forward = ctx->get_saved_variables();
auto forward_in = list_forward[0];
auto forward_out = list_forward[1];
auto relu_gradients = torch::ones_like(forward_out);
auto indices = forward_in<0;
relu_gradients.index({indices}) = 0;
relu_gradients = torch::mul(relu_gradients, grad_output[0]);
relu_gradients = torch::nn::functional::relu(relu_gradients);
relu_gradients = relu_gradients * forward_out;
return {relu_gradients};
}
};
torch::Tensor custom_relu_op(const torch::Tensor& input) {
return CustomReluOp::apply(input)[0];
}
static auto registry = torch::RegisterOperators("my_ops::custom_relu_op", &custom_relu_op);
The code above I saved as customrelu.cpp, I then made the file CMakeLists.txt with the following instructions:
cmake_minimum_required(VERSION 3.1 FATAL_ERROR)
project(custom_relu)
find_package(Torch REQUIRED)
# Define our library target
add_library(custom_relu SHARED customrelu.cpp)
set(CMAKE_CXX_STANDARD 14)
# Link against LibTorch
target_link_libraries(custom_relu "${TORCH_LIBRARIES}")
I then made an empty folder build and cd’d into it and ran the following:
cmake -DCMAKE_PREFIX_PATH="$(python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')" ..
make -j
The following code now uses a custom operation that you can script:
import torch
import torch.nn as nn
torch.ops.load_library("inference/build/libcustom_relu.so")
class CustomModel(nn.Module):
def __init__(self):
super(CustomModel, self).__init__()
self.cr = torch.ops.my_ops.custom_relu_op
def forward(self, x):
x.requires_grad_(True)
self.cr(x)
criterion = x**2
criterion.backward()
return x.grad
model = CustomModel()
traced_script_module = torch.jit.script(model)
traced_script_module.save("traced_jit_model.pt")
jit_model = torch.jit.load('traced_jit_model.pt')
I added these instructions because the documentation for the C++ frontend is somewhat daunting for someone with little experience with C++ (like me). Credits to this post: TorchScript register backward C++ functions |
st181017 | Hey,
I tried to use torch.jit.script on code that used OrderedDict's move_to_end, and got an error that suggested it was interpreted as a Dict. Am I doing something wrong, or are OrderedDicts not quite fully supported in TorchScript?
I’m on PyTorch 1.6.0.
The instantiation looks like this:
self.cache:typing.OrderedDict[str,Tensor] = OrderedDict()
Here’s the error:
RuntimeError:
Tried to access nonexistent attribute or method 'move_to_end' of type 'Dict[str, Tensor]'.:
File "/home/strawvulcan/lad/lad/util.py", line 162
return -1
else:
self.cache.move_to_end(key)
~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return self.cache[key]
'LRUCache.get' is being compiled since it was called from 'LRUCache.__getitem__'
File "/home/strawvulcan/lad/lad/util.py", line 175
def __getitem__(self, key:str):
return self.get(key)
~~~ <--- HERE
'LRUCache.__getitem__' is being compiled since it was called from '__torch__.lad.util.LRUCache'
File "/home/strawvulcan/lad/lad/util.py", line 188
"`batch` is a list of strings"
if not hasattr(self, 'item_cache'):
self.item_cache = LRUCache(max_cache_size)
~~~~~~~~ <--- HERE
hit_idxs = []
hits = []
'__torch__.lad.util.LRUCache' is being compiled since it was called from 'Autoencoder.forward'
File "/home/strawvulcan/lad/lad/util.py", line 188
"`batch` is a list of strings"
if not hasattr(self, 'item_cache'):
self.item_cache = LRUCache(max_cache_size)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
hit_idxs = []
hits = [] |
st181018 | I saved a 3D DenseNet from MONAI:
traced = torch.jit.trace(model, torch.rand(1, 3, 160, 160, 160).cuda())
torch.jit.save(traced, f'monai3_160_baseline.trcd')
Now I am trying to load it from the .trcd file with the following function:
torch.jit.load(x, map_location="cuda:0")
And I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-28-60944cabc718> in <module>()
----> 1 model = load_jit_model("./model_training/monai3_160_baseline.trcd").cuda()
<ipython-input-22-2020de3a15ce> in load_jit_model(x, cuda_id)
14
15 def load_jit_model(x, cuda_id=0):
---> 16 return torch.jit.load(x, map_location=f'cuda:{cuda_id}' if torch.cuda.is_available() else 'cpu')
/home/yeliseev/anaconda3/envs/rsna/lib/python3.7/site-packages/torch/jit/__init__.py in load(f, map_location, _extra_files)
273 cu = torch._C.CompilationUnit()
274 if isinstance(f, str) or isinstance(f, pathlib.Path):
--> 275 cpp_module = torch._C.import_ir_module(cu, f, map_location, _extra_files)
276 else:
277 cpp_module = torch._C.import_ir_module_from_buffer(cu, f.read(), map_location, _extra_files)
RuntimeError: expected ident but found 'class' here:
Serialized File "code/__torch__/torch/nn/modules/container/___torch_mangle_423.py", line 8
norm : __torch__.torch.nn.modules.pooling.AdaptiveAvgPool3d
flatten : __torch__.torch.nn.modules.flatten.Flatten
class : __torch__.torch.nn.modules.linear.Linear
~~~~~ <--- HERE
def forward(self: __torch__.torch.nn.modules.container.___torch_mangle_423.Sequential,
argument_1: Tensor) -> Tensor: |
st181019 | When I load a frozen model, I want to be able to parse the nodes into their respective values for analysis. I can use node.i("value") to get IntType integer values into a Python object. The same works for FloatType floats using node.f("value"). But I cannot figure out a way to parse ListType prim::Constant values into Python. Is there a way to do this? |
st181020 | I don't know if I have encountered a bug or if it's something that I'm doing wrong.
I pruned a resnet18 model and then trained it. During checkpoints, I save the model as a JIT model so I don't need a model definition to run the pruned model.
Everything seems fine; the model learns and performs as expected. However, what is not normal is the 10x slow-down at inference.
Below are the stats I recorded for 10 iterations of the pruned model's forward time before and after training:
Before training: 7,405 ms
After training: 75,480 ms
You can see and test the models yourself from here: https://gofile.io/d/sFkJkO
What's wrong here?
Thanks a lot in advance |
st181021 | Hello All,
My team at Microsoft Research India has built an adaptive compiler framework for deep learning jobs and implemented it on PyTorch (ASPLOS '19 paper). We used the tracing API in v0.3 to capture the compute graph representation of the network and its gradient computation, whose execution was then optimized by our framework.
We would like to port the implementation to v1.1.0, which entails capturing the compute graph through the tracing API. Since the backward pass doesn’t return any outputs, it’s not possible to use v1.1.0’s tracing API to build the computational graph for the backward pass. I tried the graph visualization package to build the computational graph for the backward pass, but the nodes were named “SelectBackward” or “ViewBackward”.
Could someone please help me with understanding the tracing API so that I can capture the graph for the backward pass as well ? Or please suggest a different methodology ?
Thank You,
Sanjay |
st181022 | Hi @singam-sanjay, could you give an example about how it worked in 0.3 and what you need from 1.1.0?
“SelectBackward” are the autograd nodes and I’m not sure if they’re expected to show up in your backward pass. (Given that you capture the graph representation and do optimization through your compiler framework.)
Happy to help if you can give more context. Thanks! |
st181023 | Hi @ailzhang,
Thanks for responding !
We used the tracing API in 0.3 in the following way,
Step 1: Tracing the forward pass:
trace, fwd_outputs = torch.jit.trace(model, fwd_args, nderivs = 1)
Step 2: Tracing the backward pass:
torch.autograd.backward(fwd_outputs, bwd_args)
After Step 2, the trace.graph() is updated with nodes from the backward pass.
Could you please suggest an approach to generate equivalent results in the v1.1.0 API ?
Thanks,
Sanjay |
st181024 | This is very interesting!
I have a somewhat similar question along these lines. If anyone feels I should address this in a different issue, I can create a new one.
Is there a way to "intercept" the backward pass on-the-fly, as it runs in a normal fashion? Specifically for the pytorch/xla package, but I know ptxla doesn't have this, so maybe it's in autograd? The reason being, I would very much like to be able to tell which forward op each backward op is associated with (i.e. this subgraph is, for the most part, the bwd pass of the dropout layer, for instance), and thus be able to keep track of this in some side-banded fashion to be processed during graph compile. |
st181025 | It seems that torch.jit.unused does not support normal object classes. Are there any plans to support it? For now, the following code will cause an error: 'torch.jit.frontend.UnsupportedNodeError: with statements aren't supported:'
@torch.jit.script
class MyObject(object):
def __init__(self):
super().__init__()
@torch.jit.unused
def unsupported(self):
with torch.no_grad():
print('This is not supported by jit.')
def supported(self, x):
print('This is supported by jit.')
By the way, could anyone give some advice on how to handle this situation, where classes with TorchScript-unsupported attributes are used during the TorchScript inference process? |
st181026 | Right now everything that is in a @torch.jit.scripted class has to be TorchScript compatible, but we’re working on fixing it so @ignore and @unused work as they should. The best way around it for now is probably to use free-standing functions (which you can @ignore) that take the object as the first argument |
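As a rough sketch of that workaround (illustrative only; in the class case the object argument also needs a type annotation the compiler understands):
import torch
@torch.jit.ignore
def run_in_python(x: torch.Tensor) -> torch.Tensor:
    # Executed by the Python interpreter, so constructs TorchScript can't compile are fine here.
    with torch.no_grad():
        print('This is not supported by jit.')
    return x
@torch.jit.script
def supported(x: torch.Tensor) -> torch.Tensor:
    return run_in_python(x) + 1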
st181027 | driazati:
Right now everything that is in a @torch.jit.script ed class has to be TorchScript compatible, but we’re working on fixing it so @ignore and @unused work as they should. The best way around it for now is probably to use free-standing functions (which you can @ignore ) that take the object as the first argument
Got it, thank you very much:) |
st181028 | @driazati Thanks for the answer. Is there an issue where we can track the progress of the Python Object class @ignore and @unused support? |
st181029 | This feature will be released in TensorFlow 1.7. The pull request can be found here. |
st181030 | Rick:
This feature will be released in TensorFlow 1.7.
I thought TF1.7 was already released in 2018 |
st181031 | Does PyTorch guarantee backwards compatibility between minor releases (i.e., 1.X -> 1.Y for all Y > X) for models/operator behaviors? In other words, would all the models defined and saved using an older 1.X version be loadable and runnable, with the same behavior, using the latest 1.Y version? Or is this best effort, which implies that some models will fail with an upgrade of the PyTorch version from 1.X -> 1.Y?
Similarly, does PyTorch JIT guarantee backwards-compatibility for traced/scripted models between releases?
I didn’t find any documentation on backwards compatibility guarantee and I see backwards incompatible changes being noted in the release notes of each version release. Could someone help answer this or point to a document? |
st181032 | I trained a model using libtorch, and want to save it, still using libtorch. I found https://github.com/pytorch/pytorch/issues/35464, which indicates that
we already have the ability to save to a std::ostream in C++
Which refers to torch::jit::Module::save()?
And how do I use it to save a TorchScript model? |
st181033 | I’ve been working with image transformations recently and came to a situation where I have a large array (shape of 100,000 x 3) where each row represents a point in 3D space like:
pnt = [x y z]
All I'm trying to do is iterate through each point and matrix-multiply each point with a matrix called T (shape = 3 x 3).
Test with Numpy:
def transform(pnt_cloud, T):
depth_array = np.zeros(pnt_cloud.shape[0])
i = 0
for pnt in pnt_cloud:
xyz_pnt = np.dot(T, pnt)
if xyz_pnt[0] > 0:
depth_array[i] = xyz_pnt[0]
i += 1
return depth_array
Calling the following function and calculating runtime (using %time) gives the output:
Out[190]: CPU times: user 670 ms, sys: 7.91 ms, total: 678 ms
Wall time: 674 ms
Test with Pytorch Tensor:
import torch
tensor_cld = torch.tensor(pnt_cloud)
tensor_T = torch.tensor(T)
def transform(pnt_cloud, T):
depth_array = torch.tensor(np.zeros(pnt_cloud.shape[0]))
i = 0
for pnt in pnt_cloud:
xyz_pnt = torch.matmul(T, pnt)
if xyz_pnt[0] > 0:
depth_array[i] = xyz_pnt[0]
i += 1
return depth_array
Calling the following function and calculating runtime (using %time) gives the output:
Out[199]: CPU times: user 6.15 s, sys: 28.1 ms, total: 6.18 s
Wall time: 6.09 s
I would have thought that PyTorch tensor computations would be much faster due to the way PyTorch breaks its code down in the compiling stage. What am I missing here?
Other things I’ve tried:
Doing the same with torch.jit only reduces 2s
tried torch.no_grad() as I thought I was accumulating gradients (I realized that’s not how it works)
Numba + Numpy jit works the fastest (120ms) |
st181034 | Yeah, so two things
operations of 1-3 elements are generally rather expensive in PyTorch as the overhead of Tensor creation becomes significant (this includes setting single elements), I think this is the main thing here. This is also the reason why the JIT doesn’t help a whole lot (it only takes away the Python overhead) and Numby shines (where e.g. the assignment to depth_array[i] is just a memory write).
the matmul itself might differ in speed if you have different BLAS backends for it in PyTorch vs. NumPy.
In this specific case, you could likely just do depth_array = torch.matmul(pnt_cloud, T.t()).clamp_(min=0) or so. Similarly in numpy.
Best regards
Thomas
(PS: it would help if you just added dummy data to your code examples so people could just copypaste to look into the benchmarking.) |
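Following up on the PS, a self-contained sketch of that vectorized version with dummy data (keeping only the first coordinate and clamping at zero, to mirror the original loop):
import torch
pnt_cloud = torch.rand(100_000, 3)
T = torch.rand(3, 3)
# (pnt_cloud @ T.t())[i] equals T @ pnt_cloud[i], so column 0 is xyz_pnt[0] for every point.
depth_array = (pnt_cloud @ T.t())[:, 0].clamp(min=0)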
st181035 | I am trying to deploy bert with elastic inference on sagemaker.
It requires converting the BERT torch model to TorchScript. When I converted the PyTorch model to TorchScript and ran prediction on it, I got a different output on each run for the same input to the model.
Pytorch version=1.3.1
Following are the outputs from the model for same input.
1: [[-4.0112, -4.7550, -4.6516, -3.7064, -3.3309, -5.2377, -7.5709, -7.8646,
-9.1199, -8.8557, -6.8730, -2.2870, -2.9781, -4.6967, -5.6849, -5.1054,
-8.1800, -7.7518, -3.2441]]
2: [[-4.2761, -4.5443, -4.6977, -2.6534, -2.8268, -5.3280, -7.1985, -7.1800,
-9.5470, -8.7595, -6.3672, -1.5636, -3.0138, -4.7585, -6.3858, -5.0327,
-8.6203, -8.0351, -3.9619]]
3: [[-4.5412, -4.4034, -4.4401, -2.7413, -3.2421, -6.0487, -7.3131, -7.4065,
-9.4242, -8.2876, -6.2156, -1.1610, -3.7632, -5.3920, -6.9708, -4.7110,
-8.0398, -8.1571, -4.3893]]
Prediction function is as follows:
def predict_fn(input_data, model):
model.eval()
print('Generating prediction based on input parameters.')
device = 'cuda' if cuda.is_available() else 'cpu'
try:
with torch.no_grad():
with torch.jit.optimized_execution(True, {"target_device": "eia:0"}):
outputs = model(input_data["ids"],input_data["mask"], input_data["token_type_ids"])
except Exception as ex:
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
raise Exception(f'3- Error in predict_fn {exc_type, fname, exc_tb.tb_lineno, input_data["ids"].shape, " with exception message: ", ex}')
outputs = torch.sigmoid(outputs).cpu().detach().numpy()
return outputs
I don't understand why I am getting this odd behavior from the TorchScript-converted model, whereas the plain PyTorch model, which is not converted to TorchScript, gives the same output on each run. |
st181036 | Hello, guys!
I searched on Google but there is no solution.
A model is traced by torch.jit.trace, then saved to disk.
Then I load the model saved just before, and get its graph via model.forward.graph and torch._C._jit_pass_lower_graph, but the output shapes of the nodes in the graph are lost. How can I get the output shapes of the nodes?
Here is an example code:
import torch
import torchvision
from torch._C import _propagate_and_assign_input_shapes
def _model_to_graph(model, args):
if isinstance(args, torch.Tensor):
args = (args, )
graph = model.forward.graph
method_graph, params = torch._C._jit_pass_lower_graph(graph, model._c)
in_vars, in_desc = torch.jit._flatten(tuple(args) + tuple(params))
graph = _propagate_and_assign_input_shapes( method_graph, tuple(in_vars), False, False)
return graph
traced_model_savepath = 'traced.pt'
model = torchvision.models.mobilenet.mobilenet_v2(pretrained=True)
dummy_input = torch.rand((1,3,224,224))
traced_model = torch.jit.trace(model, dummy_input)
traced_model.save(traced_model_savepath)
# load the saved traced model from disk
loaded_traced_model = torch.jit.load(traced_model_savepath)
graph= _model_to_graph(loaded_traced_model, dummy_input)
print(graph)
Sadly, the output shapes of the nodes are lost, for example:
%input.6 : Tensor = aten::_convolution(%626, %251, %276, %627, %628, %629, %277, %630, %274, %277, %277, %278) # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%input.104 : Tensor = aten::batch_norm(%input.6, %252, %253, %254, %255, %278, %280, %279, %278) # /data0/shareVR/pytorch/learn/torch/nn/functional.py:2051:0
Node %input.104 is a batch_norm layer and its output is a `Tensor`, but what's its output shape? How can I get the output shape?
Many thanks! |
st181037 | There is torch._C._jit_pass_complete_shape_analysis(graph, inputs, with_grad:Bool) which works, but is internal.
See issue 39690 for some detail.
Best regards
Thomas |
st181038 | I am sorry for not sure how to use torch._C._jit_pass_complete_shape_analysis.
I found in https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/python/init.cpp#L359 2
that torch._C._jit_pass_complete_shape_analysis passes Graph as a shared_ptr.
So I modified my code as:
import torch
import torchvision
import torch.nn as nn
class DemoNet(nn.Module):
def __init__(self ):
super(DemoNet, self).__init__()
self.conv1 = torch.nn.Conv2d(3,32, kernel_size=(3,3))
self.conv2 = torch.nn.Conv2d(32, 64, kernel_size=(3, 3))
self.relu1 = torch.nn.ReLU(inplace=True)
self.conv3 = torch.nn.Conv2d(64, 32, kernel_size=(3, 3))
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = self.relu1(x)
x = self.conv3(x)
return x
def _model_to_graph(model, args):
if isinstance(args, torch.Tensor):
args = (args, )
graph = model.forward.graph
method_graph, params = torch._C._jit_pass_lower_graph(graph, model._c)
in_vars, in_desc = torch.jit._flatten(tuple(args) + tuple(params))
torch._C._jit_pass_complete_shape_analysis(method_graph, tuple(in_vars), False)
return method_graph
traced_model_savepath = 'traced.pt'
model = DemoNet()
model.eval()
dummy_input = torch.randn((2,3,224,224))
traced_model = torch.jit.trace(model, dummy_input)
traced_model.save(traced_model_savepath)
# load the saved traced model
loaded_traced_model = torch.jit.load(traced_model_savepath)
graph= _model_to_graph(loaded_traced_model, dummy_input)
print(graph)
I use a toy DemoNet to simplify output graph.
The output is:
graph(%input.2 : Float(2:150528, 3:50176, 224:224, 224:1, requires_grad=0, device=cpu),
%44 : Float(32:27, 3:9, 3:3, 3:1, requires_grad=1, device=cpu),
%45 : Float(32:1, requires_grad=1, device=cpu),
%46 : Float(64:288, 32:9, 3:3, 3:1, requires_grad=1, device=cpu),
%47 : Float(64:1, requires_grad=1, device=cpu),
%48 : Float(32:576, 64:9, 3:3, 3:1, requires_grad=1, device=cpu),
%49 : Float(32:1, requires_grad=1, device=cpu)):
%10 : int = prim::Constant[value=0]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%11 : int = prim::Constant[value=1]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%12 : bool = prim::Constant[value=0]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%13 : bool = prim::Constant[value=1]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%16 : int[] = prim::ListConstruct(%11, %11)
%17 : int[] = prim::ListConstruct(%10, %10)
%18 : int[] = prim::ListConstruct(%11, %11)
%19 : int[] = prim::ListConstruct(%10, %10)
%input0.1 : Float(*, *, *, *, requires_grad=0, device=cpu) = aten::_convolution(%input.2, %44, %45, %16, %17, %18, %12, %19, %11, %12, %12, %13) # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%21 : int = prim::Constant[value=0]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%22 : int = prim::Constant[value=1]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%23 : bool = prim::Constant[value=0]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%24 : bool = prim::Constant[value=1]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%27 : int[] = prim::ListConstruct(%22, %22)
%28 : int[] = prim::ListConstruct(%21, %21)
%29 : int[] = prim::ListConstruct(%22, %22)
%30 : int[] = prim::ListConstruct(%21, %21)
%input.1 : Float(*, *, *, *, requires_grad=0, device=cpu) = aten::_convolution(%input0.1, %46, %47, %27, %28, %29, %23, %30, %22, %23, %23, %24) # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%32 : Tensor = aten::relu_(%input.1) # /data0/shareVR/pytorch/learn/torch/nn/functional.py:1125:0
%33 : int = prim::Constant[value=0]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%34 : int = prim::Constant[value=1]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%35 : bool = prim::Constant[value=0]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%36 : bool = prim::Constant[value=1]() # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
%39 : int[] = prim::ListConstruct(%34, %34)
%40 : int[] = prim::ListConstruct(%33, %33)
%41 : int[] = prim::ListConstruct(%34, %34)
%42 : int[] = prim::ListConstruct(%33, %33)
%43 : Tensor = aten::_convolution(%32, %48, %49, %39, %40, %41, %35, %42, %34, %35, %35, %36) # /data0/shareVR/pytorch/learn/torch/nn/modules/conv.py:416:0
return (%43)
It seems that torch._C._jit_pass_complete_shape_analysis cannot tell me the exact output shapes of
nodes such as %input0.1, %input.1, %32, and %43.
For %input0.1 and %input.1, it just tells me that they are 4-D tensors, not exact shape information;
for %32 and %43, it just tells me they are tensors.
Any suggestions on how to use torch._C._jit_pass_complete_shape_analysis correctly?
Thanks a lot. |
st181039 | So apparently it cannot infer the shape of the convolution (it should when the stride, padding,dilation etc. params are constants). |
st181040 | Does it look at the raw source on disk? How can we give it a custom callable type for a function and force it to recompile a copy of the function body with the new types (but a new name)? |
st181041 | Does it look at the raw source on disk?
Yes. JIT uses the inspect module to look at the source code for the function, and determines the function schema (types of the inputs and outputs) based on type annotations (either Python3 or mypy style).
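For example, a minimal sketch of how the annotations end up in the compiled signature:
import torch
# Python 3 annotations (or mypy-style "# type:" comments) determine the schema:
@torch.jit.script
def scale(x: torch.Tensor, factor: float) -> torch.Tensor:
    return x * factor
print(scale.graph)  # the graph inputs show up typed as Tensor and float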
How can we give it a custom callable type for a function and force it to recompile a copy of the function body with the new types (but a new name)?
Could you provide a code sample to clarify this question? What is it that you want to do? |
st181042 | SplitInfinity:
How can we give it a custom callable type for a function and force it to recompile a copy of the function body with the new types (but a new name)?
Could you provide a code sample to clarify this question? What is it that you want to do?
I’d like to be able to say something like:
def generic_loop(self, inputs, state):
" ^^^ I'm trying to reuse this. ^^^ "
outs = []
for x in inputs:
out, state = self.fwd(x, state)
outs.append(out)
return outs, state
class Cell(nn.Module):
def __init__(self, forward_sig):
self.forward = apply_cell_sig(generic_loop, forward_sig)
# self.forward : (TCustomCell, List[TCustomInput], TCustomState) -> TCustomOut
class CustomCell(Cell):
def __init__(self):
super().__init__(
get_cell_sig(
CustomCell, TCustomInput, TCustomState, TCustomOut))
def fwd(self, input: TCustomInput, state: TCustomState):
return blah(input, state)
More generally, I’m looking for a way to treat a function body as a “template” that I can provide specific types for, and ask the JIT explicitly to give me a new version for it. So it doesn’t just see its full name and go “I’ve already seen this; here’s the cached version”.
By your confirmation though, this is probably very difficult. I couldn't get the JIT to accept a function that takes an nn.Module instance as an argument either (besides self. accesses), so there are multiple issues here. |
st181043 | More generally, I’m looking for a way to treat a function body as a “template” that I can provide specific types for, and ask the JIT explicitly to give me a new version for it. So it doesn’t just see its full name and go “I’ve already seen this; here’s the cached version”.
I don't think it is a publicly advertised feature, but you could try using @torch.jit._overload. You can find examples of how to use this decorator in test/test_jit.py, like test_function_overloads. The caveat here is that the one body you supply will be used with all overloaded signatures because there is no concept of TypeVar in the JIT typing system. So you might have to do type refinement to write one body that works for all types (see my_conv in test_function_overloading_isinstance, also in test_jit.py).
By your confirmation though, this’s probably very difficult. I couldn’t get the JIT to accept a function that takes in a nn.Module instance as an argument either (besides self. accesses) so there’re multiple issues here.
Yeah, modules cannot be passed around because there is no Module type in the JIT type system. There is no way to annotate a function correctly to make this work. |
st181044 | SplitInfinity:
I don't think it is a publicly advertised feature, but you could try using @torch.jit._overload. You can find examples of how to use this decorator in test/test_jit.py, like test_function_overloads. The caveat here is that the one body you supply will be used with all overloaded signatures because there is no concept of TypeVar in the JIT typing system. So you might have to do type refinement to write one body that works for all types (see my_conv in test_function_overloading_isinstance, also in test_jit.py).
This looks promising!
Can you elaborate on what’s happening here?
# TODO: pyflakes currently does not compose @overload annotation with other
# decorators. This is fixed on master but not on version 2.1.1.
# Next version update remove noqa and add @typing.overload annotation
@torch.jit._overload # noqa: F811
def test_simple(x1): # noqa: F811
# type: (int) -> int
pass
@torch.jit._overload # noqa: F811
def test_simple(x1): # noqa: F811
# type: (float) -> float
pass
def test_simple(x1): # noqa: F811
return x1
def invoke_function():
return test_simple(1.0), test_simple(.5)
self.checkScript(invoke_function, ())
# testing that the functions are cached
compiled_fns_1 = torch.jit._script._get_overloads(test_simple)
compiled_fns_2 = torch.jit._script._get_overloads(test_simple)
# ^^^ HERE HERE HERE ^^^ #
for a, b in zip(compiled_fns_1, compiled_fns_2):
self.assertIs(a.graph, b.graph)
Why could the two successive invocations of torch.jit._script._get_overloads(test_simple) get different results? At that point, there’s only one thing attached to the test_simple name. |
st181045 | SplitInfinity:
Yeah, modules cannot be passed around because there is no Module type in the JIT type system. There is no way to annotate a function correctly to make this work.
I was hoping there'd be a way to work around this specifically. The JIT already works with a self argument of an nn.Module subtype. How can we make this available for free functions? This restriction greatly limits the composability of free functions if I want to use script. Honestly, I'm not sure at that point if there'd be a worthwhile speed-up either. This is unrelated, but I've had some trouble understanding what the JIT does with what it sees, and intuiting where my code might be giving it more trouble than it can handle. |
st181046 | Instead of that, you could also look into
Inject your code into linecache under different module names like IPython does. This isn’t exactly documented, but clean as far as the JIT is concerned.
Get the AST using git_jit_def, meddle with the AST, and use _jit_script_compile to compile yourself (you probably need the resolver too), so you basically mimic torch.jit.script except the caching. This is fairly invasive into JIT internals, but at least it is straightforward to your cause.
Best regards
Thomas |
st181047 | Enamex:
I was hoping there'd be a way to work around this specifically. The JIT already works with a self argument of an nn.Module subtype. How can we make this available for free functions? This restriction greatly limits the composability of free functions if I want to use script.
The short answer is not really. The JIT supports classes insofar as they are static - if you JIT-compile Modules, you don’t give it the class source but rather an instance. It will then go through the data members and see and process their types, assuming they will be fixed (which isn’t the case in Python in general).
Maybe you might find more success in taking the “JIT” part more literally, e.g. JIT-compiling a local function from a dynamic function.
Honestly not sure at that point if there’d be a worthwhile speed either. This’s unrelated, but I’ve had some trouble understanding what the JIT does with what it sees, and intuiting where my code might be giving it more trouble than it can handle.
I'm afraid the JIT isn't a magic bullet for optimization but "only" does specific things that are highly desirable. |
st181048 | tom:
Maybe you might find more success in taking the “JIT” part more literally, e.g. JIT-compiling a local function from a dynamic function.
Can you elaborate on this a bit more?
tom:
I'm afraid the JIT isn't a magic bullet for optimization but "only" does specific things that are highly desirable.
Absolutely. I meant that what those “specific things” are is a bit unclear to me. And doesn’t seem to be a goal of the documentation. |
st181049 | Enamex:
Why could the two successive invocations of torch.jit._script._get_overloads(test_simple) get different results? At that point, there’s only one thing attached to the test_simple name.
There are three - the two overloads and the implementation. And as you can see in the test, they should not have different results in terms of the ordering of functions nor the graphs produced by compiling said functions. |
st181050 | Enamex:
Can you elaborate on this a bit more?
Well, so one thing could be to actually make it a method of the class of self before you JIT that class / the method.
Enamex:
Absolutely. I meant that what those “specific things” are is a bit unclear to me. And doesn’t seem to be a goal of the documentation.
My chance to sell advanced PyTorch courses. Your chance to be a hero.
More seriously, we discuss this a bit in section 15.3.1 Interacting with the PyTorch JIT / What to expect from moving beyond classic Python/PyTorch of our book, which you can download in exchange for answering a few questions.
To summarize that, the main use-cases I see are
exporting stuff,
getting rid of the GIL for nicer multithreading in deployment,
optimize certain patterns (e.g. pointwise ops in RNNs and elsewhere are one thing we had relatively early), but people are working on expanding this (e.g. to cover reductions).
Best regards
Thomas |
st181051 | There are three bodies in the text, but I don’t see why the two torch.jit._script._get_overloads(test_sample) calls could return different compiled objects even if the cache was disabled. |
st181052 | tom:
The short answer is not really. The JIT supports classes insofar as they are static - if you JIT-compile Modules, you don’t give it the class source but rather an instance. It will then go through the data members and see and process their types, assuming they will be fixed (which isn’t the case in Python in general).
This bit’s interesting. I might’ve been misunderstanding the cache behavior here. If I script different instances of the same class at different times, does it use the cache or treat every instance separately? script, not trace. Because if it treats instances separately, could assigning different modules to attributes be a workaround for JITing functions that make use of modules from arguments (by not passing them through arguments…)?
tom:
Maybe you might find more success in taking the “JIT” part more literally, e.g. JIT-compiling a local function from a dynamic function.
Which is why I’d imagine being able to “JIT” the same function with different types. Am I misinterpreting this?
tom:
More seriously, we discuss this a bit in section 15.3.1 Interacting with the PyTorch JIT / What to expect from moving beyond classic Python/PyTorch of our book, which you can download in exchange for answering a few questions.
Looks great! Thanks for the rec. |
st181053 | If I script different instances of the same class at different times, does it use the cache or treat every instance separately?
Instances are not cached, but their types in the JIT system are. Every time you script an nn.Module, a type is created for in the JIT type system. This is reused if multiple instances of the same module are scripted in the same program, or even the same instance is scripted twice. The ScriptModule returned by torch.jit.script is always fresh and never from a cache, but it might refer to a type object that is cached. |
st181054 | Hey,
I'm trying to edit a graph representation (torch._C.Graph) via the Python API.
I want to add a node at the beginning of the graph that returns a CUDA device.
this is what I did:
cuda_node = graph.create('prim::Constant[value="cuda:0"]')
cuda_value = next(cuda_node.outputs())
cuda_value.setType(torch._C.DeviceObjType.get())
graph.prependNode(cuda_node)
This code ran successfully, but I get this error when I run the graph function created by torch._C._create_function_from_graph(...):
RuntimeError: 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":465, please report a bug to PyTorch. We don't have an op for prim::Constant[value="cuda:0"] but it isn't a special case. Argument types:
How can I create this kind of node without running into this issue?
Thanks. |
st181055 | It seems you are using internal methods from the _C namespace, so I’m unsure if your use case is supported out of the box (without breaking stuff).
What’s your use case you need to manipulate the graph and cannot recreate it? |
st181056 | @ptrblck I'm not sure that recreating the graph will help in that case, since I still don't know how to create a prim::Constant with the attribute value="cuda:0".
My mistake was that there is no prim::Constant[value="cuda:0"] node kind; I need to create the node with:
cuda_node = graph.create('prim::Constant')
and then I need to set an attribute value="cuda:0" on cuda_node, but I'm not sure how. |
st181057 | This code snippet would create a prim::Constant on the GPU:
@torch.jit.script
def fun(x):
y = x + torch.tensor(1, device='cuda')
return y
print(fun.graph)
> graph(%x.1 : Tensor):
%7 : bool = prim::Constant[value=0]()
%21 : Device = prim::Constant[value="cuda"]()
%5 : None = prim::Constant()
%2 : int = prim::Constant[value=1]() # <ipython-input-36-11fdcf9e4060>:9:25
%8 : Tensor = aten::tensor(%2, %5, %21, %7) # <ipython-input-36-11fdcf9e4060>:9:12
%y.1 : Tensor = aten::add(%x.1, %8, %2) # <ipython-input-36-11fdcf9e4060>:9:8
return (%y.1) |
st181058 | @ptrblck thanks, seems like I can take this example and use createClone to create my desired node in my graph. I guess there is no exposed API to set attributes for nodes, right? |
st181059 | Omer_B:
I guess there is no exposed API to set attributes for nodes, right?
I don’t know, so lets wait for some experts here. |
st181060 | @ptrblck Actually, copying an entire graph just for one node modification takes more time than I can afford. If there were an option to set attributes for nodes, it would be great. |
st181061 | Looking to see if anyone has successfully deployed a torchvision Faster R-CNN (or Mask R-CNN) model to C++ via TorchScript/libtorch.
In python
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
script_model = torch.jit.script(model)
script_model.save("model.pt")
In C++
module = torch::jit::load("model.pt");
I linked to library with QMake in QT Creator with this .pro in Windows 10
QT -= gui
CONFIG += c++14 console no_keywords
CONFIG -= app_bundle
QMAKE_CXXFLAGS += -D_GLIBCXX_USE_CXX11_ABI=0
QMAKE_LFLAGS += -INCLUDE:?warp_size@cuda@at@@YAHXZ
DEFINES += QT_DEPRECATED_WARNINGS
SOURCES += \
main.cpp
LIBS += -LD:\src\PyTorch_Playground\c++\testLibTorch\libtorch-win-shared-with-deps-debug-latest\libtorch\lib \
-lc10 -lc10_cuda -ltorch_cuda -ltorch_cpu -ltorch
INCLUDEPATH += D:\src\PyTorch_Playground\c++\testLibTorch\libtorch-win-shared-with-deps-latest\libtorch\include
INCLUDEPATH += D:\src\PyTorch_Playground\c++\testLibTorch\libtorch-win-shared-with-deps-latest\libtorch\include\torch\csrc\api\include
This works when the model loaded is a ResNet model as well as FCN-ResNet. Both run and give correct outputs but RCNN models throw an exception
schemas.size() > 0 INTERNAL ASSERT FAILED at "..\..\torch\csrc\jit\frontend\schema_matching.cpp":491, please report a bug to PyTorch.
The TorchScript RCNN model has no problem running in Python.
There is a months-old GitHub issue (to which I added my comments): https://github.com/pytorch/pytorch/issues/35881
but since this is seriously blocking my transition from tensorflow to pytorch I’m hoping crossposting it here will attract more eyes. Also I’m not sure if it’s a bug or a mistake on my part or I’m not linking correctly. Also wondering if people have ever had this working perhaps on previous versions. I’m using nightly builds because I was playing with AMP and autocast. |
st181062 | Fairly new to PyTorch, and for me loading a faster-rcnn model is failing as well. I tried following the directions here: https://pytorch.org/tutorials/advanced/cpp_export.html
An exception is thrown here:
module = torch::jit::load("Path_to_model"); |
st181063 | I created a class that contains a number of nn.Modules. It looks like doing so adds a prefix to the state_dict keys. Is there a way to specify a prefix during load_state_dict()? Or what is the recommended way to handle such a situation? |
st181064 | You could create a new dict by assigning the state_dict keys and values to the new keys with the desired prefix. |
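A minimal sketch of that idea (the module and prefix here are just for illustration):
import torch
import torch.nn as nn
class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(4, 2)  # keys become "backbone.weight", "backbone.bias"
plain = nn.Linear(4, 2)  # keys are "weight", "bias"
wrapped = Wrapper()
# Add the "backbone." prefix so the plain state_dict fits the wrapper:
renamed = {"backbone." + k: v for k, v in plain.state_dict().items()}
wrapped.load_state_dict(renamed)
# To strip a prefix instead, slice it off each key before loading.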
st181065 | Batch norm can receive a half tensor and also return a half tensor in normal cases,
but after tracing, and under torch.no_grad(), it returns float instead of half (see the image below):
bn_layer = torch.nn.BatchNorm2d(10).cuda().float()
x = torch.rand(1,10,10,10).cuda().half()
o1 = bn_layer(x)
print(o1.dtype)
with torch.no_grad():
trace = torch.jit.trace(bn_layer, torch.rand(1,10,10,10).cuda().half())
with torch.no_grad():
o2 = trace(x)
print(o2.dtype)
o3 = trace(x)
print(o3.dtype)
[image: screenshot of the printed dtypes, 1624×748] |
st181066 | PyTorch seems to support backwards compatibility for JIT-saved models but doesn't guarantee forward compatibility for the same. Are these backwards compatibility tests a part of the regression suite? Where do we find this suite? |
st181067 | When JIT saving “model.pt” of a complex pytorch model with many custom classes, I am encountering the error that pytorch doesn’t know the type annotation of one of those custom classes. In other words, the following code (drastically summarized from original) fails on the seventh line:
import torch
from gan import Generator
from gan.blocks import SpadeBlock
generator = Generator()
generator.load_weights("path/to/weigts")
jitted = torch.jit.script(generator)
torch.jit.save(jitted, "model.pt")
Error:
Traceback (most recent call last):
File "pth2onnx.py", line 72, in <module>
to_torch_jit(generator)
File "pth2onnx.py", line 24, in to_torch_jit
jitted = torch.jit.script(generator)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/__init__.py", line 1516, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 310, in create_script_module
concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 269, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 138, in infer_concrete_type_builder
sub_concrete_type = concrete_type_store.get_or_create_concrete_type(item)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 269, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 138, in infer_concrete_type_builder
sub_concrete_type = concrete_type_store.get_or_create_concrete_type(item)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 269, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 126, in infer_concrete_type_builder
attr_type = infer_type(name, item)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 99, in infer_type
attr_type = torch.jit.annotations.ann_to_type(class_annotations[name], _jit_internal.fake_range())
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/annotations.py", line 303, in ann_to_type
raise ValueError("Unknown type annotation: '{}'".format(ann))
ValueError: Unknown type annotation: '<class 'gan.blocks.SpadeBlock'>'
The type it complains about is indeed a class we ourselves have programmed and used in the loaded Generator. I would appreciate pointers on what could cause this or how to investigate this!
I tried the following:
explicitly importing SpadeBlock in the script that calls torch.jit.script
ensured it inherits from nn.Module (as does Generator)
ensured the gan package is installed, using pip install --user -e <directory>
Any ideas? Thanks in advance! |
st181068 | Which version of torch are you using? If it is anything below 1.6, I think you need to script SpadeBlock as well by decorating its definition with @torch.jit.script. |
st181069 | Thanks for responding.
I checked, but I already had torch==1.6.0 installed. I'm using Python 3.6.0, if it matters. Just to be sure, I reinstalled using pip uninstall torch; pip install --user torch==1.6.0, and now the error changed to be even stranger. Now it can't identify nn.Module!
Traceback (most recent call last):
File "pth2onnx.py", line 72, in <module>
to_torch_jit(generator)
File "pth2onnx.py", line 24, in to_torch_jit
jitted = torch.jit.script(generator)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/__init__.py", line 1516, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 310, in create_script_module
concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 269, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 126, in infer_concrete_type_builder
attr_type = infer_type(name, item)
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/_recursive.py", line 99, in infer_type
attr_type = torch.jit.annotations.ann_to_type(class_annotations[name], _jit_internal.fake_range())
File "/home/a.nieuwland/.conda/envs/python3.6/lib/python3.6/site-packages/torch/jit/annotations.py", line 303, in ann_to_type
raise ValueError("Unknown type annotation: '{}'".format(ann))
ValueError: Unknown type annotation: '<class 'torch.nn.modules.module.Module'>'
EDIT: After testing some more, the difference in the unknown type annotation appears to be because I used a different generator class in the second export. Is it possible PyTorch's JIT doesn't support type hints? I'm using those heavily. |
st181070 | I run this exact script and get the Unknown type annotation nn.Module error. It includes a successful export to onnx just as a sanity check that there isn’t something wrong with the model.
import torch
import torch.nn as nn
class DcganGenerator(nn.Module):
__main: nn.Module
def __init__(self, shape_originals, shape_targets):
super().__init__()
num_channels = shape_originals[0]
shape_originals # TODO upsample to this shape
ngf = 64
ndf = 64
self.__main = nn.Sequential(
nn.ConvTranspose2d(num_channels, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.PReLU(),
nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.PReLU(),
nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.PReLU(),
nn.ConvTranspose2d(ngf * 2, ngf, 4, 1, 1, bias=False),
nn.BatchNorm2d(ngf),
nn.PReLU(),
nn.ConvTranspose2d(ngf, num_channels, 4, 1, 1, bias=False),
nn.PReLU(),
)
def forward(self, inputs):
return {"generated": self.__main(inputs)}
def load_generator():
print("Instantiating generator")
g = DcganGenerator([3, 128, 128], [3, 512, 512])
print(g)
return g
def to_torch_jit(generator):
print("Converting to JIT pytorch")
try:
jitted = torch.jit.script(generator)
torch.jit.save(jitted, "model.pt")
print("Created model.pt")
except Exception as e:
print(f"Failed. {e}")
def to_onnx(generator):
print("Converting to ONNX")
try:
torch.onnx.export(
generator,
torch.rand(1, 3, 128, 128),
"model.onnx",
export_params=True,
opset_version=11,
input_names=["in"],
output_names=["out"],
)
print("Created model.onnx")
except Exception as e:
print(f"Failed. {e}")
generator = load_generator()
print()
to_torch_jit(generator)
to_onnx(generator)
Saved as jit-it.py, output:
$ python3 jit-it.py
Instantiating generator
DcganGenerator(
(_DcganGenerator__main): Sequential(
(0): ConvTranspose2d(3, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): PReLU(num_parameters=1)
(3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): PReLU(num_parameters=1)
(6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): PReLU(num_parameters=1)
(9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1), bias=False)
(10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): PReLU(num_parameters=1)
(12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1), bias=False)
(13): PReLU(num_parameters=1)
)
)
Converting to JIT pytorch
Failed. Unknown type annotation: '<class 'torch.nn.modules.module.Module'>'
Converting to ONNX
Created model.onnx |
st181071 | Your example works for me if I remove the declaration of __main outside of __init__ (nn.Module is not a supported type annotation) and rename it to _main (probably some special handling for identifiers that begin with two __). |
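For reference, a minimal sketch of that change (the layer stack is abbreviated here; the only points that matter are dropping the class-level annotation and using a single leading underscore):
import torch
import torch.nn as nn

class DcganGenerator(nn.Module):
    def __init__(self, shape_originals, shape_targets):
        super().__init__()
        num_channels = shape_originals[0]
        ngf = 64
        # single leading underscore avoids Python name mangling, and there is
        # no class-level "nn.Module" annotation for the attribute
        self._main = nn.Sequential(
            nn.ConvTranspose2d(num_channels, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.PReLU(),
            nn.ConvTranspose2d(ngf * 8, num_channels, 4, 2, 1, bias=False),
        )

    def forward(self, inputs):
        return {"generated": self._main(inputs)}

scripted = torch.jit.script(DcganGenerator([3, 128, 128], [3, 512, 512]))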
st181072 | Hi - We are trying to implement a lowering of TorchScript to the MLIR/ATen dialect as part of the LLVM npcomp incubator effort [1]. Is there a declarative specification of the TorchScript IR we could leverage for this work? Thanks
[1] https://llvm.discourse.group/t/npcomp-next-steps-for-torch-ir-aten-dialect/1777 |
st181073 | I think the closest thing to what you are looking for is the core program representation section of the JIT overview documentation. |
st181074 | Thanks - that is a really helpful and nicely written doc.
Are there any facilities for systematically enumerating the ops and how they are materialized in the IR? (i.e. I’ve got a harness that traces each op and then extracts the corresponding IR to make such a mapping, but it would be better if this was in some machine-usable source form somewhere) |
st181075 | If by ops, you mean things like convolutions, not really. Most ops of that sort show up in the IR as aten::op, but they are not a formally defined part of the JIT IR. There is no formal restriction on which ops are “allowed” and which ops are not. They map to operators in ATen, some of which can be user-defined and registered outside of the core PyTorch library. |
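As a quick illustration of how those ops appear (the exact printout is version-dependent), you can print the graph of any scripted function and the operator nodes show up with aten:: prefixes:
import torch

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.relu(x + 1)

print(f.graph)  # contains nodes such as aten::add and aten::relu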
st181076 | Thanks - that was my understanding but was just double checking that I wasn’t missing something obvious. |
st181077 | I have a really huge OCR NN, and
because I want to use PyTorch on Android,
I tried to convert the PyTorch model to PyTorch Mobile.
(Or can I just use the PyTorch library directly from Java?)
Here is what I tried:
recognizer2 = recognizer
recognizer2.eval()
example = img_cv_grey, horizontal_list, free_list,\
decoder, beamWidth, batch_size,\
workers, allowlist, blocklist, detail,\
paragraph, contrast_ths, adjust_contrast,\
filter_ths, False
torch.jit.trace(recognizer2, example)
The example is a sample input (although I don't know why it is needed; can't we just convert to mobile without a sample input? It is very annoying).
The error message is:
RuntimeError Traceback (most recent call last)
in ()
----> 1 result = convert_torch_mobile(“drive/My Drive/colab/easy_ocr_test/Capture1.PNG”)
2 frames
in convert_torch_mobile(image, decoder, beamWidth, batch_size, workers, allowlist, blocklist, detail, paragraph, min_size, contrast_ths, adjust_contrast, filter_ths, text_threshold, low_text, link_threshold, canvas_size, mag_ratio, slope_ths, ycenter_ths, height_ths, width_ths, add_margin)
15 detector2.eval()
16 example = img_cv_grey, horizontal_list, free_list, decoder, beamWidth, batch_size, workers, allowlist, blocklist, detail, paragraph, contrast_ths, adjust_contrast, filter_ths, False
—> 17 torch.jit.trace(recognizer2, example)
18
19
/usr/local/lib/python3.6/dist-packages/torch/jit/init.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
953 return trace_module(func, {‘forward’: example_inputs}, None,
954 check_trace, wrap_check_inputs(check_inputs),
–> 955 check_tolerance, strict, _force_outplace, _module_class)
956
957 if (hasattr(func, ‘self’) and isinstance(func.self, torch.nn.Module) and
/usr/local/lib/python3.6/dist-packages/torch/jit/init.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
1107 func = mod if method_name == “forward” else getattr(mod, method_name)
1108 example_inputs = make_tuple(example_inputs)
-> 1109 module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, strict, _force_outplace)
1110 check_trace_method = module._c._get_method(method_name)
1111
RuntimeError: Tracer cannot infer type of (array([[255, 255, 255, …, 255, 255, 255],
[255, 255, 255, …, 255, 255, 255],
[255, 255, 255, …, 255, 255, 255],
…,
[255, 255, 255, …, 255, 255, 255],
[255, 255, 255, …, 255, 255, 255],
[255, 255, 255, …, 255, 255, 255]], dtype=uint8), [[59, 89, 15, 31], [97, 295, 11, 31], [58, 326, 32, 58], [332, 746, 34, 58], [61, 169, 59, 79], [197, 295, 59, 79], [318, 460, 56, 80], [481, 559, 59, 79], [581, 663, 59, 79], [99, 131, 83, 99], [145, 210, 77, 103], [231, 445, 83, 99], [58, 504, 128, 154], [11, 45, 143, 159], [58, 774, 152, 176], [58, 172, 174, 198], [197, 295, 177, 197], [318, 404, 174, 198], [411, 471, 177, 197], [494, 602, 174, 198], [625, 711, 177, 197], [65, 181, 199, 219], [199, 275, 199, 219], [297, 489, 203, 219], [58, 352, 244, 273], [59, 551, 273, 293], [59, 169, 295, 315], [197, 279, 295, 315], [305, 445, 295, 315], [469, 575, 295, 315], [599, 683, 295, 315], [67, 181, 317, 337], [197, 305, 317, 337], [327, 569, 321, 337], [60, 288, 366, 390], [58, 724, 390, 414], [59, 169, 413, 433], [197, 281, 413, 433], [305, 437, 413, 433], [461, 565, 413, 433], [591, 675, 413, 433], [96, 182, 434, 458], [199, 307, 437, 457], [327, 523, 439, 457], [58, 242, 484, 508], [58, 472, 508, 532], [58, 172, 530, 554], [197, 273, 533, 553], [297, 437, 533, 553], [461, 535, 533, 551], [561, 645, 533, 553], [97, 181, 555, 575], [199, 305, 555, 575], [362, 440, 552, 576], [456, 506, 552, 576], [532, 795, 555, 575], [635, 659, 613, 631], [696, 734, 610, 634], [335, 499, 605, 641], [539, 561, 615, 631]], [], ‘greedy’, 5, 1, 0, None, None, 1, False, 0.1, 0.5, 0.003, False)
:Only tensors and (possibly nested) tuples of tensors, lists, or dictsare supported as inputs or outputs of traced functions, but instead got value of type ndarray.
And my model is
DataParallel(
(module): Model(
(FeatureExtraction): ResNet_FeatureExtractor(
(ConvNet): ResNet(
(conv0_1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn0_1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv0_2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn0_2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool3): MaxPool2d(kernel_size=2, stride=(2, 1), padding=(0, 1), dilation=1, ceil_mode=False)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(4): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(conv3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(conv4_1): Conv2d(512, 512, kernel_size=(2, 2), stride=(2, 1), padding=(0, 1), bias=False)
(bn4_1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv4_2): Conv2d(512, 512, kernel_size=(2, 2), stride=(1, 1), bias=False)
(bn4_2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(AdaptiveAvgPool): AdaptiveAvgPool2d(output_size=(None, 1))
(SequenceModeling): Sequential(
(0): BidirectionalLSTM(
(rnn): LSTM(512, 512, batch_first=True, bidirectional=True)
(linear): Linear(in_features=1024, out_features=512, bias=True)
)
(1): BidirectionalLSTM(
(rnn): LSTM(512, 512, batch_first=True, bidirectional=True)
(linear): Linear(in_features=1024, out_features=512, bias=True)
)
)
(Prediction): Linear(in_features=512, out_features=1568, bias=True)
)
)
Why does this happen? What I want is just to use PyTorch from Java… any help please? Thanks! |
st181078 | My simplified error output is this:
torch.jit.trace(recognizer2, example)
RuntimeError: Type ‘Tuple[int, int, float, float, float, int, float, float, float, float, float, float, bool]’ cannot be traced. Only Tensors and (possibly nested) Lists, Dicts, and Tuples of Tensors can be traced |
st181079 | Is it expected that for some modules torch.jit.trace works well while torch.jit.script fails? In particular, I get an error related to TorchScript treating all objects as Tensors by default, i.e.
RuntimeError:
Tensor cannot be used as a tuple: |
st181080 | Yes, this is expected.
torch.jit.trace constructs a JIT IR graph by observing the operations that are performed on Tensors by the Module being traced, with little concern for the Python language constructs that are used to express those operations. This means it can produce JIT graphs successfully for a wide variety of programs, but those graphs may not always be faithful representations of the corresponding programs. For example, tracing is unable to capture control flow because it observes tensor operations at runtime, and that means the graph it constructs will contain one branch of a given if statement but not the other.
torch.jit.script produces a JIT graph from a program by compiling it. However, it places a number of restrictions on the programs it can compile, such as requiring that they be statically typed (through heavy use of type annotations) and use only a specified subset of Python language features.
To answer your question, torch.jit.script assumes any argument without an annotated type is a Tensor. To be able to script your Module, you will need to add type annotations to all methods and attributes and to make sure you are not using any unsupported Python language features. |
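To illustrate that last point with a toy module (not from the original question): with the List[Tensor] annotation the module scripts, whereas without it the argument would be assumed to be a Tensor.
import torch
from typing import List, Tuple

class SumAll(torch.nn.Module):
    def forward(self, xs: List[torch.Tensor]) -> Tuple[torch.Tensor, int]:
        total = torch.zeros(1)
        for x in xs:
            total = total + x.sum()
        return total, len(xs)

scripted = torch.jit.script(SumAll())
print(scripted([torch.ones(2), torch.ones(3)]))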
st181081 | Currently, I cannot convert simple python strings like “0” or “1” to an integer or float in torchscript. Trying int('0') will result in the error:
RuntimeError:
Arguments for call are not valid.
And it expects only a tensor, a bool, a float or a number to be converted to int.
With float, it only allows "-inf" or "inf" to be converted.
I tried encoding with bytes, but that isn’t allowed either. Do you have any suggestions for a JIT/Torchscript compatible way to convert strings to ints/floats? |
st181082 | Which version of torch are you using? I just tried this with torch-1.6 (the latest release) and it works:
Code
import torch

def fn() -> int:
    a = int('0')
    return a

s = torch.jit.script(fn)
print(s())
Output
0 |
st181083 | class Model(mm.Module):
    def __init__(self):
        # Init
    def forward(self, x):
        # type: (List[Tensor]) -> List[Tensor]
        # Code
If we need to define the type of input x when it is different from Tensor, we could do something like the above. However, I am confused about how to add the annotation if I use
class Model(mm.Module):
    def __init__(self):
        # Add sequential module
        self.layer = nn.Sequential([...])
    def forward(self, x):
        # type: (List[Tensor]) -> List[Tensor]
        # Code
If I define an attribute self.layer of type nn.Sequential, and this nn.Sequential requires a different input type, for example List[Tensor], how can I annotate this to work with torch.jit? |
st181084 | This is not possible at the moment. What you can do is extend or make your own version of Sequential with the type annotation you need on its forward method. |
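A rough sketch of that workaround (the class and module names are made up, each submodule is assumed to take and return List[Tensor], and details may need adjusting depending on the PyTorch version):
import torch
import torch.nn as nn
from typing import List

class AddOne(nn.Module):
    def forward(self, xs: List[torch.Tensor]) -> List[torch.Tensor]:
        return [x + 1.0 for x in xs]

class ListSequential(nn.Module):
    # hand-rolled stand-in for nn.Sequential with a List[Tensor] signature
    def __init__(self, *mods):
        super().__init__()
        self.mods = nn.ModuleList(mods)

    def forward(self, xs: List[torch.Tensor]) -> List[torch.Tensor]:
        for m in self.mods:
            xs = m(xs)
        return xs

scripted = torch.jit.script(ListSequential(AddOne(), AddOne()))
print(scripted([torch.zeros(2), torch.zeros(3)]))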
st181085 | After I read this tutorial, what I learnt is that I should use jit.script to convert my module into TorchScript if there are conditional expressions or other uncertainties during execution in its forward propagation.
If my comprehension is correct, can I say that it is OK to use jit.script any time in place of jit.trace? I see no drawback of jit.script compared to jit.trace.
Also, I cannot understand why the tutorial calls something like jit.trace(jit.script(module)). Does this make any sense? Can't I just use jit.script(module) alone? |
st181086 | Yeah, jit.trace is of limited use. It can process some python code without requiring modifications (type annotations, etc.), but at the same time non-trivial code may fail to work correctly, if any non-tensor non-constants get captured.
Flicic_Suo:
Also, I cannot understand why the tutorial calls something like jit.trace(jit.script(module)). Does this make any sense?
No, this doesn't make sense. Note that jit.script(module) returns a new module; the old one remains in an uncompiled state. |
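To make that last point concrete, a trivial sketch:
import torch
import torch.nn as nn

model = nn.Linear(3, 3)
scripted = torch.jit.script(model)
print(type(model))     # the original module is untouched
print(type(scripted))  # a separate ScriptModule is returned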
st181087 | How can I iterate over an nn.Sequential() module like a list:
RuntimeError:
'module' object is not iterable:
at /home/dai/scripts/card_ocr_cpu/detector/model_torchscript.py:83:8
def forward(self, x):
out = []
for i, m in enumerate(self.features):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... <--- HERE
x = m(x)
if i in [3, 6, 13, 18]:
out.append(x)
return out
'__torch__.extractor.forward' is being compiled since it was called from '__torch__.EAST.forward'
at /home/dai/scripts/card_ocr_cpu/detector/model_torchscript.py:209:8
def forward(self, x, train:bool=False):
x1,x2,x3,x4 = self.extractor(x)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
x = self.merge(x1,x2,x3,x4)
score,geo = self.output(x)
if not train:
boxes = get_boxes_torch(score,geo,score_thresh=0.95,nms_thresh=0.2) |
st181088 | Solved by dalalaa in post #3
Thank you, I fixed this problem by removing enumerate(). |
st181089 | Could you try to add all modules into an nn.ModuleList instead of nn.Sequential? |
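For illustration, a sketch of an extractor written that way (layer types and split points are placeholders, not the original EAST model); the index is tracked by hand because enumerate() over modules was not supported by the scripting frontend at the time:
import torch
import torch.nn as nn
from typing import List

class Extractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.ModuleList([nn.Conv2d(3, 3, 3, padding=1) for _ in range(4)])

    def forward(self, x: torch.Tensor) -> List[torch.Tensor]:
        out: List[torch.Tensor] = []
        i = 0
        for m in self.features:
            x = m(x)
            if i in [1, 3]:  # placeholder split points
                out.append(x)
            i += 1
        return out

scripted = torch.jit.script(Extractor())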
st181090 | Are you sure? I’m trying to iterate over a ModuleList (without enumerate) and get the error “‘torch.torch.nn.modules.container.ModuleList’ object is not iterable”. Latest pytorch version |
st181091 | Hi,
I am trying to save a model using torch.jit.script in the following code:
with torch.jit.optimized_execution(True):
    traced_script_module = torch.jit.script(model)
    # save the converted model
    traced_script_module.save("yolov3.pt")
But I got this error :
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 87
def forward(self, *input):
~~~~~~ <--- HERE
r"""Defines the computation performed at every call.
The code of building the model can be found here:
https://github.com/eriklindernoren/PyTorch-YOLOv3/blob/master/models.py
from __future__ import division
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
from utils.parse_config import *
from utils.utils import build_targets, to_cpu, non_max_suppression
import matplotlib.pyplot as plt
import matplotlib.patches as patches
def create_modules(module_defs):
"""
Constructs module list of layer blocks from module configuration in module_defs
"""
hyperparams = module_defs.pop(0)
How can I solve this issue? |
st181092 | Is the DarkNet the thing you want to script for? Can you provide a smaller example for easier navigation? |
st181093 | Yes, I am trying to script YOLOv3 based on Darknet
The main two classes are YOLOLayer and Darknet
class YOLOLayer(nn.Module):
    """Detection layer"""

    def __init__(self, anchors, num_classes, img_dim=416):
        super(YOLOLayer, self).__init__()
        self.anchors = anchors
        self.num_anchors = len(anchors)
        self.num_classes = num_classes
        self.ignore_thres = 0.5
        self.mse_loss = nn.MSELoss()
        self.bce_loss = nn.BCELoss()
        self.obj_scale = 1
        self.noobj_scale = 100
        self.metrics = {}
        self.img_dim = img_dim
        self.grid_size = 0  # grid size

    def forward(self, x, targets=None, img_dim=None):
        # Tensors for cuda support
        FloatTensor = torch.cuda.FloatTensor if x.is_cuda else torch.FloatTensor
        LongTensor = torch.cuda.LongTensor if x.is_cuda else torch.LongTensor
        ByteTensor = torch.cuda.ByteTensor if x.is_cuda else torch.ByteTensor
        self.img_dim = img_dim
        num_samples = x.size(0)
        grid_size = x.size(2)
        prediction = (
            x.view(num_samples, self.num_anchors, self.num_classes + 5, grid_size, grid_size)
            .permute(0, 1, 3, 4, 2)
            .contiguous()
        )
        # Get outputs
        x = torch.sigmoid(prediction[..., 0])  # Center x
        y = torch.sigmoid(prediction[..., 1])  # Center y
        w = prediction[..., 2]  # Width
        h = prediction[..., 3]  # Height
        pred_conf = torch.sigmoid(prediction[..., 4])  # Conf
        pred_cls = torch.sigmoid(prediction[..., 5:])  # Cls pred.
        # If grid size does not match current we compute new offsets
        if grid_size != self.grid_size:
            self.compute_grid_offsets(grid_size, cuda=x.is_cuda)
        # Add offset and scale with anchors
        pred_boxes = FloatTensor(prediction[..., :4].shape)
        pred_boxes[..., 0] = x.data + self.grid_x
        pred_boxes[..., 1] = y.data + self.grid_y
        pred_boxes[..., 2] = torch.exp(w.data) * self.anchor_w
        pred_boxes[..., 3] = torch.exp(h.data) * self.anchor_h
        output = torch.cat(
            (
                pred_boxes.view(num_samples, -1, 4) * self.stride,
                pred_conf.view(num_samples, -1, 1),
                pred_cls.view(num_samples, -1, self.num_classes),
            ),
            -1,
        )
        if targets is None:
            return output, 0
        else:
            iou_scores, class_mask, obj_mask, noobj_mask, tx, ty, tw, th, tcls, tconf = build_targets(
                pred_boxes=pred_boxes,
                pred_cls=pred_cls,
                target=targets,
                anchors=self.scaled_anchors,
                ignore_thres=self.ignore_thres,
            )
            # Loss : Mask outputs to ignore non-existing objects (except with conf. loss)
            loss_x = self.mse_loss(x[obj_mask], tx[obj_mask])
            loss_y = self.mse_loss(y[obj_mask], ty[obj_mask])
            loss_w = self.mse_loss(w[obj_mask], tw[obj_mask])
            loss_h = self.mse_loss(h[obj_mask], th[obj_mask])
            loss_conf_obj = self.bce_loss(pred_conf[obj_mask], tconf[obj_mask])
            loss_conf_noobj = self.bce_loss(pred_conf[noobj_mask], tconf[noobj_mask])
            loss_conf = self.obj_scale * loss_conf_obj + self.noobj_scale * loss_conf_noobj
            loss_cls = self.bce_loss(pred_cls[obj_mask], tcls[obj_mask])
            total_loss = loss_x + loss_y + loss_w + loss_h + loss_conf + loss_cls
            # Metrics
            cls_acc = 100 * class_mask[obj_mask].mean()
            conf_obj = pred_conf[obj_mask].mean()
            conf_noobj = pred_conf[noobj_mask].mean()
            conf50 = (pred_conf > 0.5).float()
            iou50 = (iou_scores > 0.5).float()
            iou75 = (iou_scores > 0.75).float()
            detected_mask = conf50 * class_mask * tconf
            precision = torch.sum(iou50 * detected_mask) / (conf50.sum() + 1e-16)
            recall50 = torch.sum(iou50 * detected_mask) / (obj_mask.sum() + 1e-16)
            recall75 = torch.sum(iou75 * detected_mask) / (obj_mask.sum() + 1e-16)
            self.metrics = {
                "loss": to_cpu(total_loss).item(),
                "x": to_cpu(loss_x).item(),
                "y": to_cpu(loss_y).item(),
                "w": to_cpu(loss_w).item(),
                "h": to_cpu(loss_h).item(),
                "conf": to_cpu(loss_conf).item(),
                "cls": to_cpu(loss_cls).item(),
                "cls_acc": to_cpu(cls_acc).item(),
                "recall50": to_cpu(recall50).item(),
                "recall75": to_cpu(recall75).item(),
                "precision": to_cpu(precision).item(),
                "conf_obj": to_cpu(conf_obj).item(),
                "conf_noobj": to_cpu(conf_noobj).item(),
                "grid_size": grid_size,
            }
            return output, total_loss

class Darknet(nn.Module):
    """YOLOv3 object detection model"""

    def __init__(self, config_path, img_size=416):
        super(Darknet, self).__init__()
        self.module_defs = parse_model_config(config_path)
        self.hyperparams, self.module_list = create_modules(self.module_defs)
        self.yolo_layers = [layer[0] for layer in self.module_list if hasattr(layer[0], "metrics")]
        self.img_size = img_size
        self.seen = 0
        self.header_info = np.array([0, 0, 0, self.seen, 0], dtype=np.int32)

    def forward(self, x, targets=None):
        img_dim = x.shape[2]
        loss = 0
        layer_outputs, yolo_outputs = [], []
        for i, (module_def, module) in enumerate(zip(self.module_defs, self.module_list)):
            if module_def["type"] in ["convolutional", "upsample", "maxpool"]:
                x = module(x)
            elif module_def["type"] == "route":
                x = torch.cat([layer_outputs[int(layer_i)] for layer_i in module_def["layers"].split(",")], 1)
            elif module_def["type"] == "shortcut":
                layer_i = int(module_def["from"])
                x = layer_outputs[-1] + layer_outputs[layer_i]
            elif module_def["type"] == "yolo":
                x, layer_loss = module[0](x, targets, img_dim)
                loss += layer_loss
                yolo_outputs.append(x)
            layer_outputs.append(x)
        yolo_outputs = to_cpu(torch.cat(yolo_outputs, 1))
        return yolo_outputs if targets is None else (loss, yolo_outputs) |
st181094 | Hi @wanchaol,
Is there anything that I can do from my side to help you navigate the code and understand my problem? |
st181095 | Hi @baheytharwat, sorry, I haven't had time to debug the issue yet as the model is quite large; I will continue next week. It would be nice if you could provide a minimal repro, as that would help with debugging and fixing. |
st181096 | Hi everyone,
@baheytharwat Did you manage to solve this?
Does anyone have some ideas on how to debug it?
Thanks! |
st181097 | I have a big composite module, that jit compiles and executes forward() ok, but fails in backward(). The big issue is that there is no error with jit disabled, and set_detect_anomaly() is not too helpful in jit mode.
I’m 90% sure the error itself is related to how jit incorrectly enables requires_grad in multiple scenarios. But are there any techniques to localize it?
For reference, here is exception text:
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "<string>", line 138, in <backward op>
dim: int):
def backward(grad_outputs: List[Tensor]):
grad_self = torch.stack(grad_outputs, dim)
~~~~~~~~~~~ <--- HERE
return grad_self, None
RuntimeError: sizes() called on undefined Tensor
So this is some generated code, apparently for the unbind() operation, where the outputs have inconsistent requires_grad?
Also note how the traceback cutting off at the "dim: int" line does a disservice here.
And console:
[W …\torch\csrc\autograd\python_anomaly_mode.cpp:104] Warning: Error detected in struct torch::jit::`anonymous namespace’::DifferentiableGraphBackward. Traceback of forward call that caused the error:
…
has traceback that stops at jit module |
st181098 | To me the error looks like you have an undefined Tensor (aka None) showing up unexpectedly.
I would try to cut down the model (or just the JITed part) a bit to zoom into where it happens. (You can see a method I use to grab a submodule by searching for DebugWrap in my blog post on PyTorch and TVM.)
Naturally, we would be most grateful if you found a reproducing snippet that you can share to fix this in PyTorch.
Best regards
Thomas |
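A crude sketch of the wrapping idea (this is not the exact helper from the blog post, just the general shape): wrap one submodule in eager mode so its inputs and outputs can be captured, then re-run or script that submodule in isolation with the captured inputs.
import torch.nn as nn

class CaptureWrap(nn.Module):
    # replace e.g. model.block with CaptureWrap(model.block), run a forward
    # pass, then inspect model.block.captured to reproduce the submodule alone
    def __init__(self, wrapped: nn.Module):
        super().__init__()
        self.wrapped = wrapped
        self.captured = []

    def forward(self, *args):
        out = self.wrapped(*args)
        self.captured.append((args, out))
        return out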
st181099 | I couldn’t easily divide that module (issue with mutable objects in mixed mode). What helped somewhat was running backward() in profiler context, exporting chrome trace and looking at last successful ops, but that’s not a good or reliable solution. Luckily, I identified unbind from the above message, and indeed unbind was the problem.
Now, for some reason I failed to reproduce the failure in a small script; however, I think it is somehow related to the "Backward through view of unbind output" issue and seems to only fail in "legacy" JIT mode (which is the 1.6 default), so perhaps this will be handled in the next release. |
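For completeness, a sketch of that profiler trick with a stand-in module (the real scripted module and inputs go in its place):
import torch

scripted = torch.jit.script(torch.nn.Linear(8, 8))
x = torch.randn(4, 8, requires_grad=True)

with torch.autograd.profiler.profile() as prof:
    scripted(x).sum().backward()

prof.export_chrome_trace("backward_trace.json")  # inspect the last successful ops in chrome://tracing
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))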