id | text
---|---
st180900 | Thanks for this quick suggestion. I will try to dig deeper to understand why .to(device) doesn’t do the same. |
st180901 | I would like to define a module in torch-script that contains other modules. In the forward pass, a matrix would be populated by assigning the output of each sub-module to different elements of the matrix.
Here’s a simplified example:
import torch
from torch import jit, nn, Tensor

class Module1(jit.ScriptModule):
    def __init__(self, first: torch.nn.Module, second: torch.nn.Module):
        super(Module1, self).__init__()
        self.first = first
        self.second = second

    @jit.script_method
    def forward(self, input: Tensor) -> Tensor:
        out = torch.eye(2)
        out[0, 0] = self.first(input)
        out[1, 1] = self.second(input)
        return out

def test1():
    m1 = Module1(first=nn.Linear(1, 1), second=nn.Linear(1, 1))
    out = m1(torch.ones(1))
    print(out)

if __name__ == '__main__':
    test1()
This runs fine with PYTORCH_JIT=0 python3 test.py, but an error is thrown with jit:
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
File "torch_kalman/rewrite/test.py", line 14, in forward
def forward(self, input: torch.Tensor) -> torch.Tensor:
out = torch.eye(2)
out[0,0] = self.first(input)
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
out[1,1] = self.second(input)
return out
RuntimeError: output with shape [] doesn't match the broadcast shape [1]
I am not sure why the interpreter thinks that the module outputs have zero extent. Is this a known unsupported feature? I could not find it in documentation. Or is this a bug?
EDIT: Maybe “output of a module” is not needed in the title – seems like nothing generated at runtime can be assigned to tensor-elements? For example, the following attempt at a workaround hits the same error:
@jit.script_method
def forward(self, input: Tensor) -> Tensor:
    out = jit.annotate(List[Tensor], [])
    out += [self.first(input)]
    out += [self.second(input)]
    out = torch.stack(out)
    out2 = torch.eye(len(out))
    for i in range(len(out)):
        out2[i, i] = out[i]
    return out2 |
st180902 | The following seems to work:
def forward(self, input: Tensor) -> Tensor:
    out = torch.eye(2)
    out[slice(0, 1), slice(0, 1)] = self.first(input)
    out[slice(1, 2), slice(1, 2)] = self.second(input)
    return out |
st180903 | That’s a quirk with zero-dimensional tensors: out[0,0] and out[0,0:1] are not equivalent, and broadcasting from a 1-dim to a 0-dim tensor is a “special case” that apparently is not handled in TorchScript mode. |
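For illustration, a minimal eager-mode sketch of the quirk being described (not from the thread): indexing with integers gives a 0-dim destination, while slicing keeps a destination with shape, so a shape-[1] module output can be assigned without the special-case broadcast.
import torch

out = torch.eye(2)
val = torch.ones(1)   # shape [1], like the output of nn.Linear(1, 1)

out[0, 0] = val       # 0-dim destination; relies on the 1-dim -> 0-dim broadcast special case
out[0:1, 0:1] = val   # destination has shape [1, 1]; plain broadcasting, also fine in TorchScript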
st180904 | Spent the better part of the day trying to script a couple of facial detection and recognition models for inference in hope of seeing some performance improvements and was very disappointed.
Scripting is pretty restrictive; some very benign code doesn’t seem to compile properly and forced me to exclude it from JIT compilation. In my case it was the handling of a list of tensors erroring out because the JIT doesn’t know how to handle such objects.
One of the simpler models actually scripted very easily but… showed only a degradation for the first couple of passes and then identical performance to python (cpython).
I was reading through this and was really hopeful, but at this point (PyTorch 1.7.1 / CUDA 11.2 / cuDNN 8.0.5) with my relatively common pretrained models, I am not seeing anything positive. Has anyone been able to verify performance improvements? I have experience with lua/luajit, which shows drastic speed enhancements. |
st180905 | I saw two ways of exporting models for production, ONNX and JIT, but I am unable to understand what the difference is and when to use which one? |
st180906 | Hi,
I have a GPT2 model fine-tuned for a question generation task. I am now trying to convert it into torchscript but I am facing the following issues. My results are not as good as the original model. It would be great if someone could help me figure out the issue. Following are the versions of the packages installed:
torch: 1.7
transformers: 4.1.1(tried with 3.0.1 and 2.0 as well)
/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py:168: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can’t record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / (float(v.size(-1)) ** 0.5)
/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py:173: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can’t record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py:966: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
With rtol=1e-05 and atol=1e-05, found 254695 element(s) (out of 27897075) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 5.1975250244140625e-05 (2.250950336456299 vs. 2.251002311706543), which occurred at index (0, 69, 37541).
_module_class,
Thanks |
st180907 | Hi!
Say I have a Model and some method check_input_shape. I always want to use this method in eager PyTorch, but I want to ignore it in TorchScript. So, is there anything (maybe some decorator) to do this? I found torch.jit.unused, but this decorator raises an exception when trying to call the decorated function. |
st180908 | Nope, it doesn’t work. If you use torch.jit.ignore and try to run the TorchScript model, you either fall back to the Python interpreter (when passing drop=False) or raise an exception (when passing drop=True). |
st180909 | sorry, I assumed you wanted to ignore compilation.
you can use something like
if not jit.is_scripting(): check_input_shape(x)
to skip execution. |
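A minimal sketch of that pattern (the Model class and the check are made up for illustration): guard the call with torch.jit.is_scripting() and mark the Python-only helper with @torch.jit.unused so the compiler never needs to handle its body.
import torch
from torch import nn, Tensor

class Model(nn.Module):
    @torch.jit.unused
    def check_input_shape(self, x: Tensor) -> None:
        # eager-only validation; never executed under TorchScript
        assert x.dim() == 2, "expected a 2D input"

    def forward(self, x: Tensor) -> Tensor:
        if not torch.jit.is_scripting():
            self.check_input_shape(x)
        return x * 2

scripted = torch.jit.script(Model())
print(scripted(torch.randn(4, 3)))   # runs without hitting the check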
st180910 | In a PyTorch network model, if I use nn.ConvTranspose2d and do traced_script_module = torch.jit.trace(model, example) after constructing the model, then whatever the shape of the input tensor is, I get an output tensor from traced_script_module whose shape matches the input image.
But if I use nn.Upsample instead of nn.ConvTranspose2d in the model, strange things happen: I can only get an output tensor whose shape matches the example tensor I used in torch.jit.trace(model, example).
Can anyone explain this strange behavior? Is it related to the weak_script_method decorator or something? |
st180911 | Hi @cfanyyx, thanks for raising the issue. The problem here is that nn.Upsample is a module that contains data-dependent control flow (which means the traced model depends entirely on the input you provide during tracing and will not generalize to new inputs). To make nn.Upsample work in JIT, you will need to use TorchScript scripting instead of tracing. You can refer to the doc here.
Let me know if that’s clear or not. |
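As a rough sketch of what that looks like in practice (the UpModel module is made up for illustration): scripting a module that contains nn.Upsample keeps the size logic symbolic, so the compiled module generalizes to new input shapes.
import torch
from torch import nn

class UpModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='nearest')

    def forward(self, x):
        return self.up(x)

scripted = torch.jit.script(UpModel())            # scripting instead of tracing
print(scripted(torch.randn(1, 3, 16, 16)).shape)  # torch.Size([1, 3, 32, 32])
print(scripted(torch.randn(1, 3, 20, 20)).shape)  # torch.Size([1, 3, 40, 40]), so it generalizes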
st180912 | Thanks for your reply. It helps me a lot. I will have a try. Actually, I tried to use TorchScript scripting before, but I didn’t make it so I changed to use tracing instead. I will try TorchScript scripting again. And I will come back if I meet any new question. |
st180913 | What’s the solution? It seems to not be just putting @torch.jit.script as a decorator on the function that calls upsample. |
st180914 | Using a model serialized as torchscript.
While inference, getting the following error at model’s forward method.
Doing type conversion here. Can’t use tensor.type(torch.float16). Leads to a different error. Hence creating a new tensor. But can’t infer using this approach either.
out = self.decoder(x).clone().detach()
cast = torch.tensor(out , dtype=torch.float16)
~~~~~~~~~~~~ <--- HERE
return cast
RuntimeError: Cannot input a tensor of dimension other than 0 as a scalar argument |
st180915 | Solved by googlebot in post #2
out.to(dtype=torch.float16) should work |
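For reference, a minimal sketch of that fix inside the forward (self.decoder is whatever submodule the original model uses): convert the dtype with Tensor.to instead of constructing a new tensor from a tensor.
def forward(self, x):
    out = self.decoder(x)
    # Tensor.to avoids the "Cannot input a tensor ... as a scalar argument" path
    return out.to(dtype=torch.float16)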
st180916 | I’d like to parametrize my torch.jit.script'ed function with a function argument, i.e. whether a Forward-Backward algorithm should use lambda x: torch.max(x, dim = dim).values or torch.logsumexp(x, dim = dim). When I pass it as lambda, TorchScript complains that I’m calling a tensor-typed value which happens because it types the argument as Tensor (despite the fact that it’s initialized to a lambda by default).
Is there any way to tell TorchScript to type a value as a Callable?
Is it possible to template a TorchScript function by an external callable values via some other mechanism?
Should I create an issue about this on GitHub? |
st180917 | Solved by driazati in post #2
TorchScript currently doesn’t support callables as values (we’re working on supporting it in the coming months). The compiler doesn’t look at the default values at all since it’s not always possible to recover a full type from the default, so if types aren’t specified it just assumes everything is a… |
st180918 | TorchScript currently doesn’t support callables as values (we’re working on supporting it in the coming months). The compiler doesn’t look at the default values at all since it’s not always possible to recover a full type from the default, so if types aren’t specified it just assumes everything is a Tensor. To specify otherwise, you can use Python 3 type hints or mypy style type comments (details 11).
Function attributes on modules is the closest you can get at the moment, so something like
def my_fn(x, some_callable):
    return some_callable(x + 10)
would have to be changed to something like
class M(nn.Module):
    def __init__(self, some_callable):
        super().__init__()
        self.some_callable = some_callable

    def forward(self, x):
        return self.some_callable(x + 10)

torch.jit.script(M(the_function))
torch.jit.script(M(some_other_function)) |
st180919 | This still has some big unfortunate limitations.
You can’t store some callables in a ModuleList or a ModuleDict and index that, because TorchScript sees them as modules and cannot subscript them. |
st180920 | In my use case a function compiled by the torch.jit.script decorator is about 30% faster, which is a very nice performance boost! But I need to give up on its flexibility and freeze the function used as an argument to the decorated function (which is OK when it comes to deployment, where such flexibility is not needed, as opposed to running many different experiments in the development phase).
So +1 from me to enable Callable as an argument when using torch.jit.script, so that it can be used directly without any extra effort like in the proposed solution. |
st180921 | Hello.
I am trying to convert the Glow TTS model from PyTorch to ONNX. I am running into the following issue during the export step -
/content/TTS_repo/TTS/tts/layers/glow_tts/glow.py in forward(self, x, x_mask, reverse, **kwargs)
261 self.weight) * (c / self.num_splits) * x_len # [b]
262
--> 263 weight = weight.view(self.num_splits, self.num_splits, 1, 1)
264 z = F.conv2d(x, weight)
265
RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
Tensor:
-0.4500 -0.2462 -0.0506 -0.4333
-0.3748 -0.0181 0.0442 0.3575
-0.2864 0.7973 -0.0522 -0.1719
0.1037 0.2156 -0.9611 0.2521
[ torch.FloatTensor{4,4} ]
Here’s my Colab Notebook for the full reproduction of the issue - https://colab.research.google.com/gist/sayakpaul/96f00d33385cbb2c97e1befedcf2e3cd/copy-of-glow_tts.ipynb 8. Any directions to solve this issue would be greatly helpful. |
st180922 | I want to add preprocessing to a wrapper class around my forward model, and trace with TorchScript to be able to use in a C++ code. The normalization of input data, etc. works fine, but I want to be able to seamlessly handle inputs with different channel numbers (padding with 0’s if 1 channel). I tried doing this with if input.size()[0]==1, but get a warning that makes me think this might be ignored:
create_torchscript_model.py:37: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
I try keeping everything in tensors:
if torch.tensor(input.size())[0] == torch.tensor(1):
    input = F.pad(input, (0, 0, 0, 0, 0, 0, 1, 0))
but get similar warnings. Is there a better way to do this, or should I just keep it in the C++ code? |
st180923 | Tracing is using the currently provided inputs and uses it as constants as given in the warning.
Could you try to script the module instead? |
st180924 | Ah, I see better from your comment and from reading the documentation more closely (https://pytorch.org/docs/stable/jit.html): control flow has to be done with scripting, not tracing, because the trace depends entirely on the input you give it when tracing. Thanks, I switched this over to scripting using:
traced_script_module = torch.jit.script(wrapper)
instead of
traced_script_module = torch.jit.trace(wrapper,example)
I have some related issues with mixing CPU/GPU here, Mixing GPU/CPU code in TorchScript traced model, but in general I’m seeing that scripting answers some of those questions (e.g. with torch.no_grad throws an error, saying it’s not allowed). I’m still not certain about .cuda() and .cpu() mixing in a scripted module, and whether I can have separate locations for some parameters, but this answers this question, and I’ll await answers to the others in the other post. |
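A minimal sketch of the scripted-wrapper idea under discussion (PadWrapper and the nn.Identity stand-in are made up for illustration): the input-dependent branch is preserved by scripting, whereas tracing would bake in whichever path the example input happened to take.
import torch
import torch.nn.functional as F
from torch import nn, Tensor

class PadWrapper(nn.Module):
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: Tensor) -> Tensor:
        if x.size(0) == 1:                           # input-dependent control flow
            x = F.pad(x, (0, 0, 0, 0, 0, 0, 1, 0))   # pad the leading (species) dimension
        return self.model(x)

scripted = torch.jit.script(PadWrapper(nn.Identity()))
print(scripted(torch.randn(1, 31, 32, 31)).shape)  # torch.Size([2, 31, 32, 31])
print(scripted(torch.randn(2, 31, 32, 31)).shape)  # unchanged: torch.Size([2, 31, 32, 31])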
st180925 | Is it possible to TorchScript trace a wrapper module, which does preprocessing on CPU, then a feed-forward network on GPU? I’m trying the below, but I’m not sure if .cpu() and .cuda() mean anything within the TorchScript traced module. The below runs, but then when I run in C++, I get an error about expected device cpu but got device cuda:0, basically in the traced code part after preprocess. In C++, I do have a lines to put the model on the GPU, I’m not sure if that is interfering, and in general if such lines are redundant if the tracing in Python already has .cuda() calls (also, I know about torch::NoGradGuard no_grad; in C++, am I able to avoid that if I use with torch.no_grad() in the tracing Python code?)
C++ code
model = torch::jit::load("traced_auglag_best.pt");
model.to(at::kCUDA);
model.eval(); //not sure if necessary, was .eval() in the tracing code
Python tracing code
class Wrapper(torch.nn.Module):
    #mean_f: Final[float]
    #std_f: Final[float]
    #mean_fdf: Final[float]
    #std_fdf: Final[float]
    def __init__(self, model, file_stats):
        super(Wrapper, self).__init__()
        self.file_stats = file_stats
        self.read_stats()
        self.model = model
        self.model = self.model.cuda()
        self.model.eval()

    def read_stats(self):
        fh = h5py.File(self.file_stats, 'r')
        attrs = ['mean_f', 'std_f', 'mean_fdf', 'std_fdf']
        for attr in attrs:
            setattr(self, attr, torch.nn.Parameter(torch.from_numpy(fh[attr][...])))
        fh.close()

    def normalize_f(self, f):
        return ((f - self.mean_f[None, ...]) / self.std_f[None, ...]).float()

    def preprocess(self, f):
        # remove negative inds
        f[f < 0] = torch.tensor(0.0).double()
        # if adiabatic electron, add dimension for electrons
        # if torch.tensor(f.size())[0]==torch.tensor(1): #(assume for now adiabatic electrons, otherwise trace issue)
        f = F.pad(f, (0, 0, 0, 0, 0, 0, 1, 0))
        # switch order for pytorch model to [Ngrid,Nsp,Nmu,Nvpara]
        f = f.permute(2, 0, 1, 3)
        # pad mu direction (so 32,32)
        f = F.pad(f, (0, 1, 0, 0), mode='replicate')
        # normalize and convert to float for input to model
        return f

    def postprocess(self, fdfnorm, f, isp=1):
        # unnormalize
        df = fdfnorm * self.std_fdf[None, [isp], ...] + self.mean_fdf[None, [isp], ...] - f[:, [isp], ...]
        # remove extra vpara dimension
        df = df[:, :, :, :-1]
        # switch order back to XGC order of [Nsp,Nmu,Ngrid,Nvpara] and convert to double
        return df.permute(1, 2, 0, 3).double()

    def forward(self, f):
        with torch.no_grad():
            fpre = self.preprocess(f)
            fnorm = self.normalize_f(fpre)
            out = self.model(fnorm.cuda()).cpu()
        return self.postprocess(out, fpre) |
st180926 | I have noticed some exciting stuff going on in the torch/csrc/jit/tensorexpr directory in the git repo.
It seems to define an IR+codegen. I’m very interested in hearing what the end goal is with this. Obviously, what is there seems immediately useful for allowing optimization over execution graphs (e.g. fusing kernels). I haven’t seen anything that generates the IR. Is the idea to define kernels directly in the IR, so they can more easily be optimized? Or are there plans for a front end that generates the IR?
I ask because many of the pieces seem to be in place for a numba-like system where custom kernels can be defined in a subset of python, and fast code can be generated for GPU and CPU. Are there plan for anything like this, or is that just my own fantasy? (I see that there’s something new going on in the torch.fx namespace, but I don’t know what it’s for.) The missing piece seems to be something that can take a python AST (or some other high level language) and generate the IR.
There’s very little innovation in the basic weight layers of neural networks IMO, there’s convolutions and linear layers, and not much else. I suspect a big reason for this is that it’s hard to produce a performant implementation of an idea. I was a big fan of the Tensor Comprehensions project a couple of years back, but that died out.
I would be very happy to hear a bit more about what the roadmap is for this feature. |
st180927 | I am now using PyTorch to convert PyTorch model to Onnx model. When I use torch.onnx.export function, I find that this function’s second parameter needs model inputs (x # model input (or a tuple for multiple inputs)).
What value should I set for this parameter? Is this parameter a must, and why?
|
st180928 | I think tracing is used under the hood so an example input is necessary to perform the trace and create the graph in order to export the ONNX model. |
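As an illustration (a minimal sketch using a torchvision model as a stand-in, not the tutorial's super-resolution model): the example input only supplies shapes and dtypes for the trace, and dynamic_axes can relax the dimensions you want to keep flexible.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=False).eval()
dummy_input = torch.randn(1, 3, 224, 224)   # values are irrelevant; only shape/dtype matter

torch.onnx.export(
    model,
    dummy_input,                              # example input used to trace the graph
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},  # keep batch size dynamic
)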
st180929 | So this function torch.onnx.export performs the trace in order to build the ONNX model (super_resolution.onnx)? But I do not have to write down any input when using onnxruntime to save a model? Is the example input a necessary step for the torch-to-ONNX conversion? |
st180930 | I’m not deeply familiar with onnxruntime and don’t know how it’s working under the hood.
I assumed it’s just a runtime/serving platform and provides engines to execute the models, but might be wrong. |
st180931 | Out of curiosity, does anyone know what the abbreviation “NNC” in the JIT graph fuser means? Thanks. |
st180932 | Neural Net Compiler:
github.com
pytorch/pytorch/blob/6e4de445010ef5e9e0438e4a4c16f6ef3129af14/test/cpp/tensorexpr/tutorial.cpp#L13
// This tutorial covers basics of NNC's tensor expressions, shows basic APIs to
// work with them, and outlines how they are used in the overall TorchScript
// compilation pipeline. This doc is permanently a "work in progress" since NNC
// is under active development and things change fast.
//
// This Tutorial's code is compiled in the standard pytorch build, and the
// executable can be found in `build/bin/tutorial_tensorexpr`.
//
// *** What is NNC ***
//
// NNC stands for Neural Net Compiler. It is a component of TorchScript JIT
// and it performs on-the-fly code generation for kernels, which are often a
// combination of multiple aten (torch) operators.
//
// When the JIT interpreter executes a torchscript model, it automatically
// extracts subgraphs from the torchscript IR graph for which specialized code
// can be JIT generated. This usually improves performance as the 'combined'
// kernel created from the subgraph could avoid unnecessary memory traffic that
// is unavoidable when the subgraph is interpreted as-is, operator by operator.
// This optimization is often referred to as 'fusion'. Relatedly, the process of
// finding and extracting subgraphs suitable for NNC code generation is done by
I’ve written a JIT tutorial on fusers (from a user perspective mostly looking into how things reach the fuser, hopefully online soon) and I standardized on calling them the TensorExpr fuser and the CUDA fuser.
Best regards
Thomas |
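For a rough sense of where NNC comes in (a made-up example, not from the thread): chains of pointwise ops in a scripted function are the typical candidates the profiling executor hands to the fuser at run time; printing the static .graph shows the individual aten ops before any fusion has happened.
import torch

@torch.jit.script
def scaled_add(a, b):
    # a chain of pointwise ops; at execution time the JIT may extract this
    # subgraph and let NNC/TensorExpr generate one fused kernel for it
    return (a * 2.0 + b).relu()

print(scaled_add.graph)   # unoptimized TorchScript IR, one node per aten op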
st180933 | Help !!!
I know that the function torch.jit.get_trace_graph(model, args) from PyTorch 1.3.0 was changed into torch.jit._get_trace_graph(model, args) in PyTorch 1.7.0, but the two functions seem to behave differently.
So what should I do to get the same output as torch.jit.get_trace_graph(model, args) in PyTorch 1.3.0 while using torch.jit._get_trace_graph(model, args) in PyTorch 1.7.0?
Here are some of my tries.
Due to following warning message from torch/jit/_trace.py() def _get_trace_graph(), Line 1114 - 1142
warning::
This function is internal-only and should only be used by the ONNX
exporter. If you are trying to get a graph through tracing, please go
through the public API instead::
trace = torch.jit.trace(nn.LSTMCell(), (input, hidden))
trace_graph = trace.graph
Trace a function or model, returning a tuple consisting of the both the
*trace* of an execution, as well as the original return value. If return_inputs,
also returns the trace inputs as part of the tuple
Tracing is guaranteed not to change the semantics of the function/module
that is traced.
Arguments:
f (torch.nn.Module or function): the function or module
to be traced.
args (tuple or Tensor): the positional arguments to pass to the
function/module to be traced. A non-tuple is assumed to
be a single positional argument to be passed to the model.
kwargs (dict): the keyword arguments to pass to the function/module
to be traced.
Example (trace a cell):
.. testcode::
trace = torch.jit.trace(nn.LSTMCell(), (input, hidden))
so I try the following code
trace, _ = torch.jit.trace(model, args).graph
then it returns:
Traceback (most recent call last):
File "tools/train.py", line 299, in <module>
main()
File "tools/train.py", line 124, in main
writer_dict['writer'].add_graph_deprecated(sor_model, (dump_input, ))
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/tensorboardX/writer.py", line 825, in add_graph_deprecated
self._get_file_writer().add_graph(graph(model, input_to_model, verbose, profile_with_cuda, **kwargs))
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/tensorboardX/pytorch_graph.py", line 381, in graph
trace, _ = torch.jit.trace(model, args).graph
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 745, in trace
_module_class,
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 931, in trace_module
module = make_module(mod, _module_class, _compilation_unit)
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 563, in make_module
return _module_class(mod, _compilation_unit=_compilation_unit)
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 1043, in __init__
submodule, TracedModule, _compilation_unit=None
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 563, in make_module
return _module_class(mod, _compilation_unit=_compilation_unit)
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 1043, in __init__
submodule, TracedModule, _compilation_unit=None
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 563, in make_module
return _module_class(mod, _compilation_unit=_compilation_unit)
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 1043, in __init__
submodule, TracedModule, _compilation_unit=None
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 563, in make_module
return _module_class(mod, _compilation_unit=_compilation_unit)
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 1043, in __init__
submodule, TracedModule, _compilation_unit=None
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 563, in make_module
return _module_class(mod, _compilation_unit=_compilation_unit)
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 1043, in __init__
submodule, TracedModule, _compilation_unit=None
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 563, in make_module
return _module_class(mod, _compilation_unit=_compilation_unit)
File "/home/ivc/anaconda3/envs/PYt/lib/python3.6/site-packages/torch/jit/_trace.py", line 991, in __init__
assert isinstance(orig, torch.nn.Module)
AssertionError
I followed the advice from Missing ‘get_trace_graph’ function in Pytorch1.7 6 and then:
I try
trace, _ = torch.jit._get_trace_graph(model, args)
and than it returns:
Traceback (most recent call last):
File "tools/train.py", line 299, in <module>
main()
File "tools/train.py", line 124, in main
writer_dict['writer'].add_graph_deprecated(sor_model, (dump_input, ))
File "/home/ZhuoweiXu/anaconda3/envs/HRNet/lib/python3.6/site-packages/tensorboardX/writer.py", line 825, in add_graph_deprecated
self._get_file_writer().add_graph(graph(model, input_to_model, verbose, profile_with_cuda, **kwargs))
File "/home/ZhuoweiXu/anaconda3/envs/HRNet/lib/python3.6/site-packages/tensorboardX/pytorch_graph.py", line 387, in graph
graph = trace.graph
AttributeError: 'torch._C.Graph' object has no attribute 'graph'
So I check the function difference between _get_trace_graph(model, args) and get_trace_graph(model, args)
% Pytorch 1.7.0
from torch/jit/_trace.py, _get_trace_graph(), Line 1111,
def _get_trace_graph(f, args=(), kwargs=None, strict=True, _force_outplace=False,
                     return_inputs=False, _return_inputs_states=False):
    if kwargs is None:
        kwargs = {}
    if not isinstance(args, tuple):
        args = (args,)
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
    return outs
from torch/jit/_trace.py, class ONNXTracedModule(), Line 73, I get the following in Line 125:
graph, out = torch._C._create_graph_by_tracing(
    wrapper,
    in_vars + module_state,
    _create_interpreter_name_lookup_fn(),
    self.strict,
    self._force_outplace,
)
and the variable graph seems a conv output, with following attributes:
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_export_onnx', '_pretty_print_onnx', 'addInput', 'appendNode', 'at', 'constant', 'copy', 'create', 'createClone', 'createCudaFusionGroup', 'createFusionGroup', 'dump_alias_db', 'eraseInput', 'findAllNodes', 'findNode', 'inputs', 'insertConstant', 'insertNode', 'lint', 'nodes', 'op', 'outputs', 'param_node', 'prependNode', 'registerOutput', 'return_node', 'str']
% Pytorch 1.1.0
from torch/jit/init.py, get_trace_graph(), Line 171.
def get_trace_graph(f, args=(), kwargs=None, _force_outplace=False):
    if kwargs is None:
        kwargs = {}
    if not isinstance(args, tuple):
        args = (args,)
    return LegacyTracedModule(f, _force_outplace)(*args, **kwargs)
from torch/jit/init.py, class LegacyTracedModule(), Line 233, I get the following in Line 247:
trace, all_trace_inputs = torch._C._tracer_enter(*(in_vars + module_state))
and the variable trace shows <TracingState 0x5648e988f880> , with following attributes:
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'graph', 'pop_scope', 'push_scope', 'set_graph']
So what should I do to make these two outputs the same?
I really appreciate any advice! |
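For reference, a minimal sketch of the public-API path the warning points to (an nn.Linear stands in for the real model): torch.jit.trace returns a traced module rather than a (trace, out) pair, and its .graph attribute is already the torch._C.Graph object, so the old tuple unpacking no longer applies.
import torch
from torch import nn

model = nn.Linear(4, 2)
example = torch.randn(1, 4)

traced = torch.jit.trace(model, example)   # a TracedModule, not a (trace, output) tuple
graph = traced.graph                       # torch._C.Graph of the traced forward
print(graph)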
st180934 | How could I get a TorchScript version of torchvision.models.detection.maskrcnn_resnet50_fpn?
torch.jit.script and torch.jit.trace are not working with this model
With torch.jit.script
model = torch.load(modelname + "-best.pth")
model = model.cuda()
model.eval()
print(img)
with torch.no_grad():
    print(model(img))
traced_cell = torch.jit.script(model, (img))
torch.jit.save(traced_cell, modelname + "-torchscript.pth")
loaded_trace = torch.jit.load(modelname + "-torchscript.pth")
loaded_trace.eval()
with torch.no_grad():
    print(loaded_trace(img))
TensorMask(torch.argmax(loaded_trace(img), 1)).show()
Output:
TensorImage([[[[0.8961, 0.9132, 0.8789, ..., 0.2453, 0.1939, 0.2282],
[0.8276, 0.9132, 0.8618, ..., 0.2282, 0.1939, 0.2282],
[0.8961, 0.9132, 0.8789, ..., 0.2282, 0.2282, 0.2453],
...,
[0.8961, 0.8618, 0.9132, ..., 0.4508, 0.4166, 0.3994],
[0.9303, 0.9132, 0.9474, ..., 0.4166, 0.4166, 0.4508],
[0.9646, 0.8789, 0.9303, ..., 0.3994, 0.3994, 0.3994]],
[[1.0455, 1.0630, 1.0280, ..., 0.3803, 0.3277, 0.3627],
[0.9755, 1.0630, 1.0105, ..., 0.3627, 0.3277, 0.3627],
[1.0455, 1.0630, 1.0280, ..., 0.3627, 0.3627, 0.3803],
...,
[1.0455, 1.0105, 1.0630, ..., 0.5903, 0.5553, 0.5378],
[1.0805, 1.0630, 1.0980, ..., 0.5553, 0.5553, 0.5903],
[1.1155, 1.0280, 1.0805, ..., 0.5378, 0.5378, 0.5378]],
[[1.2631, 1.2805, 1.2457, ..., 0.6008, 0.5485, 0.5834],
[1.1934, 1.2805, 1.2282, ..., 0.5834, 0.5485, 0.5834],
[1.2631, 1.2805, 1.2457, ..., 0.5834, 0.5834, 0.6008],
...,
[1.2631, 1.2282, 1.2805, ..., 0.8099, 0.7751, 0.7576],
[1.2980, 1.2805, 1.3154, ..., 0.7751, 0.7751, 0.8099],
[1.3328, 1.2457, 1.2980, ..., 0.7576, 0.7576, 0.7576]]]],
device='cuda:0')
[{'boxes': tensor([[412.5222, 492.3208, 619.7662, 620.9233]], device='cuda:0'), 'labels': tensor([1], device='cuda:0'), 'scores': tensor([0.1527], device='cuda:0'), 'masks': tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]]], device='cuda:0')}]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-23-7216a0dac5a0> in <module>
12 loaded_trace.eval()
13 with torch.no_grad():
---> 14 print(loaded_trace(img))
15
16 TensorMask(torch.argmax(loaded_trace(img),1)).show()
~/anaconda3/envs/pro1/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
556 result = self._slow_forward(*input, **kwargs)
557 else:
--> 558 result = self.forward(*input, **kwargs)
559 for hook in self._forward_hooks.values():
560 hook_result = hook(self, input, result)
RuntimeError: forward() Expected a value of type 'List[Tensor]' for argument 'images' but instead found type 'TensorImage'.
Position: 1
Value: TensorImage([[[[0.8961, 0.9132, 0.8789, ..., 0.2453, 0.1939, 0.2282],
[0.8276, 0.9132, 0.8618, ..., 0.2282, 0.1939, 0.2282],
[0.8961, 0.9132, 0.8789, ..., 0.2282, 0.2282, 0.2453],
...,
[0.8961, 0.8618, 0.9132, ..., 0.4508, 0.4166, 0.3994],
[0.9303, 0.9132, 0.9474, ..., 0.4166, 0.4166, 0.4508],
[0.9646, 0.8789, 0.9303, ..., 0.3994, 0.3994, 0.3994]],
[[1.0455, 1.0630, 1.0280, ..., 0.3803, 0.3277, 0.3627],
[0.9755, 1.0630, 1.0105, ..., 0.3627, 0.3277, 0.3627],
[1.0455, 1.0630, 1.0280, ..., 0.3627, 0.3627, 0.3803],
...,
[1.0455, 1.0105, 1.0630, ..., 0.5903, 0.5553, 0.5378],
[1.0805, 1.0630, 1.0980, ..., 0.5553, 0.5553, 0.5903],
[1.1155, 1.0280, 1.0805, ..., 0.5378, 0.5378, 0.5378]],
[[1.2631, 1.2805, 1.2457, ..., 0.6008, 0.5485, 0.5834],
[1.1934, 1.2805, 1.2282, ..., 0.5834, 0.5485, 0.5834],
[1.2631, 1.2805, 1.2457, ..., 0.5834, 0.5834, 0.6008],
...,
[1.2631, 1.2282, 1.2805, ..., 0.8099, 0.7751, 0.7576],
[1.2980, 1.2805, 1.3154, ..., 0.7751, 0.7751, 0.8099],
[1.3328, 1.2457, 1.2980, ..., 0.7576, 0.7576, 0.7576]]]],
device='cuda:0')
Declaration: forward(__torch__.torchvision.models.detection.mask_rcnn.___torch_mangle_1723.MaskRCNN self, Tensor[] images, Dict(str, Tensor)[]? targets=None) -> ((Dict(str, Tensor), Dict(str, Tensor)[]))
Cast error details: Unable to cast Python instance to C++ type (compile in debug mode for details)
With torch.jit.trace
modelname = "maskrcnn"
model = torch.load(modelname + "-best.pth")
model = model.cuda()
model.eval()
print(img)
with torch.no_grad():
    print(model(img))
traced_cell = torch.jit.trace(model, (img))
torch.jit.save(traced_cell, modelname + "-torchscript.pth")
loaded_trace = torch.jit.load(modelname + "-torchscript.pth")
loaded_trace.eval()
with torch.no_grad():
    print(loaded_trace(img))
TensorMask(torch.argmax(loaded_trace(img), 1)).show()
Output
TensorImage([[[[0.8961, 0.9132, 0.8789, ..., 0.2453, 0.1939, 0.2282],
[0.8276, 0.9132, 0.8618, ..., 0.2282, 0.1939, 0.2282],
[0.8961, 0.9132, 0.8789, ..., 0.2282, 0.2282, 0.2453],
...,
[0.8961, 0.8618, 0.9132, ..., 0.4508, 0.4166, 0.3994],
[0.9303, 0.9132, 0.9474, ..., 0.4166, 0.4166, 0.4508],
[0.9646, 0.8789, 0.9303, ..., 0.3994, 0.3994, 0.3994]],
[[1.0455, 1.0630, 1.0280, ..., 0.3803, 0.3277, 0.3627],
[0.9755, 1.0630, 1.0105, ..., 0.3627, 0.3277, 0.3627],
[1.0455, 1.0630, 1.0280, ..., 0.3627, 0.3627, 0.3803],
...,
[1.0455, 1.0105, 1.0630, ..., 0.5903, 0.5553, 0.5378],
[1.0805, 1.0630, 1.0980, ..., 0.5553, 0.5553, 0.5903],
[1.1155, 1.0280, 1.0805, ..., 0.5378, 0.5378, 0.5378]],
[[1.2631, 1.2805, 1.2457, ..., 0.6008, 0.5485, 0.5834],
[1.1934, 1.2805, 1.2282, ..., 0.5834, 0.5485, 0.5834],
[1.2631, 1.2805, 1.2457, ..., 0.5834, 0.5834, 0.6008],
...,
[1.2631, 1.2282, 1.2805, ..., 0.8099, 0.7751, 0.7576],
[1.2980, 1.2805, 1.3154, ..., 0.7751, 0.7751, 0.8099],
[1.3328, 1.2457, 1.2980, ..., 0.7576, 0.7576, 0.7576]]]],
device='cuda:0')
[{'boxes': tensor([[412.5222, 492.3208, 619.7662, 620.9233]], device='cuda:0'), 'labels': tensor([1], device='cuda:0'), 'scores': tensor([0.1527], device='cuda:0'), 'masks': tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]]], device='cuda:0')}]
/opt/conda/conda-bld/pytorch_1587452831668/work/torch/csrc/utils/python_arg_parser.cpp:760: UserWarning: This overload of nonzero is deprecated:
nonzero(Tensor input, *, Tensor out)
Consider using one of the following signatures instead:
nonzero(Tensor input, *, bool as_tuple)
/home/david/anaconda3/envs/proy/lib/python3.7/site-packages/torch/tensor.py:467: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
'incorrect results).', category=RuntimeWarning)
/home/david/anaconda3/envs/proy/lib/python3.7/site-packages/fastai2/torch_core.py:272: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
/opt/conda/conda-bld/pytorch_1587452831668/work/aten/src/ATen/native/BinaryOps.cpp:81: UserWarning: Integer division of tensors using div or / is deprecated, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.
/home/david/anaconda3/envs/proy/lib/python3.7/site-packages/torchvision/models/detection/rpn.py:164: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
torch.tensor(image_size[1] / g[1], dtype=torch.int64, device=device)] for g in grid_sizes]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-15-44b7a9360e87> in <module>
6 with torch.no_grad():
7 print(model(img))
----> 8 traced_cell = torch.jit.trace(model, (img))
9 torch.jit.save(traced_cell, modelname+"-torchscript.pth")
10
~/anaconda3/envs/proy/lib/python3.7/site-packages/torch/jit/__init__.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
881 return trace_module(func, {'forward': example_inputs}, None,
882 check_trace, wrap_check_inputs(check_inputs),
--> 883 check_tolerance, strict, _force_outplace, _module_class)
884
885 if (hasattr(func, '__self__') and isinstance(func.__self__, torch.nn.Module) and
~/anaconda3/envs/proy/lib/python3.7/site-packages/torch/jit/__init__.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)
1035 func = mod if method_name == "forward" else getattr(mod, method_name)
1036 example_inputs = make_tuple(example_inputs)
-> 1037 module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, strict, _force_outplace)
1038 check_trace_method = module._c._get_method(method_name)
1039
~/anaconda3/envs/proy/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
554 input = result
555 if torch._C._get_tracing_state():
--> 556 result = self._slow_forward(*input, **kwargs)
557 else:
558 result = self.forward(*input, **kwargs)
~/anaconda3/envs/proy/lib/python3.7/site-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs)
540 recording_scopes = False
541 try:
--> 542 result = self.forward(*input, **kwargs)
543 finally:
544 if recording_scopes:
~/anaconda3/envs/proy/lib/python3.7/site-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)
68 if isinstance(features, torch.Tensor):
69 features = OrderedDict([('0', features)])
---> 70 proposals, proposal_losses = self.rpn(images, features, targets)
71 detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
72 detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)
~/anaconda3/envs/proy/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
554 input = result
555 if torch._C._get_tracing_state():
--> 556 result = self._slow_forward(*input, **kwargs)
557 else:
558 result = self.forward(*input, **kwargs)
~/anaconda3/envs/proy/lib/python3.7/site-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs)
540 recording_scopes = False
541 try:
--> 542 result = self.forward(*input, **kwargs)
543 finally:
544 if recording_scopes:
~/anaconda3/envs/proy/lib/python3.7/site-packages/torchvision/models/detection/rpn.py in forward(self, images, features, targets)
486 proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
487 proposals = proposals.view(num_images, -1, 4)
--> 488 boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
489
490 losses = {}
~/anaconda3/envs/proy/lib/python3.7/site-packages/torchvision/models/detection/rpn.py in filter_proposals(self, proposals, objectness, image_shapes, num_anchors_per_level)
392
393 # select top_n boxes independently per level before applying nms
--> 394 top_n_idx = self._get_top_n_idx(objectness, num_anchors_per_level)
395
396 image_range = torch.arange(num_images, device=device)
~/anaconda3/envs/proy/lib/python3.7/site-packages/torchvision/models/detection/rpn.py in _get_top_n_idx(self, objectness, num_anchors_per_level)
372 pre_nms_top_n = min(self.pre_nms_top_n(), num_anchors)
373 _, top_n_idx = ob.topk(pre_nms_top_n, dim=1)
--> 374 r.append(top_n_idx + offset)
375 offset += num_anchors
376 return torch.cat(r, dim=1)
RuntimeError: expected device cuda:0 but got device cpu |
st180935 | what version of PyTorch/Torchvision are you using? Afaik this should work on the latest for both (cc @fmassa) |
st180936 | print(torch.__version__)
print(torchvision.__version__)
1.6.0.dev20200421
0.7.0a0+6e47842 |
st180937 | Could you check, if passing the inputs as a list would solve the error, as claimed in the error message for scripting?
This code works for me:
import torch
import torchvision
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False)
model.eval()
scripted_model = torch.jit.script(model)
out = scripted_model([torch.randn(3, 224, 224), torch.randn(3, 400, 400)]) |
st180938 | I have tried the following:
modelname = "maskrcnn"
model = torch.load(modelname + "-best.pth")
model = model.cuda()
model.eval()
with torch.no_grad():
    print(model(img.clone()))
traced_cell = torch.jit.script(model)
traced_cell.save(modelname + "-torchscript.pth")
loaded_trace = torch.jit.load(modelname + "-torchscript.pth")
loaded_trace.eval()
with torch.no_grad():
    print(loaded_trace([img[0]]))
However, the output of the model is looking different now!
[{'boxes': tensor([[412.5222, 492.3208, 619.7662, 620.9233]], device='cuda:0'), 'labels': tensor([1], device='cuda:0'), 'scores': tensor([0.1527], device='cuda:0'), 'masks': tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]]], device='cuda:0')}]
({}, [{'scores': tensor([0.1527], device='cuda:0'), 'labels': tensor([1], device='cuda:0'), 'boxes': tensor([[412.5222, 492.3208, 619.7662, 620.9233]], device='cuda:0'), 'masks': tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]]], device='cuda:0')}])
code/__torch__/torchvision/models/detection/mask_rcnn.py:42: UserWarning: RCNN always returns a (Losses, Detections) tuple in scripting
It is returning a tuple now! @ptrblck |
st180939 | I’m not sure, why the tuple is returned, but as a workaround you could just remove the first element, as the others yield the same result. |
st180940 | For multiple images, the first dict will also be empty, so I might be missing something obvious, but I don’t know what it could be used for. |
st180941 | If anyone is still interested, PyTorch throws a UserWarning explaining what’s what:
code/__torch__/torchvision/models/detection/keypoint_rcnn.py:86: UserWarning: RCNN always returns a (Losses, Detections) tuple in scripting |
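Concretely, a minimal sketch of handling that output (made up for illustration): unpack the tuple and keep only the detections when running in eval mode, which matches the eager model's output list.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False).eval()
scripted = torch.jit.script(model)

with torch.no_grad():
    losses, detections = scripted([torch.randn(3, 224, 224)])

# in eval mode `losses` is empty; `detections` is the usual list of per-image
# dicts with 'boxes', 'labels', 'scores', 'masks'
print(losses, detections[0].keys())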
st180942 | Hi,
I am facing an issue with multiple forward passes using torchscript on GPU. It works fine on the first pass but throws an error on the next. However, it works perfectly on CPU.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False).cuda()
model.eval()
scripted_model = torch.jit.script(model)
torch.jit.save(scripted_model, "maskrcnn_torchscript.pth")
scripted_model = torch.jit.load('maskrcnn_torchscript.pth').eval()
x = cv2.imread('image.jpg')
x_tensor = torch.as_tensor(x.astype("float32").transpose(2, 0, 1)).cuda()
with torch.no_grad():
    count = 10
    while count > 0:
        t0 = time()
        out = scripted_model([x_tensor])
        t1 = time()
        print("Time = {}, FPS = {}".format(t1 - t0, 1/(t1 - t0)))
        count -= 1
The first pass works fine and on the next pass I get the following error
code/__torch__/torchvision/models/detection/mask_rcnn.py:95: UserWarning: RCNN always returns a (Losses, Detections) tuple in scripting
Time = 1.6492173671722412, FPS = 0.6063482109181317
Traceback (most recent call last):
File "simple_torchscript.py", line 24, in <module>
out = scripted_model([x_tensor])
File "/path/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::size.int(Tensor self, int dim) -> (int):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'List[Tensor]'.
Empty lists default to List[Tensor]. Add a variable annotation to the assignment to create an empty list of another type (torch.jit.annotate(List[T, []]) where T is the type of elements in the list for Python 2)
aten::size.Dimname(Tensor self, str dim) -> (int):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'List[Tensor]'.
Empty lists default to List[Tensor]. Add a variable annotation to the assignment to create an empty list of another type (torch.jit.annotate(List[T, []]) where T is the type of elements in the list for Python 2)
aten::size(Tensor self) -> (int[]):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'List[Tensor]'.
Empty lists default to List[Tensor]. Add a variable annotation to the assignment to create an empty list of another type (torch.jit.annotate(List[T, []]) where T is the type of elements in the list for Python 2)
The original call is:
However, if I run it on CPU, it works perfectly.
code/__torch__/torchvision/models/detection/mask_rcnn.py:95: UserWarning: RCNN always returns a (Losses, Detections) tuple in scripting
Time = 2.616811990737915, FPS = 0.3821443816137551
Time = 3.022855758666992, FPS = 0.3308130059242312
Time = 2.174025297164917, FPS = 0.4599762483463605
Time = 2.174373149871826, FPS = 0.45990266208858743
Time = 2.098878860473633, FPS = 0.47644483863844345
Time = 2.3181285858154297, FPS = 0.43138245484697213
Time = 2.0808985233306885, FPS = 0.4805616366142636
Time = 2.05479097366333, FPS = 0.48666750672803294
Time = 1.95816969871521, FPS = 0.5106809694053165
Time = 2.1022541522979736, FPS = 0.47567987862309613
Here’s the details of the versions used.
----------------------------------------------------------------------------------------------------------------
python 3.6.12 |Anaconda, Inc.| (default, Sep 8 2020, 23:10:56) [GCC 7.3.0]
Numpy 1.19.2
PyTorch 1.7.0
torchvision 0.8.1
GPU 0,1,2,3 Quadro RTX 6000
CUDA_HOME /usr/local/cuda-10.2
NVCC Cuda compilation tools, release 10.2, V10.2.89
cv2 4.4.0
------------------- --------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- CUDA Runtime 10.2
- CuDNN 7.6.5 |
st180943 | Hello Forum!
I noticed that jit “quit working” on a particular use case with the
1.8.0.dev20201203 nightly build.
Specifically, a function that showed an about 2x jit speed-up with
version 1.6.0 shows no speed-up with version 1.8.0.
Details can be found in this post:
How to implement efficient element-wise 'ax+b' function
Hi Zhi and Alex!
The short story is that I can reproduce your results.
I’ve also done some further experimentation.
The best I could come up with was to apply Alex’s torchscript suggestion
to your “w-b” version.
I also tried a Conv2d version – even though it doesn’t have
BatchNorm2d's normalization step, it was also slower.
Note, torchscript gave a nice speedup with pytorch version 1.6.0, but
not with a version 1.8.0 nightly build.
I used your original problem – same-shaped tensors, a…
Best.
K. Frank |
st180944 | Thanks for the report! Do you mind making an issue on Github with the same repro? We changed optimizers in that time period so I suspect it may be a regression with a new optimizer but we need to check. |
st180945 | Hi Michael!
Michael_Suo:
Do you mind making an issue on Github with the same repro?
Let me apologize for not creating a github issue. It turns out I would
need a github account to do so, which I don’t have. It might be fastest
if someone with a github account could link to my post, or something.
Best.
K. Frank |
st180946 | Hi I have a torchscripted transformer model and I’m trying to evaluate the performance of my C++ inference code. To this end, I’m running on a server with 32 CPUs (and doing inference on CPU). I have set OMP_NUM_THREADS=1 and MKL_NUM_THREADS=1 but I still notice that running 30 inference processes in parallel causes each forward step to be significantly slower (up to 2x slower) than running inference jobs one at a time.
Is there something else I’m missing? |
st180947 | Hi!
I’m doing the TorchVision Object Detection Finetuning Tutorial in Colab (https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html), and I save the trained model using torch.jit. If I load the model in the same notebook after training and saving it, I don’t have any problem, but if I try to load this model in another notebook or on my PC it doesn’t work.
The error in colab is:
RuntimeError Traceback (most recent call last)
<ipython-input-18-c0fa55fc421f> in <module>()
----> 1 model_loaded = torch.jit.load("/content/MRCNN_eval.pt")
/usr/local/lib/python3.6/dist-packages/torch/jit/_serialization.py in load(f, map_location, _extra_files)
159 cu = torch._C.CompilationUnit()
160 if isinstance(f, str) or isinstance(f, pathlib.Path):
--> 161 cpp_module = torch._C.import_ir_module(cu, f, map_location, _extra_files)
162 else:
163 cpp_module = torch._C.import_ir_module_from_buffer(
RuntimeError: [enforce fail at inline_container.cc:145] . PytorchStreamReader failed reading zip archive: failed finding central directory
And the error in my pc:
pytorch version: 1.6.0
torchvision version: 0.7.0
Traceback (most recent call last):
File "mrcnn_inference.py", line 17, in <module>
model_loaded = torch.jit.load("/home/nae/ML/LoadModelPt/build/Models/MRCNN_model.pt", map_location = 'cpu')
File "/home/nae/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 275, in load
cpp_module = torch._C.import_ir_module(cu, f, map_location, _extra_files)
RuntimeError:
Arguments for call are not valid.
The following variants are available:
aten::upsample_nearest1d.out(Tensor self, int[1] output_size, float? scales=None, *, Tensor(a!) out) -> (Tensor(a!)):
Expected a value of type 'List[int]' for argument 'output_size' but instead found type 'Optional[List[int]]'.
aten::upsample_nearest1d(Tensor self, int[1] output_size, float? scales=None) -> (Tensor):
Expected a value of type 'List[int]' for argument 'output_size' but instead found type 'Optional[List[int]]'.
The original call is:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 3130
if input.dim() == 3 and mode == 'nearest':
return torch._C._nn.upsample_nearest1d(input, output_size, scale_factors)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
if input.dim() == 4 and mode == 'nearest':
return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
Serialized File "code/__torch__/torch/nn/functional/___torch_mangle_46.py", line 155
_49 = False
if _49:
_51 = torch.upsample_nearest1d(input, output_size3, scale_factors6)
~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_50 = _51
else:
'interpolate' is being compiled since it was called from '_resize_image_and_masks'
Serialized File "code/__torch__/torchvision/models/detection/transform.py", line 227
self_max_size: float,
target: Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:
_77 = __torch__.torch.nn.functional.___torch_mangle_46.interpolate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_78 = torch.slice(torch.size(image), -2, 9223372036854775807, 1)
im_shape = torch.tensor(_78, dtype=None, device=None, requires_grad=False)
'_resize_image_and_masks' is being compiled since it was called from 'GeneralizedRCNNTransform.resize'
Serialized File "code/__torch__/torchvision/models/detection/transform.py", line 96
target: Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]:
_25 = "This Python function is annotated to be ignored and cannot be run"
_26 = __torch__.torchvision.models.detection.transform._resize_image_and_masks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_27 = __torch__.torchvision.models.detection.transform.resize_boxes
_28 = __torch__.torchvision.models.detection.transform.resize_keypoints
'GeneralizedRCNNTransform.resize' is being compiled since it was called from 'GeneralizedRCNNTransform.forward'
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py", line 105
"of shape [C, H, W], got {}".format(image.shape))
image = self.normalize(image)
image, target_index = self.resize(image, target_index)
~~~~~~~~~~~ <--- HERE
images[i] = image
if targets is not None and target_index is not None:
Serialized File "code/__torch__/torchvision/models/detection/transform.py", line 45
pass
image0 = (self).normalize(image, )
_8 = (self).resize(image0, target_index, )
~~~~~~~~~~~~ <--- HERE
image1, target_index0, = _8
_9 = torch._set_item(images0, i, image1) |
st180948 | Solved by NAEE09 in post #6
Finally, it works. I’m not sure what the problem was, but I did the training on my local PC, and I can load that model without problems. |
st180949 | The error might be raised due to an incompatibility issue between PyTorch versions assuming the Colab PyTorch version was 1.7.0 while you are using 1.6.0 locally.
Could you update your local installation and try to load the file again? |
st180950 | Thanks for your reply. I changed the Colab version to 1.6.0, and it doesn’t work. I already tried to change the versions, and when I upload the model in another notebook, it doesn’t work either. |
st180951 | Did you try to update to the latest version instead of downgrading the Colab binary? |
st180952 | Finally, it works. I’m not sure what the problem was, but I did the training on my local PC, and I can load that model without problems. |
st180953 | I have a tuple made like this:
mytuple = (0, 5) + tuple(1 for _ in range(len(some_tuple) - 2))
I tried to make a script of my class and I got:
torch.jit.frontend.UnsupportedNodeError: GeneratorExp aren't supported
mytuple = (0, 5) + tuple(1 for _ in range(len(some_tuple) - 2))
~ <--- HERE
So I change the code to:
mytuple = [0, 5]
for _ in range(len(some_tuple) - 2):
    mytuple.append(1)
mytuple = tuple(mytuple)
but I gotthis error:
cannot statically infer the expected size of a list in this context:
mytuple = tuple(mytuple)
~~~~~~~~~~~~~ <--- HERE
I change the last line to:
mytuple = (mytuple,)
but in the end I figured out that is not a tuple of integers and it is type ‘Tuple[List[int]]’.
my last try was to unpack the list:
mytuple = tuple(*mytuple)
I faced this error:
Unexpected starred expansion. File a bug report:
mytuple = tuple(*mytuple)
~~~~~~~ <--- HERE
How can I have tuple of integers with dynamic length that is scriptable? |
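One thing worth noting (a sketch, not from the thread): TorchScript tuples must have a statically known length, so a dynamically sized homogeneous sequence is usually kept as a List[int] instead; most ops whose schema takes int[] (view, permute, expand, etc.) accept a List[int] directly, which often makes the tuple unnecessary.
import torch
from typing import List

@torch.jit.script
def make_shape(some_shape: List[int]) -> List[int]:
    out: List[int] = [0, 5]
    for _ in range(len(some_shape) - 2):
        out.append(1)
    return out   # keep it a list; tuple lengths must be known at compile time

print(make_shape([2, 3, 4, 5]))   # [0, 5, 1, 1]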
st180954 | My high-level goal is to compactly serialize model architectures without parameter values. See Save model architecture only
To accomplish this goal I’ve tried to lazily initialize layers and parameters in the forward() method.
import torch
import torch.nn.functional as F
from torch import nn

class Net4(nn.Module):
    def forward(self, x):
        if not hasattr(self, 'fc1'):
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
        if not hasattr(self, 'fc2'):
            self.fc2 = nn.Linear(120, 84)
        if not hasattr(self, 'fc3'):
            self.fc3 = nn.Linear(84, 10)
        x = F.sigmoid(self.fc1(x))
        x = F.sigmoid(self.fc2(x))
        x = self.fc3(x)
        return x

script_module4 = torch.jit.script(Net4())
script_module4.save('pytorch_model.Net4.TorchScriptModule')
I’m getting the following error:
/opt/conda/lib/python3.7/site-packages/torch/jit/_recursive.py in create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
302 property_rcbs = [p.resolution_callback for p in property_stubs]
303
--> 304 concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
305
306
RuntimeError:
Class Linear does not have an __init__ function defined:
File "<ipython-input-23-bda237bd2209>", line 8
def forward(self, x):
if not hasattr(self, 'fc1'):
self.fc1 = nn.Linear(16 * 5 * 5, 120)
~~~~~~~~~ <--- HERE
if not hasattr(self, 'fc3'):
self.fc2 = nn.Linear(120, 84)
How can I work around this error? |
st180955 | I want to have a system where the model authoring and training are separate steps. The model needs to be serialized to disk between those stages.
Keras has “model config” which describes the model architecture in a compact JSON-friendly way: https://www.tensorflow.org/api_docs/python/tf/keras/models/model_from_config 1
I want to have something similar for PyTorch. This problem is almost solved by
torch.jit.script(MyNet()).save('my_net.PyTorchScriptModule')
But I see that the result is pretty big and binary due to the fact that the weights are included.
Is there a way to not save parameters? Or to save them in un-initialized state so that they’re lazily initialized after loading?
TorchScript seems so powerful. Maybe there is a way to save the whole Module, not just the forward method + initialized constants? |
st180956 | I’ve created a model with a forward function that takes “x” as input (image of size (3,416,416)). I create a trace of the model using: module = torch.jit.trace(model, example_forward_input), then save that model using module.save("model.pt"). Then I load this model trace into an Android application. When I send an input to the model (from the phone) that is identical to the input used as “example_forward_input”, I get the correct result. However, when I use any other input tensor (same shape), I get poor results. Is this supposed to be the behaviour of the trace function? Is there a function that traces a model that can generalize to any inputs? Any guidance would be much appreciated.
For some more detail: This is a YOLOv3 based model that involves detection and classification. The classification with different inputs into the traced model gives similar results to the same inputs in the model. However, the detection locations differ (in w/h especially) when running an input that was not used as an example through the traced model.
EDIT: I'm guessing this is due to the fact that my forward module uses control-flow that is dependent on the input, as outlined here 12. However, when I try to convert the model to a script module, as outlined on that same page, I get the following error:
raise NotSupportedError(ctx_range, _vararg_kwarg_err)
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "C:\Users\isaac\Anaconda3\envs\SNProject\lib\site-packages\torch\nn\modules\module.py", line 85
def forward(self, *input):
~~~~~~ <--- HERE
print(torch.__version__)
r"""Defines the computation performed at every call.
As you can see this is coming from the torch library itself. Any suggestions on how to proceed?
st180957 | You are right about the input-dependent control flow requiring torch.jit.script instead of torch.jit.trace. Can you link the YoloV3 implementation you’re using so we can reproduce this error? |
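To make the difference concrete, here is a tiny generic sketch (not the YOLO code itself) of how tracing bakes in an input-dependent branch while scripting keeps it:
import torch

def flip_if_negative(x):
    # the branch depends on the values in x
    if x.sum() > 0:
        return x
    return -x

traced = torch.jit.trace(flip_if_negative, torch.ones(3))   # only the "positive" branch gets recorded
scripted = torch.jit.script(flip_if_negative)                # the if/else is preserved

neg = -torch.ones(3)
print(traced(neg))    # tensor([-1., -1., -1.])  the branch was baked in at trace time
print(scripted(neg))  # tensor([1., 1., 1.])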
st180958 | I solved the problem. I ended up using torch.jit.trace(), but then having my YOLOLayer inherit a ScriptModule: class YOLOLayer(torch.jit.ScriptModule):, and made my forward method:
@torch.jit.script_method def forward(self, x, targets=torch.tensor([]), img_dim=torch.tensor(416)):
and helper method
@torch.jit.script_method def compute_grid_offsets(self, grid_size): decorated with @torch.jit.script_method. After I did this I just went line-by-line fixing any errors that appeared due to incompatibility with the scripting. |
st180959 | Good to hear you fixed it! We changed the API to TorchScript in PyTorch 1.2 to make it easier to use (i.e. you no longer need to change your model to inherit from ScriptModule instead of nn.Module and you don’t need @script_method), you can read more about it here 68. But this is just sugar over the same thing you’re already doing, so if you already have it working you don’t need to change anything. |
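For reference, a minimal sketch of the newer-style API (the Detector module below is just a placeholder):
import torch
import torch.nn as nn

class Detector(nn.Module):           # plain nn.Module, no ScriptModule inheritance needed
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x, threshold: float = 0.5):
        y = self.conv(x)
        if y.mean() > threshold:     # input-dependent control flow is fine when scripting
            y = y * 2.0
        return y

scripted = torch.jit.script(Detector())   # no @torch.jit.script_method anywhere
scripted.save("detector_scripted.pt")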
st180960 | Hi, @IsaacBerman
I'm trying to do the same but can't reproduce it because of some errors.
Could you share the fixed code?
Is the code based on https://github.com/eriklindernoren/PyTorch-YOLOv3 26? |
st180961 | Hi @junjihashimoto,
Yes it is. The only layer you have to change is YOLOLayer. Since this was just for tracing, I commented out the loss calculations. As follows:
@torch.jit.script
def compare_size(size1, size2):
return size1 != size2
@torch.jit.script
def get_input(x):
return x
@torch.jit.script
def get_pred_boxes(x, grid):
return x + grid
@torch.jit.script
def set_grid_size(x):
return torch.tensor(x.size(2))
@torch.jit.script
def normalize_by_stride(anchors, stride):
return torch.div(anchors, stride)
class YOLOLayer(torch.jit.ScriptModule):
"""Detection layer"""
def __init__(self, anchors, num_classes, img_dim=416):
super(YOLOLayer, self).__init__()
self.anchors = torch.tensor(anchors)
self.num_anchors = len(anchors)
self.num_classes = num_classes
self.ignore_thres = 0.5
self.mse_loss = nn.MSELoss()
self.bce_loss = nn.BCELoss()
self.obj_scale = 1
self.noobj_scale = 100
self.metrics = {}
self.img_dim = torch.tensor(img_dim)
self.grid_size = torch.tensor(0) # grid size
self.stride = torch.tensor(0)
self.grid_x = torch.tensor([])
self.grid_y = torch.tensor([])
self.scaled_anchors = torch.tensor([])
self.anchor_w = torch.tensor([])
self.anchor_h = torch.tensor([])
@torch.jit.script_method
def compute_grid_offsets(self, grid_size):
self.grid_size = grid_size.float()
g = self.grid_size.int()
self.grid_size = self.grid_size.float()
self.stride = self.img_dim / self.grid_size
self.stride = self.stride.float()
self.grid_x = torch.arange(g).repeat(g, 1).view([1, 1, int(g.item()), int(g.item())])
self.grid_y = torch.arange(g).repeat(g, 1).t().view([1, 1, int(g.item()), int(g.item())])
self.scaled_anchors = torch.div(self.anchors, self.stride)
self.anchor_w = self.scaled_anchors[:, 0:1].reshape(1, self.num_anchors, 1, 1)
self.anchor_h = self.scaled_anchors[:, 1:2].reshape(1, self.num_anchors, 1, 1)
@torch.jit.script_method
def forward(self, x, targets=torch.tensor([]), img_dim=torch.tensor(416)):
self.img_dim = img_dim
num_samples = x.size(0)
grid_size = set_grid_size(x)
self.compute_grid_offsets(grid_size)
prediction = (
x.view(num_samples, self.num_anchors, self.num_classes + 5, grid_size, grid_size)
.permute(0, 1, 3, 4, 2)
.contiguous()
)
# Get outputs
x = torch.sigmoid(prediction[..., 0]) # Center x
y = torch.sigmoid(prediction[..., 1]) # Center y
w = prediction[..., 2] # Width
h = prediction[..., 3] # Height
pred_conf = torch.sigmoid(prediction[..., 4]) # Conf
pred_cls = torch.sigmoid(prediction[..., 5:]) # Cls pred.
#print("YOLO_LAYER: {}".format(x[0][0][0]))
# Add offset and scale with anchors
pred_boxes = torch.zeros(prediction[..., :4].shape)
pred_boxes = torch.stack((x.data+self.grid_x,y.data+self.grid_y,torch.exp(w.data)*self.anchor_w,torch.exp(h.data)*self.anchor_h),4)
output = torch.cat(
(
pred_boxes.view(num_samples, -1, 4) * self.stride,
pred_conf.view(num_samples, -1, 1),
pred_cls.view(num_samples, -1, self.num_classes),
),
-1,
)
#print(output[0][0][0])
# if targets is None:
# return output, 0
# else:
# iou_scores, class_mask, obj_mask, noobj_mask, tx, ty, tw, th, tcls, tconf = build_targets(
# pred_boxes=pred_boxes,
# pred_cls=pred_cls,
# target=targets,
# anchors=self.scaled_anchors,
# ignore_thres=self.ignore_thres,
# )
# # Loss : Mask outputs to ignore non-existing objects (except with conf. loss)
# loss_x = self.mse_loss(x[obj_mask], tx[obj_mask])
# loss_y = self.mse_loss(y[obj_mask], ty[obj_mask])
# loss_w = self.mse_loss(w[obj_mask], tw[obj_mask])
# loss_h = self.mse_loss(h[obj_mask], th[obj_mask])
# loss_conf_obj = self.bce_loss(pred_conf[obj_mask], tconf[obj_mask])
# loss_conf_noobj = self.bce_loss(pred_conf[noobj_mask], tconf[noobj_mask])
# loss_conf = self.obj_scale * loss_conf_obj + self.noobj_scale * loss_conf_noobj
# loss_cls = self.bce_loss(pred_cls[obj_mask], tcls[obj_mask])
# total_loss = loss_x + loss_y + loss_w + loss_h + loss_conf + loss_cls
# # Metrics
# cls_acc = 100 * class_mask[obj_mask].mean()
# conf_obj = pred_conf[obj_mask].mean()
# conf_noobj = pred_conf[noobj_mask].mean()
# conf50 = (pred_conf > 0.5).float()
# iou50 = (iou_scores > 0.5).float()
# iou75 = (iou_scores > 0.75).float()
# detected_mask = conf50 * class_mask * tconf
# precision = torch.sum(iou50 * detected_mask) / (conf50.sum() + 1e-16)
# recall50 = torch.sum(iou50 * detected_mask) / (obj_mask.sum() + 1e-16)
# recall75 = torch.sum(iou75 * detected_mask) / (obj_mask.sum() + 1e-16)
# self.metrics = {
# "loss": to_cpu(total_loss).item(),
# "x": to_cpu(loss_x).item(),
# "y": to_cpu(loss_y).item(),
# "w": to_cpu(loss_w).item(),
# "h": to_cpu(loss_h).item(),
# "conf": to_cpu(loss_conf).item(),
# "cls": to_cpu(loss_cls).item(),
# "cls_acc": to_cpu(cls_acc).item(),
# "recall50": to_cpu(recall50).item(),
# "recall75": to_cpu(recall75).item(),
# "precision": to_cpu(precision).item(),
# "conf_obj": to_cpu(conf_obj).item(),
# "conf_noobj": to_cpu(conf_noobj).item(),
# "grid_size": grid_size,
# }
#total_loss = 0
return output, torch.tensor(0)
st180962 | Is it possible to deploy a network without the classifier layer onto iOS? I just want to output the final features and compare two tensors in some way, e.g. Euclidean distance. Did you deploy on iOS? |
st180963 | Care to explain how you did this?
I am using the same YOLO implementation, and I am stuck at:
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "C:\Users\isaac\Anaconda3\envs\SNProject\lib\site-packages\torch\nn\modules\module.py", line 85
def forward(self, *input):
~~~~~~ <--- HERE
print(torch.__version__)
r"""Defines the computation performed at every call.
@IsaacBerman |
st180964 | I am using the following model which is using vgg16 as base net
import torch
import torch.nn as nn
import torch.nn.functional as F
from basenet.vgg16_bn import vgg16_bn, init_weights
class double_conv(nn.Module):
def __init__(self, in_ch, mid_ch, out_ch):
super(double_conv, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(in_ch + mid_ch, mid_ch, kernel_size=1),
nn.BatchNorm2d(mid_ch),
nn.ReLU(inplace=True),
nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),
nn.BatchNorm2d(out_ch),
nn.ReLU(inplace=True)
)
def forward(self, x):
x = self.conv(x)
return x
class CRAFT(nn.Module):
def __init__(self, pretrained=False, freeze=False):
super(CRAFT, self).__init__()
""" Base network """
self.basenet = vgg16_bn(pretrained, freeze)
""" U network """
self.upconv1 = double_conv(1024, 512, 256)
self.upconv2 = double_conv(512, 256, 128)
self.upconv3 = double_conv(256, 128, 64)
self.upconv4 = double_conv(128, 64, 32)
num_class = 2
self.conv_cls = nn.Sequential(
nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
nn.Conv2d(16, 16, kernel_size=1), nn.ReLU(inplace=True),
nn.Conv2d(16, num_class, kernel_size=1),
)
init_weights(self.upconv1.modules())
init_weights(self.upconv2.modules())
init_weights(self.upconv3.modules())
init_weights(self.upconv4.modules())
init_weights(self.conv_cls.modules())
def forward(self, x):
""" Base network """
sources = self.basenet(x)
""" U network """
y = torch.cat([sources[0], sources[1]], dim=1)
y = self.upconv1(y)
y = F.interpolate(y, size=sources[2].size()[2:], mode='bilinear', align_corners=False)
y = torch.cat([y, sources[2]], dim=1)
y = self.upconv2(y)
y = F.interpolate(y, size=sources[3].size()[2:], mode='bilinear', align_corners=False)
y = torch.cat([y, sources[3]], dim=1)
y = self.upconv3(y)
y = F.interpolate(y, size=sources[4].size()[2:], mode='bilinear', align_corners=False)
y = torch.cat([y, sources[4]], dim=1)
feature = self.upconv4(y)
y = self.conv_cls(feature)
when I am using
net_scripted = torch.jit.script(net)
I am getting
NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "/home/ao-collab/anaconda3/envs/torch/lib/python3.8/collections/__init__.py", line 313
def namedtuple(typename, field_names, *, rename=False, defaults=None, module=None): |
st180965 | Hello All,
I used this implementation of YOLOv3 in my code,
eriklindernoren/PyTorch-YOLOv3: Minimal PyTorch implementation of YOLOv3 (github.com) 20
I then wanted to convert it to TorchScript and JIT, but I am presented with multiple errors.
Code Sample:
device = torch.device("cpu")
# Set up model
model = Darknet("config/yolov3.cfg", img_size=416).to(device)
# load the weights
model.load_darknet_weights("weights/yolov3.weights")
Tensor = torch.FloatTensor
img=cv2.imread("temp.jpg")
PILimg = np.array(Image.fromarray(cv2.cvtColor(img,cv2.COLOR_BGR2RGB)))
imgTensor = transforms.ToTensor()(PILimg)
imgTensor, _ = pad_to_square(imgTensor, 0)
imgTensor = resize(imgTensor, 416)
#add the batch size
imgTensor = imgTensor.unsqueeze(0)
imgTensor = Variable(imgTensor.type(Tensor))
sm = torch.jit.script(model)
sm.save("traced_resnet_model.pt")
Output:
File ".\TSTest.py", line 51, in <module>
sm = torch.jit.script(model)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_script.py", line 898, in script
obj, torch.jit._recursive.infer_methods_to_compile
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 352, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 406, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_script.py", line 388, in _construct
init_fn(script_module)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 388, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 406, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_script.py", line 388, in _construct
init_fn(script_module)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 388, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 410, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 304, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 676, in compile_unbound_method
stub = make_stub(fn, fn.__name__)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\_recursive.py", line 37, in make_stub
ast = get_jit_def(func, name, self_name="RecursiveScriptModule")
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\frontend.py", line 221, in get_jit_def
return build_def(ctx, fn_def, type_line, def_name, self_name=self_name)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\frontend.py", line 243, in build_def
param_list = build_param_list(ctx, py_def.args, self_name)
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\jit\frontend.py", line 270, in build_param_list
raise NotSupportedError(ctx_range, _vararg_kwarg_err)
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "C:\Users\Helaly\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 164
def _forward_unimplemented(self, *input: Any) -> None:
~~~~~~ <--- HERE
r"""Defines the computation performed at every call.
and if I choose to replace jit.script with jit.trace, a huge log is dumped on screen and nothing is saved |
st180966 | Hello!
I am trying to convert a PyTorch model to the ONNX format using the torchscript functionality in Python. My code consists of multiple classes that reference and draw information from one another, and the problem is that I do not know how to connect all of them without errors so that the model can be exported. No matter what I do, I get stuck somewhere along the process. Here I have a minimal reproducible example of what it is that I am trying to do:
import numpy as np
import torch
from torch import autograd
from torch import nn
from torch import optim
import torch.nn.functional as F
import onnx
import onnxruntime
import torch.onnx
class BeamState:
def __init__(self, source=None):
if not source:
self.mean_set = []
else:
self.mean_set = source.mean_set.copy()
def append(self, mean, hidden, cluster):
self.mean_set.append(mean.clone())
class Pred (torch.jit.ScriptModule):
def __init__(self):
super(Pred, self).__init__()
self.bm = BeamState()
@torch.jit.script_method
def forward(self, x):
beam_set = self.bm(3)
prediction = x* beam_set
return prediction
if __name__ == '__main__':
batch_size = 1
x = torch.randn(batch_size, 10)
p_model = Pred()
res = p_model(x)
print("If you have reached this far, it works!", res)
torch.onnx.export(p_model, x, "onnx_test.onnx", do_constant_folding=False, export_params=True, input_names = ['input'], output_names = ['output'],
example_outputs=torch.tensor([[1.0811, 1.0180, 1.0816, 1.1487, 1.1718, 1.3082, 0.8842, 0.9389, 1.3681,
1.2647]], dtype=torch.float64), dynamic_axes={'input' : {0 : 'batch_size', 1:'utterance_size'}})
print("onnx model exported")
This produces the following error message:
Traceback (most recent call last):
File "C:\Users\User\Python\Projects\onnx_tester.py", line 50, in <module>
p_model = Pred()
File "C:\Users\User\Python\lib\site-packages\torch\jit\_script.py", line 210, in init_then_script
] = torch.jit._recursive.create_script_module(self, make_stubs, share_types=not added_methods_in_init)
File "C:\Users\User\Python\lib\site-packages\torch\jit\_recursive.py", line 352, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "C:\Users\User\Python\lib\site-packages\torch\jit\_recursive.py", line 410, in create_script_module_impl
create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
File "C:\Users\User\Python\lib\site-packages\torch\jit\_recursive.py", line 304, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
Module 'Pred' has no attribute 'bm' (This attribute exists on the Python module, but we failed to convert Python type: '__main__.BeamState' to a TorchScript type.):
File "C:\Users\User\Python\Projects\onnx_tester.py", line 42
@torch.jit.script_method
def forward(self, x):
beam_set = self.bm(3)
~~~~~~~ <--- HERE
prediction = x* beam_set
return prediction
I tried minor modifications, but every time I get a different error message and I do not know where to begin. How do we go about converting such an example to ONNX? |
st180967 | Hello,
I'm facing an issue with saving the model architecture [Serializing torch.nn.Modules]:
[screenshot: torchcat_list, 1553×436] with a list as input
[screenshot: torchcat_tuple, 1564×325] with a tuple as input.
Please help. |
st180968 | Hi,
I am trying to run a PyTorch object detector using Triton server. I used tracing for the model and scripting for the post-processing function. For a single GPU the torchscript runs smoothly on the server.
But, in the multi-GPU case, I am not able to run the compiled script. This is because the scripted post-process function memories the GPU id (cuda:0) which I used to run the torchscript and expects all the tensor operations to be performed using that id. This invariably fails when the triton server passes any other cuda device.
Is there any workaround this? |
st180969 | How are you defining the multi-GPU use case? Could you explain your deployment and where it’s currently breaking? |
st180970 | ptrblck:
How are you defining the multi-GPU use case?
In multi-GPU usecase I start one triton server on two GPUs and place one instance of the torchscript model on each GPU using triton config.
ptrblck:
Could you explain your deployment
I am using triton server 2. What other information is required?
ptrblck:
where it’s currently breaking?
The triton server is running on cuda:0 and cuda:1
Torchscript was complied using cuda:0
The error is caused by model instance on cuda:1 of the server fails with the error message expected device cuda:1 but got device cuda:0 |
st180971 | Would it work, if you write the PyTorch model device-agnostic, i.e. use cuda:0 as now and mask the other GPUs with CUDA_VISIBLE_DEVICES? |
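To illustrate the device-agnostic part, here is a rough sketch of what the scripted post-processing could look like (the function name and arguments are made up); the key point is that any helper tensor is created on the device of the incoming data instead of a hard-coded cuda:0:
import torch

@torch.jit.script
def postprocess(scores: torch.Tensor, score_thresh: float = 0.3) -> torch.Tensor:
    # take the device from the incoming tensor instead of hard-coding torch.device("cuda:0"),
    # so the same compiled function can run on cuda:0, cuda:1 or the CPU
    device = scores.device
    class_ids = torch.arange(scores.size(-1), device=device)  # helper tensor on the right device
    keep = (scores > score_thresh).float()
    return keep * class_ids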
st180972 | Im quantizing nn.Modules (with my own API, not torch.quantization). I want to train black box user modules and convert them by simply replacing some of the modules afterwards. So far so good, but I don’t know how to properly access the control flow, which I need to to know for architectures that are not only sequential: For example, if there are residual adding connections in the forward pass, for example like so:
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.sequentialstack(x) + x
x = self.softmax(x)
return x
which Id like to replace by something like
def forward(self, x):
x = F.relu(self.fc1_quant(x))
x_residual_cache = x
x = self.sequentialstack_quant(x)
x = self.quantized_adder(x, x_residual_cache)
x = self.softmax_quant(x)
return x
What is the recommended way to do this? I have read the Intro to Torchscript but don't see immediately what the recommended way to find out about the internal control flow is; so before I throw regexes at scripted_model.forward.code, I thought I'd ask what the right way to do this is. |
st180973 | Solved by marvosyntactical in post #2 |
st180974 | I found this quantization tutorial 2; the InvertedResidual class is basically what I want. So I will just change my model definition instead of modifying model code. |
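In the same spirit, a rough sketch of what that change could look like for the forward above (sizes and the Add wrapper are my own naming); the skip-add becomes a named submodule that can later be replaced by a quantized adder without touching the forward code again:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Add(nn.Module):
    # plain float addition for now; after training this submodule gets swapped for the quantized adder
    def forward(self, a, b):
        return a + b

class Net(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.sequentialstack = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.residual_add = Add()                # the skip connection is now a named submodule
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.residual_add(self.sequentialstack(x), x)
        x = self.softmax(x)
        return x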
st180975 | I’ve been trying to export a model from torch to onnx using the torch.jit.script(model()) command. I have also placed various print statements throughout my code just to make sure that it is running. However, at some point execution stops and I get the following error message:
Traceback (most recent call last):
File "C:\Users\User\Speaker-Diarization\uisrnn_plain_functions_old.py", line 269, in <module>
model = torch.jit.script(predict(x,model_args))
File "C:\Users\User\Python\lib\site-packages\torch\jit\_script.py", line 901, in script
qualified_name = _qualified_name(obj)
File "C:\Users\User\Python\lib\site-packages\torch\_jit_internal.py", line 793, in _qualified_name
raise RuntimeError("Could not get name of python class object")
RuntimeError: Could not get name of python class object
At this point I have no idea what goes wrong. I tested it on two versions of Python: Python 3.6 (Anaconda version) and Python 3.7, with PyTorch versions 1.0 and 1.7 respectively. On the Anaconda version I got the following error:
Traceback (most recent call last):
File "C:\Users\User\Speaker-Diarization\uisrnn_plain_functions_old.py", line 269, in <module>
model = torch.jit.script(predict(x,model_args))
File "C:\Users\User\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 689, in script
ast = get_jit_ast(fn, is_method=False)
File "C:\Users\User\Anaconda3\lib\site-packages\torch\jit\frontend.py", line 141, in get_jit_ast
source = dedent(inspect.getsource(fn))
File "C:\Users\User\Anaconda3\lib\inspect.py", line 973, in getsource
lines, lnum = getsourcelines(object)
File "C:\Users\User\Anaconda3\lib\inspect.py", line 955, in getsourcelines
lines, lnum = findsource(object)
File "C:\Users\User\Anaconda3\lib\inspect.py", line 768, in findsource
file = getsourcefile(object)
File "C:\Users\User\Anaconda3\lib\inspect.py", line 684, in getsourcefile
filename = getfile(object)
File "C:\Users\User\Anaconda3\lib\inspect.py", line 666, in getfile
'function, traceback, frame, or code object'.format(object))
TypeError: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] is not a module, class, method, function, traceback, frame, or code object
Since there aren’t many resources on this, I tried to replicate the methods found on PyTorch’s website without success. My code is as follows:
import numpy as np
import torch
from torch import autograd
from torch import nn
from torch import optim
import torch.nn.functional as F
import uisrnn
from uisrnn import loss_func
from uisrnn import utils
import onnx
import onnxruntime
class CoreRNN(nn.Module):
"""The core Recurent Neural Network used by UIS-RNN."""
def __init__(self, input_dim, hidden_size, depth, observation_dim, dropout=0):
super(CoreRNN, self).__init__()
self.hidden_size = hidden_size
if depth >= 2:
self.gru = nn.GRU(input_dim, hidden_size, depth, dropout=dropout)
else:
self.gru = nn.GRU(input_dim, hidden_size, depth)
self.linear_mean1 = nn.Linear(hidden_size, hidden_size)
self.linear_mean2 = nn.Linear(hidden_size, observation_dim)
def forward(self, input_seq, hidden=None):
output_seq, hidden = self.gru(input_seq, hidden)
if isinstance(output_seq, torch.nn.utils.rnn.PackedSequence):
output_seq, _ = torch.nn.utils.rnn.pad_packed_sequence(output_seq, batch_first=False)
mean = self.linear_mean2(F.relu(self.linear_mean1(output_seq)))
#print("Mean and hidden from Core", mean.shape, hidden.shape)
return mean, hidden
class BeamState:
"""Structure that contains necessary states for beam search."""
def __init__(self, source=None):
if not source:
self.mean_set = []
self.hidden_set = []
self.neg_likelihood = 0
self.trace = []
self.block_counts = []
else:
self.mean_set = source.mean_set.copy()
self.hidden_set = source.hidden_set.copy()
self.trace = source.trace.copy()
self.block_counts = source.block_counts.copy()
self.neg_likelihood = source.neg_likelihood
def append(self, mean, hidden, cluster):
#print("Mean and hidden from Beamstate", mean.shape, hidden.shape, cluster)
"""Append new item to the BeamState."""
self.mean_set.append(mean.clone()) #old code
self.hidden_set.append(hidden.clone()) #old code
#self.mean_set.append(mean) #old code
#self.hidden_set.append(mean) #old code
self.block_counts.append(1)
self.trace.append(cluster)
def uisrnn_onnx(model, ort_session, init_input, rnn_init_hidden):
ort_inputs = {ort_session.get_inputs()[0].name: init_input}
mean, hidden = ort_session.run(None, ort_inputs)
return mean, hidden
#@torch.jit.script
def _update_beam_state(beam_state, look_ahead_seq, cluster_seq):
"""Update a beam state given a look ahead sequence and known cluster
assignments.
Args:
beam_state: A BeamState object.
look_ahead_seq: Look ahead sequence, size: look_ahead*D.
look_ahead: number of step to look ahead in the beam search.
D: observation dimension
cluster_seq: Cluster assignment sequence for look_ahead_seq.
Returns:
new_beam_state: An updated BeamState object.
"""
loss = 0
new_beam_state = BeamState(beam_state)
for sub_idx, cluster in enumerate(cluster_seq):
if cluster > len(new_beam_state.mean_set): # invalid trace
new_beam_state.neg_likelihood = float('inf')
break
elif cluster < len(new_beam_state.mean_set): # existing cluster
last_cluster = new_beam_state.trace[-1]
loss = loss_func.weighted_mse_loss(input_tensor=torch.squeeze(new_beam_state.mean_set[cluster]), target_tensor=look_ahead_seq[sub_idx, :], weight=1 / (2 * sigma2)).cpu().detach().numpy() #old code
#loss = loss_func.weighted_mse_loss(input_tensor=torch.squeeze(torch.tensor(new_beam_state.mean_set[cluster])), target_tensor=look_ahead_seq[sub_idx, :], weight=1 / (2 * sigma2)).cpu().detach().numpy()
if cluster == last_cluster:
loss -= np.log(1 - transition_bias)
else:
loss -= np.log(transition_bias) + np.log( new_beam_state.block_counts[cluster]) - np.log( sum(new_beam_state.block_counts) + crp_alpha)
# update new mean and new hidden
mean, hidden = rnn_model(look_ahead_seq[sub_idx, :].unsqueeze(0).unsqueeze(0), new_beam_state.hidden_set[cluster]) # old code
#mean, hidden = uisrnn_onnx(onnx_model, ort_session, look_ahead_seq[sub_idx, :].unsqueeze(0).unsqueeze(0), new_beam_state.hidden_set[cluster]) #old code
#mean, hidden = uisrnn_onnx(onnx_model, ort_session, look_ahead_seq[sub_idx, :].unsqueeze(0).unsqueeze(0).cpu().detach().numpy(), new_beam_state.hidden_set[cluster])
new_beam_state.mean_set[cluster] = (new_beam_state.mean_set[cluster]*((np.array(new_beam_state.trace) == cluster).sum() -1).astype(float) + mean.clone()) / (np.array(new_beam_state.trace) == cluster).sum().astype(float) #old code
#new_beam_state.mean_set[cluster] = (new_beam_state.mean_set[cluster]*((np.array(new_beam_state.trace) == cluster).sum() -1).astype(float) + np.copy(mean)) / (np.array(new_beam_state.trace) == cluster).sum().astype(float) # use mean to predict
new_beam_state.hidden_set[cluster] = hidden.clone() #old code
#new_beam_state.hidden_set[cluster] = hidden
if cluster != last_cluster:
new_beam_state.block_counts[cluster] += 1
new_beam_state.trace.append(cluster)
else: # new cluster
init_input = autograd.Variable(torch.zeros(observation_dim)).unsqueeze(0).unsqueeze(0).to(device) #old code
#init_input = autograd.Variable(torch.zeros(observation_dim)).unsqueeze(0).unsqueeze(0).to(device).cpu().detach().numpy()
#print(init_input.shape, "Shape of init_input")
mean, hidden = rnn_model(init_input, rnn_init_hidden) #old code
#mean, hidden = uisrnn_onnx(onnx_model, ort_session, init_input, rnn_init_hidden)
loss = loss_func.weighted_mse_loss(input_tensor=torch.squeeze(mean), target_tensor=look_ahead_seq[sub_idx, :], weight=1 / (2 * sigma2)).cpu().detach().numpy() #old code
#loss = loss_func.weighted_mse_loss(input_tensor=torch.squeeze(torch.tensor(mean)), target_tensor=look_ahead_seq[sub_idx, :], weight=1 / (2 * sigma2)).cpu().detach().numpy()
loss -= np.log(transition_bias) + np.log(crp_alpha) - np.log(sum(new_beam_state.block_counts) + crp_alpha)
# update new min and new hidden
mean, hidden = rnn_model(look_ahead_seq[sub_idx, :].unsqueeze(0).unsqueeze(0), hidden) #old code
#mean, hidden = uisrnn_onnx(onnx_model, ort_session, look_ahead_seq[sub_idx, :].unsqueeze(0).unsqueeze(0), hidden) #old code
#mean, hidden = uisrnn_onnx(onnx_model, ort_session, look_ahead_seq[sub_idx, :].unsqueeze(0).unsqueeze(0).cpu().detach().numpy(), hidden)
new_beam_state.append(mean, hidden, cluster)
new_beam_state.neg_likelihood += loss
return new_beam_state
def _calculate_score(beam_state, look_ahead_seq):
#print('Beamstate from calculate_score', beam_state)
"""Calculate negative log likelihoods for all possible state allocations
of a look ahead sequence, according to the current beam state.
Args:
beam_state: A BeamState object.
look_ahead_seq: Look ahead sequence, size: look_ahead*D.
look_ahead: number of step to look ahead in the beam search.
D: observation dimension
Returns:
beam_score_set: a set of scores for each possible state allocation.
"""
look_ahead, _ = look_ahead_seq.shape
beam_num_clusters = len(beam_state.mean_set)
beam_score_set = float('inf') * np.ones(beam_num_clusters + 1 + np.arange(look_ahead))
for cluster_seq, _ in np.ndenumerate(beam_score_set):
updated_beam_state = _update_beam_state(beam_state, look_ahead_seq, cluster_seq)
beam_score_set[cluster_seq] = updated_beam_state.neg_likelihood
return beam_score_set
def load(filepath):
"""Load the model from a file.
Args:
filepath: the path of the file.
"""
var_dict = torch.load(filepath)
#rnn_model.load_state_dict(var_dict['rnn_state_dict'])
rnn_init_hidden = nn.Parameter(torch.from_numpy(var_dict['rnn_init_hidden']).to(device))
transition_bias = float(var_dict['transition_bias'])
transition_bias_denominator = float(var_dict['transition_bias_denominator'])
crp_alpha = float(var_dict['crp_alpha'])
sigma2 = nn.Parameter(torch.from_numpy(var_dict['sigma2']).to(device))
return var_dict
def predict_single(test_sequence, args):
# check type
if (not isinstance(test_sequence, np.ndarray) or
test_sequence.dtype != float):
raise TypeError('test_sequence should be a numpy array of float type.')
# check dimension
if test_sequence.ndim != 2:
raise ValueError('test_sequence must be 2-dim array.')
# check size
test_sequence_length, observation_dim = test_sequence.shape
if observation_dim != observation_dim:
raise ValueError('test_sequence does not match the dimension specified '
'by args.observation_dim.')
test_sequence = np.tile(test_sequence, (2, 1)) #args.test_iteration =2
test_sequence = autograd.Variable(torch.from_numpy(test_sequence).float()).to(device)
# bookkeeping for beam search
beam_set = [BeamState()]
for num_iter in np.arange(0, 2 * test_sequence_length, 1): #args.test_iteration= 2, args.look_ahead=1
max_clusters = max([len(beam_state.mean_set) for beam_state in beam_set])
look_ahead_seq = test_sequence[num_iter: num_iter + 1, :] #args.look_ahead=1
look_ahead_seq_length = look_ahead_seq.shape[0]
score_set = float('inf') * np.ones(np.append(10, max_clusters + 1 + np.arange(look_ahead_seq_length))) #args.beam_size = 10
for beam_rank, beam_state in enumerate(beam_set):
beam_score_set = _calculate_score(beam_state, look_ahead_seq)
score_set[beam_rank, :] = np.pad(beam_score_set, np.tile([[0, max_clusters - len(beam_state.mean_set)]], (look_ahead_seq_length, 1)), 'constant',constant_values=float('inf'))
# find top scores
score_ranked = np.sort(score_set, axis=None)
score_ranked[score_ranked == float('inf')] = 0
score_ranked = np.trim_zeros(score_ranked)
## print("score ranked: " + __name__, score_ranked.shape)
idx_ranked = np.argsort(score_set, axis=None)
updated_beam_set = []
for new_beam_rank in range(np.min((len(score_ranked), 10))): # args.beam_size=10
total_idx = np.unravel_index(idx_ranked[new_beam_rank], score_set.shape)
prev_beam_rank = total_idx[0]
cluster_seq = total_idx[1:]
updated_beam_state = _update_beam_state(beam_set[prev_beam_rank], look_ahead_seq, cluster_seq)
updated_beam_set.append(updated_beam_state)
beam_set = updated_beam_set
predicted_cluster_id = beam_set[0].trace[-test_sequence_length:]
return predicted_cluster_id
def predict(test_sequences, args):
print("Predict method")
# check type
if isinstance(test_sequences, np.ndarray):
return predict_single(test_sequences, args)
if isinstance(test_sequences, list):
return [predict_single(test_sequence, args) for test_sequence in test_sequences]
raise TypeError('test_sequences should be either a list or numpy array.')
SAVED_MODEL_NAME = 'C:/Users/User/Speaker-Diarization/pretrained/saved_model.uisrnn_benchmark'
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
observation_dim = 512
model_args, _, inference_args = uisrnn.parse_arguments()
args = uisrnn.parse_arguments()
model_args.observation_dim = observation_dim
rnn_init_hidden = nn.Parameter(torch.zeros(model_args.rnn_depth, 1, model_args.rnn_hidden_size).to(device))
var_dict = load(SAVED_MODEL_NAME)
rnn_init_hidden = nn.Parameter(torch.from_numpy(var_dict['rnn_init_hidden']).to(device))
_INITIAL_SIGMA2_VALUE = 0.1
estimate_sigma2 = (model_args.sigma2 is None)
estimate_transition_bias = (model_args.transition_bias is None)
sigma2 = nn.Parameter(torch.from_numpy(var_dict['sigma2']).to(device))
transition_bias = float(var_dict['transition_bias'])
transition_bias_denominator = float(var_dict['transition_bias_denominator'])
crp_alpha = float(var_dict['crp_alpha'])
rnn_model = CoreRNN(model_args.observation_dim, model_args.rnn_hidden_size, model_args.rnn_depth, model_args.observation_dim, model_args.rnn_dropout).to(device)
rnn_model.load_state_dict(var_dict['rnn_state_dict'])
batch_size = 1
x = np.ones((batch_size, 512)).tolist()
x = torch.randn(1, batch_size, 512)
x = np.zeros((40,512))
model = torch.jit.script(predict(x,model_args))
#model.load(SAVED_MODEL_NAME)
torch.onnx.export(uisrnnModel, x, "uisRNN_core.onnx", do_constant_folding=False, export_params=True, input_names = ['input'], output_names = ['output'], dynamic_axes={'input' : {0 : 'batch_size', 1:'utterance_size'}})
print("onnx model exported")
I also tried other tweaks, such as the use of decorator @torch.jit.script at the very top of my script. This produces the following error:
Traceback (most recent call last):
File "C:\Users\User\Speaker-Diarization\uisrnn_plain_functions_old.py", line 15, in <module>
class CoreRNN(nn.Module):
File "C:\Users\User\Python\lib\site-packages\torch\jit\_script.py", line 909, in script
" pass an instance instead".format(obj)
RuntimeError: Type '<class '__main__.CoreRNN'>' cannot be compiled since it inherits from nn.Module, pass an instance instead
Finally, if I try to export without the use of jit, I get this error:
Traceback (most recent call last):
File "C:\Users\User\Speaker-Diarization\uisrnn_plain_functions_old.py", line 271, in <module>
torch.onnx.export(model, x, "uisRNN_core.onnx", do_constant_folding=False, export_params=True, input_names = ['input'], output_names = ['output'], dynamic_axes={'input' : {0 : 'batch_size', 1:'utterance_size'}})
File "C:\Users\User\Python\lib\site-packages\torch\onnx\__init__.py", line 230, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "C:\Users\User\Python\lib\site-packages\torch\onnx\utils.py", line 91, in export
use_external_data_format=use_external_data_format)
File "C:\Users\User\Python\lib\site-packages\torch\onnx\utils.py", line 618, in _export
with select_model_mode_for_export(model, training):
File "C:\Users\User\Python\lib\contextlib.py", line 112, in __enter__
return next(self.gen)
File "C:\Users\User\Python\lib\site-packages\torch\onnx\utils.py", line 35, in select_model_mode_for_export
is_originally_training = model.training
AttributeError: 'list' object has no attribute 'training'
What should I do to export this model? Every plan fell through. I've tried all possible combinations of placing decorators and using the torch.jit.script and torch.jit.trace functions. I've also attempted to pass in an instance of the model calling model = CoreRNN(), but to no avail. Every contrived experiment from PyTorch's website I have attempted to recreate works fine. Are there any more advanced tutorials on this? Any help is greatly appreciated. |
st180976 | Hi, I added a custom IR pass in my PyTorch, and I want it to work in other people's PyTorch, so can I convert my optimized IR back to a TorchScript model file? Thank you very much! |
st180977 | Hi all,
I am trying to load a TorchScript model in C++, but got an error:
[screenshot of the error message, 961×354]
The input data is actually double type. I tried to use .double() following here but cannot run make.
My code:
#include <torch/script.h> // One-stop header.
#include <iostream>
#include <memory>
int main(int argc, const char* argv[]) {
std::vector<double> inputdata({ 1.0, 2.0, 3.0, 4.0, 5.0 });
std::cout << inputdata << std::endl;
auto opts = torch::TensorOptions().dtype(torch::kDouble);
torch::Tensor input = torch::from_blob(inputdata.data(), {1, (int)inputdata.size()}, opts).to(torch::kDouble);
std::cout << input << std::endl;
if (argc != 2) {
std::cerr << "usage: example-app <path-to-exported-script-module>\n";
return -1;
}
torch::jit::script::Module module;
try {
// Deserialize the ScriptModule from a file using torch::jit::load().
module = torch::jit::load(argv[1]);
}
catch (const c10::Error& e) {
std::cerr << "error loading the model\n";
return -1;
}
std::cout << "ok" << std::endl;
std::vector<torch::jit::IValue> input_invar;
input_invar.push_back(input);
std::cout << "run forward" << std::endl;
at::Tensor output = module.forward(input_invar).toTensor();
std::vector<double> out_vect(output.data_ptr<double>(), output.data_ptr<double>() + output.numel());
std::cout << out_vect << std::endl;
}
Could you give me some suggestions, please?
Best. |
st180978 | Based on the error message it seems your model is using the default FloatTensors, while you are trying to pass DoubleTensors to its forward method.
Transform the model to double or the input tensors to float and it should work. |
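A small sketch of the first option on the Python side, before exporting (the Sequential below is just a stand-in for your network; alternatively, build the input tensor with torch::kFloat in C++):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 3), nn.ReLU(), nn.Linear(3, 1))  # stand-in for the real network
model = model.double()            # parameters/buffers become float64, matching kDouble inputs from C++
scripted = torch.jit.script(model)
scripted.save("model_double.pt")

# quick sanity check that a double input now goes through
out = scripted(torch.ones(1, 5, dtype=torch.float64))
print(out.dtype)                  # torch.float64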
st180979 | Hi All,
Trying to use Torchscript to convert my model and running into an issue, would like to know what best way forward is.
Say you have nn.Module as follows:
class Model(nn.Module):
def __init__(self, hidden_size, input_size, num_layers, dropout):
super(Model, self).__init__()
self.hidden_size = hidden_size
self.input_size = input_size
self.num_layers = num_layers
self.dropout = dropout
modules = []
for layer in range(self.num_layers):
lstm = nn.LSTM(self.input_size if layer == 0 else self.hidden_size * 2, self.hidden_size, bidirectional=True, batch_first=True)
modules.append(lstm)
if layer != 0:
linear = nn.Linear(self.hidden_size * 2, self.hidden_size * 2)  # output size assumed equal to the input size
modules.append(linear)
if layer != self.num_layers - 1:
dropout = nn.Dropout(p=self.dropout)
modules.append(dropout)
self.modulelist = nn.ModuleList(modules)
def forward(self, X):
for module in self.modulelist:
if ".LSTM" in str(type(module)):
pack_sequence(...)
# Do something ...
pack_sequence(...)
elif ".Linear" in str(type(module)):
# Do something else ...
elif ".Dropout" in str(type(module)):
# Do something else ...
return X
Resulting in a ModuleList with layer structure:
LSTM -> Dropout -> LSTM -> Linear -> Dropout -> LSTM -> Linear -> Dropout -> LSTM -> Linear
I get an error when calling scripted_module = torch.jit.script(Model(hidden_size=256, input_size=512, num_layers=4, dropout=0.3)) when trying to check the type of the module while iterating through the ModuleList during the forward pass.
Is there a way to circumvent this issue i.e. is this kind of forward-pass supported by TorchScript? Thanks! |
st180980 | What I want to do is run a script with PYTORCH_JIT=0, but still have some free functions optimized (i.e. opt-in mode). My quick test suggests that the following works:
torch.jit._state.enable()
@torch.jit.script
def f(x):
return x.square()
torch.jit._state.disable() #TODO:restore
Is this reliable, or will some global objects get messed up? |
st180981 | Solved by googlebot in post #2 |
st180982 | Apparently, fuser doesn’t work this way. I’ll just resort to skipping some compilations then. |
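For what it's worth, "skipping some compilations" can also be done without touching torch.jit._state at all; a sketch of an opt-in wrapper (the OPT_IN_JIT name is made up):
import os
import torch

def maybe_script(fn):
    # compile only the functions that are explicitly opted in; everything else stays plain Python
    if os.environ.get("OPT_IN_JIT", "0") == "1":
        return torch.jit.script(fn)
    return fn

@maybe_script
def f(x):
    return x.square()

print(f(torch.arange(4.0)))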
st180983 | I would like to know if there is a way to access a traced model's parameters. Or, better said, is there any way to untrace a model? |
st180984 | Hello, all.
Please imagine that we want to separate the process of the Deep learning model into the feature extraction process and the classification process.
For example, the feature extraction process of AlexNet is from the first convolution layer and the last pooling layer which is located in front of the first fully connected layer. The remaining part from the first fully connected layer to the end of the Deep learning model is the classification process.
I checked that this partitioning can be achieved for the pre-trained AlexNet model in the Torchvision package. The feature extraction process is implemented in features which is a nn.Sequential instance and the classification process is implemented in classifier which is also a nn.Sequential instance.
Thus, model.features.forward(input data) gives me the result of the feature extraction and model.classifier.forward(feature data) gives me the result of the classification.
My question is whether this is also possible for the Torchscript model.
According to the tutorial of Torchscript, we use torch.jit.save(PyTorch model, filename) to convert a PyTorch model to a Torchscript model.
Then, when I load the saved Torchscript model, it is impossible to access features directly like the case of PyTorch model.
The reason, I thought, is that the PyTorch model is eager mode and the TorchScript model is graph mode. Thus, the graph-mode model might not be able to be separated.
If my idea is correct, is there no way to separate graph mode model into two parts without conducting torch.jit.save for features and classifier?
If my idea is not correct, please share your idea.
Many thanks ! |
st180985 | Solved by vferrer in post #2 |
st180986 | Hi
You can export custom methods with TorchScript. So, I would make your model have an "extract_features" method that extracts them and mark it as @jit.export. Then you can call model.extract_features(x). |
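A minimal sketch of that idea (layer sizes are placeholders):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(8, 10)

    @torch.jit.export
    def extract_features(self, x):
        return torch.flatten(self.features(x), 1)

    def forward(self, x):
        return self.classifier(self.extract_features(x))

scripted = torch.jit.script(Net())
scripted.save("net.pt")

loaded = torch.jit.load("net.pt")
feats = loaded.extract_features(torch.randn(1, 3, 32, 32))   # the exported method survives saving/loading
print(feats.shape)                                           # torch.Size([1, 8])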
st180987 | Hi,
I’m trying to use torch.jit.script to script a modified version of ultralytics yolov3
I’ve modified it a bit to fix all kinds of errors during the scripting process and now I’m getting
File "C:\ProgramData\Anaconda3\envs\yolo_pytorch\lib\site-packages\torch\jit\_recursive.py", line 325, in init_fn
cpp_module.setattr(name, orig_value)
RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)
It happens when trying to set the attribute for the module_def, which is of type List[Dict[str, str]]
I’ve modified the module_defs for all the keys & values to be strings
I’m using torch 1.5.0 on Windows 10 64 bit
Any suggestions on how to tackle this? If more info is needed, let me know what to provide. |
st180988 | I would generally recommend updating to the latest stable version, as a lot was fixed/added in the JIT. |
st180989 | The problem still happened in 1.7.0, but then I iterated the list and found that in some cases, one of the values was initialized as int(0). I changed that to "0" and it worked
thanks! |
st180990 | Hi, I just added a pass in TorchScript IR to convert BertLayer to the FasterTransformer Encoder; however, I find the model is slow after converting to TorchScript. I got an nvprof result and found a time-consuming activity:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 57.50% 1.49484s 25200 59.319us 3.2000us 151.55us _ZN2at6native27unrolled_elementwise_kernelIZZZNS0_21copy_device_to_deviceERNS_14TensorIteratorEbENKUlvE0_clEvENKUlvE2_clEvEUlfE_NS_6detail5ArrayIPcLi2EEE16OffsetCalculatorILi1EjESC_NS0_6memory15LoadWithoutCastENSD_16StoreWithoutCastEEEviT_T0_T1_T2_T3_T4_
I looked at my final TorchScript IR, and I guess the reason is that each time it runs it will do aten::contiguous several times, like:
%1752 : Float(*, *, requires_grad=1, device=cuda:0) = aten::contiguous(%1153, %21)
aten::contiguous is needed for tensors which will be sent to the custom op because they are converted by .transpose(-1, -2) first, but aten::contiguous seems time consuming. So is there any way that I can convert model weights to constants in TorchScript IR so that aten::contiguous(weights) will be converted to a Constant tensor, or can I do something to avoid aten::contiguous? Thank you very much! |
st180991 | .contiguous() is copying the data, if the data isn’t stored in a contiguous memory array e.g. after a transpose.
If your kernel needs to work on a contiguous array and you need to permute the tensor (i.e. you cannot pass it in the expected shape from the beginning), I don’t think there is a workaround. |
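A quick illustration of what that means in eager mode:
import torch

w = torch.randn(3, 4)
t = w.transpose(-1, -2)
print(t.is_contiguous())               # False: transpose only swaps strides, no data is moved
c = t.contiguous()                     # this is where the copy (and the kernel in the profile) happens
print(c.data_ptr() == w.data_ptr())    # False: c lives in newly allocated memory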
st180992 | Thank you for the response. So can I freeze weights to Constant in TorchScript IR? I mean, I just want to add Tensor t = model.weight.data.transpose(-1, -2).contiguous() in TorchScript IR; if I convert model.weight to a Constant, then Tensor t will be optimized to a Constant by the pass.
My current IR is:
%1128 : Float(*, *, requires_grad=1, device=cuda:0) = prim::GetAttr[name="weight"](%1126)
%1436 : Float(*, *, requires_grad=1, device=cuda:0) = aten::transpose(%1128, %10, %1147)
%1610 : Float(*, *, requires_grad=1, device=cuda:0) = aten::contiguous(%1436, %21)
And I want to convert it to:
%1060 : Tensor = prim::Constant[value=<Tensor>]() |
st180993 | Are you calling the transpose operation in the __init__ method of your model, the forward or somewhere else?
Could you transpose the parameter before and pass it to the model directly?
Also, don’t use the .data attribute as it might yield unwanted side effects. |
st180994 | Sorry, I need to write a pass for TorchScript IR so I can't control the model; I want to convert the model's weights to constants just in the IR pass. |
st180995 | Or can I get the actual Tensor of a model weight in a TorchScript pass? If I have:
%weight : Tensor = prim::GetAttr[name="weight"](%1)
can I get the actual Tensor from the Value %weight? |
st180996 | Yes, you should be able to get the tensor by its name. However, based on the IR it seems your graph contains the transpose and contiguous op, as it’s apparently needed in a custom layer.
I'm unsure how you would like to avoid these operations. |
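One thing that might be worth a try, depending on your PyTorch version: torch.jit.freeze inlines the module's parameters as constants and runs constant propagation, which should turn the prim::GetAttr + transpose + contiguous chain on the weight into a pre-computed constant. A minimal sketch:
import torch
import torch.nn as nn

scripted = torch.jit.script(nn.Linear(4, 4)).eval()
frozen = torch.jit.freeze(scripted)    # parameters become prim::Constant nodes and pure ops on them get folded
print(frozen.graph)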
st180997 | I want to get the actual Tensor of %weight, then create a new Constant Node from this Tensor and insert it into the Graph after transposing it, so how do I get the tensor by name? Thank you very much! |
st180998 | Can I register a custom op with the Python API and load the generated dynamic library with C++, as in custom op register? |
st180999 | I converted a model (VGG19 with just 1 FC layer) using PyTorch 1.7; when I load it on a container with 1.4, it fails to load with a "version too old" error.
Is this the intended behavior?
RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 4, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:132) |