st181200 | Hi, yes you can run:
old_prof_exec_state = torch._C._jit_set_profiling_executor(True)
old_prof_mode_state = torch._C._jit_set_profiling_mode(True)
torch._C._jit_set_num_profiled_runs(num_runs)
run your scripted function/module for some number of times,
then run:
torch.jit.last_executed_optimized_graph() |
st181201 | Hi sir,
I can visualize prim::profile nodes.
Can you please explain the solution you have provided in more detail? I cannot understand the set of commands you have given. |
st181202 | old_prof_exec_state = torch._C._jit_set_profiling_executor(True)
old_prof_mode_state = torch._C._jit_set_profiling_mode(True)
this enables the profiling executor.
torch._C._jit_set_num_profiled_runs(num_runs)
this sets how many profiling runs we want to do before we optimize the graph.
@torch.jit.script
def foo(x):
if x.size(0) == 1:
return 1
else:
return 2
old_prof_exec_state = torch._C._jit_set_profiling_executor(True)
old_prof_mode_state = torch._C._jit_set_profiling_mode(True)
torch._C._jit_set_num_profiled_runs(1)
foo(torch.rand([1, 2]))
print(torch.jit.last_executed_optimized_graph())
foo(torch.rand([1, 2]))
print(torch.jit.last_executed_optimized_graph())
gives
graph(%x.1 : Tensor):
%1 : int = prim::Constant[value=0]() # test/test_jit.py:3742:22
%2 : int = prim::Constant[value=1]() # test/test_jit.py:3742:28
%3 : int = prim::Constant[value=2]() # test/test_jit.py:3745:23
%4 : Tensor = prim::profile(%x.1)
%5 : int = aten::size(%4, %1) # test/test_jit.py:3742:15
%6 : bool = aten::eq(%5, %2) # test/test_jit.py:3742:15
%7 : int = prim::If(%6) # test/test_jit.py:3742:12
block0():
-> (%2)
block1():
-> (%3)
= prim::profile()
return (%7)
first graph, still profiling
graph(%x.1 : Tensor):
%2 : int = prim::Constant[value=1]() # test/test_jit.py:3742:28
%10 : int = prim::BailoutTemplate_0()
%9 : Double(1:2, 2:1) = prim::BailOut[index=0](%10, %x.1)
return (%2)
with prim::BailoutTemplate_0 = graph(%x.1 : Tensor):
%1 : Double(1:2, 2:1) = prim::BailOut[index=0](%x.1)
%2 : int = prim::Constant[value=0]() # test/test_jit.py:3742:22
%3 : int = prim::Constant[value=1]() # test/test_jit.py:3742:28
%4 : int = prim::Constant[value=2]() # test/test_jit.py:3745:23
%5 : int = aten::size(%1, %2) # test/test_jit.py:3742:15
%6 : bool = aten::eq(%5, %3) # test/test_jit.py:3742:15
%7 : int = prim::If(%6) # test/test_jit.py:3742:12
block0():
-> (%3)
block1():
-> (%4)
return (%7)
second graph, optimized from profiles.
additionally, you can read
https://github.com/pytorch/pytorch/blob/53af9df557aff745edf24193ece784fd008c6f19/torch/csrc/jit/OVERVIEW.md#profiling-programs |
st181203 | Hello sir,
I tried to visualize the runtime performance improvement made by convolution layer which I implemented from scratch Vs torchscript version of convolution layer Vs torch.nn.conv2d() module for 100 iterations with input (128,3,28,28), out_channel =64, kernel size=3.
Convolution layer from scratch in CUDA -> 9.366 seconds
torchscript convolution layer from scratch in CUDA -> 6.636 seconds
torch.nn.conv2d() -> 475.614 milliseconds.
My code
class conv2D(nn.Module):
def __init__(self, in_channel, out_channel, kernel_size):
super(conv2D,self).__init__()
self.weight = torch.nn.Parameter(torch.ones(out_channel,in_channel,kernel_size, kernel_size))
self.bias = torch.nn.Parameter(torch.zeros(out_channel))
self.kernel_size = kernel_size
self.in_channel = in_channel
self.out_channel = out_channel
def forward(self, image):
img_height = image.shape[3]
img_width = image.shape[2]
batch_size = image.shape[0]
out_height = img_height-self.kernel_size+1
out_width = img_width-self.kernel_size+1
output = torch.zeros(batch_size,self.out_channel,out_width,out_height)
for k in range(batch_size):
for i in range(out_height):
for j in range(out_width):
temp = torch.sum(image[k,:,j:j+self.kernel_size,i:i+self.kernel_size]*self.weight,dim=(1,2,3))
output[k,:,i,j]=torch.add(temp,self.bias)
return output
Scripting the model and running with a sample input to get an optimized graph
x = torch.ones(128,3,28,28).to("cuda")
c = conv2D(3,64,3).to("cuda")
c_s = torch.jit.script(c).to("cuda")
c_s(x)
Profiling both the scripted and normal method.
with torch.autograd.profiler.profile(use_cuda=True) as prof:
with torch.no_grad():
for i in range(100):
c(x)
print(prof.table())
with torch.autograd.profiler.profile(use_cuda=True) as prof:
with torch.no_grad():
for i in range(100):
c_s(x)
print(prof.table())
Is there any problem with my approach, and how can I optimize it even more? I request your help with this problem. |
st181204 | Hi,
Sorry for the delay. Currently we only generate new optimized kernels for a series of pointwise ops on GPU. If that is not part of your use case, it is unlikely you will see a speedup at the moment. |
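For illustration, a minimal sketch (not from the thread) of the kind of pointwise-only chain the GPU fuser can currently speed up; the function name and shapes are made up:
import torch

@torch.jit.script
def pointwise_chain(x, y):
    # a chain of elementwise ops like this can be fused into a single CUDA kernel
    return torch.sigmoid(x * y + 1.0) * torch.tanh(y)

a = torch.randn(1024, 1024, device='cuda')
b = torch.randn(1024, 1024, device='cuda')
for _ in range(3):  # give the profiling executor a few runs to specialize the graph
    pointwise_chain(a, b)
print(torch.jit.last_executed_optimized_graph())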
st181205 | Here is my forward function.
def forward(self, inputs):
if self.first_time:
p3, p4, p5 = inputs
p3_in = self.f3(p3)
p4_in = self.f4(p4)
p5_in = self.f5(p5)
else:
p3_in, p4_in, p5_in = inputs
error shows:
RuntimeError:
Tensor cannot be used as a tuple:
if self.first_time:
p3, p4, p5 = inputs
~~~~~~ <— HERE
Is there any way to resolve this error? |
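Not an answer given in the thread, but a common fix for this error is telling TorchScript that inputs is a tuple of tensors via a type annotation, since script assumes unannotated arguments are single Tensors. A minimal sketch, assuming all three feature maps are plain tensors and using a made-up module name:
from typing import Tuple
import torch
from torch import Tensor

class FPNHead(torch.nn.Module):  # hypothetical module name
    def forward(self, inputs: Tuple[Tensor, Tensor, Tensor]):
        # with the annotation, tuple unpacking compiles under torch.jit.script
        p3, p4, p5 = inputs
        return p3.sum() + p4.sum() + p5.sum()

scripted = torch.jit.script(FPNHead())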
st181206 | I got a model trained with PyTorch 1.4. If I script and save this model with PyTorch 1.4, it will save successfully, but I need to script and save this model with PyTorch 1.3. When I save model with 1.3, I got error message:
RuntimeError: Unknown IValue type for pickling: Device (pushIValueImpl at /pytorch/torch/csrc/jit/pickler.cpp:125)
I want to know how to fix this Device issue. Thx. |
st181207 | Solved by Michael_Suo in post #2
It implies that somewhere in your model, you’re storing a device type and trying to serialize it (like self.foo = my_device). This is supported in 1.4 but not 1.3; if you want your model to work with 1.3, you’ll have to avoid storing references to devices. |
st181208 | It implies that somewhere in your model, you’re storing a device type and trying to serialize it (like self.foo = my_device). This is supported in 1.4 but not 1.3; if you want your model to work with 1.3, you’ll have to avoid storing references to devices. |
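A minimal sketch of that workaround, with illustrative names: derive the device from an existing parameter when needed instead of storing a torch.device attribute that would have to be serialized.
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)
        # avoid: self.my_device = torch.device('cuda')  # would be pickled with the module

    def forward(self, x):
        # compute the device on the fly from an existing parameter instead
        return self.fc(x.to(self.fc.weight.device))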
st181209 | Thx.
In my class, I have a
@property
def device():...
After I delete this part, I can convert model successfully. |
st181210 | Is it possible to save an inlined and lowered graph as a ScriptModule and load it using torch.jit.load? My use case is to resolve the ClassType inputs into the actual accessed tensor parameters. |
st181211 | @eellison: Thanks for your response. I need some help/suggestion in how to proceed. Lowering a graph (which belongs to a scriptmodule’s method) returns a lowered graph and parameters. This pass replaces the input class object (for example "self") with actual parameters belonging to "self" which are accessed throughout the graph. Now, creating a method with this lowered graph and adding this method to a module will seem like adding a method which takes original inputs along with the parameter tensors. The question I had is, how do I package these parameters and this lowered graph such that when we run the method, I would only need to provide the inputs to original ScriptModule and not these parameters? Not sure if I am explaining the question well. Please do let me know if I am not thinking straight. |
st181212 | Could we do the following?
1. Run the LowerGraph pass: this pass will return a new Graph and a list of tensors.
2. Add these tensors as constants in the graph and remove the corresponding graph inputs. Replace all references to the inputs with references to the constants.
3. Create a module from this graph.
Is this a valid approach? This will create a graph whose inputs are only the original model's inputs. |
st181213 | Hi there,
I am trying to export and use a model defined in PyTorch by using the “torch.jit.script” method. When I try to use the exported model, the results obtained are awful compared with the original results. The model loads pretrained weights, and after all of this I use the “torch.jit.script” method and save the model to a file. I have found a related issue on GitHub, however I have not managed to solve the problem.
Any idea what the problem is? Thank you in advance. |
st181214 | Solved by eusebioaguilera in post #3
Thank you for your response. Finally, I have managed to get similar results using the exported model. The problem was that the tensor was transposed by using a custom ToTensor transform instead of using the standard ToTensor transform. |
st181215 | Do you have some code to reproduce the issue? It’s hard to say what’s going on without more information |
st181216 | Thank you for your response. Finally, I have managed to get similar results using the exported model. The problem was that the tensor was transposed by using a custom ToTensor transform instead of using the standard ToTensor transform. |
st181217 | Hi all,
I’ve been trying to use jit scripting, but so far without success. I’ve now tried to create a very simple test case, as shown. I’m running this on conda python 3.7, cuda 10.1, pytorch 1.5.1, CentOS 7.6, installed as the website says.
import torch
@torch.jit.script
def simple_kernel(x1, y1, x2, y2):
xi = torch.max(x1, x2)
yi = torch.max(y1, y2)
zi = xi+yi
return zi
x1, y1, x2, y2 = torch.randn(4, 10_000_000, device='cuda')
zz = simple_kernel(x1, y1, x2, y2)
simple_kernel.graph_for(x1, y1, x2, y2)
print(simple_kernel.graph)
When I run this, I see the following:
(base) [jlquinn test]$ PYTORCH_FUSION_DEBUG=1 python jittst1.py
graph(%x1.1 : Tensor,
%y1.1 : Tensor,
%x2.1 : Tensor,
%y2.1 : Tensor):
%12 : int = prim::Constant[value=1]()
%xi.1 : Tensor = aten::max(%x1.1, %x2.1) # jittst1.py:6:9
%yi.1 : Tensor = aten::max(%y1.1, %y2.1) # jittst1.py:7:9
%zi.1 : Tensor = aten::add(%xi.1, %yi.1, %12) # jittst1.py:8:9
return (%zi.1)
From what I’ve read, this should be a simple case for pytorch to fuse, consisting of simple pointwise operations, but it appears that there is no fusion happening. Can anyone enlighten me as to what I’m missing?
Thanks
Jerry |
st181218 | Hi,
In PyTorch 1.5 the profiling executor is enabled by default, so to visualize the fusion group you need to disable it (or let the profile-guided optimization finish its profiling runs first). Thus
torch._C._jit_set_profiling_executor(False)
@torch.jit.script
def simple_kernel(x1, y1, x2, y2):
xi = torch.max(x1, x2)
yi = torch.max(y1, y2)
zi = xi+yi
return zi
zz = simple_kernel(x1, y1, x2, y2)
print(simple_kernel.graph_for(x1,y1,x2,y2))
will print the optimized graph with the fusion group. |
st181219 | Thanks for taking the time to respond. Is this documented anywhere yet? And more importantly, when I run a torch.jit.script function, it will optimize after it’s been run a few times, right? |
st181220 | Hey,
I plan to export a torchscript graph representation by using the trace(...) command and then using trace.graph(). After that, I plan to apply some optimisations to the given graph, and then I want to convert the result back to the ScriptModule class.
I am familiar with: torch._C._create_function_from_graph('forward', graph)
but it returns a ScriptFunction class. I would like to use it as a module (ScriptModule class).
I need it as a module class because I need to be able to access its parameters via the parameters() method.
thanks, Omer. |
st181221 | if you are mutating the graph in-place, the original ScriptModule should reflect your optimizations. Today we have no public way of creating a ScriptModule from a graph alone |
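A rough sketch of the in-place workflow described above; the model class is a placeholder and the inlining pass is just one example of a pass that mutates the graph in place:
import torch

class TinyModel(torch.nn.Module):  # placeholder model
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(4, 4)
    def forward(self, x):
        return self.lin(x)

scripted = torch.jit.script(TinyModel())
torch._C._jit_pass_inline(scripted.graph)  # mutate the module's graph in place
# the same ScriptModule (with its parameters) now carries the modified graph
scripted.save("optimized.pt")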
st181222 | Thanks for responding,
what do you mean by “in-place” in this case?
Let’s assume that I have a traced module that I got by: torch.jit.trace(model, sample_inputs).
Do you mean that I need to change its “graph” field or “inlined_graph” field to make the module represent the new graph? (That means all its parameters should be changed if needed.) |
st181223 | @Michael_Suo In case we do not want to modify the graph in-place,
that is, we want an nn.Module that, given a graph, has the same forward function as the graph,
and has the same parameters/buffers/modules as used in the graph.
Can we automatically create such an nn.Module given a graph by inferring everything needed? |
st181224 | I am trying to serialize my model to run in C++, but even before C++ I’m testing in python both the original and the trace model outputs on the same input and get different results, here is my python code:
import torch
from torchvision import models
import torch.nn as nn
model_file = r'D:\workspace\....\model.pt'
model_ft = models.resnet50()
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 6)
model_ft.load_state_dict(torch.load(model_file))
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model_ft, example)
# Test both outputs of original and traced model to compare
model_ft.eval()
with torch.no_grad():
output_model = model_ft(torch.zeros(1, 3, 224, 224))
output_traced_model = traced_script_module(torch.zeros(1, 3, 224, 224))
print('output_model = ' + str(output_model))
print('output_traced_model = ' + str(output_traced_model))
# save traced model
traced_script_module.save("traced_resnet_model.pt")
and the variables output_model and output_traced_model are completely different:
output_model = tensor([[-0.0805, 0.2096, 0.0873, -0.0468, -0.1598, -0.0375]])
output_traced_model = tensor([[-0.4763, 1.5731, 0.2112, 0.3496, -1.6906, 0.0191]],
grad_fn=<DifferentiableGraphBackward>) |
st181225 | The difference is created, because you are running the eager model in eval(), while the traced model was in train() mode.
From the docs 9:
In the returned ScriptModule 2, operations that have different behaviors in training and eval modes will always behave as if it is in the mode it was in during tracing, no matter which mode the ScriptModule is in. |
st181226 | Thanks for the reply.
I confirm that switching the original model to eval() mode before creating the traced model solves the issue. |
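For later readers, the resulting one-line change to the snippet above is simply:
model_ft.eval()  # freeze train/eval-dependent ops (dropout, batchnorm) before tracing
traced_script_module = torch.jit.trace(model_ft, example)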
st181227 | Hi everyone,
is there an easy way to convert a torchscripted model back to an eager model ?
Example:
eager_model = MyModel()
scripted_model = torch.jit.script(eager_model)
recovered_eager_model = some_function(scripted_model) ### could not find anything about it in the docs |
st181228 | Solved by tom in post #2
No, and it is strongly advised that you keep your source code around when doing stuff with JITed models.
That said, you can probably get a reduced model by walking the traced module, setting the children in init and using the .forward of each module.
This would get you something you can run and re… |
st181229 | No, and it is strongly advised that you keep your source code around when doing stuff with JITed models.
That said, you can probably get a reduced model by walking the traced module, setting the children in init and using the .forward of each module.
This would get you something you can run and re-trace, but you lose how it was built (i.e. the inits).
Fun fact: In building a PyTorch-TVM bridge I built a PyTorch autograd function (but with TVM as backend instead of calling PyTorch) corresponding to the traced model.
Best regards
Thomas |
st181230 | Hello everyone, I’m new to ONNX and I’m trying to convert a model where I need do some for-loop assignmens like the code below,
import torch
import torch.nn as nn
@torch.jit.script
def create_alignment_v2():
base_mat = torch.zeros(2, 2, 2)
for i in range(base_mat.size(0)):
base_mat[i][0][0] = 1
return base_mat
class ToyModule(nn.Module):
def __init__(self):
super().__init__()
def forward(self, duration_predictor_output):
alignment = create_alignment_v2()
# output = alignment @ x
return alignment
def test():
module = ToyModule()
module.eval()
x = torch.rand(2, 28, 384)
alignment = module(x)
torch.onnx.export(module, x, 'toy.onnx',
export_params=True,
opset_version=10,
do_constant_folding=True,
verbose=True,
input_names=['seq'],
output_names=['alignment'],
dynamic_axes={'seq': {0: 'batch', 1: 'sequence'},}
)
test()
And the error message is:
Traceback (most recent call last):
File "/mfs/fangzhiqiang/workspace/tts/FastSpeech/jit_script.py", line 83, in <module>
test()
File "/mfs/fangzhiqiang/workspace/tts/FastSpeech/jit_script.py", line 78, in test
dynamic_axes={'seq': {0: 'batch', 1: 'sequence'},}
File "/home/fangzhiqiang/miniconda3/envs/tts/lib/python3.6/site-packages/torch/onnx/__init__.py", line 168, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/home/fangzhiqiang/miniconda3/envs/tts/lib/python3.6/site-packages/torch/onnx/utils.py", line 69, in export
use_external_data_format=use_external_data_format)
File "/home/fangzhiqiang/miniconda3/envs/tts/lib/python3.6/site-packages/torch/onnx/utils.py", line 488, in _export
fixed_batch_size=fixed_batch_size)
File "/home/fangzhiqiang/miniconda3/envs/tts/lib/python3.6/site-packages/torch/onnx/utils.py", line 351, in _model_to_graph
fixed_batch_size=fixed_batch_size, params_dict=params_dict)
File "/home/fangzhiqiang/miniconda3/envs/tts/lib/python3.6/site-packages/torch/onnx/utils.py", line 120, in _optimize_graph
torch._C._jit_pass_onnx_prepare_inplace_ops_for_onnx(graph)
IndexError: vector::_M_range_check: __n (which is 2) >= this->size() (which is 2)
In fact, if I remove the assignment operation, the graph can be built successfully. I wonder whether this is a bug and how to convert such a model to ONNX. Thanks for any reply~ |
st181231 | I got the following warning when loading torchscript model in libtorch:
Warning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). (_cudnn_impl at /pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1269)
But when I added flatten_parameters() in my code like this:
class BidirectionalLSTM(nn.Module):
def __init__(self, nIn, nHidden, nOut):
super(BidirectionalLSTM, self).__init__()
self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)
self.embedding = nn.Linear(nHidden * 2, nOut)
def forward(self, input):
self.rnn.flatten_parameters()
recurrent, _ = self.rnn(input)
T, b, h = recurrent.size()
t_rec = recurrent.view(T * b, h)
output = self.embedding(t_rec) # [T * b, nOut]
output = output.view(T, b, -1)
return output
I got this error:
Traceback (most recent call last):
File "to_torchscript.py", line 252, in <module>
tt = torch.jit.script(teacher_model)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1203, in script
return torch.jit.torch.jit._recursive.recursive_script(obj)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 173, in recursive_script
return copy_to_script_module(mod, overload_stubs + stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 95, in copy_to_script_module
torch.jit._create_methods_from_stubs(script_module, stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 195, in make_strong_submodule
new_strong_submodule = recursive_script(module)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 116, in recursive_script
return create_constant_iterable_module(mod)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 233, in create_constant_iterable_module
modules[key] = recursive_script(submodule)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 173, in recursive_script
return copy_to_script_module(mod, overload_stubs + stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 95, in copy_to_script_module
torch.jit._create_methods_from_stubs(script_module, stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 181, in create_method_from_fn
stub = torch.jit.script_method(fn, _jit_internal.createResolutionCallbackFromClosure(fn))
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1280, in script_method
ast = get_jit_def(fn, self_name="ScriptModule")
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 169, in get_jit_def
return build_def(ctx, py_ast.body[0], type_line, self_name)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 209, in build_def
build_stmts(ctx, body))
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 127, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 127, in <listcomp>
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 185, in __call__
return method(ctx, node)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 283, in build_Assign
rhs = build_expr(ctx, stmt.value)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 185, in __call__
return method(ctx, node)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 442, in build_Call
args = [build_expr(ctx, py_arg) for py_arg in expr.args]
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 442, in <listcomp>
args = [build_expr(ctx, py_arg) for py_arg in expr.args]
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/frontend.py", line 184, in __call__
raise UnsupportedNodeError(ctx, node)
torch.jit.frontend.UnsupportedNodeError: GeneratorExp aren't supported:
at /home/dai/py36env/lib/python3.6/site-packages/torch/nn/modules/rnn.py:105:31
any_param = next(self.parameters()).data
if not any_param.is_cuda or not torch.backends.cudnn.is_acceptable(any_param):
return
# If any parameters alias, we fall back to the slower, copying code path. This is
# a sufficient check, because overlapping parameter buffers that don't completely
# alias would break the assumptions of the uniqueness check in
# Module.named_parameters().
all_weights = self._flat_weights
unique_data_ptrs = set(p.data_ptr() for p in all_weights)
~ <--- HERE
if len(unique_data_ptrs) != len(all_weights):
return
with torch.cuda.device_of(any_param):
import torch.backends.cudnn.rnn as rnn
# NB: This is a temporary hack while we still don't have Tensor
# bindings for ATen functions
with torch.no_grad():
'__torch__.BidirectionalLSTM.forward' is being compiled since it was called from '__torch__.teacher.forward'
at to_torchscript.py:221:8
def forward(self, input):
# conv features
conv = self.cnn(input)
b, c, h, w = conv.size()
assert h == 1, "the height of conv must be 1"
conv = conv.squeeze(2)
conv = conv.permute(2, 0, 1) # [w, b, c]
# rnn features
output = self.rnn(conv)
~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
T, b, h = output.size()
output = output.view(T, b, -1)
How can I call flatten_parameters() in torchscript?And are there other ways to get rid of the warning?
Looking forward to your reply. |
st181232 | Same situation here.
And my current workaround is just to suppress the warning message.
import warnings
warnings.filterwarnings('ignore') |
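If silencing every warning is too broad, the filter can also target just this message; a small sketch:
import warnings
warnings.filterwarnings(
    'ignore',
    message='RNN module weights are not part of single contiguous chunk of memory')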
st181233 | Hi,
I tried to use the PyTorch built-in TransformerEncoder for my model training. For inference, I tried to quantize it and export it for libtorch in C++. I found a strange error when I use both quantization and torch.jit.script. However, if I only use quantization or torch.jit.script alone, it works well.
I am using PyTorch 1.5.1 and CUDA 10.2.
The error is as follows when it calls torch.jit.script(module):
RuntimeError: method cannot be used as a value: File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/activation.py", line 831 self.in_proj_weight, self.in_proj_bias, self.bias_k, self.bias_v, self.add_zero_attn, self.dropout, self.out_proj.weight, self.out_proj.bias, ~~~~~~~~~~~~~~~~~~~~ <--- HERE training=self.training, key_padding_mask=key_padding_mask, need_weights=need_weights,
You can reproduce this error easily by running the following sample code.
import torch

class MyModule(torch.nn.Module):
    def __init__(self, hidden_size, nhead):
        super(MyModule, self).__init__()
        encoder_layer = torch.nn.TransformerEncoderLayer(hidden_size, nhead=nhead,
            dim_feedforward=hidden_size, dropout=0.1, activation='gelu')
        encoder_norm = torch.nn.LayerNorm(hidden_size)
        self.encoder = torch.nn.TransformerEncoder(encoder_layer, 20, encoder_norm)
    def forward(self, x):
        out = self.encoder(x)
        return out

my_module = MyModule(1024, 8)
my_module = torch.quantization.quantize_dynamic(my_module, dtype=torch.qint8)
sm = torch.jit.script(my_module)
torch.jit.save(sm, 'test.jit.model') |
st181234 | I have some models currently in production, using torch’s C++ API and traced torchscripts, ala these instructions: https://pytorch.org/docs/master/generated/torch.jit.trace.html 2
I also have the model’s state dicts saved away.
Overall this solution works great.
The models and torchscripts were generated using pytorch 1.3
I want to update the next version of the system to pytorch 1.5
Retraining is possible, but not preferred, and certainly not a long term solution.
Is there a procedure for updating my existing models, from either the saved torchscripts or the state dicts?
Thanks |
st181235 | The easiest way is to instantiate the model in Python, load the state dict and trace the model again.
This is something to know, even though you can run the traced model without the source, you’ll regret not keeping it around at some point.
Best regards
Thomas |
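A minimal sketch of that re-export flow under the new PyTorch version; the model class and file names are placeholders:
import torch

model = MyModel()  # same architecture definition used under 1.3
model.load_state_dict(torch.load("model_state_dict.pt"))
model.eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
traced.save("model_traced_1_5.pt")  # TorchScript file produced by the 1.5 toolchain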
st181236 | Hello, As mentioned here, torchvision ops are currently not supported in torchscript. I wanted to know if there is a timeline for when torchscript will support these functions. |
st181237 | The documentation might be slightly out of date if you compile both PyTorch and torchvision yourself or take the nightlies, because it seems to work for me:
In [1]: import torch, torchvision
In [2]: @torch.jit.script
...: def fn(a, b):
...: return torchvision.ops.nms(a, b, 0.3)
...:
In [3]: fn.graph
Out[3]:
graph(%a.1 : Tensor,
%b.1 : Tensor):
%5 : Function = prim::Constant[name="nms"]()
%4 : float = prim::Constant[value=0.29999999999999999]() # <ipython-input-2-cfe32fda8960>:3:37
%6 : Tensor = prim::CallFunction(%5, %a.1, %b.1, %4) # <ipython-input-2-cfe32fda8960>:3:11
return (%6)
In [4]: fn(torch.randn(10, 4), torch.randn(10))
Out[4]: tensor([2, 8, 6, 7, 0, 9, 4, 5, 3, 1])
Best regards
Thomas |
st181238 | @tom Thanks. It is working for me now in torchscript as well. But, the NMS now fails in the TRTserver.
Unknown builtin op: torchvision::nms.
Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript. |
st181239 | Well, you might need to load the custom ops for that. I know nothing about TRTserver. |
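One hedged reading of “load the custom ops”: whatever process runs the scripted model must register torchvision’s C++ operators first, otherwise torchvision::nms is unknown to the TorchScript runtime. In Python that looks roughly like this (the library path and model file are illustrative):
import torch
import torchvision  # importing torchvision registers torchvision::nms and friends
# or, if only the compiled extension is available:
# torch.ops.load_library("/path/to/libtorchvision.so")
model = torch.jit.load("detector.pt")  # the op can now be resolved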
st181240 | I’m experimenting with a model that uses a GMM at the output, implemented using functionality from torch.distributions. Can this be visualized via tensorboard?
When calling writer.add_graph on the model, I get the following error:
/opt/conda/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py in add_graph(self, model, input_to_model, verbose)
705 if hasattr(model, 'forward'):
706 # A valid PyTorch model should have a 'forward' method
--> 707 self._get_file_writer().add_graph(graph(model, input_to_model, verbose))
708 else:
709 # Caffe2 models do not have the 'forward' method
/opt/conda/lib/python3.7/site-packages/torch/utils/tensorboard/_pytorch_graph.py in graph(model, args, verbose)
289 print(e)
290 print('Error occurs, No graph saved')
--> 291 raise e
292
293 if verbose:
/opt/conda/lib/python3.7/site-packages/torch/utils/tensorboard/_pytorch_graph.py in graph(model, args, verbose)
283 with torch.onnx.set_training(model, False): # TODO: move outside of torch.onnx?
284 try:
--> 285 trace = torch.jit.trace(model, args)
286 graph = trace.graph
287 torch._C._jit_pass_inline(graph)
/opt/conda/lib/python3.7/site-packages/torch/jit/__init__.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, _force_outplace, _module_class, _compilation_unit)
873 return trace_module(func, {'forward': example_inputs}, None,
874 check_trace, wrap_check_inputs(check_inputs),
--> 875 check_tolerance, _force_outplace, _module_class)
876
877 if (hasattr(func, '__self__') and isinstance(func.__self__, torch.nn.Module) and
/opt/conda/lib/python3.7/site-packages/torch/jit/__init__.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, _force_outplace, _module_class, _compilation_unit)
1025 func = mod if method_name == "forward" else getattr(mod, method_name)
1026 example_inputs = make_tuple(example_inputs)
-> 1027 module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, _force_outplace)
1028 check_trace_method = module._c._get_method(method_name)
1029
RuntimeError: Tracer cannot infer type of (MixtureSameFamily(
Categorical(<redacted>),
MultivariateNormal(<redacted>)))
:Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type MixtureSameFamily.
The last line clarifies things in no uncertain terms. I am curious what workarounds there are. |
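One possible workaround, not from the thread: since add_graph ultimately calls torch.jit.trace, a thin wrapper whose forward returns plain tensors pulled out of the distribution (rather than the MixtureSameFamily object itself) can be visualized instead. A sketch with an assumed wrapper name:
import torch

class GraphFriendly(torch.nn.Module):  # hypothetical wrapper
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        gmm = self.model(x)  # MixtureSameFamily produced by the real model
        # expose only tensors so the tracer can handle the outputs
        return gmm.mixture_distribution.logits, gmm.component_distribution.mean

# writer.add_graph(GraphFriendly(model), example_input)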
st181241 | Using tensorwatch (0.9.1) to draw network:
tw.draw_model(net, (1, 3, 512, 512))
PyTorch(1.5.1), torchvision(0.6.1)
RuntimeError Traceback (most recent call last)
in <module>
----> 1 tw.draw_model(net, (1, 3, 512, 512))
g:\medicine\winvenv\lib\site-packages\tensorwatch\__init__.py in draw_model(model, input_shape, orientation, png_filename)
33 def draw_model(model, input_shape=None, orientation='TB', png_filename=None): # orientation = 'LR' for landscape
34     from .model_graph.hiddenlayer import pytorch_draw_model
---> 35     g = pytorch_draw_model.draw_graph(model, input_shape)
36     return g
37
g:\medicine\winvenv\lib\site-packages\tensorwatch\model_graph\hiddenlayer\pytorch_draw_model.py in draw_graph(model, args)
33     args = torch.ones(args)
34
---> 35 dot = draw_img_classifier(model, args)
36 return DotWrapper(dot)
37
g:\medicine\winvenv\lib\site-packages\tensorwatch\model_graph\hiddenlayer\pytorch_draw_model.py in draw_img_classifier(model, dataset, display_param_nodes, rankdir, styles, input_shape)
61 try:
62     non_para_model = distiller.make_non_parallel_copy(model)
---> 63     g = SummaryGraph(non_para_model, dummy_input)
64
65     return sgraph2dot(g, display_param_nodes, rankdir, styles)
g:\medicine\winvenv\lib\site-packages\tensorwatch\model_graph\hiddenlayer\summary_graph.py in __init__(self, model, dummy_input, apply_scope_name_workarounds)
94     nodes = graph.nodes()
95 elif hasattr(jit, '_get_trace_graph'):
---> 96     trace, _ = jit._get_trace_graph(model_clone, dummy_input, _force_outplace=True)
97     graph = trace
98     nodes = graph.nodes()
g:\medicine\winvenv\lib\site-packages\torch\jit\__init__.py in _get_trace_graph(f, args, kwargs, _force_outplace, return_inputs, _return_inputs_states)
276 if not isinstance(args, tuple):
277     args = (args,)
--> 278 outs = ONNXTracedModule(f, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
279 return outs
280
g:\medicine\winvenv\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548     result = self._slow_forward(*input, **kwargs)
549 else:
--> 550     result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552     hook_result = hook(self, input, result)
g:\medicine\winvenv\lib\site-packages\torch\jit\__init__.py in forward(self, *args)
359     in_vars + module_state,
360     _create_interpreter_name_lookup_fn(),
--> 361     self._force_outplace,
362 )
363
RuntimeError: 0 INTERNAL ASSERT FAILED at ..\torch\csrc\jit\ir\alias_analysis.cpp:318, please report a bug to PyTorch. We don't have an op for aten::uniform but it isn't a special case. Argument types: Tensor, float, float, None, |
st181242 | Could you create an issue on GitHub so that we could track it, please?
Would it be possible to provide a minimal code snippet to reproduce this issue? |
st181243 | Pardon me, I cannot replicate the error in my project.
I changed the model structure and recreated the virtualenv environment, then I ran the jupyter-notebook a few seconds ago and everything worked right: the model could be visualized, displayed in jupyter-notebook, or saved in .png format on my disk.
I'm not sure whether something was wrong with the installation of pytorch or flaws existed in the model at that time.
But I'm sure torchsummary could be executed to display the network channels and sizes at any time.
As the problem has disappeared, shall I create an issue on GitHub? |
st181244 | brifuture:
As the problem disappeared, shall I create an issue on GitHub?
I don’t think it would be helpful, if you (and thus we) cannot reproduce it.
In case you are running into this issue again and can reproduce it, please create an issue. |
st181245 | Hello,
I observed in PyTorch’s source code some functions that follow the pattern below (for example this one):
def my_function(a, b, c, d=None, e=None):
if not torch.jit.is_scripting():
tens_ops = (a, b, c)
if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
return handle_torch_function(my_function , a, b, c, d=d, e=e)
# something else here
Could anybody please tell me what’s going on there? In particular, what do torch.jit.is_scripting and handle_torch_function do?
And more importantly: where can I learn those things in detail so that I can write similar custom functions myself and stop asking stupid questions like these? It seems that these are about TorchScript, but I wanted to ask just in case, before diving into that.
Thank you very much in advance for your help! |
st181246 | Solved by ptrblck in post #2
torch.jit.is_scripting() can be used as a guard for Python-only methods, which are not exportable.
Since the JIT cannot export all arbitrary Python objects, you could thus use different “paths” in your code for the Python execution and the scripting of the mdoel.
handle_torch_function sounds like … |
st181247 | torch.jit.is_scripting() can be used as a guard for Python-only methods, which are not exportable.
Since the JIT cannot export all arbitrary Python objects, you could thus use different “paths” in your code for the Python execution and the scripting of the model.
handle_torch_function sounds like a custom function definition.
f10w:
Where can I learn those things in detail so that I can myself write similar custom functions and stop asking stupid questions like these?
As these features are currently being developed, the documentation and tutorials might not be up to date, so please ask these kind of questions here.
torch.jit.is_scripting can be found in the current master docs, but might also be updated before the next release. |
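A tiny sketch of the guard pattern described above, following the structure of the documented example; the helper name is made up:
import torch

@torch.jit.unused  # the compiler may skip this Python-only helper
def python_only_path(x):
    print("running in eager mode")  # arbitrary Python the JIT does not have to compile
    return x * 2

def my_function(x):
    if not torch.jit.is_scripting():  # is_scripting() is False in eager mode, True once compiled
        return python_only_path(x)
    return x * 2

scripted = torch.jit.script(my_function)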
st181248 | My model is:
import torch.nn as nn
class Net(nn.Module):
def __init__(self, num_input, num_hidden, num_classes, dropout,
activation='tanh'):
super(Net, self).__init__()
self.dropout = nn.Dropout(dropout)
self.fc1 = nn.Linear(num_input, num_hidden)
self.fc2 = nn.Linear(num_hidden, num_classes)
if activation == 'tanh':
self.activation_f = torch.tanh
elif activation == 'relu':
self.activation_f = torch.relu
def forward(self, x):
x = self.activation_f(self.fc1(x))
x = self.dropout(x)
x = self.fc2(x)
return x
I call my model for instance as:
model = Net(14,512,2,0.2).to(device)
However once I use TorchScript as:
traced_model = torch.jit.trace(model, torch.zeros([1, 14], dtype=torch.float))
I receive the following error:
IndexError: The shape of the mask [2] at index 0 does not match the shape of the indexed tensor [1, 2] at index 0
I know that if I use model.eval() I don't receive any error, BUT I want to use my model for training and not evaluation. Does anybody know any solution or workaround for such a problem?
PS: I am using PyTorch version 1.4. |
st181249 | I am unable to reproduce this error with Python 3.8.3 and pytorch-1.4.0 from conda.
Based on your error message, it seems like your input shape might be the problem. Try using an input with shape [14] as the input during tracing:
traced_model = torch.jit.trace(model, torch.zeros([14], dtype=torch.float))
Stepping back a bit, I would advise you to use scripting (torch.jit.script) rather than tracing for this use case. The reason is that control flow is not visible to tracing and from what I remember, the backward pass for dropout predicates whether or not the incoming gradient should be backpropagated based on whether the corresponding input was passed through during the forward pass. |
st181250 | Hi, but it is not possible to take input of size [14], where is the batch dimension ([1,14])? I don’t think the dimension is the problem since once I fix the dropout rate to 0.0 then there is no error message and it works just fine. |
st181251 | I still cannot reproduce your error. Can you provide more details on your setup? There is an environment details collection script in the PyTorch repository. |
st181252 | Hi Folks,
I’m trying to export a model to C++ via the tracing method (following this guide, but with a more complex model), but running “torch.jit.trace(model, input_x)” fails with the exception “TracedModules don’t support parameter sharing between modules”.
Unfortunately, there is no indication as to which modules or which parameters are shared, and I have no idea how to find out. How can I find out where it hangs?
(FYI, the module I’m trying to export is this one: Code on GitHub)
I’m grateful for any pointer towards a solution (I also tried the annotation method but failed as well; I guess some of the model’s modules don’t support that). |
st181253 | I’m not sure about the tracing - but I was able to script it successfully with the following:
class DecoderBlockLinkNet(torch.jit.ScriptModule):
def __init__(self, in_channels, n_filters):
super().__init__()
self.relu = nn.ReLU(inplace=True)
# B, C, H, W -> B, C/4, H, W
self.conv1 = nn.Conv2d(in_channels, in_channels // 4, 1)
self.norm1 = nn.BatchNorm2d(in_channels // 4)
# B, C/4, H, W -> B, C/4, 2 * H, 2 * W
self.deconv2 = nn.ConvTranspose2d(in_channels // 4, in_channels // 4, kernel_size=4,
stride=2, padding=1, output_padding=0)
self.norm2 = nn.BatchNorm2d(in_channels // 4)
# B, C/4, H, W -> B, C, H, W
self.conv3 = nn.Conv2d(in_channels // 4, n_filters, 1)
self.norm3 = nn.BatchNorm2d(n_filters)
@torch.jit.script_method
def forward(self, x):
x = self.conv1(x)
x = self.norm1(x)
x = self.relu(x)
x = self.deconv2(x)
x = self.norm2(x)
x = self.relu(x)
x = self.conv3(x)
x = self.norm3(x)
x = self.relu(x)
return x
class UNet16(torch.jit.ScriptModule):
__constants__ = ["conv1", "conv2", "conv3", "conv4", "conv5"]
def __init__(self, num_classes=1, num_filters=32, pretrained=False):
"""
:param num_classes:
:param num_filters:
:param pretrained:
False - no pre-trained network used
True - encoder pre-trained with VGG11
"""
super().__init__()
self.num_classes = num_classes
self.pool = nn.MaxPool2d(2, 2)
self.encoder = torchvision.models.vgg16(pretrained=pretrained).features
self.relu = nn.ReLU(inplace=True)
self.conv1 = nn.Sequential(self.encoder[0],
self.relu,
self.encoder[2],
self.relu)
self.conv2 = nn.Sequential(self.encoder[5],
self.relu,
self.encoder[7],
self.relu)
self.conv3 = nn.Sequential(self.encoder[10],
self.relu,
self.encoder[12],
self.relu,
self.encoder[14],
self.relu)
self.conv4 = nn.Sequential(self.encoder[17],
self.relu,
self.encoder[19],
self.relu,
self.encoder[21],
self.relu)
self.conv5 = nn.Sequential(self.encoder[24],
self.relu,
self.encoder[26],
self.relu,
self.encoder[28],
self.relu)
self.center = DecoderBlock(512, num_filters * 8 * 2, num_filters * 8)
self.dec5 = DecoderBlock(512 + num_filters * 8, num_filters * 8 * 2, num_filters * 8)
self.dec4 = DecoderBlock(512 + num_filters * 8, num_filters * 8 * 2, num_filters * 8)
self.dec3 = DecoderBlock(256 + num_filters * 8, num_filters * 4 * 2, num_filters * 2)
self.dec2 = DecoderBlock(128 + num_filters * 2, num_filters * 2 * 2, num_filters)
self.dec1 = ConvRelu(64 + num_filters, num_filters)
self.final = nn.Conv2d(num_filters, num_classes, kernel_size=1)
@torch.jit.script_method
def forward(self, x):
conv1 = self.conv1(x)
conv2 = self.conv2(self.pool(conv1))
conv3 = self.conv3(self.pool(conv2))
conv4 = self.conv4(self.pool(conv3))
conv5 = self.conv5(self.pool(conv4))
center = self.center(self.pool(conv5))
dec5 = self.dec5(torch.cat([center, conv5], 1))
dec4 = self.dec4(torch.cat([dec5, conv4], 1))
dec3 = self.dec3(torch.cat([dec4, conv3], 1))
dec2 = self.dec2(torch.cat([dec3, conv2], 1))
dec1 = self.dec1(torch.cat([dec2, conv1], 1))
if self.num_classes > 1:
x_out = F.log_softmax(self.final(dec1), dim=1)
else:
x_out = self.final(dec1)
return x_out
model = UNet16() |
st181254 | Thanks for your answer! I think you may have posted the wrong method: the model contains DecoderBlock but you posted DecoderBlockLinkNet.
In the meantime, I was lucky to stumble upon this trick: Wrap parameters in deepcopy. I tried it and wrapped all references to self.encoder with copy.deepcopy, and now it traces and exports! Seems like the shared parameter was actually self.encoder!
I’m still verifying that the exported model is actually working as expected but am optimistic. I’ll write again soon… |
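To make the trick concrete, here is a sketch of the pattern trimmed down to two blocks; each slice of the shared VGG encoder is deep-copied so no two submodules share parameter storage (the class name is made up):
import copy
import torch.nn as nn
import torchvision

class EncoderNoSharing(nn.Module):  # illustrative subset of UNet16 above
    def __init__(self):
        super().__init__()
        encoder = torchvision.models.vgg16(pretrained=False).features
        relu = nn.ReLU(inplace=True)
        # copy.deepcopy gives each Sequential its own parameter tensors
        self.conv1 = nn.Sequential(copy.deepcopy(encoder[0]), relu,
                                   copy.deepcopy(encoder[2]), relu)
        self.conv2 = nn.Sequential(copy.deepcopy(encoder[5]), relu,
                                   copy.deepcopy(encoder[7]), relu)

    def forward(self, x):
        return self.conv2(self.conv1(x))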
st181255 | hello,in my net the deepcopy method doesn’t work,could you show me where to add deepcopy,thank you |
st181256 | i use tensorboardx to view the structure of a multi-task model. but faild with this error、、、 |
st181257 | has anyone compared the throughput of a model optimized by both jit and tensorRT? |
st181258 | It seems to depend on the specific network. The biggest speedup I’ve seen was close to three times as fast as PyTorch. The lowest was about one and a quarter times as fast. |
st181259 | Thanks for sharing. That’s not too bad. could you share which models showed such speedup? and perhaps the speedup data on some models you have tried? I would really appreciate any details.
Thanks again. |
st181260 | jit and trt are two different things.
our team are looking into pytorch for a long time. jit is front-end while trt is back-end.
Always, jit is from python. it optimizes pytorch codes and tries to merge some ops before running the forward. If you dig it, you will find jit and eager call the same op set and just little diff.
However, trt is another accelerated engine, which depends on Nvidia’s GPU. trt will fuse the ops as possible as it can. fused kernel can reduce cost of discrete kernel calls.
besides, trt has other exiting features. |
st181261 | Hi @alanzhai219,
Is it possible to optimize a model using torch2trt then scripting the optimized model using torch script/trace? If yes, Will I get better optimization? |
st181262 | It is not recommended. torch2trt is designed to help developers deploy their script/trace model in TensorRT.
In detail, script/trace just interpreters original PyTorch into IR graph and then torch2trt maps and fuses such graph in trt.
I never try the opposite flow. If you succeed, please let me know.
Thanks,
Alan Zhai |
st181263 | Hello, I’m implementing an LSTM to predict today’s stock price using the past 10 days’ close price.
Therefore, my input is [batch_size, sequence_len = 10, input_size = 1] since there is only one feature every day.
Then I set batch_size = 50 and implement a train_loader with TensorDataset and DataLoader.
My LSTM model is defined as
class LSTM(nn.Module):
def __init__(self, input_dim, hidden_dim, batch_size, output_dim, num_layers, dropout = 0.1):
super(LSTM, self).__init__()
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.batch_size = batch_size
self.num_layers = num_layers
self.dropout = dropout
self.output_dim = output_dim
# Define the LSTM layer
self.lstm = nn.LSTM(input_size = input_dim, hidden_size = hidden_dim, num_layers = num_layers, dropout = dropout, batch_first = True)
# Define the output layer, fully connected
self.linear = nn.Linear(hidden_dim, output_dim)
def init_hidden(self):
# initialise our hidden state
return (torch.zeros(self.num_layers, self.batch_size, self.hidden_dim),
torch.zeros(self.num_layers, self.batch_size, self.hidden_dim))
def forward(self, x):
# Forward pass through LSTM layer
# shape of out: [input_size, batch_size, hidden_dim]
# shape of self.hidden: (a, b), where a and b both have shape (num_layers, batch_size, hidden_dim).
out, self.hidden = self.lstm(x, self.init_hidden)
y_pred = self.linear(out[:, -1, :])
return y_pred
And when I train the model,
INPUT_SIZE = 1 # number of features
HIDDEN_SIZE = 64
NUM_LAYERS = 3
OUTPUT_SIZE = 1
learning_rate = 0.001
num_epochs = 150
model = LSTM(INPUT_SIZE, HIDDEN_SIZE, batch_size, OUTPUT_SIZE, NUM_LAYERS)
loss_fn = torch.nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr = learning_rate)
model.hidden = model.init_hidden()
model.train()
for local_batch, local_labels in train_loader:
# Transfer to GPU
local_batch = local_batch.float().to(device)
local_labels = local_labels.float().to(device)
print(local_batch.shape) # torch.Size([50, 10, 1])
print(local_labels.shape) # torch.Size([50, 1])
# Forward pass
y_pred = model(local_batch)
loss = loss_fn(y_pred, local_labels)
optimiser.zero_grad()
# Backward pass
loss.backward()
# Update parameters
optimiser.step()
I got this error in y_pred = model(local_batch) at /usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py in check_forward_args(self, input, hidden, batch_sizes):
-> 522 self.check_hidden_size(hidden[0], expected_hidden_size,
523 'Expected hidden[0] size {}, got {}')
TypeError: ‘method’ object is not subscriptable
I’m new to LSTM, any hints are appreciable! |
st181264 | Solved by tom in post #2
this passes the init_hidden method itself; you likely want init_hidden() instead. (Why you would need to do this when a zero initial hidden state is implicit in PyTorch, I don't know.)
Best regards
Thomas |
st181265 | Kieran:
out, self.hidden = self.lstm(x, self.init_hidden)
this passes the init_hidden method itself; you likely want init_hidden() instead. (Why you would need to do this when a zero initial hidden state is implicit in PyTorch, I don't know.)
Best regards
Thomas |
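In code, the fix inside forward is just adding the call parentheses:
# before: out, self.hidden = self.lstm(x, self.init_hidden)   # passes the bound method itself
out, self.hidden = self.lstm(x, self.init_hidden())  # passes the (h_0, c_0) tuple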
st181266 | Having a custom C++ operation it’s possible to register a symbolic function for it using
register_op(name, symbolic, namespace, opset)
With a custom Python operation it's possible to wrap it with an autograd.Function having a static symbolic method, but that is much less flexible in terms of supporting different opsets. Is there a way to register a symbolic for a Python operation in the same way as it's done for C++ ones? |
st181267 | PyTorch 1.5.0
Hello, I am using PyTorch to minimise a cost function. No training involved. I was wondering whether using TorchScript rather than pure Python could have led to a speed up. However, I am not sure whether it is possible to back propagate in TorchScript.
Let us consider the following toy class:
class Example(nn.Module):
def forward(self, x):
y = torch.tensor([0], dtype=x.dtype)
y.requires_grad = True
return y
If I call example_scripted = torch.jit.script(Example()), I get the following error:
RuntimeError:
Tried to set an attribute: grad on a non-class: Tensor:
The issue seems the requires_grad. I am wondering whether it is possible to use TorchScript to back propagate. Is it?
Please see also https://github.com/pytorch/pytorch/issues/40561
Thank you. |
st181268 | Solved by googlebot in post #2
Yeah, backprop works, but some python code won’t compile. In your (contrived) case, tensor(…).requires_grad_(True) would compile, but you don’t usually need to be explicit about requires_grad, as it propagates with operations, and there is also .detach(). |
st181269 | Yeah, backprop works, but some python code won’t compile. In your (contrived) case, tensor(…).requires_grad_(True) would compile, but you don’t usually need to be explicit about requires_grad, as it propagates with operations, and there is also .detach(). |
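For completeness, a version of the toy module that does compile, following the suggestion above; it assumes x has a floating-point dtype so the new leaf can require gradients:
import torch
import torch.nn as nn

class Example(nn.Module):
    def forward(self, x):
        # requires_grad_() is scriptable, unlike assigning to the .requires_grad attribute
        y = torch.tensor([0], dtype=x.dtype).requires_grad_(True)
        return y

example_scripted = torch.jit.script(Example())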
st181270 | Yes, you can train - see this book:
pytorch.org
PyTorch: An open source deep learning platform that provides a seamless path from research prototyping to production deployment. |
st181271 | Here is my model definition:
class ResNetTC8(nn.Module):
def __init__(self, n_classes, n_channels, n_mfcc):
super().__init__()
conv_size = (9, 1)
self.conv0 = nn.Conv2d(n_mfcc, n_channels[0], (3, 1), padding=(1, 0), bias=False)
self.conv1 = nn.Conv2d(n_channels[0], n_channels[1], conv_size, padding=((101+8)//2, 0), bias=False, stride=2)
self.conv2 = nn.Conv2d(n_channels[1], n_channels[1], conv_size, padding=(9//2, 0), bias=False)
self.skip_conv1 = nn.Conv2d(n_channels[0], n_channels[1], 1, padding=((101)//2, 0), bias=False, stride=2)
self.bn1 = nn.BatchNorm2d(n_channels[1])
self.bn2 = nn.BatchNorm2d(n_channels[1])
self.skip_bn1 = nn.BatchNorm2d(n_channels[1])
self.conv3 = nn.Conv2d(n_channels[1], n_channels[2], conv_size, padding=((101+8)//2, 0), bias=False, stride=2)
self.conv4 = nn.Conv2d(n_channels[2], n_channels[2], conv_size, padding=(9//2, 0), bias=False)
self.skip_conv2 = nn.Conv2d(n_channels[1], n_channels[2], 1, padding=((101)//2, 0), bias=False, stride=2)
self.bn3 = nn.BatchNorm2d(n_channels[2])
self.bn4 = nn.BatchNorm2d(n_channels[2])
self.skip_bn2 = nn.BatchNorm2d(n_channels[2])
self.conv5 = nn.Conv2d(n_channels[2], n_channels[3], conv_size, padding=((101+8)//2, 0), bias=False, stride=2)
self.conv6 = nn.Conv2d(n_channels[2], n_channels[3], conv_size, padding=(9//2, 0), bias=False)
self.skip_conv3 = nn.Conv2d(n_channels[2], n_channels[3], 1, padding=((101)//2, 0), bias=False, stride=2)
self.bn5 = nn.BatchNorm2d(n_channels[3])
self.bn6 = nn.BatchNorm2d(n_channels[3])
self.skip_bn3 = nn.BatchNorm2d(n_channels[3])
self.avg = nn.AvgPool2d((101, 1))
self.dropout = nn.Dropout()
self.output = nn.Linear(n_channels[3], 36)
def forward(self, x):
x = x.reshape([-1, x.shape[2], x.shape[1], 1])
x = self.conv0(x)
y0 = self.bn1(F.relu(self.conv1(x)))
x = self.bn2(F.relu(self.conv2(y0))) + self.skip_bn1(F.relu(self.skip_conv1(x)))
y1 = self.bn3(F.relu(self.conv3(x)))
x = self.bn4(F.relu(self.conv4(y1))) + self.skip_bn2(F.relu(self.skip_conv2(x)))
y2 = self.bn5(F.relu(self.conv5(x)))
x = self.bn6(F.relu(self.conv6(y1))) + self.skip_bn3(F.relu(self.skip_conv3(x)))
x = self.dropout(self.avg(x)).squeeze()
return self.output(x)
Model trains without error and scripting yields no errors in Python. However, running using Libtorch 1.5 on Xcode yields the following error:
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/torch/nn/modules/module/___torch_mangle_67.py", line 55, in forward
input = torch.reshape(x, [-1, int(_22), int(_23), 1])
_24 = (_21).forward(input, )
input0 = torch.relu((_20).forward(_24, ))
~~~~~~~~~~~~ <--- HERE
_25 = (_18).forward((_19).forward(input0, ), )
input1 = torch.relu(_25)
File "code/__torch__/torch/nn/modules/module/___torch_mangle_46.py", line 8, in forward
def forward(self: __torch__.torch.nn.modules.module.___torch_mangle_46.Module,
argument_1: Tensor) -> Tensor:
input = torch._convolution(argument_1, self.weight, None, [2, 2], [54, 0], [1, 1], False, [0, 0], 1, False, False, True)
~~~~~~~~~~~~~~~~~~ <--- HERE
return input
Seems like it’s getting stuck on the second convolution layer, but no idea why. |
st181272 | Hello, everyone.
I have to use spconv.conv.py.
I got an error in this place:
Traceback (most recent call last):
File "trace_smallnet.py", line 27, in <module>
sm = torch.jit.script(net_middle)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1162, in script
return _convert_to_script_module(obj)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1812, in _convert_to_script_module
return WeakScriptModuleProxy(mod, stubs)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1386, in init_then_register
original_init(self, *args, **kwargs)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1736, in __init__
_create_methods_from_stubs(self, stubs)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1347, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 982, in _make_strong_submodule
new_strong_submodule = _convert_to_script_module(module)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1812, in _convert_to_script_module
return WeakScriptModuleProxy(mod, stubs)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1386, in init_then_register
original_init(self, *args, **kwargs)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1736, in __init__
_create_methods_from_stubs(self, stubs)
File "/home/liangdao/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1347, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
RuntimeError:
Arguments for call are not valid.
The following operator variants are available:
aten::_set_item(Tensor l, int idx, Tensor(b) el) -> (Tensor):
Expected a value of type ‘List[Tensor]’ for argument ‘l’ but instead found type ‘Dict[str, Tensor]’.
aten::_set_item(int l, int idx, int el) -> (int):
Expected a value of type ‘List[int]’ for argument ‘l’ but instead found type ‘Dict[str, Tensor]’.
aten::_set_item(float l, int idx, float el) -> (float):
Expected a value of type ‘List[float]’ for argument ‘l’ but instead found type ‘Dict[str, Tensor]’.
aten::_set_item(bool l, int idx, bool el) -> (bool):
Expected a value of type ‘List[bool]’ for argument ‘l’ but instead found type ‘Dict[str, Tensor]’.
aten::_set_item(t l, int idx, t(b) el) -> (t):
Could not match type Dict[str, Tensor] to List[t] in argument ‘l’: Cannot match List[t] to Dict[str, Tensor].
aten::_set_item(Dict(str, t)(a) l, str idx, t(b) v) -> ():
Could not match type Tuple[Tensor, Tensor, Tensor, Tensor, Tensor] to t in argument ‘v’: Type variable ‘t’ previously matched to type Tensor is matched to type Tuple[Tensor, Tensor, Tensor, Tensor, Tensor].
aten::_set_item(Dict(int, t)(a) l, int idx, t(b) v) -> ():
Expected a value of type ‘Dict[int, t]’ for argument ‘l’ but instead found type ‘Dict[str, Tensor]’.
aten::_set_item(Dict(float, t)(a) l, float idx, t(b) v) -> ():
Expected a value of type ‘Dict[float, t]’ for argument ‘l’ but instead found type ‘Dict[str, Tensor]’.
The original call is:
at /home/liangdao/.local/lib/python3.6/site-packages/spconv/conv.py:320:13
self.output_padding,
self.subm,
self.transposed,
use_hash=self.use_hash)
outids = temp__[0]
indice_pairs = temp__[1]
indice_pair_num = temp__[2]
indice_dict[self.indice_key] = (outids, indices,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... <--- HERE
indice_pairs,
indice_pair_num,
spatial_shape)
print(indice_dict[self.indice_key])
if self.fused_bn:
assert self.bias is not None
out_features = ops.fused_indice_conv(
features, self.weight, self.bias, indice_pairs.to(device),'__module__.__torch__.torchplus.tools.DefaultArgLayer.forward' is being compiled since it was called from '__module__.__torch__.models.middle.SpMiddleFHD.forward'
at /home/liangdao/second.pytorch/impl_DL/py_cxx_integration/py_scripts/sample_model/models/middle.py:119:9
def forward(self, voxel_features, coors):
# coors[:, 1] += 1
coors = coors.int()
ret = spconv.SparseConvTensor(voxel_features, coors,self.sparse_shape,self.batch_size)
#print(self.time_process)
ret = self.SubMConv3d_0(ret,ret.spatial_shape,ret.indice_dict,self.time_process,ret.indices)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
ret.features = F.relu(self.BatchNorm1d_16(ret.features))
self.time_process += 1
#print(self.time_process)
ret = self.SubMConv3d_00(ret,ret.spatial_shape,ret.indice_dict,self.time_process,ret.indices)
ret.features = F.relu(self.BatchNorm1d_16(ret.features))
self.time_process += 1
#print(self.time_process)
ret = self.SpConv3d_0(ret,ret.spatial_shape,ret.indice_dict,self.time_process,ret.indices)
ret.features = F.relu(self.BatchNorm1d_32(ret.features))
Can somebody help me to solve this issue? |
st181273 | While reading about the optimizations that can be made using torchscript, I came across the claim that fusing broadcasting operations can increase execution time. I am unable to understand the reason behind it.
Can anyone please help me understand this? |
st181274 | Hi,
I am trying to use libtorch in a C++ application that’s restricted to the gcc-4.8 ABI, which forces me to link against libtorch-1.2.0 (I was told newer version of libtorch aren’t compatible with that ABI).
This application needs to load a pre-trained model. That model was trained and saved from a python script that needs to use pytorch-1.3.0 or newer.
Trying to load the serialized TorchScript model aborts the program with this error:
terminate called after throwing an instance of 'c10::Error'
what(): [enforce fail at inline_container.cc:137] . PytorchStreamReader failed closing reader: file not found
This happens both from the C++ and the python API. Is this a compatibility issue?
This is easily reproducible: from an environment with either pytorch-1.3.1 or 1.5.0, run
import torch
class TestModule(torch.nn.Module):
def __init__(self, N, M):
super().__init__()
self.weight = torch.nn.Parameter(torch.rand(N, M))
def forward(self, x):
return self.weight.mv(x)
my_module = TestModule(10,20)
test_module = torch.jit.script(my_module)
test_module.save("test_module.pt")
Then from an environment with pytorch-1.2.0:
import torch
m = torch.jit.load('test_module.pt')
If the file was written from 1.5.0, I get
RuntimeError: version_number <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at ../caffe2/serialize/inline_container.cc:131, please report a bug to PyTorch. Attempted to read a PyTorch file with version 2, but the maximum supported version for reading is 1. Your PyTorch installation may be too old.
which is clearly a compatibility issue. But if the file was written from 1.3.1, I get the “file not found” error above (note this is a process abort, not an exception, from both C++ and python API).
I could not find documentation on TorchScript file compatibility across versions. Is that information available somewhere?
Thanks,
A. |
st181275 | Hi, we maintain backwards compatibility but not forwards compatiblity. New versions will work with older models, but older versions will not work with newer models. |
st181276 | Hi all,
I’ve run into some behavior that I can’t explain. Below is a snippet of code that was lifted from this discussion: https://github.com/pytorch/pytorch/issues/24235 3.
What I’ve found is that if bias=False in testSubMod then the compilation runs with no problem and outputs test.onnx as a file but if bias=True then the compilation fails with the stack trace shown below.
I’m pretty new to working with ONNX and TorchScript so any insights would be really appreciated!
Thanks,
Zach
import torch
import onnx
class testSubMod(torch.nn.Module):
def __init__(self, rnn_dims=32):
super().__init__()
self.lin = torch.nn.Linear(32, 32, bias=True) # < This, this bit here!
def forward(self, out):
for _ in torch.arange(8):
out = self.lin(out)
return out
class test(torch.nn.Module):
def __init__(self, rnn_dims=32):
super().__init__()
self.submod = torch.jit.script(testSubMod())
def forward(self, x):
out = torch.ones(
[
x.size(0),
x.size(1)
],
dtype=x.dtype,
device=x.device
)
return self.submod(out)
if __name__=='__main__':
model = test()
model = torch.jit.script(model)
input_data = torch.ones((32, 32, 32)).float()
output = model(input_data)
torch.onnx.export(model, input_data,
'test.onnx', example_outputs=output)
onnx_model = onnx.load("test.onnx")
print(onnx.helper.printable_graph(onnx_model.graph))
/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/utils.py:617: UserWarning: ONNX export failed on ATen operator dim because torch.onnx.symbolic_opset9.dim does not exist
.format(op_name, opset_version, op_name))
Traceback (most recent call last):
File "compile.py", line 38, in <module>
'test.onnx', example_outputs=output)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/__init__.py", line 143, in export
strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/utils.py", line 66, in export
dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/utils.py", line 382, in _export
fixed_batch_size=fixed_batch_size)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/utils.py", line 262, in _model_to_graph
fixed_batch_size=fixed_batch_size)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/utils.py", line 132, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/__init__.py", line 174, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/utils.py", line 644, in _run_symbolic_function
torch._C._jit_pass_onnx_block(b, new_block, operator_export_type, env)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/__init__.py", line 174, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/utils.py", line 618, in _run_symbolic_function
op_fn = sym_registry.get_registered_op(op_name, '', opset_version)
File "/home/USERNAME/anaconda3/envs/rain_man/lib/python3.7/site-packages/torch/onnx/symbolic_registry.py", line 91, in get_registered_op
return _registry[(domain, version)][opname]
KeyError: 'dim' |
st181277 | TorchScript Errors are usually very informative.
EMPTY_FLOAT = torch.zeros(0).float()
class Bug(nn.Module):
def __init__(self):
super(Bug, self).__init__()
def forward(self,
my_bool: bool,
x=EMPTY_FLOAT):
if my_bool:
y = x
else:
y = EMPTY_FLOAT
Then run
bug = Bug()
bug_script = torch.jit.script(bug)
Then comes the compile time error
RuntimeError:
python value of type 'Tensor' cannot be used as a value:
File "<ipython-input-310-549c5631a7a3>", line 15
if my_bool:
~~~~~~~~~~~... <--- HERE
y = x
The problem seems to be EMPTY_FLOAT if I use torch.zeros(0).float() instead it compiles correctly. |
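One workaround for this class of error is to avoid a module-level tensor as a default argument and take an Optional input instead — a hedged sketch, not the poster's exact model:
import torch
import torch.nn as nn
from typing import Optional

class BugFree(nn.Module):
    def forward(self, my_bool: bool, x: Optional[torch.Tensor] = None):
        # Build the empty tensor inside forward instead of referencing a global.
        if x is None:
            x = torch.zeros(0).float()
        if my_bool:
            y = x
        else:
            y = torch.zeros(0).float()
        return y

bug_free_script = torch.jit.script(BugFree())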
st181278 | Very strange, do you mind filing an issue on GitHub and we can follow up there? Thanks! |
st181279 | Hi
I have some problem converting my pytorch model into onnx. My input to the model is a list(since I define my own data loader collate_fn), where length of the list is the batch size and each element in the list is a custom tuple of elements. My output is a tensor with shape: batchsize x height x width
The code converting model to onnx:
# Export the model
torch.onnx.export(model, # model being run
cuda(X), # model input (or a tuple for multiple inputs)
“final.onnx”, # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=9, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = [‘input’], # the model’s input names
output_names = [‘pred_traj_tensor’], # the model’s output names
dynamic_axes={‘input’ : {0 : ‘batch_size’}, # variable lenght axes
‘pred_traj_tensor’ : {0 : ‘batch_size’}}, verbose=True)
Runtime Error:
File “/home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/onnx/utils.py”, line 144, in _optimize_graph
torch._C._jit_pass_onnx_peephole(graph, _export_onnx_opset_version, fixed_batch_size)
RuntimeError: ArrayRef: invalid index Index = 0; Length = 0 (at at /opt/conda/conda-bld/pytorch_1579022034529/work/c10/util/ArrayRef.h:197)
Really don’t know how to debug further. Any help is greatly appreciated !
Here is the detailed stack trace.
Traceback (most recent call last):
File "learning_algorithms/prediction/datasets/apollo_vehicle_trajectory_dataset/convert_onnx.py", line 113, in
'pred_traj_tensor' : {0 : 'batch_size'}})
File "/home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/onnx/__init__.py", line 148, in export
strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
File "/home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/onnx/utils.py", line 66, in export
dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
File "/home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/onnx/utils.py", line 418, in _export
fixed_batch_size=fixed_batch_size)
File "/home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/onnx/utils.py", line 298, in _model_to_graph
fixed_batch_size=fixed_batch_size, params_dict=params_dict)
File "/home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/onnx/utils.py", line 144, in _optimize_graph
torch._C._jit_pass_onnx_peephole(graph, _export_onnx_opset_version, fixed_batch_size)
RuntimeError: ArrayRef: invalid index Index = 0; Length = 0 (at at /opt/conda/conda-bld/pytorch_1579022034529/work/c10/util/ArrayRef.h:197)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7fb2baa2c627 in /home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: + 0x6f484b (0x7fb2ed63184b in /home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #2: + 0x7003ec (0x7fb2ed63d3ec in /home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #3: torch::jit::PeepholeOptimizeONNX(std::shared_ptr&, int, bool) + 0x125 (0x7fb2ed642985 in /home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: + 0x6b496c (0x7fb2ed5f196c in /home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #5: + 0x28c076 (0x7fb2ed1c9076 in /home/weide/anaconda3/envs/fuel-py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #36: __libc_start_main + 0xe7 (0x7fb30ac9ab97 in /lib/x86_64-linux-gnu/libc.so.6) |
st181280 | Hi @weidezhang, seems like an ONNX bug. I would suggest raising an issue on the pytorch repo. |
st181281 | Hi @eellison, could you help me to see who is able to look at the issue on github ? I have updated the details(model and test script) on github. |
st181282 | I have already created this 2 issue.
Many times we want to know if we are in the middle of exporting to ONNX to select a different path (say MemoryEfficientSwish vs Swish). Passing a boolean around all the time is not a great idea. So far I have been using torch._C._get_tracing_state() but it is private API.
I just noticed there is torch.onnx.is_in_onnx_export() . Is it equivalent to torch._C._get_tracing_state() ? |
st181283 | I was trying to run this code section and with my limited understanding of jit, thought I can get a performance improvement on the function.
@torch.jit.script
def nms_pytorch2(dets, thresh):
overlap = bbox_overlap(dets, dets)
treshold_matrix = torch.tril((overlap > thresh), diagonal=-1)
# Tensor elements indicate whether box should be kept
is_maximum = treshold_matrix.new_ones(dets.shape[0])
# loop over all boxes with highest confidence in the scene
# Apply this vectorized over all boxes in the batch.
for box in treshold_matrix.unbind(-1):
# Disable all other boxes in the same scene if the current box is not
# disabled.
is_maximum = is_maximum & ~box
# Also disable the overlaps of boxes which getting disabled right now.
treshold_matrix &= ~box.unsqueeze(-2)
return is_maximum
However, I got the following error and am not sure how to handle it
torch.jit.frontend.NotSupportedError: unsupported kind of augumented assignment: BitAnd:
# Also disable the overlaps of boxes which getting disabled right now.
treshold_matrix &= ~box.unsqueeze(-2)
~~ <--- HERE
return is_maximum
Your help would be really appreciated |
st181284 | Just figured the issue out, the operations need to be non in-place. Just putting a temp variable and assigning value of this temp variable in the next line solved the issue for me. |
st181285 | I noticed some annoying differences between nn.Module and ScriptModule
torch.__version__ = 1.4.0
Example:
class DataClass(nn.Module):
my_data: List[int]
def __init__(self):
super(DataClass, self).__init__()
self.my_data = []
def forward(self, new: int):
self.my_data.append(new)
return self.my_data
nn.Module:
dat = DataClass()
dat(0)
dat.my_data.append(1)
print(dat.my_data)
>>> [0, 1]
ScriptModule
dat = DataClass()
script_dat = torch.jit.script(dat)
script_dat(0)
script_dat.my_data.append(1)
print(script_dat.my_data)
>>> [0] |
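If the goal is to keep mutating the list that the compiled code actually uses, one option is to route the mutation through a scripted call rather than the Python-side attribute — a small sketch under that assumption:
script_dat = torch.jit.script(DataClass())
script_dat(0)         # appends inside the compiled forward
script_dat(1)
print(script_dat(2))  # [0, 1, 2]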
st181286 | I was having trouble with memory usage of my traced model in C++, and I discovered that .eval() doesn’t change the requires_grad for the parameters in my ScriptModule. Is this intended behaviour? I can say that as a user this was not expected behaviour. As a user I would like it to work as I expect, to warn, or to raise. Given that I can manually set the requires_grad behaviour, it seems like my expected behaviour is possible?
I think the underlying cause is that my_script_module.layer is a RecursiveScriptModule and has no .children().
PyTorch 1.5
import torch
class MyScriptModule(torch.jit.ScriptModule):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(1, 1, bias=False)
my_script_module = MyScriptModule()
# [True]
print([p.requires_grad for p in my_script_module.parameters()])
my_script_module.eval()
# [True] :(
print([p.requires_grad for p in my_script_module.parameters()])
for p in my_script_module.parameters():
p.requires_grad = False
# [False] :)
print([p.requires_grad for p in my_script_module.parameters()])
I know this isn't a totally normal thing to be doing. For what it's worth, I am subclassing ScriptModule in this way so that I can do the following. Maybe I should do something differently?
class MyModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.inner = MyScriptModule()
def forward(self, x):
# stuff
x = self.inner(x)
# more stuff
return x
class MyScriptModule(torch.jit.ScriptModule):
""" Psuedo-code """
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(1, 1, bias=False)
@torch.jit.script_method
def forward(self, x):
out = torch.zeros_like(x)
for i in range(x.size()[0]):
out = self.layer(x)
return out
my_module = MyModule()
my_module.eval()
torch.jit.trace(my_module, sample) |
st181287 | alekseynp:
I discovered that .eval() doesn’t change the requires_grad for the parameters in my ScriptModule. Is this intended behaviour?
Yes, this is expected and also won’t change the requires_grad attributes of an “eager” model.
model.eval() and model.train() change the internal self.training flag of all modules recursively starting from the parent module. By doing so, the behavior of some layers will be changed.
E.g. dropout will be disabled and batchnorm layers will use their running statistics to normalize the incoming data instead of calculating the batch statistics.
If you want to freeze the parameters, you would have to set their .requires_grad attribute to False.
You could of course use both in combination, e.g. freeze all parameters, but leave the dropout layers enabled, or let all parameters train, but use the running stats of batchnorm layers. |
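A minimal illustration of using the two switches together (generic eager model, not the ScriptModule from the question):
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(8, 8),
    torch.nn.Dropout(p=0.5),
    torch.nn.BatchNorm1d(8),
)

for p in model.parameters():
    p.requires_grad = False  # freeze parameters so no gradients are tracked

model.eval()  # independently switch dropout/batchnorm to evaluation behavior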
st181288 | Mainly looking for information about aten and prim operators. Is there documentation to look at all of them? What is the total number of them? It seems there’s a C API to use them directly but is there a Python API as well? Basically I want to be able to build a torchscript graph node by node with specific operators. |
st181289 | The aten and prim namespaces are considered internal details so we don’t really have user-facing documentation explaining them. The closest (for aten) is maybe https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/README.md 2.
There is are Python bindings to the PyTorch IR in (https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/python/python_ir.cpp 2), but once again they are considered internal. |
st181290 | Hi, I’m currently trying to JIT compile CenterTrack with either tracing or scripting.
CenterTrack uses DCNv2 1. I’ve found some resources that mentioned I needed to fix a couple things first:
change code that uses Python integers instead of tensors. Got that from: https://github.com/xi11xi19/CenterNet2TorchScript 4
possibly update to torch 1.5 by making code changes described here: https://github.com/CharlesShang/DCNv2/pull/58 2 (applied these to the above DCNv2 version instead of the original version)
Now I’ve removed some of the initial errors I got when trying to trace, but am stuck the current ones.
When I try to trace:
# Try trace
input_var = torch.zeros([1, 3, 736, 1280], dtype=torch.float32).cuda()
try:
traced_script_module = torch.jit.trace(detector.model, input_var)
except Exception as e:
print(f'Exception (trace): {e}')
The only thing that is returned is:
Exception (trace): Only tensors, lists and tuples of tensors can be output from traced functions
And when I instead try torch script I have two issues, a small one being requirements of default values for tensors, but a larger one being heavy use of *args as inputs throughout the codebase.
I suspect that the torch trace issue comes from some forward() methods returning dictionaries. Is that not supported in v1.5? Is there a recommended alternative? And also is there a way to find out where in my python code the error “Only tensors, lists and tuples of tensors can be output from traced functions” refers to?
Any help is greatly appreciated! |
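Regarding the dict-output suspicion: if a forward does return a dict, that would explain the message, since tracing here only accepts tensor, list and tuple outputs, while scripting accepts dict returns. A tiny sketch of the difference (toy module, not CenterTrack):
import torch

class ReturnsDict(torch.nn.Module):
    def forward(self, x):
        return {"out": x * 2}

m = ReturnsDict()
x = torch.randn(1, 3)

# Tracing this module fails in versions that only allow tensor/list/tuple
# outputs; scripting the same module works:
scripted = torch.jit.script(m)
print(scripted(x))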
st181291 | Hi, may I know where I can find a list of JIT-supported or not supported operations in Pytorch please? |
st181292 | supported PyTorch stuff: https://pytorch.org/docs/master/jit_builtin_functions.html#builtin-functions 8
unsupported PyTorch stuff: https://pytorch.org/docs/master/jit_unsupported.html#jit-unsupported 9 |
st181293 | Thank you so much. But I can’t find SyncBatchNorm in either page. Is there something wrong? |
st181294 | I have currently started learning torchscript and tried to visualize the optimized graph but I am unsuccessful. The following is my code
@torch.jit.script
def cell_end(ingate, forgetgate, cellgate, outgate, cx):
ingate = torch.sigmoid(ingate)
forgetgate = torch.sigmoid(forgetgate)
cellgate = torch.tanh(cellgate)
outgate = torch.sigmoid(outgate)
cy = (forgetgate * cx) + (ingate * cellgate)
hy = outgate * torch.tanh(cy)
return hy, cy
For the above function i got the following graph
graph(%ingate.1 : Tensor,
%forgetgate.1 : Tensor,
%cellgate.1 : Tensor,
%outgate.1 : Tensor,
%cx.1 : Tensor):
%19 : int = prim::Constant[value=1]()
%ingate.3 : Tensor = aten::sigmoid(%ingate.1) # <ipython-input-2-8da29633480c>:3:13
%forgetgate.3 : Tensor = aten::sigmoid(%forgetgate.1) # <ipython-input-2-8da29633480c>:4:17
%cellgate.3 : Tensor = aten::tanh(%cellgate.1) # <ipython-input-2-8da29633480c>:5:15
%outgate.3 : Tensor = aten::sigmoid(%outgate.1) # <ipython-input-2-8da29633480c>:6:14
%15 : Tensor = aten::mul(%forgetgate.3, %cx.1) # <ipython-input-2-8da29633480c>:8:10
%18 : Tensor = aten::mul(%ingate.3, %cellgate.3) # <ipython-input-2-8da29633480c>:8:30
%cy.1 : Tensor = aten::add(%15, %18, %19) # <ipython-input-2-8da29633480c>:8:10
%23 : Tensor = aten::tanh(%cy.1) # <ipython-input-2-8da29633480c>:9:19
%hy.1 : Tensor = aten::mul(%outgate.3, %23) # <ipython-input-2-8da29633480c>:9:9
%27 : (Tensor, Tensor) = prim::TupleConstruct(%hy.1, %cy.1)
return (%27)
Running with an input
inp = torch.randn(5, 10, 4)
cell_end(*inp)
Even after running the input over the graph I got the same graph
graph(%ingate.1 : Tensor,
%forgetgate.1 : Tensor,
%cellgate.1 : Tensor,
%outgate.1 : Tensor,
%cx.1 : Tensor):
%5 : int = prim::Constant[value=1]()
%ingate.3 : Tensor = aten::sigmoid(%ingate.1) # <ipython-input-2-8da29633480c>:3:13
%forgetgate.3 : Tensor = aten::sigmoid(%forgetgate.1) # <ipython-input-2-8da29633480c>:4:17
%cellgate.3 : Tensor = aten::tanh(%cellgate.1) # <ipython-input-2-8da29633480c>:5:15
%outgate.3 : Tensor = aten::sigmoid(%outgate.1) # <ipython-input-2-8da29633480c>:6:14
%10 : Tensor = aten::mul(%forgetgate.3, %cx.1) # <ipython-input-2-8da29633480c>:8:10
%11 : Tensor = aten::mul(%ingate.3, %cellgate.3) # <ipython-input-2-8da29633480c>:8:30
%cy.1 : Tensor = aten::add(%10, %11, %5) # <ipython-input-2-8da29633480c>:8:10
%13 : Tensor = aten::tanh(%cy.1) # <ipython-input-2-8da29633480c>:9:19
%hy.1 : Tensor = aten::mul(%outgate.3, %13) # <ipython-input-2-8da29633480c>:9:9
%15 : (Tensor, Tensor) = prim::TupleConstruct(%hy.1, %cy.1)
return (%15)
I don't know the reason behind it. Can anyone please help me with this? |
st181295 | For example:
I use the static quantization tutorial and generate a scripted quantized model.
https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 23
Since the quantized model is different from the float model owing to quantizing (combining conv+bn into conv and so on). So I can’t obtain the .py file descripting the quantized model after quantization. Therefore I would like to know how to generate caffe prototxt which draws the whole network? |
st181296 | Can you elaborate on your higher-level goals a little bit? We don’t have any direct way to produce a caffe2 model from a PyTorch model, but you can see a description of the compiled model like so
model = torch.jit.load(model_file)
print(model.graph) |
st181297 | What about exporting to caffe2 in an indirect way? Is it possible to somehow use the scale/zero_point and get the same outputs as in PyTorch? |
st181298 | Hi,
I am trying to train on linux (python) and do inference on windows with c++ static lib application.
When calling torch::jit::script::Module::Forward(), following error occurs.
The application with dll does not fail.
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/Model.py", line 37, in forward
_19 = (_6).forward((_7).forward((_8).forward(_18, ), ), )
input0 = torch.cat([(_5).forward(_19, ), _15], 1)
_20 = (_3).forward((_4).forward(input0, ), )
~~~~~~~~~~~ <--- HERE
_21 = (_2).forward((_14).forward2(_20, ), )
return (_0).forward((_1).forward(_21, ), )
File "code/__torch__/CompactModel.py", line 36, in forward
_18 = (_9).forward((_10).forward(_17, ), )
_19 = (_6).forward((_7).forward((_8).forward(_18, ), ), )
input0 = torch.cat([(_5).forward(_19, ), _15], 1)
~~~~~~~~~ <--- HERE
_20 = (_3).forward((_4).forward(input0, ), )
_21 = (_2).forward((_14).forward2(_20, ), )
Traceback of TorchScript, original code (most recent call last):
/docker_share/source/Model.py(193): forward
/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py(534): _slow_forward
/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py(548): __call__
/usr/local/lib/python3.8/site-packages/torch/jit/__init__.py(1027): trace_module
/usr/local/lib/python3.8/site-packages/torch/jit/__init__.py(873): trace
./source/jitrace.py(700): <module>
RuntimeError: error in LoadLibraryA
The error message changed from “LoadLibraryA” to “GetProcAddress” when I placed “caffe2_nvrtc.dll” next to the application. caffe2_nvrtc.dll is created under build/bin and caffe2_nvrtc.lib is not created.
Is caffe2_nvrtc.dll related this problem ?
Do you have any suggestion ?
Thank you. |
st181299 | There are NO DLLs in my application dir.
The cuda and cudnn libraries are in the $PATH.
Although I do not want to use dynamic link library,
application succeeded by below.
link official distributed “caffe2_nvrtc.lib”
place “caffe2_nvcrt.dll” next to the application.
So, to succeed with static library, I tried below and it didn’t work.
change CMakeFiles.txt under “caffe2” directory as below and run build.
565c565
< add_library(caffe2_nvrtc SHARED ${ATen_NVRTC_STUB_SRCS})
---
> add_library(caffe2_nvrtc ${ATen_NVRTC_STUB_SRCS})
caffe2_nvrtc.lib is created (and caffe2_nvrtc.dll is not).
link it to the app.
But it does work if “caffe2_nvrtc.dll” (the official one) is placed next to the application.
The difference between the loaded DLLs when the application succeeds or fails are :
caffe2_nvrtc.dll
nvrtc64_101_0.dll
So what should I do to succeed with a static caffe2_nvrtc.lib? |