st181500 | Hi,
I already solved the problem, you were right. I accidentally passed a float as batch size, which caused this error. |
st181501 | Hi! How should I declare an optional BatchNorm submodule in a valid TorchScript way?
class Downsample(nn.Module):
def __init__(
self,
in_channels: int,
out_channels: int,
kernel_size: int,
apply_batchnorm: bool = False,
):
super(Downsample, self).__init__()
self.conv = nn.Conv2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
stride=2,
bias=False,
padding=1,
)
self.apply_batchnorm = apply_batchnorm
if self.apply_batchnorm:
self.bn = nn.BatchNorm2d(out_channels)
def forward(self, x):
x = self.conv(x)
if self.apply_batchnorm:
x = self.bn(x)
x = F.leaky_relu(x, 0.3)
return x
When trying to run torch.jit.script(Downsample(1, 2, 3)) I get the following error.
RuntimeError:
Module 'Downsample' has no attribute 'bn' :
File "path/to/file.py", line 100
x = self.conv(x)
if self.apply_batchnorm:
x = self.bn(x)
~~~~~~~ <--- HERE
x = F.leaky_relu(x, 0.3)
return x
I understand that in TorchScript every variable must have a single static type. Adding bn: Optional[nn.BatchNorm2d] to the class definition does not help. |
st181502 | Solved by driazati in post #2
This is close, the issue is that when self.apply_batchnorm is false, there is no bn attribute on the module, so it cannot be accessed / checked. So
if self.apply_batchnorm:
self.bn = nn.BatchNorm2d(out_channels)
turns into
if self.apply_batchnorm:
self.bn =… |
st181503 | This is close, the issue is that when self.apply_batchnorm is false, there is no bn attribute on the module, so it cannot be accessed / checked. So
if self.apply_batchnorm:
self.bn = nn.BatchNorm2d(out_channels)
turns into
if self.apply_batchnorm:
self.bn = nn.BatchNorm2d(out_channels)
else:
self.bn = None
Another piece is that to call an optional module, the compiler must be able to figure out that it is not None when it is called, so instead of if self.apply_batchnorm you have to do if self.bn is not None.
This is the full working example
class Downsample(nn.Module):
def __init__(
self,
in_channels: int,
out_channels: int,
kernel_size: int,
apply_batchnorm: bool = False,
):
super(Downsample, self).__init__()
self.conv = nn.Conv2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=kernel_size,
stride=2,
bias=False,
padding=1,
)
self.apply_batchnorm = apply_batchnorm
if self.apply_batchnorm:
self.bn = nn.BatchNorm2d(out_channels)
else:
self.bn = None
def forward(self, x):
x = self.conv(x)
if self.bn is not None:
x = self.bn(x)
x = F.leaky_relu(x, 0.3)
return x
torch.jit.script(Downsample(1, 2, 3)) |
st181504 | You can also use nn.Identity to replace the batchnorm. For example, in __init__, you can write:
if self.apply_batchnorm:
self.bn = nn.BatchNorm2d(out_channels)
else:
self.bn = nn.Identity() |
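A minimal sketch of this approach (assuming the same Downsample module as in the question); with nn.Identity the forward needs no branching at all:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Downsample(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int,
                 apply_batchnorm: bool = False):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=2, bias=False, padding=1)
        # nn.Identity is a no-op module, so forward can always call self.bn
        self.bn = nn.BatchNorm2d(out_channels) if apply_batchnorm else nn.Identity()

    def forward(self, x):
        x = self.bn(self.conv(x))
        return F.leaky_relu(x, 0.3)

torch.jit.script(Downsample(1, 2, 3))  # scripts without any Optional handling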
st181505 | Thank you driazati and G.M, I have also found another solution in GitHub issues in the meantime.
github.com/pytorch/vision: PR "make shufflenet and resnet scriptable" (eellison, opened Aug 28, 2019)
github.com/pytorch/pytorch: issue "TorchScript doesn't handle submodule being None" (opened Sep 12, 2019, closed Jan 27, 2020): "🐛 Bug: If one of the submodules is None, TorchScript doesn't recognize it as an attribute, unless explicitly specified in __constants__ lists. Obviously, ..." |
st181506 | Hello,
I am able to see the FusionGroup in the graph dump as part of the optimization passes, but I am not seeing it in the saved ScriptModule. How can I save the last executed optimized graph (last_executed_optimized_graph, which includes the FusionGroup) in the ScriptModule?
Thank you |
st181507 | The code for a ScriptModule is saved in its un-optimized form for a number of stability and performance related reasons. When it’s loaded, it gets re-compiled and re-optimized, so you don’t have to worry about the FusionGroup getting saved. |
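For reference, a minimal save/load round trip looks roughly like the sketch below (MyModule and example_input are placeholders; the re-compilation and re-optimization happen lazily over the first forward passes after torch.jit.load):
import torch

scripted = torch.jit.script(MyModule())   # compile once
torch.jit.save(scripted, "model.pt")      # the code is stored in its un-optimized form

loaded = torch.jit.load("model.pt")
out = loaded(example_input)               # re-compiled and re-optimized when executed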
st181508 | Thank you for the response @driazati . While I agree that re-compilation and re-optimization could be done at load time, not all optimizations are quick enough to do on every load. If there is an opportunity to save the optimized model (on the target hardware), we won't be incurring the optimization cost. Do you recommend or suggest ways in which we can achieve this? |
st181509 | Bumping this question up. Is it possible to save an optimized graph along with its fusion groups/subgraphs in a ScriptModule? |
st181510 | This is not possible today but may be enabled in the future as we build out support for different backends (e.g. this PR to enable lowered graphs for consumption by backend passes 8) |
st181511 | torch.cat throws an error for tensor lists when used within TorchScript.
Kindly let me know of a fix/workaround.
Here is a minimal example to reproduce the bug.
import torch
import torch.nn as nn
"""
Smallest working bug for torch.cat torchscript
"""
class Model(nn.Module):
"""dummy model for showing error"""
def __init__(self):
super(Model, self).__init__()
pass
def forward(self):
a = torch.rand([6, 1, 12])
b = torch.rand([6, 1, 12])
out = torch.cat([a, b], axis=2)
return out
if __name__ == '__main__':
model = Model()
print(model()) # works
torch.jit.script(model) # throws error
This code throws the following error:
File "/home/anil/.conda/envs/rnn/lib/python3.7/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
RuntimeError:
Arguments for call are not valid.
The following operator variants are available:
aten::cat(Tensor[] tensors, int dim=0) -> (Tensor):
Keyword argument axis unknown.
aten::cat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)):
Argument out not provided.
The original call is:
at smallest_working_bug_torch_cat_torchscript.py:19:14
def forward(self):
a = torch.rand([6, 1, 12])
b = torch.rand([6, 1, 12])
out = torch.cat([a, b], axis=2)
~~~~~~~~~ <--- HERE
return out
Thank you for your consideration |
st181512 | Solved by driazati in post #2
PyTorch supports the axis keyword arg for numpy compatibility, but it looks like there is a bug where this isn’t translating into TorchScript. Most TorchScript ops use dim in place of axis (the meaning is the same), so if you change that in your code it should work, i.e. torch.cat([a, b], axis=2) be… |
st181513 | PyTorch supports the axis keyword arg for numpy compatibility, but it looks like there is a bug where this isn’t translating into TorchScript. Most TorchScript ops use dim in place of axis (the meaning is the same), so if you change that in your code it should work, i.e. torch.cat([a, b], axis=2) becomes torch.cat([a, b], dim=2). |
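A minimal sketch of the fix applied to the dummy model from the question (only the keyword changes):
import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self):
        a = torch.rand([6, 1, 12])
        b = torch.rand([6, 1, 12])
        # TorchScript only recognizes the `dim` keyword for aten::cat
        return torch.cat([a, b], dim=2)

scripted = torch.jit.script(Model())
print(scripted().shape)  # torch.Size([6, 1, 24])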
st181514 | Hi, I’m trying to get an optimized graph from graph executor via ScriptModule’s graph_for(...) method. A simple test case is below:
import torch
conv = torch.nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
# To avoid dealing with prim::Bailout stuff
torch._C._jit_set_profiling_executor(False)
inp = torch.rand(1, 3, 224, 224)
trace = torch.jit.trace(conv, inp).eval()
print(trace.graph_for(inp))
But it seems graph executor erases shape information on construction, so the output of graph_for has shape information removed.
graph(%self : __torch__.torch.nn.modules.module.Module,
%input : Float(*, *, *, *)):
%23 : int[] = prim::Constant[value=[0, 0]]()
%22 : int[] = prim::Constant[value=[1, 1]]()
%4 : int = prim::Constant[value=1]() # /home/masa/projects/deep/pytorch/torch/nn/modules/conv.py:345:0
%13 : bool = prim::Constant[value=0]() # /home/masa/projects/deep/pytorch/torch/nn/modules/conv.py:345:0
%20 : bool = prim::Constant[value=1]() # /home/masa/projects/deep/pytorch/torch/nn/modules/conv.py:345:0
%2 : Float(*) = prim::GetAttr[name="bias"](%self)
%3 : Float(*, *, *, *) = prim::GetAttr[name="weight"](%self)
%21 : Float(*, *, *, *) = aten::_convolution(%input, %3, %2, %22, %23, %22, %13, %23, %4, %13, %13, %20) # /home/masa/projects/deep/pytorch/torch/nn/modules/conv.py:345:0
return (%21)
If I know that my input shape is always fixed, is there a way to add explicit shape information? I know that the output of trace has shape information preserved, but I want an optimized graph available via graph_for(...). |
st181515 | Could you please elaborate on your use case? The exact static shapes were added to enable backends to generate more efficient kernels. I’m not sure they will be added to serialization or any other public interface.
Without torch._C._jit_set_profiling_executor(False) we don't capture the exact shapes, only tensor ranks. Even the rank information is, I believe, considered internal, for use by backends and optimization passes. |
st181516 | OK, my use case is to translate TorchScript IR to the TVM compiler's IR. I have a repo, https://github.com/masahi/torchscript-to-tvm, which demonstrates translating torch models to TVM, compiling and running them under TVM, and getting output identical to torch's.
Right now I take the output of torch.jit.trace(...), which has shape information attached, and for each torch operator node I translate it to the corresponding one in TVM. Since TVM is mostly a static compiler, shape information is required.
TVM has its own set of optimization passes, so it is no problem to take the unoptimized torch IR as input. Currently I'm applying the TorchScript inlining pass to remove the prim::CallMethod wrapping, but it seems rather ad hoc to me and I would rather apply other optimization passes in torch as well.
I know I can apply each optimization pass manually (and I do for the inline pass), but the API prefix torch._C._jit_pass* suggests they are not "officially" supported, so I'm not sure I want to use them directly. Since I discovered that I can access the optimized graph via the graph_for(...) method, I'm looking to see if this is something I can use.
I disabled the profiling executor because it adds prim::Bailout and prim::BailoutTemplate nodes, which I have no idea how to translate to TVM. Since the input shape is static in TVM, I think they are not relevant to my use case, so I don't want to see them in the input IR. |
st181517 | I would suggest that you run it with the profiling executor a few times with inputs that cover the different Tensor dimensions you expect to use, and then add a pass to remove Bailout nodes. This should give you a Graph with shape information to maximum generality. |
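A rough sketch of the warm-up part of that suggestion (these are internal APIs, so details may differ between versions, and the Bailout-removal pass itself is not shown since it does not exist as a ready-made pass):
import torch

torch._C._jit_set_profiling_executor(True)  # may already be the default, depending on version

conv = torch.nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
trace = torch.jit.trace(conv, torch.rand(1, 3, 224, 224)).eval()

# run with the input shapes you expect so the executor records/profiles them
for batch in (1, 2, 4):
    trace(torch.rand(batch, 3, 224, 224))

# the specialized graph (still containing prim::Bailout nodes) can then be inspected
print(trace.graph_for(torch.rand(1, 3, 224, 224)))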
st181518 | There are also a few passes that will be landed shortly in Pytorch JIT that should help with this conversion. Freezing and a functionalization pass. I will comment here when they are landed. |
st181519 | Thanks, if removing bailouts is possible that would definitely work for me. At a quick glance there is no pass to remove Bailout nodes in torch, but I'll see whether I can do it from "userland".
Another API I’m interested in is _propagate_shapes function:
github.com
pytorch/pytorch/blob/8527ba8b7059d8c7df6fc2c11271ceefbf91ce5a/torch/csrc/jit/script/init.cpp#L476-L485
static std::shared_ptr<Graph> _propagate_shapes(
Graph& graph,
std::vector<at::Tensor> inputs,
bool with_grad = false) {
Stack stack(inputs.begin(), inputs.end());
auto retval = graph.copy();
setInputTensorTypes(*retval, stack, /*complete=*/false);
PropagateInputShapes(retval);
return retval;
}
Do you know the use case of this function and whether it is something I should take a look at? |
st181520 | This API is only useful with the non-profiling executor, so I don't think it applies. Yes, the pass doesn't exist exactly as you need it. The logic should pretty much be the same as right here, except you are always removing the guards. |
st181521 | Hi All,
I'm looking at the TorchVision implementation of GoogLeNet and I see that __constants__ is used in the class definition of the inception block. I read the documentation but I still don't understand how it works and what happens if I remove it. In my understanding it is something used by TorchScript, but I don't yet have the full picture (probably because I need to learn more about TorchScript).
I also tried:
inception_block = Inception(192, 64, 96, 128, 16, 32, 32)
inception_block = torch.jit.script(inception_block)
inception_block
And I don’t receive any error even if I remove __constants__ = ['branch2', 'branch3', 'branch4'] in the class definition.
class Inception(nn.Module):
__constants__ = ['branch2', 'branch3', 'branch4']
def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch5x5red, ch5x5, pool_proj,
conv_block=None):
super(Inception, self).__init__()
if conv_block is None:
conv_block = BasicConv2d
self.branch1 = conv_block(in_channels, ch1x1, kernel_size=1)
self.branch2 = nn.Sequential(
conv_block(in_channels, ch3x3red, kernel_size=1),
conv_block(ch3x3red, ch3x3, kernel_size=3, padding=1)
)
self.branch3 = nn.Sequential(
conv_block(in_channels, ch5x5red, kernel_size=1),
conv_block(ch5x5red, ch5x5, kernel_size=3, padding=1)
)
self.branch4 = nn.Sequential(
nn.MaxPool2d(kernel_size=3, stride=1, padding=1, ceil_mode=True),
conv_block(in_channels, pool_proj, kernel_size=1)
)
def _forward(self, x):
branch1 = self.branch1(x)
branch2 = self.branch2(x)
branch3 = self.branch3(x)
branch4 = self.branch4(x)
outputs = [branch1, branch2, branch3, branch4]
return outputs
def forward(self, x):
outputs = self._forward(x)
return torch.cat(outputs, 1)
Can you explain to me in more detail what the utility of adding constants is, and what happens if I don't do it in a case like this one?
Thanks,
Mario |
st181522 | Solved by driazati in post #4
Ah yeah, the case you’re seeing here is a hack that we’ve been using for a while, basically some modules can have optional submodules (i.e. either the submodule can be present or None). When we script something we just so happen to add submodules first, then constants, skipping any names that are al… |
st181523 | The relevant docs can be found in the "How do I store attributes on a ScriptModule?" question here. The idea is that if the jit knows certain values are constant and unchanging, then more aggressive optimizations and re-ordering can be done with those values versus things that are just regular attributes on a model.
So when you remove __constants__ or a Final type annotation the model’s behavior shouldn’t change, but less information is available to the jit about what it can do with your code. The jit’s type system will enforce that these values are not mutated, so that can make your code cleaner as well. |
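A small sketch of the Final alternative mentioned above (this assumes torch.jit.Final is available in your version; on newer Python versions typing.Final works the same way):
import torch
import torch.nn as nn
from torch.jit import Final

class Scaler(nn.Module):
    scale: Final[float]  # the jit treats this attribute as an immutable constant

    def __init__(self, scale: float):
        super().__init__()
        self.scale = scale

    def forward(self, x):
        return x * self.scale

torch.jit.script(Scaler(2.0))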
st181524 | Thanks Driazati!
I'm still confused by the fact that here we are marking as __constants__ blocks that have weights that will change during back-propagation; again, I'm not an expert on the jit, so it's possible I'm missing something that doesn't concern __constants__ at all.
With __constants__, are we just saying that the shape and type will not change, but we are OK with the parameter values changing?
I understand something like
__constants__ = ['a'] followed by a self.a = 4, because I can see that 4 is a constant value that can be hard-coded, but I'm still confused by how blocks with weights are treated.
Thanks again,
Mario |
st181525 | Ah yeah, the case you’re seeing here is a hack that we’ve been using for a while, basically some modules can have optional submodules (i.e. either the submodule can be present or None). When we script something we just so happen to add submodules first, then constants, skipping any names that are already present on the module, so it either gets added as a normal submodule (ignoring the entry in __constants__) or as a None constant. If the compiler sees a None constant in an if-statement, it will skip compilation of the code inside the if, allowing us to support uses like downsample in resnet 24.
So if you see a nn.Module in __constants__, all it really means is Optional[nn.Module], we just have this kind-of-nonsense way to specify that. |
st181526 | I have some code that looks like this:
import torch as tc
from torch import jit
@jit.script
def func(inp):
return inp<<1
a = tc.tensor([3,4,5])
func(a)
When I run it, I get the error:
torch.jit.frontend.NotSupportedError: unsupported binary operator: LShift:
File "example.py", line 5
@jit.script
def func(inp):
return inp<<1
~~ <--- HERE
However, if I change inp<<1 to inp.__lshift__(1), it works. This also happens in C++. Is there a reason behind this, or is it just that support for binary operators is not fully implemented yet? I'm using PyTorch 1.4.0 on Windows with the corresponding version of libtorch.
Thanks. |
st181527 | Solved by driazati in post #2
Looks like this was just an oversight, can you file a bug with this info so we can have it in our issue tracker? (We’re happy to accept any PRs to add it too!) |
st181528 | Looks like this was just an oversight, can you file a bug 6 with this info so we can have it in our issue tracker? (We’re happy to accept any PRs to add it too!) |
st181529 | Hi,
I have some version problems and need your help. Let's discuss it.
I am using the distiller to prune my object detection model.
Then I need to deploy the pruned model on a Jetson TX2 with TensorRT 6.0.
However, the distiller requires the JIT module of PyTorch 1.3.1 to get the graph of a module so that it can do the actual pruning (not just applying a zero mask).
After getting the pruned model, I exported it to an ONNX model with PyTorch 1.3.1; however, I found that the ONNX parser of TensorRT 6.0 is not compatible with PyTorch 1.3.1, only with PyTorch 1.2.
I have raised this issue on GitHub.
Now I am in trouble.
How can I make them work together?
I have tried two ways, but both of them failed.
I created two conda environments: one with PyTorch 1.3.1 and distiller, called env1.3, and another with PyTorch 1.2.0, called env1.2.
Under env1.3, I saved the whole model (not just the state_dict) with torch.save and pickle.dump, but when I tried to use torch.load or pickle.load to load the model under env1.2, an error occurs saying that there is no module called distiller.
Under env1.3, I used torch.jit.trace to get a ScriptModule, and then I used torch.jit.save to save the model to disk. However, when I used torch.jit.load under env1.2 to load the jit model, I got
terminate called after throwing an instance of ‘c10::Error’
what(): [enforce fail at inline_container.cc:137] . PytorchStreamReader failed closing reader: file not found
frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::string const&, void const*) + 0x47 (0x7f9f1c67ee17 in /home/rizhao/anaconda3/envs/torch/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::valid(char const*) + 0x6b (0x7f9f1f60775b in /home/rizhao/anaconda3/envs/torch/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #2: caffe2::serialize::PyTorchStreamReader::~PyTorchStreamReader() + 0x1f (0x7f9f1f6077af in /home/rizhao/anaconda3/envs/torch/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #3: + 0x3c17637 (0x7f9f206e6637 in /home/rizhao/anaconda3/envs/torch/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #4: torch::jit::import_ir_module(std::shared_ptrtorch::jit::script::CompilationUnit, std::string const&, c10::optionalc10::Device, std::unordered_map<std::string, std::string, std::hashstd::string, std::equal_tostd::string, std::allocator<std::pair<std::string const, std::string> > >&) + 0x1d0 (0x7f9f206ed220 in /home/rizhao/anaconda3/envs/torch/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #5: + 0x4d69dc (0x7f9f66ddf9dc in /home/rizhao/anaconda3/envs/torch/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #6: + 0x1d3ef4 (0x7f9f66adcef4 in /home/rizhao/anaconda3/envs/torch/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #22: __libc_start_main + 0xf0 (0x7f9f75830830 in /lib/x86_64-linux-gnu/libc.so.6)
Aborted (core dumped)
Now, I have no idea how to solve it. Any clues will be much appreciated. |
st181530 | Are you able to upload the binary (maybe to a GitHub repo or something) you are trying to load so we can try to reproduce this issue? |
st181531 | Thanks. Can I just upload the ONNX model exported by PyTorch 1.3 or PyTorch 1.2? Besides, I can also upload the exported JIT module. |
st181532 | Rizhao_Cai:
upload the ONNX model exported by PyTorch 1.3 or PyTorch 1.2? Besides, I can also upload the exported
Upload everything you can, at the very least the model you load that causes the error to be thrown |
st181533 | Hello everyone. I recently loaded a TorchScript model in C++; when I use the model for inference, the first pass takes about 20 s, while the others take only about 0.5 s.
Has anyone ever done any related work or met the same problem?
Is there any way to disable the optimization, choose the optimization level, or save the model (or the computation graph) after the optimization?
Or is it an inevitable warm-up process?
I'd appreciate it if anybody could help me! Thanks in advance! |
st181534 | Solved by G.M in post #6
Maybe this post helps? Speed of Custom RNN is SUPER SLOW |
st181535 | Hello huoge,
It’s likely that you’re correct in the assessment that optimization is what’s making the first pass slow. However, 20 seconds seems pretty high and we’d like to understand what exactly is happening here. Do you mind sharing your serialized model file so we can have a look? Also, which version of PyTorch are you using?
We don’t currently have an optimization level option.
James |
st181536 | Hello James,
Thanks for replying. I built PyTorch from source and the version is 1.4.0a0+93db2b8.
The model is a modified transformer. I converted the model to a TorchScript model using script_model = torch.jit.script(model) (and did some other work to make it JIT-compatible).
By the way, I used torch.quantization.quantize_dynamic to quantize the model; then the first pass costs about 12.5 s.
This is part of print(script_model): (screenshot omitted)
I only use torch.jit.script(); should I mix trace and script?
And how can I submit the serialized model file to you? For some reasons I can't give you the modified transformer, but I can provide the original transformer's serialized model file, whose first pass costs 32.83 s while the others take about 9 s (yes, it's roughly a 20 s gap again). The original and modified transformers have the same net architecture but different inference functions, so they have the same problem on the first pass. |
st181537 | Did you solve the "warm up" scripted model problem?
The simplest way to show this problem is:
import torchvision, torch, time
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model = torch.jit.script(model)
model.eval()
x = [torch.randn((3,224,224))]
for i in range(3):
start = time.time()
model(x)
print('Time elapsed: {}'.format(time.time()-start))
Output:
Time elapsed: 38.297527551651
Time elapsed: 6.655704021453857
Time elapsed: 6.651334762573242
So, can anybody help explain how I can load and run a scripted model without this "warm up"?
Thanks. |
st181538 | I have converted a model to a script module using torch.jit.script. However, I find there is some difference between the output of the original model and the output of the scripted model, like:
model.eval()
scripted_model = torch.jit.script(model)
# there is some difference between output and scripted_output
output = model(inputs)
scripted_output = scripted_model(inputs)
Each element has an average difference of 1e-6
Is this phenomenon normal? |
st181539 | Yes, this small difference is expected due to the limited precision for float32. |
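A common way to check this is to compare with a tolerance instead of exact equality, e.g. (using the output and scripted_output tensors from the snippet above):
# both should agree within float32 precision
print(torch.allclose(output, scripted_output, rtol=1e-5, atol=1e-6))
print((output - scripted_output).abs().max())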
st181540 | Is there any way to give a type hint to the output of torch.jit._wait?
The below example fails to compile to torchscript
import torch
from typing import List
def process(i: int) -> int:
return i + 1
@torch.jit.script
def process_many(l: List[int]) -> List[int]:
futs: List[torch.jit.Future] = []
out: List[int] = []
for v in l:
futs.append(torch.jit._fork(process, v))
for f in futs:
out.append(torch.jit._wait(f))
return out
with the error
File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 1281, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
RuntimeError:
Unknown type name 'torch.jit.Future':
File "./demo.py", line 10
@torch.jit.script
def process_many(l: List[int]) -> List[int]:
futs: List[torch.jit.Future] = []
~~~~~~~~~~~~~~~~ <--- HERE
out: List[int] = []
However, if i change the for loop to a list comprehension, it compiles fine
@torch.jit.script
def process_many(l: List[int]) -> List[int]:
futs = [torch.jit._fork(process, v) for v in l]
out: List[int] = []
for f in futs:
out.append(torch.jit._wait(f))
return out
# >>> print(process_many([0, 3, 5]))
# [1, 4, 6]
What I haven't been able to figure out is whether torch.jit.Future is the correct type annotation for me to be using here. For now, it seems possible to work around this just using a list comprehension, but it could get ugly if the logic becomes more complex.
I’m on 1.4.0, if it makes any difference |
st181541 | Solved by driazati in post #2
Our future support is really only half done (it’s still an internal API, hence the _ at the beginning of fork and wait) so I don’t think there’s a way to do this without the list comprehension you mentioned. Could you file a bug report here? |
st181542 | Our future support is really only half done (it’s still an internal API, hence the _ at the beginning of fork and wait) so I don’t think there’s a way to do this without the list comprehension you mentioned. Could you file a bug report here 5? |
st181543 | Thanks! just discovered that there seems to be a bug report out for this already
github.com/pytorch/pytorch: issue "[jit] Fix future type annotation in python" (wanchaol, opened Sep 21, 2019): "We should fix the Future type annotation to make it also work in python, fork will immediately execute the function single..." |
st181544 | When testing whether the prediction is correct after converting the model to a script model, I found that the inference time is twice as long as the PyTorch model's.
# pytorch 1.4, python 3.7
out1 = model(input) # forward time: 0.05 sec
ScriptModel = torch.jit.script(model, input)
out2 = ScriptModel(input) # forward time: 0.12 sec |
st181545 | The first pass does some extra work (details in the link below), if you warm up ScriptModel by calling it a few times first the time should improve.
Speed of Custom RNN is SUPER SLOW (jit): "Based on the code here
https://github.com/pytorch/pytorch/blob/master/benchmarks/fastrnns/custom_lstms.py
I wrote an example to compare the computation capability of the native LSTM and a custom LSTM, but I found that the speed of the custom LSTM is 100 times slower than the native LSTM class. ..." |
st181546 | As shown in the snippet below, in the first iteration of the for loop the script model is indeed slower than the 0.12 s average, as @driazati mentioned above. However, after the first iteration, the rest run at ~0.12 s/img, which is still twice as long as the PyTorch model.
More information: batch_size=1,
pytorch 1.4, python 3.7
# pytorch model
for i in range(image_number):
out1 = model(input) # forward time: 0.05 sec
#torchscript model
ScriptModel = torch.jit.script(model, input)
for i in range(image_number):
out2 = ScriptModel(input) # forward time: 0.12 sec if i > 0, 3 sec if i==0 |
st181547 | If you are using the native LSTM, it is normal that the native one is faster than TorchScript. Also, according to the post here, increasing the batch size can reduce the difference in performance. And according to my own experience, increasing the hidden size can also reduce the difference. |
st181548 | Hi everyone, I am new to TorchScript.
I have traced my model. When loading it and predicting results based on it, I got some errors.
Can anyone help me figure it out?
Input:
at::Tensor f_f = torch::tensor({269, 90, 32, 269, 65, 85, 17, 269, 104, 13, 4, 21, 13, 269, 15, 95, 5, 269, 41, 30, 21, 29, 270, 270},torch::kFloat32);
at::Tensor f_p = torch::tensor({3, 7, 13, 17, 22, 23},torch::kFloat32);
at::Tensor b_f = torch::tensor({270, 270, 29, 21, 30, 41, 269, 5, 95, 15, 269, 13, 21, 4, 13, 104, 269, 17, 85, 65, 269, 32, 90, 269},torch::kFloat32);
at::Tensor b_p = torch::tensor({23, 20, 16, 10, 6, 1},torch::kFloat32);
at::Tensor w_f = torch::tensor({1020, 1083, 4027, 3087, 262, 8765},torch::kFloat32);
f_f = at::reshape(f_f , {24, 1});
f_p = at::reshape(f_p , {6, 1});
b_f = at::reshape(b_f , {24, 1});
b_p = at::reshape(b_p , {6, 1});
w_f = at::reshape(w_f , {6, 1});
inputs.push_back(f_f);
inputs.push_back(f_p);
inputs.push_back(b_f);
inputs.push_back(b_p);
inputs.push_back(w_f);
at::Tensor output = module.forward(inputs).toTensor();
Error:
terminate called after throwing an instance of 'std::runtime_error'
what(): Expected tensor for argument #1 'indices' to have scalar type Long; but got CPUFloatType instead (while checking arguments for embedding)
The above operation failed in interpreter, with the following stack trace:
at code/__torch__/torch/nn/modules/module.py:8:12
op_version_set = 1
class Module(Module):
__parameters__ = ["weight", ]
training : bool
weight : Tensor
def forward(self: __torch__.torch.nn.modules.module.Module,
forw_sentence: Tensor) -> Tensor:
input = torch.embedding(self.weight, forw_sentence, -1, False, False)
~~~~~~~~~~~~~~~ <--- HERE
return input
def forward1(self: __torch__.torch.nn.modules.module.Module,
tensor: Tensor) -> Tensor:
input = torch.embedding(self.weight, tensor, -1, False, False)
return input
Compiled from code /home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py(1484): embedding
/home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py(114): forward
/home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(516): _slow_forward
/home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(530): __call__
/home/bao/Desktop/segment_vtcc_test/lm_lstm_crf/model/lm_lstm_crf.py(222): forward
/home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(516): _slow_forward
/home/bao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py(530): __call__
/home/bao/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py(1034): trace_module
/home/bao/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py(882): trace
/home/bao/Desktop/segment_vtcc_test/convert_model_1.py(132): <module>
Thank in advance. |
st181549 | Hey, are you able to post a repro of how to produce your traced model? Either a link to a .pt file or (even better) the code you use to trace and save the model.
From what I can tell the input to this module forw_sentence is being used as the indices for the embedding operation (which, as in the error, must be of type Long instead of Float), but it’s hard to tell what the actual error is without the full repro. |
st181550 | Hello,
After updating PyTorch 1.0.1 -> 1.4.0 I encountered some unwanted behavior. When creating a ScriptModule, the order of its Parameters changes. I’ll attach an example based on the https://github.com/pytorch/benchmark/blob/master/rnns/fastrnns/custom_lstms.py file.
Two classes are defined. The only difference between the two is that one is a ScriptModule and the other an nn.Module. By printing their parameter sizes we see that they are now stored in a different order than declared in the class (they also stop matching the native cell version). It might cause some backward compatibility problems (e.g. in the case of 'custom_lstms.py' the 'test_script…' functions stopped working).
import torch
import torch.nn as nn
from torch.nn import Parameter
import torch.jit as jit
from typing import List, Tuple
from torch import Tensor
class LSTMCell(jit.ScriptModule):
def __init__(self, input_size, hidden_size):
super(LSTMCell, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size))
self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size))
self.bias_ih = Parameter(torch.randn(4 * hidden_size))
self.bias_hh = Parameter(torch.randn(4 * hidden_size))
@jit.script_method
def forward(self, input, state):
# type: (Tensor, Tuple[Tensor, Tensor]) -> Tensor
return input
class LSTMCellNoJit(nn.Module):
def __init__(self, input_size, hidden_size):
super(LSTMCellNoJit, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size))
self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size))
self.bias_ih = Parameter(torch.randn(4 * hidden_size))
self.bias_hh = Parameter(torch.randn(4 * hidden_size))
def forward(self, input, state):
return input
net = LSTMCell(2, 10)
pars = [p for p in net.parameters()]
for i in range(len(pars)):
print(pars[i].shape)
Output:
torch.Size([40, 10])
torch.Size([40])
torch.Size([40])
torch.Size([40, 2])
net = LSTMCellNoJit(2, 10)
pars = [p for p in net.parameters()]
for i in range(len(pars)):
print(pars[i].shape)
Output:
torch.Size([40, 2])
torch.Size([40, 10])
torch.Size([40])
torch.Size([40]) |
st181551 | Solved by driazati in post #2
Hey thanks for the report / repro! This is an issue that’s been open for a while, we’re planning to fix it soon |
st181552 | Hey thanks for the report / repro! This is an issue that’s been open for a while, we’re planning to fix it soon
github.com/pytorch/pytorch: issue "ScriptModule and nn.Module parameter ordering difference" (wanchaol, opened Aug 22, 2019) |
st181553 | I’m using Pytorch 1.2.0 with Python 3.7, and I can’t enumerate through a ModuleList.
I’m using this code
for i,direction in enumerate(self.directions):
state = states[i]
out, out_state = direction(inp, state)
outputs += [out]
output_states += [out_state]
, which is based on the code here, and I'm getting
RuntimeError:
'module' object is not iterable:
from
for i,direction in enumerate(self.directions):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... <--- HERE
I've checked the link about enumerating in the original file (the link above); it said that enumerating is already supported. Since I can't use it, can someone please tell me what I should do?
Thanks. |
st181554 | Solved by eellison in post #2
Hi, @G.M, this is fixed in our most recent release (1.4). Could you try updating your pytorch version and giving it another go? |
st181555 | Hi, @G.M, this is fixed in our most recent release (1.4). Could you try updating your pytorch version and giving it another go? |
st181556 | But enumerate doesn't seem to be in the 1.4.0 docs. And do you know where I can find release notes or something similar for PyTorch? |
st181557 | Good point, I think that was an omission. I’ll add a section to iterators to the docs shortly and link it here. |
st181558 | Hi, turns out we already had a (small) section here: https://pytorch.org/docs/master/jit_language_reference.html#iterables 3 |
st181559 | Currently, I load the TorchScript model as a jit module.
I want to print all of its parameters, so I want to convert the jit::Value data structure to a Tensor.
if(value.type()->expect<torch::jit::TensorType>()) {
auto tensor = value.toTensor();
}
I tried to use the same function as IValue's, but it does not work. Any idea?
Thanks |
st181560 | IValue doesn’t have a .type() method since it’s a generic container for values (similar to Python’s PyObject). You should be able to just call .toTensor() to convert it to a tensor (make sure to guard it with a call to .isTensor() to make sure the IValue is what you expect).
You can find more details here 25. We are actively working on improving the JIT’s C++ API so issues like this will be more clear in the future. |
st181561 | Thanks a lot, @driazati.
I am using torch::jit::Value, not IValue .
I know how to convert an IValue to a tensor with toTensor. How about torch::jit::Value or torch::jit::Node? |
st181562 | Sorry for the confusion, you can think of torch::jit::Value as a variable in the graph (i.e. it has no concrete value until the graph is run), so there’s really no way to extract its value since it doesn’t have one. You can read more about it here 23. If you are trying to extract intermediate values from the graph, there’s no way to do that currently but you can follow along at this issue 15. |
st181563 | I got it. So jit::Value is actually just a symbolic variable.
Thanks a lot, @driazati |
st181564 | Hi, I couldn't quite understand the following example grammatically:
m = nn.Conv1d(16, 33, 3, stride=2)
input = torch.randn(20, 16, 50)
output = m(input)
I think nn.Conv1d is a class, and accordingly m should be an instance of the class. Why, in the third line, is m(input) used like a function?
Actually, it seems that >>> output = m(input) executes the function forward(self, input) defined in the class nn.Conv1d. So, to be grammatically correct, I think >>> output = m(input) should be changed into >>> output = m.forward(input). But >>> output = m(input) indeed works.
Could anybody explain a bit about this? What grammatical rule does it follow?
The source code for Conv1d is at the following URL
https://pytorch.org/docs/master/_modules/torch/nn/modules/conv.html#Conv1d 2
Thank you very much! |
st181565 | nn.Module (which nn.Conv1d and all the nn. modules extend), implements the __call__ magic method so that it can be used like a function as in your example. The implementation can be found here 2. Using __call__ has some extra processing for forward/backward hooks, but, as you have found, its main job is to call forward() |
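A stripped-down sketch of that mechanism (ignoring the hook processing the real nn.Module.__call__ performs):
class MiniModule:
    def __call__(self, *args, **kwargs):
        # calling the instance forwards to forward(), just like nn.Module does
        return self.forward(*args, **kwargs)

class Doubler(MiniModule):
    def forward(self, x):
        return 2 * x

m = Doubler()
print(m(21))  # 42 -- m(input) ends up calling m.forward(input)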
st181566 | Hi, I’ve been trying to migrate some code from torch 1.0.1 to torch 1.3.1 but I’m struggling with an error that appears only with the latest version.
Here is a minimal reproducible example to illustrate my problem (just a random MLP with embedding layers for the first 3 categorical columns):
import torch
class NN_emb_long(torch.nn.Module):
def __init__(self,):
super(NN_emb_long, self).__init__()
emb_layers = []
for i in range(3):
emb_layers.append(torch.nn.Embedding(5, 2))
self.emb_layers = torch.nn.ModuleList(emb_layers)
self.lin1 = torch.nn.Linear(8, 16)
self.lin_out = torch.nn.Linear(16, 2)
def forward(self, x):
embs_list = []
for i in range(3):
embs = self.emb_layers[i](x[:,i].long())
embs_list.append(embs)
post_embed = torch.cat([x[:, torch.Tensor([3,4]).long()]]+embs_list, dim=1)
res = self.lin1(post_embed)
res = torch.nn.ReLU()(res)
res = self.lin_out(res)
return res, post_embed
NN = NN_emb_long()
input_example = torch.ones((10, 5)).requires_grad_(True)
probas, post_embeddings = NN(input_example)
grad_outputs = torch.ones(input_example.shape[0],2)
G = torch.autograd.grad(outputs=probas,
inputs=post_embeddings,
grad_outputs=grad_outputs,
only_inputs=True,
retain_graph=True
)[0]
print(G.shape)
# until this point everything should work on both versions
# taking the trace shows an error only with torch 1.3.1
basic_trace = torch.jit.trace(NN, input_example, check_trace=True)
probas, post_embeddings = basic_trace(input_example)
grad_outputs = torch.ones(10,2)
G = torch.autograd.grad(outputs=probas,
inputs=post_embeddings,
grad_outputs=grad_outputs,
only_inputs=True,
retain_graph=True,
)[0]
print(G.shape)
This code should run fine with torch 1.0.1 but fails with the following error with torch 1.3.1:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-11-d55fb440d76f> in <module>
6 grad_outputs=grad_outputs,
7 only_inputs=True,
----> 8 retain_graph=True,
9 )[0]
10 print(G.shape)
.cache/poetry/engine-py3.6/lib/python3.6/site-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
155 return Variable._execution_engine.run_backward(
156 outputs, grad_outputs, retain_graph, create_graph,
--> 157 inputs, allow_unused)
158
159
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
Could you please help me understand what happens here?
Thank you! |
st181567 | Hey sorry about the delay! Not sure what the actual bug is, but this code seems to work fine again for me on master. Could you try it with the nightly version of PyTorch or try with 1.4 (which should be released very soon!)? |
st181568 | Thank you, it seems that the problem is indeed solved by upgrading directly to 1.4. |
st181569 | Hello @driazati
I work with @Sebastien_Fischman and indeed it works in 1.4.0 now.
But we still have a problem: the models we trained beforehand on version 1.0.1 and saved as JIT do not work anymore when we try to load them.
I searched for a while, and I found that inside the JIT the problematic code was
_12 = torch.index(_11, [annotate(Tensor, None), cont_idxs])
With new version, code would be something like
_12 = torch.index(_11, annotate(List[Optional[Tensor]],[None, cont_idxs]))
If I changed manually the code for this new line, it works.
The problem is that we are using PyTorch models in production, and manually modifying the models would be an issue.
Do you have a migration script to update the JIT file, or any solution to make the JIT backward compatible? |
st181570 | Are you able to share the model file that was saved on 1.0.1? The JIT should always be 100% backwards compatible so this sounds like a bug. |
st181571 | Sure, here is the JIT file : https://srv-file9.gofile.io/download/TCKSJe/9e06eca9fd9c2e.pt 3
I could not add it to the message.
This JIT was also not working in version 1.3 either, @driazati |
st181572 | @driazati looks like sharing a model with this kind of link is a bad idea
@Hartorn account has been locked for sharing this ^^ but it’s safe to open if you wish to have a look at it! |
st181573 | Hello @driazati,
Sorry to be a bother, but did you have time to have a look at the model?
Do you think this bug in PyTorch will be fixed, or should we find a way to update/modify our JIT file to be compatible?
Regards |
st181574 | Hey, I wasn’t able to download the file (I get a "You are not authorized to download this file " error), but I think I reproduced the same issue. I think this patch 3 should fix the bug. If you are able to build PyTorch from source (see below), you can verify that it fixes the issue
git clone https://github.com/pytorch/pytorch --depth=1
cd pytorch
git fetch origin driazati/fix_annotate
git reset --hard origin/driazati/fix_annotate
# Now build PyTorch from source
# https://github.com/pytorch/pytorch/#from-source |
st181575 | thanks @driazati, I think we’ll go for a migration script on our side anyway! Thanks for the help, really appreciate it. |
st181576 | I have a custom autograd.Function:
import torch as tc
from torch import autograd, jit

class raw(autograd.Function):
@staticmethod
def forward(ctx, inp):
ctx.a = (inp * inp + 1).reciprocal()
ctx.b = ctx.a.sqrt()
return inp * ctx.b
@staticmethod
def backward(ctx, grad_output):
return grad_output * ctx.a * ctx.b
and I want to trace this function, but when I do:
jit.trace(raw.apply, example_inputs=tc.randn(1))
, I get the error from this line:
......................
jit.trace(raw.apply, example_inputs=tc.randn(1))
File "...\Python37\lib\site-packages\torch\jit\__init__.py", line 903, in trace
name = _qualified_name(func)
File "...\Python37\lib\site-packages\torch\_jit_internal.py", line 696, in _qualified_name
"__module__ can't be None.".format(name))
RuntimeError: Could not get qualified name for class 'apply': __module__ can't be None.
This code used to work with PyTorch 1.1.0, but after I updated to 1.4.0 recently, I get this error. I'm using Python 3.7.3 on Windows 10. |
st181577 | Reproduced. Could you please open a new issue at https://github.com/pytorch/pytorch/issues 19? |
st181578 | In this thread 5, I learned how to compile for Android the file pytorch/binaries/speed_benchmark_torch.cc, which lets you benchmark the speed of torchscript models. However, for debugging purposes, I’d like to build this file for my desktop computer’s Intel CPU, so that I can better debug some failures I am getting.
Does anyone know how to compile speed_benchmark_torch.cc for desktop CPU? |
st181579 | Solved by solvingPuzzles in post #2
I figured out a way to do it. I do a fresh build of pytorch as follows:
cd pytorch
export BUILD_BINARY=1
python setup.py build_ext
An x86 CPU compatible (rather than Android or iOS compatible) binary appears here:
pytorch/build/bin/speed_benchmark_torch
And, I was able to run it successfully on… |
st181580 | I figured out a way to do it. I do a fresh build of pytorch as follows:
cd pytorch
export BUILD_BINARY=1
python setup.py build_ext
An x86 CPU compatible (rather than Android or iOS compatible) binary appears here:
pytorch/build/bin/speed_benchmark_torch
And, I was able to run it successfully on my desktop computer’s CPU. |
st181581 | Hi,
I have a problem when converting a TorchScript module into an ONNX module, but it works just fine on a "normal" module.
I opened a bug request about that at https://github.com/pytorch/pytorch/issues/30512,
but if I could convert the TorchScript module to an nn.Module it would solve my issue.
Thanks,
Liron
Code example (you can ignore _test_normal):
def _test_normal(self, num_classes, dummy_input):
model = torchvision.models.resnet18(num_classes=num_classes)
model_state_fixed = {}
for k, v in self._model_state.items():
k_fixed = k[3:len(k)]
model_state_fixed[k_fixed] = v
model.load_state_dict(model_state_fixed)
torch.onnx.export(model, dummy_input, "/app_data/test_torch_script/torch_script_test_normal.onnx")
def convert(self):
loaded = torch.jit.load(self._torch_script_path)
# loaded.load_state_dict(self._model_state)
dummy_input = torch.randn(1, 3, 224, 224)
target = loaded(dummy_input)
self._test_normal(num_classes=len(target[0]), dummy_input=dummy_input)
torch.onnx.export(loaded, dummy_input, self._out_onnx_path, verbose=True,
operator_export_type=torch.onnx.OperatorExportTypes.ONNX,
example_outputs=target) |
st181582 | Solved by Liron_Mor_Yosef in post #3
thanks for your replay,
how can i exported model to torch script and than load it back and export to onnx?
to save as torch script i am using the following code:
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, dummy_input)
traced.save("/app_data/test_torch_s… |
st181583 | There isn’t a way to extract an nn.Module from a compiled ScriptModule. Is it possible for you to instead export your original module instead of a ScriptModule?
For some background, torch.onnx.export will use torch.jit.trace to get an exportable graph from an nn.Module. The ONNX exporter does not support all the features of TorchScript (e.g. if you used torch.jit.script to compile your model, it may not be possible to export that compiled module to ONNX), but relying on torch.jit.trace enforces that only supported features are used. |
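A minimal sketch of exporting the original (non-scripted) module directly; here model stands for the original nn.Module (e.g. the resnet18 from _test_normal above), and torch.onnx.export does the tracing internally:
import torch

model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
# no prior torch.jit.trace/torch.jit.script is needed; export traces the module itself
torch.onnx.export(model, dummy_input, "model.onnx")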
st181584 | thanks for your replay,
how can i exported model to torch script and than load it back and export to onnx?
to save as torch script i am using the following code:
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, dummy_input)
traced.save("/app_data/test_torch_script/torch_script_test.zip")
Can I change the above code in order to use its output for exporting to ONNX? |
st181585 | Correct me if this isn’t a JIT problem but I would like to see the resulting output after each node in a graph (traced graph specifically). Is there any way to do this easily? My current approach (which would only look at layers) is to modify the model to save each layer’s output and return that in the forward function but this isn’t ideal as I would like to look at each node and ideally not modify the model. |
st181586 | There is no way to do this today, we've had this issue open for a while which describes what you want. Until we implement something like that, storing the results manually is the only way to go. A cleaner way than adding a giant return to your forward may be to store intermediate results as attributes on the module, something like
from typing import Optional

class X(nn.Module):
    layer1: Optional[torch.Tensor]
    layer2: Optional[torch.Tensor]

    def __init__(self):
        super().__init__()
        self.layer1 = None
        self.layer2 = None
        self.fc1 = nn.Linear(10, 10)

    def forward(self, x):
        layer1 = self.fc1(x)
        layer2 = self.fc1(layer1)
        # stash the intermediate results so they can be read after the call
        self.layer1 = layer1
        self.layer2 = layer2
        return layer2

torch.jit.script(X())
torchvision has a similar problem, they use this class 3 as a workaround. |
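A short usage sketch of that workaround (using the X module above): the stashed attributes can be read back from the scripted module after a forward call.
m = torch.jit.script(X())
out = m(torch.rand(2, 10))
print(m.layer1)  # intermediate activation after the first fc1 call
print(m.layer2)  # the same tensor that was returned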
st181587 | You could try to use tensor2trt 21 to directly use TensorRT with the Python API. However, as far as I know, the supported methods might be limited, so you would have to check, if your current model uses only supported layers.
A bit unrelated to your question, but might also be interesting: you could have a look ar this blogpost 15 to see how to deploy Tacotron2 and Waveglow to TensorRT7. |
st181588 | Hi,
recently I found that the JIT does not create DifferentiableGraph nodes corresponding to Conv and BN operations.
Neither op is registered in symbolic_script.h, and they are considered "non-differentiable" in the sense that their gradients are always computed by the Autograd engine, which loses the opportunity to build bigger DifferentiableGraphs.
What's the reason behind such non-optimal behavior of the JIT? |
st181589 | Using a traced model after JIT compilation to do a forward pass, torch.clamp() seems to set requires_grad to True in the forward. In normal mode it is fine, but when I use the traced model this happens. To confirm that torch.clamp() is introducing this, I print x.requires_grad before and after the clamp operation.
This bug seems weird. Any fix? |
st181590 | My model is an object detection model and it contains an interpolate layer. The model converts to a quantized model perfectly, but when I run torch.jit.script it gives the issues below.
I tried changing torch and torchvision to the currently available nightly versions, but it still gives issues.
Error with the stable versions of torch and torchvision:
Arguments for call are not valid.
The following variants are available:
aten::__interpolate(Tensor input, int? size=None, float[]? scale_factor=None, str mode='\156\145\141\162\145\163\164', bool? align_corners=None) -> (Tensor):
Expected a value of type 'Optional[List[float]]' for argument 'scale_factor' but instead found type 'int'.
aten::__interpolate(Tensor input, int[]? size=None, float[]? scale_factor=None, str mode='\156\145\141\162\145\163\164', bool? align_corners=None) -> (Tensor):
Expected a value of type 'Optional[List[float]]' for argument 'scale_factor' but instead found type 'int'.
aten::__interpolate(Tensor input, int? size=None, float? scale_factor=None, str mode='\156\145\141\162\145\163\164', bool? align_corners=None) -> (Tensor):
Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'.
aten::__interpolate(Tensor input, int[]? size=None, float? scale_factor=None, str mode='\156\145\141\162\145\163\164', bool? align_corners=None) -> (Tensor):
Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'.
Error after moving to the nightly versions of torch and torchvision:
Arguments for call are not valid.
The following variants are available:
aten::__interpolate(Tensor input, int? size=None, float[]? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None) -> (Tensor):
Expected a value of type 'Optional[List[float]]' for argument 'scale_factor' but instead found type 'int'.
aten::__interpolate(Tensor input, int[]? size=None, float[]? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None) -> (Tensor):
Expected a value of type 'Optional[List[float]]' for argument 'scale_factor' but instead found type 'int'.
aten::__interpolate(Tensor input, int? size=None, float? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None) -> (Tensor):
Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'.
aten::__interpolate(Tensor input, int[]? size=None, float? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None) -> (Tensor):
Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'. |
st181591 | Solved by driazati in post #2
It’s hard to tell without more context, are you able to show the code that is causing the error so we can reproduce it on our end?
For some background, TorchScript does not do coercion from int to float like Python does, so when you’re calling interpolate you might have to do something like float(m… |
st181592 | It’s hard to tell without more context, are you able to show the code that is causing the error so we can reproduce it on our end?
For some background, TorchScript does not do coercion from int to float like Python does, so when you’re calling interpolate you might have to do something like float(my_scale_factor) where my_scale_factor is an int. |
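For example, a small sketch of that fix (x and my_scale_factor are placeholders for whatever the model uses):
import torch.nn.functional as F

my_scale_factor = 2  # an int like this is rejected by the overloads listed in the error
out = F.interpolate(x, scale_factor=float(my_scale_factor), mode="nearest")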
st181593 | Are there any JIT performance measurements? Does it make a model any faster, or is the only benefit of involving the JIT the ability to save a model and perform inference in any environment other than Python? |
st181594 | Yes, we do monitor the performance of certain bits. For example, the recent PyTorch blog post on RNN speedups discusses benchmarks we've been monitoring quite closely and continue to work against. ResNet performance is also regularly checked.
That said, whether any given model sees significant speedups depends.
I always give the ballpark figure of a 10% speedup for moving from Python to C++ - I got this number from a couple of specific models, e.g. when you do a "1-1" translation into C++ of the LLTM model used in the C++ extension tutorial. Your model will see different numbers. A similar speedup is probably there for the JIT.
Where the JIT really gets large speedups is when one of the optimizations can fully come into play. E.g. if you have chains of elementwise operations, they will be fused into a single kernel. As those are typically memory-bound, fusing two elementwise ops will be ~2x as fast as doing them separately.
Best regards
Thomas |
st181595 | I traced the BERT model from the HuggingFace PyTorchTransformers library and am getting the following results for 10 iterations.
a) Using Python runtime for running the forward: 979292 µs
import time
model = torch.jit.load('models_backup/2_2.pt')
x = torch.randint(2000, (1, 14), dtype=torch.long, device='cpu')
start = time.time()
for i in range(10):
model(x)
end = time.time()
print((end - start)*1000000, "µs")
b) Using the C++ runtime for running the forward: 3333758 µs, which is almost 3x the Python time
torch::Tensor x = torch::randint(index_max, {1, inputsize}, torch::dtype(torch::kInt64).device(torch::kCPU));
input.push_back(x);
#endif
// Execute the model and turn its output into a tensor.
auto outputs = module->forward(input).toTuple();
auto start = chrono::steady_clock::now();
for (int16_t i = 0; i<10; ++i)
{
outputs = module->forward(input).toTuple();
}
auto end = chrono::steady_clock::now();
cout << "Elapsed time in microseconds : "
<< chrono::duration_cast<chrono::microseconds>(end - start).count()
<< " µs" << endl;
@tom, any suggestions on what I am missing? |
st181596 | You are not even doing the comparison I had in mind. - If the C++/uses the JIT, you compare JIT called from Python vs JIT called from C++, and that should really have the same speed modulo constant overhead (which is not 6s).
Are you using the same inputs, libtorch, environment,…?
Best regards
Thomas |
st181597 | Hi, Tom
Which layers can be seen as chains of elementwise ops? Any detailed benchmarks? Thank you very much in advance.
Best regards,
Edward |
st181598 | I teach that in my PyTorch internals training, if you’re near Munich and want to book a seat…
But the theoretical answer is any sequence of elementwise ops, and the practical answer is anything that you see merged into fusion groups in myfn.graph_for(*my_inputs). (Fusion is only done on the GPU by default.)
In addition to the blog post linked above, there is my blog post on optimizing LSTM backwards and an old talk from me using this on a simple example, IoU in detail.
Obviously a lot more is to be had from extending what can be fused. TorchTVM is a (highly experimental) approach to hook into TVM, which is an (also experimental, in my experience) framework that can also optimize reductions, which the JIT cannot.
Personally, I think a lot more could be had, but I’m not sure who makes that a top priority at the moment.
Best regards
Thomas |
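As an illustration, a small sketch of a chain of elementwise ops and how to look for fusion groups (fusion only happens on the GPU by default, so this assumes a CUDA device is available):
import torch

@torch.jit.script
def pointwise_chain(x, y):
    # a chain of elementwise ops, a candidate for a single fused kernel
    return (x * y + 1).sigmoid() * x

if torch.cuda.is_available():
    a = torch.randn(1024, device="cuda")
    b = torch.randn(1024, device="cuda")
    pointwise_chain(a, b)  # warm-up runs so the executor can optimize
    pointwise_chain(a, b)
    print(pointwise_chain.graph_for(a, b))  # look for prim::FusionGroup nodes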
st181599 | Hello, I am testing out different types in PyTorch and noticed that when calling torch.jit.trace with a float16/half input, I get a runtime error (RuntimeError: "add_cpu/sub_cpu" not implemented for 'Half'). Does tracing not work for float16? I believe I also tested with int32 inputs, which work. |