st181600 | I think this might be more about operations that PyTorch supports on GPU than the types.
Does the same code run in plain PyTorch?
Best regards
Thomas |
st181601 | You are right in that this doesn’t seem to work for eager mode (guessing this is plain PyTorch) as just calling model(input) causes the same runtime error. But I am not running on a GPU right now (just a macbook). I guess I can probably change the category and rename the question. I guess Half is just not supported for CPU? |
st181602 | I’m trying to code up a module from jit.ScriptModule that looks like this:
class Model(jit.ScriptModule):
    def __init__(self, size):
        super().__init__()
        self.rnn = nn.GRUCell(size, size)

    @jit.script_method
    def forward(self, hidden, inputs):
        something = [torch.empty(0)] * 10
        return self.rnn(inputs, hidden)
however, it triggers the following errors:
RuntimeError:
arguments for call are not valid:
for operator aten::mul(Tensor self, Tensor other) -> Tensor:
expected a value of type Tensor for argument ‘self’ but found Tensor[]
I found the reason is that list comprehension is not available in jit, is there any fix? |
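A minimal workaround sketch, assuming the goal is just a pre-sized List[Tensor]: build the list with an explicit loop and a type annotation instead of multiplying a list literal.

import torch
from typing import List

@torch.jit.script
def make_placeholders(n: int) -> List[torch.Tensor]:
    # hypothetical helper, not from the original post
    something: List[torch.Tensor] = []
    for _ in range(n):
        something.append(torch.empty([0]))
    return something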
st181603 | Hi, I just wanted to ask if it’s possible to compute gradients inside a traced (torch.jit.trace) model.
We want to use torchscript to manually train models in mobile devices by computing gradients and recomputing new parameters. I tried to create simple model like this.
class ReturnGradientsModel(torch.nn.Module):
    def forward(self, input, w, b):
        result = input * w + b
        L = (10 - result).sum()
        L.backward()
        w = w - w.grad * 1e-2
        b = b - b.grad * 1e-2
        return w, b

torch.jit.trace(ReturnGradientsModel(), (torch.rand(2,2, requires_grad=True),
                                         torch.rand(2,2, requires_grad=True),
                                         torch.rand(2,2, requires_grad=True)))
but it just returns the error
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
even though I explicitly passed tensors with grad enabled.
I found this issue : https://github.com/pytorch/pytorch/issues/15644 21 . But it’s for onnx, not sure if it applies to torch.jit.trace as well.
I’m grateful for any help. |
st181604 | Hello, I am wondering if it is possible to change the last layer (or more) of a loaded torchscript model?
This would be useful for changing the number of categories a torchscript model could predict after training again.
Right now I get an odd error when trying to overwrite the last layer manually.
>> model = torch.jit.load("model_cpu.pth")
>> model.last_layer
RecursiveScriptModule(original_name=Linear)
>> new_last_layer = torch.jit.script(torch.nn.Linear(a, b))
>> new_last_layer
RecursiveScriptModule(original_name=Linear)
>> model.last_layer = new_last_layer
>> a.forward(torch.rand([x, y, z ... ]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: isTensor() INTERNAL ASSERT FAILED at ../aten/src/ATen/core/ivalue_inl.h:86, please report a bug to PyTorch. Expected Tensor but got Bool
The above operation failed in interpreter.
Traceback (most recent call last):
/Users/****/venv/lib/python3.7/site-packages/torch/nn/functional.py(1370): linear
/Users/****/venv/lib/python3.7/site-packages/torch/nn/modules/linear.py(87): forward
/Users/****/venv/lib/python3.7/site-packages/torch/nn/modules/module.py(525): _slow_forward
/Users/****/venv/lib/python3.7/site-packages/torch/nn/modules/module.py(539): __call__
/Users/****/*****/ops/models.py(277): forward
/Users/****/venv/lib/python3.7/site-packages/torch/nn/modules/module.py(525): _slow_forward
/Users/****/venv/lib/python3.7/site-packages/torch/nn/modules/module.py(539): __call__
/Users/****/venv/lib/python3.7/site-packages/torch/jit/__init__.py(997): trace_module
/Users/****/venv/lib/python3.7/site-packages/torch/jit/__init__.py(858): trace
convert.py(91): main
convert.py(110): <module>
Serialized File "code/__torch__/ops/models.py", line 1052
input153 = torch.flatten(x31, 1, -1)
input154 = torch.dropout(input153, 0.5, False)
base_out = torch.addmm(bias52, input154, torch.t(weight105), beta=1, alpha=1)
~~~~~~~ <--- HERE
_512 = ops.prim.NumToTensor(torch.size(base_out, 1))
input_tensor = torch.view(base_out, [-1, 8, int(_512)]) |
st181605 | @jhhurwitz I don’t think there’s an easy way of doing that currently. See this issue for more context: https://github.com/pytorch/pytorch/issues/21064 129 |
st181606 | @eellison Is it on the roadmap to add an easy way to do this? Looks like the issue has stalled out a little bit.
In the meantime, even an example of the hard way to do this would be useful if you happened to know of any.
Thanks! |
st181607 | I am taking in a trace of a model and would like to check what type the tensor is supposed to be. Is there a way to grab this info from torch._C.Node (ie: get float, double, from https://pytorch.org/docs/stable/tensors.html 24)? Or in general any way to grab that info starting from the trace? |
st181608 | You can grab it in this way:
def some_fn(x):
    return x + x

some_input = torch.randn(2, 2)
my_traced_function = torch.jit.trace(some_fn, [some_input])

for input in my_traced_function.graph.inputs():
    traced_tensor_type = input.type()
    # Prints "Float"
    print(traced_tensor_type.scalarType())

# However, note that the interpreter will still run with differently typed
# tensors
my_traced_function(torch.ones(2, 2, dtype=torch.long)) |
st181609 | I see that this is getting it for inputs but what about intermediate nodes in the graph? It seems inputs are of type torch._C.Value which has a decorated type field but torch._C.Node doesn’t. |
st181610 | Node represents the whole operation (some inputs, an operation, and some outputs). The inputs and outputs are represented as Values, you can get them like so:
def some_fn(x):
    return x + x

some_input = torch.randn(2, 2)
my_traced_function = torch.jit.trace(some_fn, [some_input])
print(my_traced_function.graph)

for node in my_traced_function.graph.nodes():
    for node_input in node.inputs():
        if isinstance(node_input.type(), torch._C.TensorType):
            print(node_input.type().scalarType())
You can read more about the internal representations here 33. |
st181611 | Thanks for the link, really helpful, somehow didn’t find it in my initial search. And got it, this makes a lot of sense now. |
st181612 | Hi,
I am just wondering: is it possible to plug in extra logic between layers of a (serialized) neural network model during serving?
Any help or idea would be greatly appreciated.
Best, |
st181613 | Could you explain your use case a bit more?
I.e. what does serving mean in the context and what would you like to add?
Are you referring to some deployment software / platform or just a plain PyTorch model, you would like to change? |
st181614 | Thanks for the reply.
Yeah, sure. I am referring to the prediction process once the serialized model (say, a DNN model) is loaded.
I am not sure whether PyTorch serving supports injecting extra logic between each of the neural network layers (during the prediction process). I suspect that JIT does not provide such an API. Does it require fundamental changes to the source code to enable such a feature?
Best, |
st181615 | I converted a fastai model to TorchScript, then ran the same converted model on both Android and PC (MacBook).
The results are different.
Code convert:
import torch
example = torch.rand(1, 3, 64, 64)
traced_script_module = torch.jit.trace(torch_model, example)
traced_script_module.save("model.pt")
Run on PC:
img = cv2.imread('aaanx_7.jpg')
img = img/255.0
img = np.moveaxis(img, -1, 0)
img = np.array(np.expand_dims(img, axis=0))
traced_script_module.forward(torch.from_numpy(img).float())
Run on Android:
bitmap = BitmapFactory.decodeStream(getAssets().open("aaanx_7.jpg"));
Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(bitmap,
new float[]{0.0f, 0.0f, 0.0f}, new float[]{1.0f, 1.0f, 1.0f});
Tensor outputTensor = module.forward(IValue.from(inputTensor)).toTensor();
Tensor output is different.
Please help me understand that.
Thanks |
st181616 | Is the output significantly different or does it just have small numerical differences? If it is significantly different please open an issue. |
st181617 | It is significantly different.
I trained resnet50 to predict extended MNIST, and the result is a different number for all images. |
st181618 | I am checking the input image and am confused by TensorImageUtils.bitmapToFloat32Tensor.
It is little-endian and it seems to not be the same as on the PC.
I am not sure. Do you have any suggestions?
Thanks |
st181619 | I created a model using fastai and saved it as TorchScript.
When I run the TorchScript model, it reports a wrong input.
My model: resnet50
The error:
Expected 4-dimensional input for 4-dimensional weight 64 3 7 7, but got 3-dimensional input of size [39, 28, 3] instead
code:
import torch
example = torch.rand(1, 3, 64, 64)
traced_script_module = torch.jit.trace(torch_model, example)
traced_script_module.save("model.pt")
img = cv2.imread('image.jpg')
traced_script_module.forward(torch.from_numpy(img))
Can anybody help me? |
st181620 | Is the result of torch.from_numpy(img) the same dimensions as your example inputs? When using tracing, you must make sure that the example inputs given are representative. |
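For illustration, a minimal preprocessing sketch (file name and size are placeholders) that brings a cv2 image into the same 1 x 3 x 64 x 64 float layout as the example used for tracing:

import cv2
import numpy as np
import torch

img = cv2.imread('image.jpg')                                # H x W x 3, uint8
img = cv2.resize(img, (64, 64)).astype(np.float32) / 255.0   # match traced spatial size, scale to [0, 1]
img = np.moveaxis(img, -1, 0)                                # -> 3 x 64 x 64
batch = torch.from_numpy(img).unsqueeze(0)                   # -> 1 x 3 x 64 x 64
# output = traced_script_module(batch)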
st181621 | I'm recently interested in TorchScript, so I went to look at the custom LSTMs using TorchScript. In particular, I'm looking at this file. As a result, I have a few questions.
1. I understand that combining the weights of a linear transformation is for speeding up the code. But why compute it separately for inputs and hiddens, as it is here:
hx, cx = state
gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih +
         torch.mm(hx, self.weight_hh.t()) + self.bias_hh)
ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
Can someone explain the benefit in comparison to first concatenating the tensors and computing them together?
2. Following the previous question, according to my understanding, the code mentioned above is trying to apply a linear transformation separately to inputs and hiddens and add them up at the end. However, 2 biases are added together, as in:
gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih + torch.mm(hx, self.weight_hh.t()) + self.bias_hh)
I think the 2 biases can be replaced by a single one with the exact same effect, so I want to ask what's the difference between adding 1 bias and 2 biases.
3. If combining several operations can speed up the code, then instead of using:
ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
ingate = torch.sigmoid(ingate)
forgetgate = torch.sigmoid(forgetgate)
cellgate = torch.tanh(cellgate)
outgate = torch.sigmoid(outgate)
, why not just:
gates[:, 0:3*self.hidden_size].sigmoid_()  # Doesn't have to be in-place
ingate, forgetgate, outgate, cellgate = gates.chunk(4, 1)
cellgate = cellgate.tanh()
Aren't the operations in this way more "combined together", or is it that combining operations doesn't affect element-wise operations much?
Thanks for your time |
st181622 | Example:
@jit.script
class Foo:
    ...

@jit.script
class Quux:
    ...

@jit.script
class Bar:
    def __init__(self, cell):
        # cell may be either a Foo or a Quux creator
        self.cell1 = cell()
        self.cell2 = cell()

    def forward(self, x):
        self.sub(x, self.cell1)
        self.sub(x, self.cell2)

    def sub(self, x: Tensor, cell: ?????):
        ... |
st181623 | Very interesting, I have a model like this:
class YOLOv3(nn.Module):
    def __init__(self, num_classes=80, ignore_thre=0.7, label_smooth=False, rfb=False, vis=False, asff=False):
        super(YOLOv3, self).__init__()
        self.module_list = build_yolov3_modules(
            num_classes, ignore_thre, label_smooth, rfb)
        self.level_0_fusion = ASFF(level=0, rfb=rfb, vis=vis)
        self.level_0_header = YOLOv3Head(anch_mask=[6, 7, 8], n_classes=num_classes, stride=32, in_ch=1024,
                                         ignore_thre=ignore_thre, label_smooth=label_smooth, rfb=rfb)
        self.level_1_fusion = ASFF(level=1, rfb=rfb, vis=vis)
        self.level_1_header = YOLOv3Head(anch_mask=[3, 4, 5], n_classes=num_classes, stride=16, in_ch=512,
                                         ignore_thre=ignore_thre, label_smooth=label_smooth, rfb=rfb)
        self.level_2_fusion = ASFF(level=2, rfb=rfb, vis=vis)
        self.level_2_header = YOLOv3Head(anch_mask=[0, 1, 2], n_classes=num_classes, stride=8, in_ch=256,
                                         ignore_thre=ignore_thre, label_smooth=label_smooth, rfb=rfb)

    def forward(self, x, targets=None, epoch=0):
        output = []
        route_layers = []
        for i, module in enumerate(self.module_list):
            x = module(x)
            if i in [6, 8, 17, 24, 32]:
                route_layers.append(x)
            if i == 19:
                x = torch.cat((x, route_layers[1]), 1)
            if i == 26:
                x = torch.cat((x, route_layers[0]), 1)
        print(len(route_layers))
        fused_0 = self.level_0_fusion(route_layers[2], route_layers[3], route_layers[4])
        x = self.level_0_header(fused_0)
        output.append(x)
        fused_1 = self.level_1_fusion(route_layers[2], route_layers[3], route_layers[4])
        x = self.level_1_header(fused_1)
        output.append(x)
        fused_2 = self.level_2_fusion(route_layers[2], route_layers[3], route_layers[4])
        x = self.level_2_header(fused_2)
        output.append(x)
        return torch.cat(output, 1)
This model when using torch.onnx.export it generate a ONNX model contains only an input and a Constant, log like this:
%level_2_header.Feature_adaption.dconv.weight : Float(256, 256, 3, 3),
%level_2_header.Feature_adaption.dconv.bias : Float(256),
%level_2_header.conv.weight : Float(340, 256, 1, 1),
%level_2_header.conv.bias : Float(340)):
%559 : Float(1, 52500, 85) = onnx::Constant[value=<Tensor>]()
return (%559)
The interesting thing is that if I comment out these lines:
def forward(self, x, targets=None, epoch=0):
    output = []
    route_layers = []
    for i, module in enumerate(self.module_list):
        x = module(x)
        if i in [6, 8, 17, 24, 32]:
            route_layers.append(x)
        if i == 19:
            x = torch.cat((x, route_layers[1]), 1)
        if i == 26:
            x = torch.cat((x, route_layers[0]), 1)
    # print(len(route_layers))
    # fused_0 = self.level_0_fusion(route_layers[2], route_layers[3], route_layers[4])
    # x = self.level_0_header(fused_0)
    # output.append(x)
    # fused_1 = self.level_1_fusion(route_layers[2], route_layers[3], route_layers[4])
    # x = self.level_1_header(fused_1)
    # output.append(x)
    # fused_2 = self.level_2_fusion(route_layers[2], route_layers[3], route_layers[4])
    # x = self.level_2_header(fused_2)
    # output.append(x)
    # return torch.cat(output, 1)
    return x
it can then trace a normal ONNX model.
Can anybody help me debug this weird issue? |
st181624 | Could anybody help me debug it?
One bitcoin for whoever finds the root cause. |
st181625 | Actually, this model comes from this repo: https://github.com/ruinmessi/ASFF
I am adding ONNX export for it. |
st181626 | I got this error when exporting model to ONNX:
/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/utils.py:651: UserWarning: ONNX export failed on primitive operator Uninitialized; please report a bug
warnings.warn("ONNX export failed on primitive operator {}; please report a bug".format(op_name))
Traceback (most recent call last):
File "onnx_demo.py", line 280, in <module>
do_constant_folding=True,input_names=["input"],output_names=["boxes"]
File "/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/__init__.py", line 143, in export
strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/utils.py", line 66, in export
dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/utils.py", line 382, in _export
fixed_batch_size=fixed_batch_size)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/utils.py", line 262, in _model_to_graph
fixed_batch_size=fixed_batch_size)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/utils.py", line 132, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/__init__.py", line 174, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/utils.py", line 652, in _run_symbolic_function
symbolic_fn = sym_registry.get_registered_op(symbolic_name, '', opset_version)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/onnx/symbolic_registry.py", line 91, in get_registered_op
return _registry[(domain, version)][opname]
KeyError: 'prim_Uninitialized'
This error could come from these two lines:
if xy_text.size(0) == 0:
    return torch.tensor(0)
What's the function of prim_Uninitialized? How can I define it? Looking forward to your reply. |
st181627 | Solved by eellison in post #2 |
st181628 | prim::Uninitialized is used to represent values which the compiler can prove will never be used. As you correctly guessed, it’s introduced by the early return statement there. You can work around this by removing early returns, breaks, and continues from your scripted function.
Generally, you should expect ONNX to work with tracing and with anything scripted you will likely run into some issues. |
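A minimal sketch of that workaround, assuming the function returns a tensor: keep a single return statement so no placeholder is needed for the early-exit branch.

import torch

@torch.jit.script
def pick_scores(xy_text: torch.Tensor) -> torch.Tensor:
    # single exit point instead of an early return
    if xy_text.size(0) == 0:
        result = torch.tensor(0)
    else:
        result = xy_text[:, 0]
    return result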
st181629 | I got the following error while converting model to torchscript.
Traceback (most recent call last):
File "ocr.py", line 206, in <module>
ts = torch.jit.script(ocr)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1203, in script
return torch.jit.torch.jit._recursive.recursive_script(obj)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 173, in recursive_script
return copy_to_script_module(mod, overload_stubs + stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 95, in copy_to_script_module
torch.jit._create_methods_from_stubs(script_module, stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
RuntimeError:
builtin cannot be used as a value:
at ocr.py:80:23
img_tensor = resize_img(img_tensor)
# self.show(img_tensor)
img_tensor_ = (img_tensor - 0.5) / 0.5
img_tensor_ = img_tensor_.permute(2,0,1).unsqueeze(0) # --> n x c x h x w
boxes = self.east(img_tensor_)
new_boxes = []
pass_list:List[int] = []
for i in range(boxes.shape[0]):
~~~~~~~~~~~ <--- HERE
for j in range(i+1,boxes.shape[0]):
if i in pass_list or j in pass_list:
continue
cx1,cy1,angle1 = box_analyse(boxes[i])
cx2,cy2,angle2 = box_analyse(boxes[j])
if torch.atan(torch.abs(torch.abs(cy2 - cy1) / torch.abs(cx2 - cx1))) - (90 - (angle1 + angle2) / 2) * 3.141592653 / 180 < 0.1:
b = torch.cat([boxes[i][:8].reshape(4, 2), boxes[j][:8].reshape(4, 2)], dim=0)
new_box = torch.ops.my_ops.min_rect(b)
new_boxes.append(new_box.reshape(-1))
The variable boxes is a tensor returned from the east model; is tensor.shape a builtin function in TorchScript?
By the way, how can I debug in TorchScript?
I have met many problems in TorchScript, and it would be great if there were a way to debug it. |
st181630 | Solved by dalalaa in post #7 |
st181631 | Firstly, you should use boxes.size(). Secondly, https://pytorch.org/docs/stable/jit.html#disable-jit-for-debugging 22 will give you some help and my tip is using @torch.jit.ignore to check the functionality. |
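A small sketch of that tip (module and method names are made up): an @torch.jit.ignore method keeps running as plain Python inside the scripted module, which makes it easy to print shapes and types while narrowing down a scripting error. Note that a module containing ignored calls cannot be saved for deployment.

import torch

class Probe(torch.nn.Module):
    @torch.jit.ignore
    def debug(self, x) -> None:
        # executes in Python even when the module is scripted
        print(type(x), x.shape)

    def forward(self, x):
        self.debug(x)
        return x.size(0)

scripted = torch.jit.script(Probe())
scripted(torch.randn(3, 4))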
st181632 | Thank you, but boxes.size(0) and boxes.size()[0] don't work.
I met the same error using boxes.size(0) and boxes.size()[0]:
Arguments for call are not valid.
The following operator variants are available:
aten::size.int(Tensor self, int dim) -> (int):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'Optional[Tensor]'.
aten::size(Tensor self) -> (int[]):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'Optional[Tensor]'.
The original call is:
at ocr.py:80:23
img_tensor = resize_img(img_tensor)
# self.show(img_tensor)
img_tensor_ = (img_tensor - 0.5) / 0.5
img_tensor_ = img_tensor_.permute(2,0,1).unsqueeze(0) # --> n x c x h x w
boxes = self.east(img_tensor_)
# self.show(img_tensor,boxes)
new_boxes = []
pass_list:List[int] = []
for i in range(boxes.size(0)):
~~~~~~~~~~ <--- HERE
And my code works well with environment variable PYTORCH_JIT=0.
What’s the difference between boxes.size()/boxes.shape in python and torchscript? |
st181633 | torch.Tensor has no attribute named shape while numpy has. I thought boxes is a tensor, so I suggest you to use size(). If it is a numpy array, torchscript doesn’t support it. |
st181634 | I’m sure boxes is a tensor, its value is as following:
tensor([[230.1149, 237.2053, 456.3304, 232.7983, 456.9069, 262.5309, 230.6913,
266.9381, 202.8954],
[331.7273, 324.1699, 679.4318, 317.2317, 680.0470, 348.1467, 332.3426,
355.0850, 88.9529],
[231.7705, 204.8157, 500.3940, 200.7935, 500.8643, 232.1562, 232.2409,
236.1783, 60.0000],
[231.7874, 157.3486, 441.9472, 153.3222, 442.4890, 181.6230, 232.3291,
185.6494, 47.9981],
[230.4137, 63.0557, 326.3553, 59.4965, 327.4994, 90.2845, 231.5578,
93.8438, 40.9682],
[152.8475, 326.7796, 311.2224, 324.2658, 311.6898, 353.6737, 153.3150,
356.1875, 28.9999],
[154.1073, 113.9810, 260.5620, 112.2871, 260.9745, 138.2966, 154.5198,
139.9904, 27.9924],
[293.3543, 111.8462, 390.4526, 108.3323, 391.4826, 136.8042, 294.3844,
140.3180, 13.9621],
[152.4046, 159.5459, 219.9668, 157.7393, 220.6146, 181.9012, 153.0524,
183.7079, 12.9934],
[153.1970, 206.8427, 218.1942, 205.0301, 218.8990, 230.2958, 153.9019,
232.1084, 7.0000],
[153.0565, 68.3798, 217.7016, 67.7295, 217.9489, 92.0420, 153.3038,
92.6922, 6.9623]], grad_fn=<IndexBackward>)
Its type and shape is:
torch.FloatTensor torch.Size([11, 9]) |
st181635 | I found boxes is not a Tensor, it's an Optional[Tensor]. It's inferred as Optional[Tensor] because of the control flow in the east model. The forward method of the model is like this:
def forward(self, x):
    if condition1:
        return None
    else:
        return boxes
It's inferred as Optional[Tensor] because the return value of forward can be None or a tensor.
And the size() method is not supported on Optional[Tensor]. |
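A minimal sketch of the usual fix, optional type refinement: check for None explicitly, after which TorchScript treats the value as a plain Tensor and size() works.

import torch
from typing import Optional

@torch.jit.script
def first_dim(boxes: Optional[torch.Tensor]) -> int:
    if boxes is None:
        return 0
    return boxes.size(0)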
st181636 | Specifically, when writing TC-like loops in JIT-ed functions.
My issues are:
I haven’t been able to get good performance out of jit.script. My use case might be a little too dynamic?
When JIT-ing, I have no control on any kinds of optimizations. I can’t nudge the jitter to fuse a particular sequence of operations, for example. So I can’t make use of the jitter to eliminate OOM errors.
Trying to do it with loops is generally slower than using existing maps and reductions with (memory-hungry) intermediates.
tensor_comprehensions is not available on Windows, as far as I can see |
st181637 | I got this error while calling tensor.size(0) in my project:
RuntimeError:
Arguments for call are not valid.
The following operator variants are available:
aten::size.int(Tensor self, int dim) -> (int):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'Optional[Tensor]'.
aten::size(Tensor self) -> (int[]):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'Optional[Tensor]'.
The original call is:
at ocr.py:78:23
img_tensor = resize_img(img_tensor)
# self.show(img_tensor)
img_tensor_ = (img_tensor - 0.5) / 0.5
img_tensor_ = img_tensor_.permute(2,0,1).unsqueeze(0) # --> n x c x h x w
boxes = self.east(img_tensor_)
# self.show(img_tensor,boxes)
new_boxes = []
pass_list:List[int] = []
for i in range(boxes.size(0)):
~~~~~~~~~~ <--- HERE
How can I get the first dimension size of a tensor in torchscript? |
st181638 | Solved by huoge in post #2 |
st181639 | https://pytorch.org/docs/stable/jit.html#optional-type-refinement 530
here you can get information about Optional |
st181640 | A model that inherits from torch.nn.Module can also be converted to TorchScript, so in which cases should we use torch.jit.ScriptModule instead of torch.nn.Module? |
st181641 | Solved by driazati in post #2 |
st181642 | You shouldn’t inherit from torch.jit.ScriptModule, that was an old API that we replaced in v.1.2.0
See here 83 for details |
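For illustration, the replacement pattern in one small sketch: write a plain nn.Module and compile an instance with torch.jit.script, instead of subclassing torch.jit.ScriptModule and decorating methods with @torch.jit.script_method.

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

scripted = torch.jit.script(MyModel())  # recursively compiles submodules
scripted.save("my_model.pt")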
st181643 | I am trying to convert the decoder using torch.jit.script but I am facing some error as below.
My module
import numpy as np
import torch
import torch.nn as nn
from collections import OrderedDict
from layers import *
class DepthDecoder(nn.Module):
    def __init__(self, num_ch_enc, scales=range(4), num_output_channels=1, use_skips=True):
        super(DepthDecoder, self).__init__()
        self.num_output_channels = num_output_channels
        self.use_skips = use_skips
        self.upsample_mode = 'nearest'
        self.scales = scales
        self.num_ch_enc = num_ch_enc
        self.num_ch_dec = np.array([16, 32, 64, 128, 256])
        # decoder
        self.convs = OrderedDict()
        for i in range(4, -1, -1):
            # upconv_0
            num_ch_in = self.num_ch_enc[-1] if i == 4 else self.num_ch_dec[i + 1]
            num_ch_out = self.num_ch_dec[i]
            self.convs[("upconv", i, 0)] = ConvBlock(num_ch_in, num_ch_out)
            # upconv_1
            num_ch_in = self.num_ch_dec[i]
            if self.use_skips and i > 0:
                num_ch_in += self.num_ch_enc[i - 1]
            num_ch_out = self.num_ch_dec[i]
            self.convs[("upconv", i, 1)] = ConvBlock(num_ch_in, num_ch_out)
        for s in self.scales:
            self.convs[("dispconv", s)] = Conv3x3(self.num_ch_dec[s], self.num_output_channels)
        self.decoder = nn.ModuleList(list(self.convs.values()))
        self.sigmoid = nn.Sigmoid()

    def forward(self, input_features):
        outputs = {}
        # decoder
        x = input_features[-1]
        for i in range(4, -1, -1):
            x = self.convs[("upconv", i, 0)](x)
            x = [upsample(x)]
            if self.use_skips and i > 0:
                x += [input_features[i - 1]]
            x = torch.cat(x, 1)
            x = self.convs[("upconv", i, 1)](x)
            if i in self.scales:
                outputs[("disp", i)] = self.sigmoid(self.convs[("dispconv", i)](x))
        return outputs

num_enc_channels = np.array([64, 64, 128, 256, 512])
depth_decoder = DepthDecoder(num_ch_enc=num_enc_channels, scales=range(4))
traced_script_module_decoder = torch.jit.script(depth_decoder)
traced_script_module_decoder.save('new-decoder.pt')
Error :
File "C:\Users\lib\site-packages\torch\jit\_recursive.py", line 259, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
RuntimeError:
Module 'DepthDecoder' has no attribute 'convs' (**This attribute exists on the Python module, but we failed to convert Python type: 'OrderedDict' to a TorchScript type**.):
File "C:\Users\networks\depth_decoder.py", line 55
x = input_features[-1]
for i in range(4, -1, -1):
x = self.convs[("upconv", i, 0)](x)
~~~~~~~~~~ <--- HERE
x = [upsample(x)]
if self.use_skips and i > 0: |
st181644 | You’ll have to make a couple changes that may be tricky. For dictionaries the only keys that are supported are str, int, and float, so using a tuple as the key won’t work in TorchScript. Since the range(4, -1, -1) you’re using to create self.convs is not going to change no matter how you initialize DepthDecoder, I’d recommend unrolling that loop and adding them as submodules to DepthDecoder. So something like:
self.upconv_0_0 = ConvBlock(...)
self.upconv_0_1 = ConvBlock(...)
Unfortunately TorchScript is pretty picky about nn.Modules inside of containers, so things like self.convs["upconv1"] currently aren’t supported (but this is something we’re working on fixing). |
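A toy sketch of that unrolling (layer sizes are made up and nn.Conv2d stands in for ConvBlock/Conv3x3): each block gets its own attribute name instead of a tuple key in an OrderedDict, and the output dict uses string keys, which TorchScript does support.

import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.upconv_1_0 = nn.Conv2d(16, 8, 3, padding=1)
        self.upconv_0_0 = nn.Conv2d(8, 4, 3, padding=1)
        self.dispconv_0 = nn.Conv2d(4, 1, 3, padding=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.upconv_1_0(x)
        x = self.upconv_0_0(x)
        # string keys instead of tuples like ("disp", 0)
        return {"disp_0": self.sigmoid(self.dispconv_0(x))}

scripted = torch.jit.script(TinyDecoder())
out = scripted(torch.randn(1, 16, 32, 32))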
st181645 | Hello everyone. I attempt to use
torch.jit.script on a Dict[str, List[float]], but it doesn't work.
Has anyone ever done any related work?
By the way, I built PyTorch from source and the torch version is 1.4.0a0+93db2b8.
I'd appreciate it if anybody can help me! Thanks in advance!
Here is the code:
import torch
from typing import Dict, List
class dictlist(torch.nn.Module):
    item: Dict[str, List[float]]
    hyps: List[Dict[str, List[float]]]

    def __init__(self):
        torch.nn.Module.__init__(self)
        self.item = {'score': [0.0], 'ys': [0, 1]}
        self.hyps = [self.item]

    def forward(self):
        new_item: Dict[str, List[float]] = {'score': [1.0], 'ys': [1, 2, 3]}
        self.hyps.append(new_item)

model = dictlist()
script_model = torch.jit.script(model)
and here is the log:
RuntimeError: values[i]->type()->isSubtypeOf(value_type) INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/ir.cpp:1527, please report a bug to PyTorch. (createDict at pytorch/torch/csrc/jit/ir.cpp:1527) |
st181646 | Solved by zdevito in post #3 |
st181647 | The problem here is that [1,2,3] is a list of integers not a list of floats. If it is changed to [1.0, 2.0, 3.0], it should work. I submitted https://github.com/pytorch/pytorch/pull/31375 19 to make the error message report correctly. |
st181648 | oh, thanks. I thought it would be converted automatically.
when I change it to a list of floats, it works. |
st181649 | Actually, I am also facing the same issue as above: I am not able to use a dictionary. I am using the following code to convert my decoder to TorchScript.
import numpy as np
import torch
import torch.nn as nn
from collections import OrderedDict
from layers import *
class DepthDecoder(nn.Module):
    def __init__(self, num_ch_enc, scales=range(4), num_output_channels=1, use_skips=True):
        super(DepthDecoder, self).__init__()
        self.num_output_channels = num_output_channels
        self.use_skips = use_skips
        self.upsample_mode = 'nearest'
        self.scales = scales
        self.num_ch_enc = num_ch_enc
        self.num_ch_dec = np.array([16, 32, 64, 128, 256])
        # decoder
        self.convs = OrderedDict()
        for i in range(4, -1, -1):
            # upconv_0
            num_ch_in = self.num_ch_enc[-1] if i == 4 else self.num_ch_dec[i + 1]
            num_ch_out = self.num_ch_dec[i]
            self.convs[("upconv", i, 0)] = ConvBlock(num_ch_in, num_ch_out)
            # upconv_1
            num_ch_in = self.num_ch_dec[i]
            if self.use_skips and i > 0:
                num_ch_in += self.num_ch_enc[i - 1]
            num_ch_out = self.num_ch_dec[i]
            self.convs[("upconv", i, 1)] = ConvBlock(num_ch_in, num_ch_out)
        for s in self.scales:
            self.convs[("dispconv", s)] = Conv3x3(self.num_ch_dec[s], self.num_output_channels)
        self.decoder = nn.ModuleList(list(self.convs.values()))
        self.sigmoid = nn.Sigmoid()

    def forward(self, input_features):
        outputs = {}
        # decoder
        x = input_features[-1]
        for i in range(4, -1, -1):
            x = self.convs[("upconv", i, 0)](x)
            x = [upsample(x)]
            if self.use_skips and i > 0:
                x += [input_features[i - 1]]
            x = torch.cat(x, 1)
            x = self.convs[("upconv", i, 1)](x)
            if i in self.scales:
                outputs[("disp", i)] = self.sigmoid(self.convs[("dispconv", i)](x))
        return outputs

num_enc_channels = np.array([64, 64, 128, 256, 512])
depth_decoder = DepthDecoder(num_ch_enc=num_enc_channels, scales=range(4))
traced_script_module_decoder = torch.jit.script(depth_decoder)
traced_script_module_decoder.save('new-decoder.pt')
Error :
File “C:\Users\lib\site-packages\torch\jit_recursive.py”, line 259, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
RuntimeError:
Module ‘DepthDecoder’ has no attribute ‘convs’ ( **This attribute exists on the Python module, but we failed to convert Python type: ‘OrderedDict’ to a TorchScript type** .):
File “C:\Users\networks\depth_decoder.py”, line 55
x = input_features[-1]
for i in range(4, -1, -1):
x = self.convs(“upconv”, i, 0)
~~~~~~~~~~ <— HERE
x = [upsample(x)]
if self.use_skips and i > 0:
More Specifically I am trying to convert this nn.Module using TorchScript
Link : https://github.com/nianticlabs/monodepth2/blob/master/networks/depth_decoder.py 32 |
st181650 | I’m quite new to torch.jit and Apex. I’m getting this error. What does it mean ?
RuntimeError: _register_state_dict_hook is not supported on ScriptModules
Here’s my code
model, optimizer = amp.initialize(model, optimizer, opt_level='O2',
                                  keep_batchnorm_fp32=False,
                                  loss_scale=config.TRAIN.FP16_LOSS_SCALE,
                                  min_loss_scale=128.0) |
st181651 | state_dict_hook is an internal implementation that allows some modules to edit their state_dict before they get returned. torch.jit doesn’t generally support any hooks, including state_dict hooks, so it is possible that Apex is installing these hooks on a jit module and then it is failing. However, I do not know the details of how Apex works, so I am not sure that is precisely what is happening. |
st181652 | Scripting is currently not supported in apex, but tracing should work.
We are currently working on upstreaming mixed-precision training, so that you won’t need to build apex soon. |
st181653 | class ResnetEncoder(nn.Module):
    """Pytorch module for a resnet encoder
    """
    def __init__(self, num_layers, pretrained, num_input_images=1):
        super(ResnetEncoder, self).__init__()
        self.num_ch_enc = np.array([64, 64, 128, 256, 512])
        resnets = {18: models.resnet18,
                   34: models.resnet34,
                   50: models.resnet50,
                   101: models.resnet101,
                   152: models.resnet152}
        if num_layers not in resnets:
            raise ValueError("{} is not a valid number of resnet layers".format(num_layers))
        if num_input_images > 1:
            self.encoder = resnet_multiimage_input(num_layers, pretrained, num_input_images)
        else:
            self.encoder = resnets[num_layers](pretrained)
        if num_layers > 34:
            self.num_ch_enc[1:] *= 4

    def forward(self, input_image):
        self.features = []
        x = (input_image - 0.45) / 0.225
        x = self.encoder.conv1(x)
        x = self.encoder.bn1(x)
        self.features.append(self.encoder.relu(x))
        self.features.append(self.encoder.layer1(self.encoder.maxpool(self.features[-1])))
        self.features.append(self.encoder.layer2(self.features[-1]))
        self.features.append(self.encoder.layer3(self.features[-1]))
        self.features.append(self.encoder.layer4(self.features[-1]))
        return self.features

encoder = ResnetEncoder(18, True)
example = torch.rand(1, 3, 640, 192)
traced_script_module_encoder = torch.jit.trace(encoder.__getattr__('encoder'), example)
traced_script_module.save('encoder_new.pt')
torch.jit.load('encoder_new.pt')
I have tried to convert the model via trace and loaded it back, but it returns features with different shapes. As the community also suggested, trace will not work here (tracing doesn't understand dynamic control flow, so sometimes it will "constant-ify" shapes in your model; try turning your model into a ScriptModule and using TorchScript).
But when converting via torch.jit.script I get the following error:
TypeError: module, class, method, function, traceback, frame, or code object was expected, got ResnetEncoder
while using below example :
encoder = ResnetEncoder(18, True )
traced_script_module_encoder = torch.jit.script(encoder)
traced_script_module_encoder.save('new-encoder.pt') |
st181654 | Solved by driazati in post #2 |
st181655 | You may be on an old version of torchvision or torch, can you make sure you’re on the latest of both? (or use the nightly if there are still errors)
Your code works fine for me after applying the following diff:
diff --git a/test.py b/test.py
index e305823abd..a96eb93829 100644
--- a/test.py
+++ b/test.py
@@ -208,21 +208,20 @@ class ResnetEncoder(nn.Module):
self.num_ch_enc[1:] *= 4
def forward(self, input_image):
- self.features = []
+ features = []
x = (input_image - 0.45) / 0.225
x = self.encoder.conv1(x)
x = self.encoder.bn1(x)
- self.features.append(self.encoder.relu(x))
- self.features.append(self.encoder.layer1(self.encoder.maxpool(self.features[-1])))
- self.features.append(self.encoder.layer2(self.features[-1]))
- self.features.append(self.encoder.layer3(self.features[-1]))
- self.features.append(self.encoder.layer4(self.features[-1]))
+ features.append(self.encoder.relu(x))
+ features.append(self.encoder.layer1(self.encoder.maxpool(features[-1])))
+ features.append(self.encoder.layer2(features[-1]))
+ features.append(self.encoder.layer3(features[-1]))
+ features.append(self.encoder.layer4(features[-1]))
- return self.features
+ return features
encoder = ResnetEncoder(18, True )
-example = torch.rand(1, 3, 640, 192)
-traced_script_module_encoder = torch.jit.trace(encoder.__getattr__('encoder'), example )
+traced_script_module = torch.jit.script(encoder)
traced_script_module.save('encoder_new.pt')
torch.jit.load('encoder_new.pt')
This is necessary since in TorchScript you can’t add new attributes to self outside of __init__ (you can only mutate existing attributes), but here it looks like features wasn’t being used outside of forward anyways. |
st181656 | Thank you @driazati for the quick reply. Actually, I am still facing the same issue after performing the changes you suggested (moving features out of self, since in TorchScript you can't add new attributes to self outside of __init__ and can only mutate existing attributes). I am using pytorch 1.0.1 and torchvision 0.2.2 (maybe I should upgrade to pytorch 1.2).
Also, do you get the same feature output before saving and after loading back the saved encoder model again? And which version of PyTorch and torchvision are you using? |
st181657 | Now, I have updated pytorch to 1.3.1(Stable Version) and torchvision 0.4.
While running the same code now the error is changed to :
File “C:\Users\anaconda3\envs\pytorch_converter\lib\site-packages\torch\jit_init_.py”, line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
RuntimeError:
module has no attribute ‘downsample’: |
st181658 | People have suggested to use nightly version : https://github.com/pytorch/pytorch/issues/28351 3
I will try and will update if I fix the issue. |
st181659 | Thank you so much @driazati, it worked with torch nightly and the conversion is completed.
Now I move to the next step, and there I have a similar error.
Now I am trying to convert the decoder using torch.jit.script but I am facing some error as below.
My module
import numpy as np
import torch
import torch.nn as nn
from collections import OrderedDict
from layers import *
class DepthDecoder(nn.Module):
    def __init__(self, num_ch_enc, scales=range(4), num_output_channels=1, use_skips=True):
        super(DepthDecoder, self).__init__()
        self.num_output_channels = num_output_channels
        self.use_skips = use_skips
        self.upsample_mode = 'nearest'
        self.scales = scales
        self.num_ch_enc = num_ch_enc
        self.num_ch_dec = np.array([16, 32, 64, 128, 256])
        # decoder
        self.convs = OrderedDict()
        for i in range(4, -1, -1):
            # upconv_0
            num_ch_in = self.num_ch_enc[-1] if i == 4 else self.num_ch_dec[i + 1]
            num_ch_out = self.num_ch_dec[i]
            self.convs[("upconv", i, 0)] = ConvBlock(num_ch_in, num_ch_out)
            # upconv_1
            num_ch_in = self.num_ch_dec[i]
            if self.use_skips and i > 0:
                num_ch_in += self.num_ch_enc[i - 1]
            num_ch_out = self.num_ch_dec[i]
            self.convs[("upconv", i, 1)] = ConvBlock(num_ch_in, num_ch_out)
        for s in self.scales:
            self.convs[("dispconv", s)] = Conv3x3(self.num_ch_dec[s], self.num_output_channels)
        self.decoder = nn.ModuleList(list(self.convs.values()))
        self.sigmoid = nn.Sigmoid()

    def forward(self, input_features):
        outputs = {}
        # decoder
        x = input_features[-1]
        for i in range(4, -1, -1):
            x = self.convs[("upconv", i, 0)](x)
            x = [upsample(x)]
            if self.use_skips and i > 0:
                x += [input_features[i - 1]]
            x = torch.cat(x, 1)
            x = self.convs[("upconv", i, 1)](x)
            if i in self.scales:
                outputs[("disp", i)] = self.sigmoid(self.convs[("dispconv", i)](x))
        return outputs

num_enc_channels = np.array([64, 64, 128, 256, 512])
depth_decoder = DepthDecoder(num_ch_enc=num_enc_channels, scales=range(4))
traced_script_module_decoder = torch.jit.script(depth_decoder)
traced_script_module_decoder.save('new-decoder.pt')
Error :
File “C:\Users\lib\site-packages\torch\jit_recursive.py”, line 259, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
RuntimeError:
Module ‘DepthDecoder’ has no attribute ‘convs’ (This attribute exists on the Python module, but we failed to convert Python type: ‘OrderedDict’ to a TorchScript type.):
File “C:\Users\networks\depth_decoder.py”, line 55
x = input_features[-1]
for i in range(4, -1, -1):
x = self.convs(“upconv”, i, 0)
~~~~~~~~~~ <— HERE
x = [upsample(x)]
if self.use_skips and i > 0: |
st181660 | I feel the error is the same as explained here in the link : https://github.com/pytorch/pytorch/issues/23905 19
failed to convert Python type: ‘dict’ to a TorchScript type.
Is there any fix ? |
st181661 | I use package Numba to write the forward and backward cuda operation, and use it as a module. When I use it, the input tensors, with grad required, go through the modules. The example can be described as:
def forward_cudakernel():
    ...  # written with NUMBA

def backward_cudakernel():
    ...  # written with NUMBA

def forward_cuda(x):
    with torch.no_grad():
        x = numba.cuda.as_cuda_array(x)
        forward_cudakernel[blocks, threads](x)

def backward_cuda(x):
    with torch.no_grad():
        x = numba.cuda.as_cuda_array(x)
        backward_cudakernel[blocks, threads](x)

class BaseModule(Function):
    @staticmethod
    def forward():
        ...
        return forward_cuda()

    @staticmethod
    def backward():
        ...
        return grads

class Module(nn.Module):
    def forward(feature):
        fun = BaseModule.apply
        return fun(feature)

>>> features = torch.rand(2, 256, 20, 20).cuda()
>>> features.requires_grad_()
>>> net = Module()
>>> out = net(features)
>>> out.mean().backward()
Where "out" and "feature" need grads so that the module can run. However, the attribute "__cuda_array_interface__" (which cuda.as_cuda_array relies on) is not available when a tensor requires gradient. It seems incompatible. So, how can I use numba and PyTorch together? Thanks! |
st181662 | Now, I add a line x = x.data in forward_cuda(), discarding the gradient before using "as_cuda_array". I found it works, but I don't know whether there are some potential bugs remaining. |
st181663 | As mentioned in the error message that you get if you try to use it here 24 you should use .detach() and not .data. |
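A tiny sketch of that suggestion (requires a CUDA device and numba installed): detach before handing the tensor to numba so autograd is cut off cleanly.

import torch
from numba import cuda

x = torch.rand(2, 256, 20, 20, device="cuda", requires_grad=True)
x_numba = cuda.as_cuda_array(x.detach())  # .detach() instead of .data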
st181664 | Example case:
ONES = ones(1).to(DEVICE)

@jit.script
def foo(feats, mask):
    # mask : [batch features]
    # feats : [features features]
    return where(  # ((1))
        mask,
        feats.unsqueeze(0),
        # shape: [_ features features]
        ONES
        # shape before prod: [batch features features]
    ).prod(1)  # ((2))
    # shape after prod: [batch features]
((1)) and ((2)) should be fusible by directly storing the reduction in the out tensor while iterating over the mask, the data, and the reduction operation.
This… would probably be hell for the vectorizer. But at this point I’d take the hit in exchange for being able to run the thing in memory.
BUT… if any guru around here can show me the logical way I’ve been missing for doing this memory efficiently with existing primitives, please do! |
st181665 | I met this error when converting my model to torchscript.
RuntimeError:
iterator expression is expected to be a list, iterable, or range, found value of type 'Tensor':
at /home/dai/scripts/card_ocr_cpu/detector/model_torchscript.py:39:28
def standard_nms(S, thres):
order = torch.argsort(S[:, 8]).flip(0)
keep:List[int] = []
while order.size(0) > 0:
i = order[0]
keep.append(i.long().item())
ovr = torch.tensor([intersection(S[i], S[t]) for t in order[1:]])
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
inds = torch.where(ovr <= thres)[0]
order = order[inds + 1]
return S[keep]
'standard_nms' is being compiled since it was called from 'nms_locality'
at /home/dai/scripts/card_ocr_cpu/detector/model_torchscript.py:63:11
p = weighted_merge(g, p)
else:
if p.size(0) > 1:
S.append(p)
p = g
if p is not None:
S.append(p)
if len(S) == 0:
return torch.tensor([0])
return standard_nms(torch.stack(S), thres)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'nms_locality' is being compiled since it was called from 'get_boxes_torch'
at /home/dai/scripts/card_ocr_cpu/detector/model_torchscript.py:247:4
return None
boxes = torch.zeros((polys_restored.shape[0],9)).float()
boxes[:,:8] = polys_restored
xs = xy_text[index,0]
ys = xy_text[index,1]
boxes[:,8] = score[xs,ys]
boxes = nms_locality(boxes,nms_thresh)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return boxes
'get_boxes_torch' is being compiled since it was called from '__torch__.EAST.forward'
at /home/dai/scripts/card_ocr_cpu/detector/model_torchscript.py:216:12
def forward(self, x, train:bool=False):
x1,x2,x3,x4 = self.extractor(x)
x = self.merge(x1,x2,x3,x4)
# del x1,x2,x3,x4
# collect()
score,geo = self.output(x)
if not train:
boxes = get_boxes_torch(score,geo,score_thresh=0.95,nms_thresh=0.2)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return boxes
return score,geo
How can I iterate through the tensor like a list? Looking forward to your reply. |
st181666 | This should already be supported. It might not be in v1.3 yet; can you try updating your PyTorch version to a nightly build to see if it works? |
st181667 | I noticed that ScriptModule has graph() and graph_for() methods, but I am confused about those two methods. Can anyone tell me the difference? Thanks a lot! |
st181668 | Solved by ptrblck in post #2 |
st181669 | graph_for should try to optimize or fuse the graph for the provided inputs, while graph should print the IR directly. |
st181670 | Great! now I see, and I have another question, will torch.jit.script() try to optimize or fuse the graph when exporting a pytorch native model to a scriptmodule? |
st181671 | It will automatically optimize/fuse for you when you run it with some inputs, graph_for is showing you a preview of what will be executed for a certain set of inputs. |
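A quick way to see the difference, as a small sketch:

import torch

class AddMul(torch.nn.Module):
    def forward(self, x, y):
        return x * 2 + y

scripted = torch.jit.script(AddMul())
x, y = torch.randn(4, 4), torch.randn(4, 4)
print(scripted.graph)            # the IR as compiled
print(scripted.graph_for(x, y))  # the graph specialized/optimized for these inputs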
st181672 | Hi All,
I am trying to build two CNN’s on top of ResNet50, one as a regression node and one as a classification node.
class resnet50(nn.Module):
    def __init__(self):
        super(resnet50, self).__init__()
        self.left = nn.Sequential(
            nn.AdaptiveAvgPool2d(1024),
            nn.AdaptiveMaxPool2d(512),
            nn.Flatten(),
            nn.BatchNorm1d(512),
            nn.Dropout(0.25),
            nn.LeakyReLU(),
            nn.Linear(256, 64),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(64, 1)
        )
        self.right = nn.Sequential(
            nn.AdaptiveAvgPool2d(1024),
            nn.AdaptiveMaxPool2d(512),
            nn.Flatten(),
            nn.BatchNorm1d(512),
            nn.Dropout(0.25),
            nn.LeakyReLU(),
            nn.Linear(256, 64),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(64, 7)
        )
        self.model = models.resnet50(pretrained=True)
        self.model.fc = nn.Identity()

    def forward(self, x):
        x = self.model(x)
        print(x.shape)
        count_out = self.left(x)
        class_out = self.right(x)
        return count_out, class_out
I tried it in the way given in this previous problem, but I get the following error when I attempt a forward pass.
o1, o2 = model(x)
torch.Size([1, 2048])
Traceback (most recent call last):
File "<ipython-input-9-d7dc74ba0de2>", line 1, in <module>
o1, o2 = model(x)
File "/home/siddhesh/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/siddhesh/Work/Projects/LYSTO/Scripts/utils/new_models.py", line 55, in forward
count_out = self.left(x)
File "/home/siddhesh/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/siddhesh/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/siddhesh/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/siddhesh/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/pooling.py", line 1031, in forward
return F.adaptive_avg_pool2d(input, self.output_size)
File "/home/siddhesh/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py", line 768, in adaptive_avg_pool2d
return torch._C._nn.adaptive_avg_pool2d(input, _output_size)
RuntimeError: non-empty 3D or 4D (batch mode) tensor expected for input
Can someone help me as to where i might be ruining my forward pass with this?
Thanks |
st181673 | torch.nn.AdaptiveAvgPool2d's input must be a 3D or 4D tensor.
x = self.model(x) returns a 2D tensor. |
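A quick illustration of that shape requirement: 2D pooling layers expect N x C x H x W (or C x H x W) input, while resnet50 with fc = Identity returns an N x 2048 tensor, hence the error above.

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d(1)
pool(torch.randn(1, 2048, 7, 7))   # works: 4D input
# pool(torch.randn(1, 2048))       # raises: non-empty 3D or 4D (batch mode) tensor expected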
st181674 | Hi~
I use torch.jit.trace to create the model for LibTorch C++.
The Python code is as follows:
class ResBlock_W512H15(nn.Module):
    def __init__(self, idx=0, filter_num=16, kernelSize=(3, 9), dropValue=0.2, poolSize=4):
        super(ResBlock_W512H15, self).__init__()
        self.in_ch = (2 ** idx) * filter_num
        self.out_ch = (2 ** (idx + 1)) * filter_num
        self.poolSize = poolSize
        self.bn = nn.BatchNorm2d(self.in_ch)
        self.conv = nn.Conv2d(in_channels=self.in_ch,
                              out_channels=self.out_ch,
                              kernel_size=kernelSize,
                              padding=(kernelSize[0]//2, 4))
        self.drop = nn.Dropout2d(dropValue)
        self.mp = nn.MaxPool2d(kernel_size=(1, poolSize),
                               stride=None)

    def forward(self, x):
        shortcut = self.mp(x)
        P_top = (self.out_ch - self.in_ch) // 2
        P_buttum = (self.out_ch - self.in_ch) - P_top
        shortcut = F.pad(shortcut, (0, 0, 0, 0, P_top, P_buttum))
        out = x
        out = F.relu(self.bn(out))
        out = self.drop(out)
        out = self.conv(out)
        out = self.mp(out)
        out += shortcut
        return out

class Model_W512H15(nn.Module):
    def __init__(self, inChannel=1, filter_num=16, kernelSize=(3, 9), num_out=15, num_categories=4):
        super(Model_W512H15, self).__init__()
        self.filter_num = filter_num
        self.kernelSize = kernelSize
        self.num_out = num_out
        self.num_categories = num_categories
        self.conv1 = nn.Conv2d(in_channels=inChannel,
                               out_channels=self.filter_num,
                               kernel_size=self.kernelSize,
                               padding=(kernelSize[0]//2, 4))
        # --- Resblocks
        self.ConvBlock0 = ResBlock_W512H15(0, filter_num=self.filter_num, kernelSize=self.kernelSize, poolSize=4)
        self.ConvBlock1 = ResBlock_W512H15(1, filter_num=self.filter_num, kernelSize=self.kernelSize, poolSize=4)
        self.ConvBlock2 = ResBlock_W512H15(2, filter_num=self.filter_num, kernelSize=self.kernelSize, poolSize=4)
        self.ConvBlock3 = ResBlock_W512H15(3, filter_num=self.filter_num, kernelSize=self.kernelSize, poolSize=2)
        self.ConvBlock4 = ResBlock_W512H15(4, filter_num=self.filter_num, kernelSize=self.kernelSize, poolSize=2)
        self.ConvBlock5 = ResBlock_W512H15(5, filter_num=self.filter_num, kernelSize=self.kernelSize, poolSize=2)
        # --- Final
        self.final_ch = (2 ** 6) * self.filter_num
        self.bn = nn.BatchNorm2d(self.final_ch)
        self.m = nn.Softmax(dim=2)
        self.fc = nn.ModuleList()
        for i in range(self.num_out):
            self.fc.append(nn.Linear(self.final_ch, self.num_categories))

    def forward(self, x):
        out = x                     # (1,15,512)
        out = self.conv1(out)       # (16,15,512)
        out = self.ConvBlock0(out)  # (32,15,128)
        out = self.ConvBlock1(out)  # (64,15,32)
        out = self.ConvBlock2(out)  # (128,15,8)
        out = self.ConvBlock3(out)  # (256,15,4)
        out = self.ConvBlock4(out)  # (512,15,2)
        out = self.ConvBlock5(out)  # (1024,15,1)
        out = F.relu(self.bn(out))
        out = out.permute(0, 3, 2, 1)
        out_final = torch.zeros([out.size()[0], self.num_out, self.num_categories]).cuda()
        for i in range(self.num_out):
            x1 = self.fc[i](out[:, :, i, :])
            out_final[:, i, :] = x1[:, 0, :]
        out_final = self.m(out_final)
        return out_final

device = torch.device('cuda')
model = Model_W512H15(kernelSize=(7, 9)).to(device)
model.eval()
input = torch.ones(1, 1, 15, 512).cuda()
trace_net = torch.jit.trace(model, input)
trace_net.eval()
trace_net.save("CppModel.pt")
And I run the following C++ code:
//--- Load model
string ModulePath = "CppModel.pt";
torch::jit::script::Module module;
module = torch::jit::load(ModulePath);
module.to(at::kCUDA);
module.eval();
//--- Test input
at::Tensor example = torch::ones({ 10, 1, 15, 512 });
vector<torch::jit::IValue> example_i;
example_i.push_back(example.to(at::kCUDA));
try {
    auto output = module.forward(example_i).toTensor();
}
catch (std::runtime_error & e) {
    std::cerr << e.what() << std::endl;
}
I got the error message as follows:
(screenshot of the error message)
Could someone help me?
Environment
PyTorch Version (e.g., 1.0): Pytorch 1.3
Libtorch Version: Nightly version
OS (e.g., Linux): Window 10
Visual studio 2019
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: Python 3.6
CUDA/cuDNN version: CUDA 9.2
GPU models and configuration:
Any other relevant information: |
st181675 | Solved by driazati in post #4 |
st181676 | It looks like the shape of the input to the model in Python is (1, 1, 15, 512) but in C++ it was (10, 1, 15, 512), is that the source of the error? |
st181677 | No, I think that was not the source of the error.
In this C++ code, "10" is the batch size.
I have tested the jit model from this source (https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py).
In C++, the shape of the input is (10,1,32,32).
It ran successfully.
According to the error message, I think the error source may come from this line:
out_final[:, i, :] = x1[:, 0, :] |
st181678 | torch.jit.trace doesn’t record data flowing over Python values, they instead just get recorded as constants with respect to the inputs provided to torch.jit.trace. So it looks like for your model the batch size is being recorded into the graph and will be fixed so no other batch sizes work:
Consider a batch size of 10 and the resulting graph
trace_net = torch.jit.trace(model, [torch.ones(10, 1, 15, 512)])
_30 = torch.copy_(_29, torch.view(_27, [10, 4]), False)
versus a batch size of 3
trace_net = torch.jit.trace(model, [torch.ones(3, 1, 15, 512)])
_30 = torch.copy_(_29, torch.view(_27, [3, 4]), False)
The only way around this is to use torch.jit.script instead of torch.jit.trace which will compile your code instead of tracing its execution. For example
...
out = F.relu(self.bn(out))
out = out.permute(0, 3, 2, 1)
out_final = torch.zeros([out.size()[0], self.num_out, self.num_categories])
i = 0
# ModuleLists cannot be indexed in TorchScript, so the loop here must
# be changed
for fc in self.fc:
    x1 = fc(out[:, :, i, :])
    out_final[:, i, :] = x1[:, 0, :]
    i += 1
out_final = self.m(out_final)
return out_final
model = Model_W512H15(kernelSize=(7, 9))
model.eval()
trace_net = torch.jit.script(model) |
st181679 | I compiled the code following the instructions in Docker, but got the following error while running python script.py:
OSError: /home/huang/vsopencv/extension-script/example_app/build/warp_perspective/libwarp_perspective.so: undefined symbol: _ZN3c1017RegisterOperators25checkSchemaAndRegisterOp_ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEONS0_7OptionsE
I opened an issue in the repository issue#6 7, but got no answer. |
st181680 | Are you importing torch before your extension?
This error is often seen, if you try to import the extension directly. |
st181681 | Yes, I imported torch before loading the extension. My code:
import torch
torch.ops.load_library("/home/example_app/build/warp_perspective/libwarp_perspective.so")
@torch.jit.script
def compute(x, y):
    if bool(x[0][0] == 42):
        z = 5
    else:
        z = 10
    x = torch.ops.my_ops.warp_perspective(x, torch.eye(3))
    return x.matmul(y) + z
print(compute.graph)
print(compute(torch.randn(4, 8), torch.randn(8, 5)))
compute.save("example.pt")
I got this error when running python script.py:
(base) root@1b05a3dbfb17:/home# python script.py
Traceback (most recent call last):
File "script.py", line 3, in <module>
torch.ops.load_library("/home/example_app/build/warp_perspective/libwarp_perspective.so")
File "/root/local/miniconda/lib/python3.7/site-packages/torch/_ops.py", line 106, in load_library
ctypes.CDLL(path)
File "/root/local/miniconda/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /home/example_app/build/warp_perspective/libwarp_perspective.so: undefined symbol: _ZN3c104impl24ExcludeTensorTypeIdGuardD1Ev |
st181682 | I have created the following class and saved scripted_module and loaded in C++ API by:
torch::jit::script::Module module;
module = torch::jit::load("scriptmodule.pt");
Now, the question is how can I call func from C++?
import torch
import torch.nn as nn
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()

    @torch.jit.ignore
    def get_rand(self):
        return torch.randint(0, 2, size=[1])

    def forward(self):
        pass

    @torch.jit.export
    def func(self):
        done = self.get_rand()
        print(done)

scripted_module = torch.jit.script(MyModule()) |
st181683 | Solved by driazati in post #3
Also just FYI, @torch.jit.ignore functions cannot be exported; if you do scripted_module.save("out.pt") you’ll get
Traceback (most recent call last):
File "../test.py", line 196, in <module>
scripted_module.save('s.pt')
File "/home/pytorch/torch/jit/__init__.py", line 1621, in save
retur… |
st181684 | Does this answer help?
Jit for tolist() jit
For methods other than forward you have to explicitly get the method and run it. For this Module
class M(nn.Module):
@torch.jit.export
def infer(self, x):
return x + 10
torch.jit.script(M()).save("m.pt")
You can run it in C++ with script::Module::get_method
int main() {
auto module = torch::jit::load("m.pt");
auto result = module.get_method("infer")({torch::ones({2, 2})});
std::cout << result << "\n";
}
We have an open issue to improve our C++ documentation to make thi… |
st181685 | Also just FYI @torch.jit.ignore functions cannot be exported, if you do scripted_module.save("out.pt") you’ll get
Traceback (most recent call last):
File "../test.py", line 196, in <module>
scripted_module.save('s.pt')
File "/home/pytorch/torch/jit/__init__.py", line 1621, in save
return self._c.save(*args, **kwargs)
RuntimeError:
Could not export Python function call 'get_rand'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
File "../test.py", line 192
@torch.jit.export
def func(self):
done = self.get_rand()
~~~~~~~~~~~~~ <--- HERE
print (done)
Which is expected behavior (since saved models are expected to run without a Python runtime attached) |
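As a side note (a hedged sketch, not from this thread): torch.randint is itself a TorchScript builtin, so if get_rand doesn't need anything Python-only, dropping the @torch.jit.ignore decorator lets it compile, and the module then saves without the error above:
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def get_rand(self):
        # torch.randint is supported by the compiler, so no @torch.jit.ignore is needed
        return torch.randint(0, 2, size=[1])

    def forward(self):
        pass

    @torch.jit.export
    def func(self):
        done = self.get_rand()
        print(done)

scripted_module = torch.jit.script(MyModule())
scripted_module.save("out.pt")  # no "Could not export Python function call" error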
st181686 | I’ve created a model with a forward function like this:
class Net(nn.Module):
...
def forward(self, num_nodes, num_feats, nodes):
features = nn.Embedding(num_nodes, num_feats)
features.weight = nn.Parameter(torch.FloatTensor(feat_data), requires_grad=False)
then save that model using
traced_script_module = torch.jit.script(net)
traced_script_module.save(model_path1)
I have trained the model successfully, but get this error when saving the model.
NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:...
'Embedding' is being compiled since it was called from '__torch__.___torch_mangle_0.Net.forward'
at <ipython-input-5-501dbaacc7a5>:42:8
def forward(self, num_nodes, num_feats, nodes):
features = nn.Embedding(num_nodes, num_feats)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
features.weight = nn.Parameter(torch.FloatTensor(feat_data), requires_grad=False)
And pytorch’s version is 1.3
What is the best way to handle this?
Any help appreciated!!! |
st181687 | Solved by driazati in post #2
You should move the initialization of submodules out into __init__ rather than in your model’s forward. This will also likely improve performance since costly parameter allocations only need to happen once instead of on each run of your model’s forward.
class M(nn.Module):
def __init__(self, nu… |
st181688 | You should move the initialization of submodules out into __init__ rather than in your model’s forward. This will also likely improve performance since costly parameter allocations only need to happen once instead of on each run of your model’s forward.
class M(nn.Module):
    def __init__(self, num_nodes, num_feats):
        super(M, self).__init__()  # required before registering submodules
        self.features = nn.Embedding(num_nodes, num_feats)
        self.features.weight = nn.Parameter(torch.FloatTensor(feat_data), requires_grad=False)

    def forward(self, nodes):
        result = self.features(...)
        ...

model = M(num_nodes, num_feats)
script_model = torch.jit.script(model)
script_model.save("script_model.pt")
When classes are instantiated in TorchScript, the entire class must be compatible with the TorchScript compiler (details 32), which is not the case for most nn.Modules. However, if nn.Modules are saved on self in __init__, only the methods that are actually used in the forward of your model M need to be compatible with the compiler (which should work for any module in nn except for these 3 30). |
st181689 | I am printing out the graph for a model and am seeing int? and Tensor? types. Here is the graph.
graph(%self : ClassType<Conv2D2>,
%input.1 : Float(1, 3, 224, 224)):
%1 : ClassType<Conv2d> = prim::GetAttr[name="conv"](%self)
%weight : Tensor = prim::GetAttr[name="weight"](%1)
%5 : Tensor? = prim::Constant(), scope: Conv2D2/Conv2d[conv]
%6 : int = prim::Constant[value=1](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%7 : int = prim::Constant[value=1](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%8 : int[] = prim::ListConstruct(%6, %7), scope: Conv2D2/Conv2d[conv]
%9 : int = prim::Constant[value=0](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%10 : int = prim::Constant[value=0](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%11 : int[] = prim::ListConstruct(%9, %10), scope: Conv2D2/Conv2d[conv]
%12 : int = prim::Constant[value=1](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%13 : int = prim::Constant[value=1](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%14 : int[] = prim::ListConstruct(%12, %13), scope: Conv2D2/Conv2d[conv]
%15 : bool = prim::Constant[value=0](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%16 : int = prim::Constant[value=0](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%17 : int = prim::Constant[value=0](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%18 : int[] = prim::ListConstruct(%16, %17), scope: Conv2D2/Conv2d[conv]
%19 : int = prim::Constant[value=1](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%20 : bool = prim::Constant[value=0](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%21 : bool = prim::Constant[value=0](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%22 : bool = prim::Constant[value=1](), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%input : Float(1, 64, 218, 218) = aten::_convolution(%input.1, %weight, %5, %8, %11, %14, %15, %18, %19, %20, %21, %22), scope: Conv2D2/Conv2d[conv] # /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py:340:0
%24 : int = prim::Constant[value=1](), scope: Conv2D2/Softmax[softmax] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1230:0
%25 : int? = prim::Constant(), scope: Conv2D2/Softmax[softmax]
%26 : Float(1, 64, 218, 218) = aten::softmax(%input, %24, %25), scope: Conv2D2/Softmax[softmax] # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1230:0
return (%26)
What are these empty constant operators w/ weird typings supposed to represent? Where can I find source code or documentation for these? Also off-shoot question, but where can I find the actual implementation for operators (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/core/interned_strings.h 5)? |
st181690 | For sad legacy reasons we have two ways of presenting types, one that matches Python’s typing and our own internal one (which is what you’re seeing), even though the underlying types are the same. The ? here means optional, so int? is equivalent to Optional[int], and the empty prim::Constant() means None.
prim::Constant is a special case 13 in the TorchScript interpreter. Other operators that don’t directly call the underlying torch operators live in register_prim_ops.cpp 15. However, most operators 8 are generated at build time since they just call PyTorch tensor ops and are placed in torch/csrc/jit/generated. |
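For a concrete illustration (a small standalone sketch, not from the original post), an Optional argument defaulting to None shows up exactly this way when you print a scripted function's graph:
import torch
from typing import Optional

@torch.jit.script
def reduce_dim(x: torch.Tensor, dim: Optional[int] = None) -> torch.Tensor:
    if dim is None:
        return x.sum()
    return x.sum(dim)

# The printed graph types `dim` as `int?`, and the None it is compared
# against appears as an empty prim::Constant().
print(reduce_dim.graph)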
st181691 | I was wondering if I can use numpy APIs in a function which is going to be scripted by torch.jit.script. I have this simple function which does not work:
import torch
import torch.nn as nn
import numpy as np  # needed for np.random.choice below

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()

    @torch.jit.ignore
    def call_np():
        return torch.jit.export(np.random.choice(2, p=[.95,.05]))

    def forward(self):
        pass

    @torch.jit.export
    def func(self):
        done = self.call_np()
        print (done)

scripted_module = torch.jit.script(MyModule())
scripted_module.func()
which results in:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-133-ab1ce37d6edc> in <module>()
18 print (done)
19
---> 20 scripted_module = torch.jit.script(MyModule())
21 scripted_module.func()
C:\ProgramData\Anaconda3\lib\site-packages\torch\jit\__init__.py in script(obj, optimize, _frames_up, _rcb)
1201
1202 if isinstance(obj, torch.nn.Module):
-> 1203 return torch.jit.torch.jit._recursive.recursive_script(obj)
1204
1205 qualified_name = _qualified_name(obj)
C:\ProgramData\Anaconda3\lib\site-packages\torch\jit\_recursive.py in recursive_script(mod, exclude_methods)
171 filtered_methods = filter(ignore_overloaded, methods)
172 stubs = list(map(make_stub, filtered_methods))
--> 173 return copy_to_script_module(mod, overload_stubs + stubs)
174
175
C:\ProgramData\Anaconda3\lib\site-packages\torch\jit\_recursive.py in copy_to_script_module(original, stubs)
93 setattr(script_module, name, item)
94
---> 95 torch.jit._create_methods_from_stubs(script_module, stubs)
96
97 # Now that methods have been compiled, take methods that have been compiled
C:\ProgramData\Anaconda3\lib\site-packages\torch\jit\__init__.py in _create_methods_from_stubs(self, stubs)
1421 rcbs = [m.resolution_callback for m in stubs]
1422 defaults = [get_default_args(m.original_method) for m in stubs]
-> 1423 self._c._create_methods(self, defs, rcbs, defaults)
1424
1425 # For each user-defined class that subclasses ScriptModule this meta-class,
RuntimeError: Unable to cast Python instance of type <class 'int'> to C++ type 'unsigned __int64'
I appreciate any help or comment. |
st181692 | TorchScript only supports PyTorch and the math module, so numpy functions won’t work natively and can’t be exported. You can use torch.jit.ignore as you have done to leave a call to the Python interpreter. Modifying your example slightly and running with the latest version of PyTorch:
@torch.jit.ignore
def call_np() -> int:
return np.random.choice(2, p=[.95,.05])
class MyModule(nn.Module):
def forward(self):
pass
@torch.jit.export
def func(self):
done = call_np()
print (done)
scripted_module = torch.jit.script(MyModule())
print(scripted_module.func.graph)
scripted_module.func()
prints
graph(%self : __torch__.MyModule):
%3 : None = prim::Constant() # ../test.py:184:4
%done.1 : int = ^call_np()() # ../test.py:185:15
= prim::Print(%done.1) # ../test.py:186:8
return (%3)
0
In which we can see ^call_np() which is an upcall to Python. You must also type annotate call_np since all arguments / returns are assumed to be Tensors unless otherwise specified. torch.jit.export is also intended to be used only as a decorator on a function (see here for details 21).
I couldn’t repro your exact error, could you file an issue on GitHub 4 with a full repro to produce your issue? Even if what’s happening should throw an error the message could be made more clear. I also ran into this issue 5 while trying to repro it which seems like another bug. |
st181693 | Which function can be used to replace tensor.bool()?
Traceback (most recent call last):
File "/home/dai/scripts/mobileocr/detector/mobilenet_east_deploy_v2.py", line 311, in <module>
script_module = torch.jit.script(net)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1203, in script
return torch.jit.torch.jit._recursive.recursive_script(obj)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 173, in recursive_script
return copy_to_script_module(mod, overload_stubs + stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 95, in copy_to_script_module
torch.jit._create_methods_from_stubs(script_module, stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 222, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1226, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 222, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1226, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 222, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1226, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
RuntimeError:
Unknown builtin op: aten::bool.
Here are some suggestions:
aten::Bool
aten::roll
The original call is:
at /home/dai/scripts/mobileocr/detector/mobilenet_east_deploy_v2.py:291:88
def is_valid_poly_torch(res,score_shape,scale:int):
# cnt = 0
# for i in range(res.shape[1]):
# if res[0,i] < 0 or res[0,i] >= score_shape[1] * 4 or \
# res[1,i] < 0 or res[1,i] >= score_shape[0] * 4:
# cnt += 1
# return True if cnt <= 1 else False
print("res shape",res.shape)
cnt = torch.sum(
(res[:,0,:] < 0) + (res[:,0,:] >= score_shape[1] * scale) + (res[:,1,:] < 0) + (res[:,1,:] >= score_shape[0] * scale).bool(),
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
dim = 1
)
print("cnt shape",cnt.shape)
return cnt <= 1 |
st181694 | Thank you for your replay, I opened an issue on Github:
github.com/pytorch/pytorch: "Unknown builtin op: aten::bool" (opened Dec 3, 2019 by Arctanxy)
🐛 Bug
tensor.bool() doesn't work in torch.jit.script:
Code sample to reproduce this error:
class net(torch.nn.Module):
    def __init__(self):
        super(net,self).__init__()
    def forward(self,x):
        return x.bool()
model = net()
ts...
st181695 | I want to convert a detection model to torchscript, but there are some control/loop in nms code, how can I replace them with operations supported in torchscript? |
st181696 | Solved by driazati in post #2
You can use normal Python control flow in script mode (in contrast to tracing which it sounds like you’re talking about). Are you running into any specific issues? |
st181697 | You can use normal Python control flow in script mode 27 (in contrast to tracing which it sounds like you’re talking about). Are you running into any specific issues? |
st181698 | I am implementing a variant of LSTMs using TorchScript by modifying the code in the fastrnn benchmark written by @tom but I am getting a weird error:
RuntimeError:
Return value was annotated as having type Tuple[Tensor, List[__torch__.model.subLSTM.nn.GRNState]] but is actually of type Tuple[Tensor, List[__torch__.model.subLSTM.nn.GRNState]]:
at ../../src/model/subLSTM/nn.py:214:9
if i < self.num_layers - 1:
output = self.dropout_layer(output)
output_states.append(out_state)
i += 1
if self.batch_first:
output = output.transpose(0, 1)
return output, output_states
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
Which does not make sense since it is the same type. The code for the mode is the following:
import torch
import torch.nn as nn
from torch.nn import Parameter
import torch.jit as jit
import warnings
from collections import namedtuple
from typing import List, Tuple
from torch import Tensor
GRNState = namedtuple('GRNState', ['hx', 'cx'])
def reverse(lst):
# type: (List[Tensor]) -> List[Tensor]
return lst[::-1]
class SubLSTMCell(jit.ScriptModule):
def __init__(self, input_size, hidden_size):
super(SubLSTMCell, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size))
self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size))
self.bias_ih = Parameter(torch.randn(4 * hidden_size))
self.bias_hh = Parameter(torch.randn(4 * hidden_size))
@jit.script_method
def forward(self, input: Tensor, state: GRNState) -> Tuple[Tensor, GRNState]:
hx, cx = state
gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih +
torch.mm(hx, self.weight_hh.t()) + self.bias_hh).sigmoid()
ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
cy = (forgetgate * cx) - (ingate - cellgate)
hy = outgate - torch.tanh(cy)
return hy, GRNState(hy, cy)
class LayerNormSubLSTMCell(jit.ScriptModule):
def __init__(self, input_size, hidden_size):
super(LayerNormSubLSTMCell, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size))
self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size))
# The layernorms provide learnable biases
self.layernorm_i = nn.LayerNorm(4 * hidden_size)
self.layernorm_h = nn.LayerNorm(4 * hidden_size)
self.layernorm_c = nn.LayerNorm(hidden_size)
@jit.script_method
def forward(self, input: Tensor, state: GRNState) -> Tuple[Tensor, GRNState]:
hx, cx = state
igates = self.layernorm_i(torch.mm(input, self.weight_ih.t()))
hgates = self.layernorm_h(torch.mm(hx, self.weight_hh.t()))
gates = (igates + hgates).sigmoid()
ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
cy = self.layernorm_c((forgetgate * cx) + (ingate - cellgate))
hy = outgate - torch.tanh(cy)
return hy, GRNState(hy, cy)
class GRNLayer(jit.ScriptModule):
def __init__(self, cell, *cell_args):
super(GRNLayer, self).__init__()
self.cell = cell(*cell_args)
@jit.script_method
def forward(self, input: Tensor, state: GRNState) -> Tuple[Tensor, GRNState]:
inputs = input.unbind(0)
outputs: List[Tensor] = []
for i in range(len(inputs)):
out, state = self.cell(inputs[i], state)
outputs += [out]
return torch.stack(outputs), state
class ReverseGRNLayer(jit.ScriptModule):
def __init__(self, cell, *cell_args):
super(ReverseGRNLayer, self).__init__()
self.cell = cell(*cell_args)
@jit.script_method
def forward(self, input:Tensor, state:GRNState) -> Tuple[Tensor, GRNState]:
inputs = reverse(input.unbind(0))
outputs = jit.annotate(List[Tensor], [])
for i in range(len(inputs)):
out, state = self.cell(inputs[i], state)
outputs += [out]
return torch.stack(reverse(outputs)), state
class BidirLayer(jit.ScriptModule):
__constants__ = ['directions']
def __init__(self, cell, *cell_args):
super(BidirLayer, self).__init__()
self.directions = nn.ModuleList([
GRNLayer(cell, *cell_args),
ReverseGRNLayer(cell, *cell_args),
])
@jit.script_method
def forward(self, input: Tensor, states: List[GRNState]) -> Tuple[Tensor, List[GRNState]]:
outputs: List[Tensor] = []
output_states: List[GRNState] = []
i = 0
for direction in self.directions:
state = states[i]
out, out_state = direction(input, state)
outputs += [out]
output_states += [out_state]
i += 1
return torch.cat(outputs, -1), output_states
def init_stacked_lstm(num_layers, layer, cell, input_size, hidden_size):
layers = [layer(cell, input_size, hidden_size)] + \
[layer(cell, hidden_size, hidden_size) for _ in range(num_layers - 1)]
return nn.ModuleList(layers)
def init_states(num_layers, batch_size, hidden_size, device):
states: List[GRNState] = []
temp = torch.randn(num_layers, batch_size, hidden_size,
2, device=device).unbind(0)
for s in temp:
hx, cx = s.unbind(2)
states.append(GRNState(hx, cx))
return states
class SubLSTM(jit.ScriptModule):
# Necessary for iterating through self.layers and dropout support
__constants__ = ['layers', 'num_layers', 'batch_first', 'hidden_size']
def __init__(self, input_size, hidden_size, num_layers, bias=True,
batch_first=False, dropout=0.0, bidirectional=False,
layer_norm=False):
super(SubLSTM, self).__init__()
layer = BidirLayer if bidirectional else GRNLayer
cell = LayerNormSubLSTMCell if layer_norm else SubLSTMCell
self.layers = init_stacked_lstm(
num_layers, layer, cell, input_size, hidden_size)
if dropout > 0 and num_layers == 1:
warnings.warn("dropout lstm adds dropout layers after all but last "
"recurrent layer, it expects num_layers greater than "
"1, but got num_layers = 1")
self.dropout_layer = nn.Dropout(dropout)
self.num_layers = num_layers
self.batch_first = batch_first
self.hidden_size = hidden_size
@jit.script_method
def forward(self, input: Tensor, states: List[GRNState]=None) -> Tuple[Tensor, List[GRNState]]:
output = input if not self.batch_first else input.transpose(0, 1)
output_states: List[GRNState] = []
if states is None:
states = init_states(self.num_layers, output.size(1),
self.hidden_size, output.device)
i = 0
for rnn_layer in self.layers:
state = states[i]
output, out_state = rnn_layer(output, state)
# Apply the dropout layer except the last layer
if i < self.num_layers - 1:
output = self.dropout_layer(output)
output_states.append(out_state)
i += 1
if self.batch_first:
output = output.transpose(0, 1)
return output, output_states |
st181699 | Solved by tom in post #2
I didn’t write that code, I just used it for benchmarking.
So are you using PyTorch 1.3? Maybe making those a plain Tensor tuple instead of a Namedtuple helps.
Best regards
Thomas |
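A hedged sketch of the change Thomas suggests: drop the GRNState namedtuple and use a plain Tuple[Tensor, Tensor] in the annotations and return statements (shown here only for the cell; the layer and stack classes would change the same way):
import torch
import torch.jit as jit
from torch import Tensor
from torch.nn import Parameter
from typing import Tuple

class SubLSTMCell(jit.ScriptModule):
    def __init__(self, input_size, hidden_size):
        super(SubLSTMCell, self).__init__()
        self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size))
        self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size))
        self.bias_ih = Parameter(torch.randn(4 * hidden_size))
        self.bias_hh = Parameter(torch.randn(4 * hidden_size))

    @jit.script_method
    def forward(self, input: Tensor, state: Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]:
        # the state is now a plain tuple, so no custom class crosses the
        # TorchScript boundary
        hx, cx = state
        gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih +
                 torch.mm(hx, self.weight_hh.t()) + self.bias_hh).sigmoid()
        ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
        cy = (forgetgate * cx) - (ingate - cellgate)
        hy = outgate - torch.tanh(cy)
        return hy, (hy, cy)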