st181700 | I didn’t write that code, I just used it for benchmarking.
So are you using PyTorch 1.3? Maybe making those a plain Tensor tuple instead of a Namedtuple helps.
Best regards
Thomas |
st181701 | Hi,
Thanks, it was the named tuple. A bit annoying since they can make the code a lot cleaner. By the way I don’t get the same level of performance as in the examples, which is a bit weird.
PS: Sorry for the very delayed reply |
st181702 | I got this error while trying to convert nn.module to torchscript.
Traceback (most recent call last):
File "/home/dai/scripts/mobileocr/detector/mobilenet_east_deploy.py", line 240, in <module>
script_module = torch.jit.script(net)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1203, in script
return torch.jit.torch.jit._recursive.recursive_script(obj)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 173, in recursive_script
return copy_to_script_module(mod, overload_stubs + stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 95, in copy_to_script_module
torch.jit._create_methods_from_stubs(script_module, stubs)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
self._c._create_methods(self, defs, rcbs, defaults)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 222, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1226, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/_recursive.py", line 222, in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
File "/home/dai/py36env/lib/python3.6/site-packages/torch/jit/__init__.py", line 1226, in script
fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
RuntimeError:
Unknown builtin op: aten::Tensor.
Here are some suggestions:
aten::tensor
The original call is:
at /home/dai/scripts/mobileocr/detector/mobilenet_east_deploy.py:193:17
for i in range(valid_pos.shape[0]):
x = valid_pos[i,0]
y = valid_pos[i,1]
y_min = y - d[0,i]
y_max = y + d[1,i]
x_min = x - d[2,i]
x_max = x + d[3,i]
rotate_mat = get_rotate_mat_torch(-angle[i])
temp_x = torch.Tensor([[x_min,x_max,x_max,x_min]]) - x
~~~~~~~~~~~~ <--- HERE
temp_y = torch.Tensor([[y_min,y_min,y_max,y_max]]) - y
coordinates = torch.cat([temp_x,temp_y],dim = 0)
# res = torch.dot(rotate_mat,coordinates)
res = torch.matmul(rotate_mat,coordinates)
res[0,:] += x
res[1,:] += y
if is_valid_poly_torch(res,score_shape,scale):
index.append(i)
My pytorch version is 1.3.1 + cu92
How can I solve this problem or what function can be used to replace torch.Tensor()?
Please remind me if more information is needed. |
st181703 | Solved by ptrblck in post #2
Try to use the suggested torch.tensor instead (lowercase t).
I would generally not recommend using torch.Tensor, as it can give you uninitialized values, e.g. when calling torch.Tensor(10).
torch.tensor is the factory method to pass values as a list etc. |
st181704 | Try to use the suggested torch.tensor instead (lowercase t).
I would generally not recommend using torch.Tensor, as it can give you uninitialized values, e.g. when calling torch.Tensor(10).
torch.tensor is the factory method to pass values as a list etc. |
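A minimal sketch of the suggested change, using hypothetical scalar values in place of the ones from the loop in the question:
import torch

x, x_min, x_max = 1.0, 0.0, 4.0
# torch.tensor builds a tensor from the given data and is understood by TorchScript,
# unlike the legacy torch.Tensor constructor used in the original code.
temp_x = torch.tensor([[x_min, x_max, x_max, x_min]]) - x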
st181705 | In TF, independent sections of the graph are automatically executed in parallel if called together in one session.run(…) call. A simple case is this:
xs = [ ... ] # list of torch.Tensors
models = [ ... ] # list of nn.Modules
out = [m(x) for m, x in zip(models, xs)]
It seems to me if this code is dynamically executed, it is not possible to parallelize the calls of m1(x1), m2(x2), … mn(xn) for some obvious performance benefits.
What is the best way in PyTorch to achieve this effect? Is it to use the jit? |
st181706 | The JIT execution won’t run anything in parallel, right now it mostly does code transformations and fusions to optimize performance.
If you have multiple GPUs, DataParallel 2 might be helpful. |
st181707 | I traced some classification models from torchvision and then wanted to apply DataParallel for multi-gpu inference. While no error is thrown, for more than 1 gpu passed as device_ids to DataParallel, inference on the torchscript model is much slower than the regular torch model (5x-10x slower), even if batch_size=1, which results in effectively only 1 gpu being used. |
st181708 | If the batch size is set to 1, only a single GPU will be used, as the data cannot be chunked and send to each device. |
st181709 | so i know that. I was just giving an example of the issues I was encountering with DataParallel. The issue is the torchscript model inferences are significantly slower than the standard torch model inferences. I tested a few different parameters, like batch_size and different number of gpu device ids with DataParallel (as well as a few different architectures). I found with 1 device_id or when not using DataParallel at all, the model inference speed was similar between the torchscript and torch model. However, when there was more than 1 device_id passed to DataParallel, the inference speed for the torchscript model reduced dramatically in comparison to the standard torch model with the same parameters (5x-10x slower). I brought up the example because it was curious that even when only 1 gpu was effectively being used when batch_size=1 with 2 device_ids passed, the torchscript model was still significantly slower than the regular torch model. However, when 1 device_id was passed, the torchscript model was similar in speed to the torch model. |
st181710 | Wondering why the forward method isn’t const? And since it’s not, what’s the proper way to get back to the loaded state of the module without having to actually reload?
Thanks,
Marc |
st181711 | @kb1ooo Would you like to post this question in https://discuss.pytorch.org/c/jit 13? The JIT team monitors that channel more closely and we would be able to have a response there. |
st181712 | @yf225 oops, ok I saw that I could change the category so that’s what I did. Maybe not as effective as a repost. |
st181713 | Modules can have stateful data attached to themselves as parameters or attributes. These can be mutated during forward (or any other method that is run on the Module), so it doesn’t make sense to have it be const.
We don’t have any way of backing out changes to Module state, so re-loading is the only way to get back a known state. We may in the future have a user-accessible way of stating that some method is pure and has no mutable operations, but that’s still something we’re discussing. |
st181714 | github.com/pytorch/pytorch — “Will the Jit support typehint OPTIONAL or UNION?” (❓ Questions and Help, opened and closed Jul 26, 2019 by NeilWangziyu):
“As the title indicates, will jit support we use UNION and…”
Related topic: [JIT] [Mobile] Is isinstance() supposed to work with TorchScript?
This returns None, but anyway I haven’t looked for that functionality.
Is there any workaround? |
st181715 | I think torchscript supports Optional: https://pytorch.org/docs/master/jit.html#optional-type-refinement 93.
TorchScript doesn’t support Union. If you have a use case, opening a feature request on the repository (instead of a question) might be the way to go. |
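A minimal sketch of the Optional type refinement linked above:
import torch
from typing import Optional

@torch.jit.script
def add_maybe(x: torch.Tensor, y: Optional[torch.Tensor]) -> torch.Tensor:
    if y is not None:  # the compiler refines Optional[Tensor] to Tensor inside this branch
        return x + y
    return x

print(add_maybe(torch.ones(2), None))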
st181716 | I instantiated a model and transformed it into a ScriptModule using this:
alex_model = torchvision.models.AlexNet(num_classes=10)
f = torch.jit.script(alex_model)
f.save("./models/alexnet.pt")
I cannot get attributes like the stride of a Conv2d when I dump() the model in C++,
only a constant name like this:
[screenshot of the dumped module code showing only a constant name]
How can I get values like the stride or padding of a Conv2d when loading the model in C++?
THANKS VERY MUCH! |
st181717 | Constant values are inlined directly into the code of the module, so we currently have no way of accessing them once compilation has run, and the modules in nn.Modules store all their values as constants. I opened this issue 1 so you can track that for fixes.
As a temporary workaround, you could grab the values you care about in Python and stash them as attributes instead of constants:
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.alex_model = torchvision.models.AlexNet(num_classes=10)
        self.first_conv_stride = getattr(self.alex_model.features, '0').stride

    def forward(self, x):
        return self.alex_model(x)

sm = torch.jit.script(M())
print(sm.first_conv_stride)
and in C++
auto module = torch::jit::load("alexnet.pt");
std::cout << module.attr("first_conv_stride") << "\n"; |
st181718 | When I convert mobilenet_v2 network to torchscript model, the size increased from 13.5MB to 43.1MB. It’s too big to deploy in Android.
How can I compress the torchscript model ?
My code:
from torchvision.models import mobilenet_v2
net = mobilenet_v2()
torch.save(net,"net.pth")
script_module = torch.jit.script(net)
torch.jit.save(script_module,"net.pt")
The size of models: |
st181719 | Solved by driazati in post #2
Are you on the latest version of PyTorch (1.3)? We made some changes to significantly decrease the size of TorchScript binaries. For mobilenet_v2 on the nightly version of PyTorch, I get very similar sizes between eager PyTorch and TorchScript:
$ ls -la --block-size=M
-rw-------. 1 user user 15M N… |
st181720 | Are you on the latest version of PyTorch (1.3)? We made some changes to significantly decrease the size of TorchScript binaries. For mobilenet_v2 on the nightly version of PyTorch, I get very similar sizes between eager PyTorch and TorchScript:
$ ls -la --block-size=M
-rw-------. 1 user user 15M Nov 15 13:06 net.pt
-rw-------. 1 user user 14M Nov 15 13:06 net.pth
If you are on PyTorch 1.3 and still are getting this problem, could you try using the nightly PyTorch package and see if that fixes it? |
st181721 | Hello everyone. I attempt to use torch.jit.script on something like hyp_ctc_state_prev.tolist(), but it doesn’t work. I built PyTorch from source and the torch version is 1.4.0a0+2e7dd54.
I’d appreciate if anybody can help me! Or if there is a workable implementation, please let me know! Thanks in advance!
here is the log:
logs
RuntimeError:
Tried to access nonexistent attribute or method 'tolist' of type 'Tensor'.:
hyps_score = [hyp_score]
hyps_ctc_state_prev = [hyp_ctc_state_prev.tolist()]
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
hyps_ctc_score_prev = [hyp_ctc_score_prev]
ended_hyps_score = [] |
st181722 | Solved by driazati in post #2
This is a known bug https://github.com/pytorch/pytorch/issues/26752
You can workaround by implementing it yourself, but it can be cumbersome due to the reasons explained in the bug report (plus the fact that TorchScript does not support generic functions or recursion):
def my_1d_tolist(x):
res… |
st181723 | This is a known bug https://github.com/pytorch/pytorch/issues/26752 34
You can workaround by implementing it yourself, but it can be cumbersome due to the reasons explained in the bug report (plus the fact that TorchScript does not support generic functions or recursion):
from typing import List

def my_1d_tolist(x):
    result: List[float] = []
    for i in x:
        result.append(i.item())
    return result

@torch.jit.script
def my_2d_tolist(x):
    result: List[List[float]] = []
    for i in x:
        result.append(my_1d_tolist(i))
    return result

x = torch.ones(2, 2)
# These calls should be the same
print(x.tolist())
print(my_2d_tolist(x)) |
st181724 | Thanks a lot! I implemented it myself yesterday. I also found that loading a TorchScript model from C++ only seems to support the method name 'forward'. If I change the method name 'forward' to 'infer', use the @torch.jit.export decorator, and convert the model to TorchScript, then load it in C++ with torch::jit::script::Module module = torch::jit::load("model.pt"); and call module.infer(inputs), the cpp file won’t build. Do you know why this happens? |
st181725 | For methods other than forward you have to explicitly get the method and run it. For this Module
class M(nn.Module):
    @torch.jit.export
    def infer(self, x):
        return x + 10

torch.jit.script(M()).save("m.pt")
You can run it in C++ with script::Module::get_method
int main() {
  auto module = torch::jit::load("m.pt");
  auto result = module.get_method("infer")({torch::ones({2, 2})});
  std::cout << result << "\n";
}
We have an open issue to improve our C++ documentation to make things like this more clear in the future. |
st181726 | Gentle Introduction
Quantization is often tackled as a rewrite of the original model. We can overload Convolution (i.e., the convolution module) and add quantization layers before and after it (Glow style and above), but if we use the functional convolution we may want to add different quantization for the different slots (input, weights, and bias).
When I talk to HW people, they would like to break a convolution into a correlation and a bias addition, thus reorganizing the convolution operation into two distinct operations. Quantization can be different for the weights, the correlation, the bias, and their sums.
Quantization then affects the forward computation. It affects the backward as well, and the range of the quantization can be used as a parameter that the gradient computation could/should use during training.
Quantization as pass
In this forum there is a nice tutorial on how to introduce an optimization pass. This pass uses CustomFuseGraph. The Good: it boils down to an import. The Bad: FuseGraph optimizations are based on same-input/same-output operations (convolution does not belong here) [PLEASE CORRECT ME]. This pass changes the forward computation, so it should be done before any AUTOGRAD. With this example, we do not have much control over when the optimization happens, and it seems too late.
Trainable and Automatic quantization for TF/Caffe
The automatic tools for TF and Caffe modify the computation graph by pattern recognition and HW requirements, train the network, and then remove those layers for inference. After that, a dedicated compiler takes the computation graph and writes code for a specific architecture.
Quantization as jit pass
The way I see it, it would be nice to register a jit pass. This pass must run before gradient computation. It would basically be an IR graph manipulation where a few targeted operations first become a subgraph whose inputs are completely qualified, so that the “rewrite” of the graph can be local, complete, and without side effects (nicely functional).
Question for the masters
Would you let me know how to get started in a practical way?
Please hit me with questions, pointers, GitHub links … whatever you consider important. |
st181727 | Check out: https://github.com/pytorch/pytorch/issues?utf8=✓&q=is%3Aissue+is%3Aopen+quantization 55. We are actively working on something very similar to your proposal, soon to be released! |
st181728 | Just trying to throw more context here, this is the earlier issue tracking the proposal: https://github.com/pytorch/pytorch/issues/18318 4 and the proposal itself: https://github.com/pytorch/pytorch/wiki/torch_quantization_design_proposal 14 |
st181729 | From the proposals and implementations, I will try to learn a simple way to add a pass that targets any layer and creates a subgraph (at the graph level) before gradient computation.
Such an addition will help third parties describe the computation differently and closer to a dedicated engine (we are interested in FPGA kernels): the computation is similar to the CPU one, possibly using the same basic operations in a different order.
The introduction of fake-quantization nodes is one way we pursue to modify the subgraph. But our fake nodes will affect both the forward and the backward. |
st181730 | Hi Paolo! I’m currently working on graph mode quantization (along with other people, of course), so I’ll try to shed some light on what we have in our plans.
As you noticed, quantization can be implemented as a jit pass. But we consider this as one of two modes of how quantization could be done.
The first mode is called eager mode quantization, where the user is expected to structure their model into submodules and insert quantization stubs manually. This way they completely control the quantization process and can fine-tune the results (and also it can be applied to non-scriptable models). The drawback of this approach is that the user needs to edit their model.
The second mode is called graph mode and it is based on jit. It’s more than one pass, but the idea is the same as you described: a user scripts/traces their model, passes it to some black-box quantizer and gets a quantized version of their model as a result.
In graph mode we roughly expect the following passes:
inserting instrumentation nodes for collecting runtime statistics (aka observers)
inserting quantization/dequantization nodes into the model
fusing dequantize-some_op-quantize patterns to quantize_op
There are actually more more-specific pass than these three, however these should give you an idea of what functionality would be there. As we work on those passes, we’re trying to build them on top of generic features that could be useful for other purposes as well (for instance, several months back we implemented subgraph-matcher and subgraph-rewriter to facilitate fusion of arbitrary graph patterns). We probably will add some features to facilitate transformations on module level (rather than on a graph, or function, level), but the specific details are still TBD.
I think you might help us by trying out the API as soon as it’s ready and letting us know if the workflow makes sense for you and covers your use cases.
For now please stay tuned, we’re actively working on it and expect to show something soon! |
st181731 | We = I + Xilinx (HW+SW developers)
We are interested in the graph mode.
The most common scenario we can imagine it is a model that has been designed and trained; however, part of it will be executed on a fixed point precision device. This opens a lot of interesting questions.
Please consider using us not only as guinea pigs: we can help by giving you test cases and suggestions, and also “me”, the worst coder in the western world, will need to figure this out in order to then plug in a specialized partitioner and compiler.
Please, consider to come down (if you are in the Bay Area) and visit Xilinx and see what we can do for you as well.
thank you |
st181732 | Hi Paolo!
Sorry for the delayed reply. Right now we’re on the final stretch before we should have graph mode working end-to-end (see e.g. this stack of PRs: https://github.com/pytorch/pytorch/pull/24426 1), so I don’t see an easy way to offload some of the current work to you right now. This is a critical path, so it can’t be parallelized that much I think. However, once it lands I expect we would discover many places where it can be improved or bugs which need to be fixed - and at that point your help in both finding these spots and helping with fixes would be much appreciated!
Until then I suggest you to get familiar with JIT IR in case you haven’t worked with it much before. A good overview of it can be found here: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/docs/OVERVIEW.md 12 . I could try to provide more pointers if you need them - let me know if that’s the case. |
st181733 | Thank you Michael. This week I will try to interact with the IR.
Let me know if you guys are willing to come down and introduce your work when you are ready. |
st181734 | Let me address a few questions so you can help me and I can help you.
Q: Can we (or will we be able to) register a JIT pass that may/will affect the “gradient computation”?
Registering a pass, as for code optimization, is elegant, and you and I can work independently. The HW people can change their minds to suit different HW configurations; I can create versions; and our packages are imported afterwards. The main concern is that you will need to give us partial control over when to call the pass. This means that I can break a perfectly fine JIT.
In your proposal, you will give us the opportunity to activate “the pass”, which is great. Next question.
Q: Can we target layers that affect “the gradient computation”?
I know quantization of convolutions is HW dependent. This means that I will need room to wiggle in choosing the convolution layer and what to do with it. In practice, the HW will dictate the computation shape and format and, in turn, the computation of the convolution. The computation is still based on basic operations such as aten::conv and addition of the bias; I know the HW people want them separated.
Q: Is it possible to have a tutorial I can build upon so I can practice on the master code for:
(Any) layer selection into a subgraph.
Where to locate such a pass so that I do the least damage to the JIT optimization process.
The documentation is great, but I learned more from a post like this: https://zasdfgbnm.github.io/2018/09/20/PyTorch-JIT-Source-Code-Read-Note/ |
st181735 | I work at Xilinx as well. We released Graffitist earlier this year, a TensorFlow package for quantization-aware training. I’d like to point out that there is a critical issue in the way FakeQuant’s gradients (w.r.t. min/max vars) are implemented in TensorFlow. We shed more light on it in our paper, but it has to do with the correct straight-through estimator (STE) for training thresholds or scale factors. It’d be nice to address this in PyTorch early on (I’m happy to be of help and can contribute as well). This is essential for quantizing traditionally difficult networks such as MobileNets with almost no loss in accuracy (refer to Table 4 in the paper).
I agree with Paolo, in that having control over the JIT IR pass is essential for target-specific transforms on the graph. I did go over the documentation (JIT README), but some things are not clear to me yet. For instance, how to (i) pattern-match and manipulate sub-graphs, (ii) insert custom ops, (iii) invoke custom passes on the IR.
Please let us know
(i) if you can visit Xilinx (San Jose) for a JIT deep-dive, and
(ii) how we can be of help/contribute. |
st181736 | Please, let me know what we can do to help?
Would you like to come down to Xilinx and give a talk ? |
st181737 | I just read this tutorial and it does not touch the backward part.
https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html 14
My understanding is that to get autograd work, we will also need to register the corresponding backward function. However, I could not find how this works. I checked PyTorch source code (torch/csrc/autograd/generated/VariableTypeEverything.cpp) and could not find where backward functions are registered either.
Help will be appreciated! |
st181738 | You implement it in C++ similarly to Python, via autograd::Function. You then have to register an op that uses your autograd function. See this example
#include <torch/script.h>
#include <torch/all.h>

#include <iostream>
#include <memory>

using namespace at;
using torch::Tensor;
using torch::autograd::AutogradContext;
using torch::autograd::Variable;
using torch::autograd::variable_list;

// computes f(x) = 2x
class MyDouble : public torch::autograd::Function<MyDouble> {
 public:
  static variable_list forward(
      AutogradContext* ctx,
      Variable input) {
    return {input + input};
  }

  static variable_list backward(
      AutogradContext* ctx,
      variable_list grad_output) {
    return {torch::ones({2, 2}) + 1};
  }
};

Tensor double_op(const Tensor& input) {
  return MyDouble::apply(input)[0];
}

static auto registry =
    torch::RegisterOperators("my_ops::double_op", &double_op);
and in Python
import torch

torch.ops.load_library("build/libmy_custom_op.so")

@torch.jit.script
def fn(x):
    return torch.ops.my_ops.double_op(x)

x = torch.randn(2, 2, requires_grad=True)
print(fn(x))
x.backward(torch.ones(2, 2)) |
st181739 | Suppose I want to instantiate a Module with a function:
class Foo(nn.Module):
    def __init__(self, fn):
        super(Foo, self).__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x)
This works fine
def square(x):
    return x**2

foonet = Foo(square)
But attempting to script the module fails
foonet_script = torch.jit.script(foonet)
giving
RuntimeError:
module has no attribute 'fn':
at <ipython-input-25-24af853ed724>:7:15
def forward(self, x):
return self.fn(x)
~~~~~~~ <--- HERE
It doesn’t help if I first script the function
square_script = torch.jit.script(square)
and then instantiate with the resulting torch._C.Function. On the other hand, if I pass in another nn.Module, everything is fine. Setting up a whole new Module when all I want is the function call seems overkill though. |
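For reference, a sketch of the Module-wrapping workaround (passing an nn.Module instead of a bare function), which scripts without the error:
import torch
import torch.nn as nn

class Square(nn.Module):
    def forward(self, x):
        return x ** 2

class FooModule(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn  # submodules, unlike plain function attributes, are scriptable

    def forward(self, x):
        return self.fn(x)

foonet_script = torch.jit.script(FooModule(Square()))
print(foonet_script(torch.tensor(3.0)))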
st181740 | Update: It’s a known bug see this issue 39 for function attributes and this post 41 for class attributes. |
st181741 | This bug should be fixed in https://github.com/pytorch/pytorch/pull/28569 45, you should be able to try out the fix if you build master from source or use the nightly PyTorch pip package |
st181742 | Hello there,
I’ve faced similar issues when deploying a scripted model on Android.
The first question is “How can I inspect the whole graph or code of a scripted model?”.
ScriptModule.code provides only the top-level code, but in the stack trace on mobile I can see more about what the code looks like.
The second is about isinstance(). My model has plenty of checks like isinstance(tensor, torch.Tensor), which are converted to tensor1 = unchecked_cast(Tensor, tensor) in the script code, and this is where the error comes from.
Why is there no error while exporting/scripting the module? |
st181743 | Solved by driazati in post #3
As for 1), we recently changed the behavior so that functions in .code and .graph appear as function calls (previously we were inlining the function bodies). So we’re still missing the functionality to show the called functions. For now you can re-enable inlining to see the entire graph:
def other_… |
st181744 | isinstance is supported but its result is static. It is useful for Modules that have different attribute types passed in, e.g.
class M(torch.nn.Module):
    def __init__(self, x):
        super().__init__()
        self.x = x

    def forward(self):
        if isinstance(self.x, List[str]):
            return self.x[2]
        else:
            return self.x + 2

print(torch.jit.script(M(['bye'])).graph)
print(torch.jit.script(M(2)).graph)
The compiler is able to see the isinstance check and evaluate it at compile time and remove the unused branch. The graphs show this:
graph(%self : ClassType<M>):
  %3 : str = prim::Constant[value="hi"]() # ../test.py:22:19
  return (%3)

graph(%self : ClassType<M>):
  %4 : int = prim::Constant[value=2]() # ../test.py:24:28
  %3 : int = prim::GetAttr[name="x"](%self)
  %5 : int = aten::add(%3, %4) # ../test.py:24:19
  return (%5) |
st181745 | As for 1), we recently changed the behavior so that functions in .code and .graph appear as function calls (previously we were inlining the function bodies). So we’re still missing the functionality to show the called functions. For now you can re-enable inlining to see the entire graph:
def other_fn(x):
    return x + 10

# Change the inlining mode before you compile
torch._C._jit_set_inline_everything_mode(True)

@torch.jit.script
def fn(x):
    return other_fn(x)

print(fn.code)
print(fn.graph)
You can track this bug in https://github.com/pytorch/pytorch/issues/29750 2. |
st181746 | driazati:
torch._C._jit_set_inline_everything_mode(True) …
You can also print out a model directly with
class M(nn.Module):
    def other_fn(self, x):
        return x + 10

    def forward(self, x):
        return self.other_fn(x)

m = torch.jit.script(M())
print(m._c.dump()) |
st181747 | This one is unclear to me.
How could we annotate input for __init__() and output of forward() functions? The Union[int, List[str]] typing is unsupported |
st181748 | driazati:
torch._C._jit_set_inline_everything_mode(True)
Thanks a lot! That clarified the way how to see the whole code |
st181749 | zetyquickly:
This one is unclear to me.
How could we annotate input for __init__() and output of forward() functions? The Union[int, List[str]] typing is unsupported
__init__ on nn.Modules runs in Python (torch.jit.script only sees the module after it has been initialized), so you can annotate that with whatever Python annotations you want (but they won’t be enforced by the compiler at all, for that you should use something like mypy).
For methods that are compiled (e.g. forward and anything it calls), the return types can be deduced from the code. If you want to explicitly write it out, you can use any of the type annotations listed here 1.
Unions aren’t supported so they won’t work in TorchScript. As a workaround you could do something like Tuple[Optional[int], Optional[List[str]]]. |
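A sketch of that Tuple-of-Optionals workaround, with hypothetical field meanings:
import torch
from typing import List, Optional, Tuple

@torch.jit.script
def describe(value: Tuple[Optional[int], Optional[List[str]]]) -> int:
    num, words = value
    if words is not None:
        return len(words)
    elif num is not None:
        return num
    return 0

print(describe((None, ["a", "b"])))
print(describe((7, None)))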
st181750 | Hello there,
I’ve faced some unexpected behaviour of torch.jit.script(). The issue may be reproduced with the following code.
As I learned from the docs ( https://pytorch.org/docs/stable/jit.html#id3 ), one can use a custom TorchScript class if it’s properly written. But…
import torch
from typing import Tuple

@torch.jit.script
class MyClass(object):
    def __init__(self, weights = (1.0, 1.0, 1.0, 1.0,)):
        # type: (Tuple[float, float, float, float])
        self.weights = weights

    def apply(self):
        # type: () -> Tuple[float, float, float, float]
        return self.weights

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.class_field = torch.jit.export(MyClass(weights = (1.0, 1.0, 1.0, 1.0,)))

    def forward(self, x):
        self.class_field.apply()
        return x + 10

m = torch.jit.script(MyModule())
Produces such error:
RuntimeError:
Module 'MyModule' has no attribute 'class_field' (This attribute exists on the Python module, but we failed to convert Python type: 'MyClass' to a TorchScript type.):
at script_test.py:20:8
def forward(self, x):
self.class_field.apply()
~~~~~~~~~~~~~~~~ <--- HERE
return x + 10 |
st181751 | It seems that I’ve solved the problem. I found out that attributes introduced in __init__(self, ...) should be explicitly annotated the following way:
class MyModule(torch.nn.Module):
    class_field : MyClass

    def __init__(self):
        super().__init__()
        self.class_field = torch.jit.export(MyClass(weights = (1.0, 1.0, 1.0, 1.0,)))
Of course, only if they are custom scripted classes.
P.S. The proper way to initialize a custom class field is still unclear to me, I mean:
self.class_field = torch.jit.export(MyClass(weights = (1.0, 1.0, 1.0, 1.0,)))
or just
self.class_field = MyClass(weights = (1.0, 1.0, 1.0, 1.0,))
The behaviour is the same |
st181752 | This is a bug, I’ve filed https://github.com/pytorch/pytorch/issues/29597 130 since this should be something that we can do automatically.
export is meant to be used as a decorator on functions that need to be compiled but are not called from forward or anything forward calls. So for your code you can just get rid of the call to torch.jit.export. See these docs 82 for details. |
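For reference, a minimal sketch of torch.jit.export used as a method decorator rather than wrapped around an attribute assignment:
import torch
import torch.nn as nn

class Exported(nn.Module):
    def forward(self, x):
        return x + 1

    @torch.jit.export  # compiled even though forward never calls it
    def halve(self, x):
        return x / 2

m = torch.jit.script(Exported())
print(m.halve(torch.ones(2)))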
st181753 | Hi Everyone,
I’m unable to import ‘weak_module’ from ‘torch._jit_internal’. It says 'ImportError: cannot import name ‘weak_module’. I’m using Pytorch 1.3.0a0+24ae9b5 version. Please help me. |
st181754 | This is some internal module, it might have been removed/renamed. Why do you want to use it? |
st181755 | I’m trying to run EfficientNet which has used the module in the following manner.
‘from torch._jit_internal import weak_module, weak_script_method’ |
st181756 | Soniya:
EfficientNet
Which EfficientNet implementation are you using? The decorators it’s pulling from _jit_internal were deleted in v1.2. Their functionality is now automatic, so they can just be deleted and everything should work the same. |
st181757 | I’m using the implementation which is given here https://github.com/katsura-jp/efficientnet-pytorch 24 |
st181758 | It should work if you apply this patch
diff --git a/model/swish.py b/model/swish.py
index 66adfa5..3f68678 100644
--- a/model/swish.py
+++ b/model/swish.py
@@ -1,10 +1,8 @@
import torch
import torch.nn as nn
from torch.nn.parameter import Parameter
-from torch._jit_internal import weak_module, weak_script_method
-@weak_module
class Swish(nn.Module):
def __init__(self, train_beta=False):
super(Swish, self).__init__()
@@ -13,7 +11,6 @@ class Swish(nn.Module):
else:
self.weight = 1.0
- @weak_script_method
def forward(self, input):
return input * torch.sigmoid(self.weight * input) |
st181759 | Where should I apply this patch? I’m unable to find the corresponding file where ‘from torch._jit_internal’ is used. |
st181760 | I don’t know if it’s a bug or just partial coverage, but the _fields attribute of a NamedTuple is unavailable while scripting.
class _Contents(NamedTuple):
    a : Optional[torch.Tensor]
    b : Optional[torch.Tensor]
    c : Optional[torch.Tensor]
    d : Optional[torch.Tensor]

@torch.jit.script
class Container:
    def __init__(self, fields):
        # type: (_Contents) -> None
        self.fields = fields

    def has(self, name: str) -> bool:
        """
        Returns:
            bool: whether the field called `name` exists.
        """
        return name in self.fields._fields |
RuntimeError:
Unknown attribute to named tuple:
at container.py:91:23
def has(self, name: str) -> bool:
"""
Returns:
bool: whether the field called `name` exists.
"""
return name in self.fields._fields
~~~~~~~~~~~~~~~~~~~~ <--- HERE |
st181761 | We have not bound _fields into NamedTuples. If you file a feature request on GitHub we can take a look at implementing it |
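Until then, one workaround is to keep the field names in a plain list attribute; a sketch, duplicating the names of the _Contents fields above by hand:
@torch.jit.script
class ContainerWithNames:
    def __init__(self, fields):
        # type: (_Contents) -> None
        self.fields = fields
        self.field_names = ["a", "b", "c", "d"]  # assumption: kept in sync with _Contents manually

    def has(self, name: str) -> bool:
        return name in self.field_names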
st181762 | PyTorch version: 1.1
I’m optimizing Decoder part of model.py at https://github.com/nvidia/tacotron2 5 using JIT.
@torch.jit.script
class DecoderOptions:
    @torch.jit.script_method
    def __init__(self, decoder, memory, mask):
        B = memory.size(0)
        MAX_TIME = memory.size(1)
        self.attention_hidden = Variable(memory.data.new(
            B, decoder.attention_rnn_dim).zero_())
        self.attention_cell = Variable(memory.data.new(
            B, decoder.attention_rnn_dim).zero_())
        self.decoder_hidden = Variable(memory.data.new(
            B, decoder.decoder_rnn_dim).zero_())
        self.decoder_cell = Variable(memory.data.new(
            B, decoder.decoder_rnn_dim).zero_())
        self.attention_weights = Variable(memory.data.new(
            B, MAX_TIME).zero_())
        self.attention_weights_cum = Variable(memory.data.new(
            B, MAX_TIME).zero_())
        self.attention_context = Variable(memory.data.new(
            B, decoder.encoder_embedding_dim).zero_())
        self.memory = memory
        self.processed_memory = decoder.attention_layer.memory_layer(memory)
        self.mask = mask

class Decoder(torch.jit.ScriptModule):
    def __init__(self, hparams):
        # same to nvidia version

    def get_go_frame(self, memory):
        # same to nvidia version

    def parse_decoder_inputs(self, decoder_inputs):
        # same to nvidia version

    def parse_decoder_outputs(self, mel_outputs, gate_outputs, alignments):
        # same to nvidia version

    @torch.jit.script_method
    def decode(self, decoder_input, options: DecoderOptions):
        # type: (Tensor, DecoderOptions) -> (Tuple[Tensor, Tensor, Tensor, DecoderOptions])
        """ Decoder step using stored states, attention and memory
        PARAMS
        ------
        decoder_input: previous mel output
        RETURNS
        -------
        mel_output:
        gate_output: gate output energies
        attention_weights:
        """
        cell_input = torch.cat((decoder_input, options.attention_context), -1)
        options.attention_hidden, options.attention_cell = self.attention_rnn(
            cell_input, (options.attention_hidden, options.attention_cell))
        options.attention_hidden = F.dropout(
            options.attention_hidden, self.p_attention_dropout, self.training)
        attention_weights_cat = torch.cat(
            (options.attention_weights.unsqueeze(1),
             options.attention_weights_cum.unsqueeze(1)), dim=1)
        options.attention_context, options.attention_weights = self.attention_layer(
            options.attention_hidden, options.memory, options.processed_memory,
            attention_weights_cat, options.mask)
        options.attention_weights_cum += options.attention_weights
        decoder_input = torch.cat(
            (options.attention_hidden, options.attention_context), -1)
        options.decoder_hidden, options.decoder_cell = self.decoder_rnn(
            decoder_input, (options.decoder_hidden, options.decoder_cell))
        options.decoder_hidden = F.dropout(
            options.decoder_hidden, self.p_decoder_dropout, self.training)
        decoder_hidden_attention_context = torch.cat(
            (options.decoder_hidden, options.attention_context), dim=1)
        decoder_output = self.linear_projection(
            decoder_hidden_attention_context)
        gate_prediction = self.gate_layer(decoder_hidden_attention_context)
        return decoder_output, gate_prediction, options.attention_weights, options

    def forward(self, memory, decoder_inputs, memory_lengths):
        """ Decoder forward pass for training
        PARAMS
        ------
        memory: Encoder outputs
        decoder_inputs: Decoder inputs for teacher forcing. i.e. mel-specs
        memory_lengths: Encoder output lengths for attention masking.
        RETURNS
        -------
        mel_outputs: mel outputs from the decoder
        gate_outputs: gate outputs from the decoder
        alignments: sequence of attention weights from the decoder
        """
        decoder_input = self.get_go_frame(memory).unsqueeze(0)
        decoder_inputs = self.parse_decoder_inputs(decoder_inputs)
        decoder_inputs = torch.cat((decoder_input, decoder_inputs), dim=0)
        decoder_inputs = self.prenet(decoder_inputs)
        options = DecoderOptions(self, memory, ~get_mask_from_lengths(memory_lengths))
        mel_outputs, gate_outputs, alignments = [], [], []
        while len(mel_outputs) < decoder_inputs.size(0) - 1:
            decoder_input = decoder_inputs[len(mel_outputs)]
            mel_output, gate_output, attention_weights, options = self.decode(
                decoder_input, options)
            mel_outputs += [mel_output.squeeze(1)]
            gate_outputs += [gate_output.squeeze()]
            alignments += [attention_weights]
        mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs(
            mel_outputs, gate_outputs, alignments)
        return mel_outputs, gate_outputs, alignments

    @torch.jit.script_method
    def inference(self, memory, options: DecoderOptions):
        """ Decoder inference
        PARAMS
        ------
        memory: Encoder outputs
        RETURNS
        -------
        mel_outputs: mel outputs from the decoder
        gate_outputs: gate outputs from the decoder
        alignments: sequence of attention weights from the decoder
        """
        decoder_input = self.get_go_frame(memory)
        mel_outputs, gate_outputs, alignments = [], [], []
        run_more = True
        while run_more:
            decoder_input = self.prenet(decoder_input)
            mel_output, gate_output, alignment, options = self.decode(decoder_input, options)
            mel_outputs += [mel_output.squeeze(1)]
            gate_outputs += [gate_output]
            alignments += [alignment]
            if torch.sigmoid(gate_output.data) > self.gate_threshold:
                run_more = False
            elif len(mel_outputs) == self.max_decoder_steps:
                print("Warning! Reached max decoder steps")
                run_more = False
            decoder_input = mel_output
        mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs(
            mel_outputs, gate_outputs, alignments)
        return mel_outputs, gate_outputs, alignments
and I call the decoder in Tacotron2.inference with this line:
mel_outputs, gate_outputs, alignments = self.decoder.inference(
    encoder_outputs, DecoderOptions(self.decoder, encoder_outputs))
But the compiler says:
RuntimeError:
Tried to access to nonexistent attribute attention_context. Did you forget to initialize it in __init__()?:
------
decoder_input: previous mel output
RETURNS
-------
mel_output:
gate_output: gate output energies
attention_weights:
"""
cell_input = torch.cat((decoder_input, options.attention_context), -1)
~~~~~~~~~~~~~~~~~ <--- HERE
options.attention_hidden, options.attention_cell = self.attention_rnn(
cell_input, (options.attention_hidden, options.attention_cell))
options.attention_hidden = F.dropout(
options.attention_hidden, self.p_attention_dropout, self.training)
attention_weights_cat = torch.cat(
(options.attention_weights.unsqueeze(1),
options.attention_weights_cum.unsqueeze(1)), dim=1)
options.attention_context, options.attention_weights = self.attention_layer(
I definitely initialized options.attention_context in DecoderOptions.__init__(). Why does this error occur?
Thanks! |
st181763 | Thanks for the report! Do you mind posting the exact script you’re using to generate that error so that I can run it to see what’s happening?
In the meantime, you could try removing @torch.jit.script_method from the __init__ method of DecoderOptions—for class types, the single annotation at the top will result in all methods getting compiled so it may have unexpected effects. |
st181764 | @Michael_Suo Problem is still present:
class SomeClass(object):
    def __init__(self, x):
        # type: (int)
        self.x = x

    def some_method(self, a, b):
        # type: (Tensor, Tensor)
        assert torch.isfinite(a).all().item(), "Some of elements are infinite or NaN!"

RuntimeError:
Tried to access nonexistent attribute or method 'all' of type 'bool'.:
st181765 | One could overcome this issue by creating extra variable
bool_tensor : torch.Tensor = (deltas == deltas) & ~(torch.eq(deltas.abs(), torch._six.inf))
This is how torch.isfinite() was calculated in previous versions: https://pytorch.org/docs/0.4.1/_modules/torch/functional.html |
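A sketch of the same computation wrapped as a scripted helper, reusing the torch._six.inf constant from the expression above:
import torch

@torch.jit.script
def finite_mask(deltas: torch.Tensor) -> torch.Tensor:
    # NaN fails the first comparison, +/-inf fails the second one
    return (deltas == deltas) & ~torch.eq(deltas.abs(), torch._six.inf)

print(finite_mask(torch.tensor([1.0, float('nan'), float('inf')])))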
st181766 | This looks like a bug with isfinite, see https://github.com/pytorch/pytorch/issues/29340 200 |
st181767 | [screenshot of the per-epoch training output]
When I train on the car evaluation dataset, the training loss, validation loss, and accuracy don’t change. My code is quoted below. I am a beginner, so please give me advice. Thank you very much.
111179:
import numpy as np
from collections import Counter
from sklearn import datasets
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt
import torch
import torch.utils.data as Data
import pandas as pd
import torch.nn as nn
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import train_test_split
import torch.utils.data as utils
import torch.utils.data as td
dataframe= pd.read_csv('iris dataset classification/car.csv')
dataframe.columns = ['buying','maint','doors','persons','lug_boot','safety','classes']
dataframe.buying.replace(('vhigh','high','med','low'),(1,2,3,4), inplace=True)
dataframe.maint.replace(('vhigh','high','med','low'),(1,2,3,4), inplace=True)
dataframe.doors.replace(('2','3','4','5more'),(1,2,3,4), inplace=True)
dataframe.persons.replace(('2','4','more'),(1,2,3), inplace=True)
dataframe.lug_boot.replace(('small','med','big'),(1,2,3), inplace=True)
dataframe.safety.replace(('low','med','high'),(1,2,3), inplace=True)
dataframe.classes.replace(('unacc','acc','good','vgood'),(0,1,2,3), inplace=True)
array = dataframe.values
x = array[:,:6]
y = array[:,6]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=0)
train_x = torch.Tensor(x_train).float()
train_y = torch.Tensor(y_train).long()
train_ds = utils.TensorDataset(train_x,train_y)
train_loader = td.DataLoader(train_ds, batch_size=10,shuffle=True, num_workers=1)
test_x = torch.Tensor(x_test).float()
test_y = torch.Tensor(y_test).long()
test_ds = utils.TensorDataset(test_x,test_y)
test_loader = td.DataLoader(test_ds, batch_size=10, shuffle=True, num_workers=1)
class IrisNet(nn.Module):
    def __init__(self):
        super(IrisNet, self).__init__()
        self.fc1 = nn.Linear(6, 50)
        self.fc3 = nn.Linear(50, 4)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = torch.softmax(self.fc3(x), dim=1)
        return x

model = IrisNet()

def train(model, data_loader, optimizer):
    model.train()
    train_loss = 0
    for batch, tensor in enumerate(data_loader):
        data, target = tensor
        optimizer.zero_grad()
        out = model(data)
        loss = loss_criteria(out, target)
        train_loss += loss.item()
        loss.backward()
        optimizer.step()
    avg_loss = train_loss / len(data_loader.dataset)
    return avg_loss

def test(model, data_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for batch, tensor in enumerate(data_loader):
            data, target = tensor
            out = model(data)
            test_loss += loss_criteria(out, target).item()
            _, predicted = torch.max(out.data, 1)
            correct += torch.sum(target == predicted).item()
    avg_accuracy = correct / len(data_loader.dataset)
    avg_loss = test_loss / len(data_loader.dataset)
    return avg_loss, avg_accuracy

loss_criteria = nn.CrossEntropyLoss()
learning_rate = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.99))
epoch_nums = []
training_loss = []
validation_loss = []
epochs = 600
for epoch in range(1, epochs + 1):
    train_loss = train(model, train_loader, optimizer)
    test_loss, accuracy = test(model, test_loader)
    epoch_nums.append(epoch)
    training_loss.append(train_loss)
    validation_loss.append(test_loss)
    if (epoch) % 10 == 0:
        print('Epoch {:d}: Training loss= {:.4f}, Validation loss= {:.4f}, Accuracy={:.4%}'.format(epoch, train_loss, test_loss, accuracy)) |
st181768 | Solved by ptrblck in post #2
Based on your screenshot, it seems the training and validation losses do change.
However, nn.CrossEntropyLoss expects raw logits, so you should remove the last softmax layer from your model. |
st181769 | Based on your screenshot, it seems the training and validation losses do change.
However, nn.CrossEntropyLoss expects raw logits, so you should remove the last softmax layer from your model. |
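A sketch of the adjusted model from above, with the forward returning raw logits (nn.CrossEntropyLoss applies log-softmax internally):
import torch
import torch.nn as nn
import torch.nn.functional as F

class IrisNet(nn.Module):
    def __init__(self):
        super(IrisNet, self).__init__()
        self.fc1 = nn.Linear(6, 50)
        self.fc3 = nn.Linear(50, 4)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc3(x)  # raw logits, no softmax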
st181770 | # load net
num_classes = len(labelmap) + 1 # +1 for background
net = build_refinedet('test', int(args.input_size), num_classes) # initialize SSD
net.load_state_dict(torch.load(args.trained_model,map_location='cuda:0')) # model = torch.load(model_path, map_location='cuda:0')
net.eval()
print('Finished loading model!')
# feed example data into the model to obtain the model parameters
example = torch.rand(1,3,320,320).cuda()
traced_script_module = torch.jit.trace(net,example)
# save the model
traced_script_module.save("torch_script_eval.pt")
I want to convert the RefineDet model to libtorch, but I get an error: “RuntimeError: Attempted to trace Detect_RefineDet, but tracing of legacy functions is not supported”
Thank you for helping me! |
st181771 | As the error message says, the model is using PyTorch 0.1.2-style Functions. (Newer autograd.Functions cannot be traced either, but that is another story…)
You probably want to convert the model to use the nms functions provided by TorchVision, those have tracing support.
Best regards
Thomas |
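For reference, a sketch of calling the TorchVision NMS op that tracing can handle (random placeholder boxes and scores):
import torch
from torchvision.ops import nms

boxes = torch.rand(100, 4) * 100
boxes[:, 2:] += boxes[:, :2]                  # make (x1, y1, x2, y2) boxes well-formed
scores = torch.rand(100)
keep = nms(boxes, scores, iou_threshold=0.5)  # indices of the boxes kept after NMS
print(keep.shape)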
st181772 | Hi, I’m interested in the optimization of a PyTorch JIT graph. For this purpose, I want to find a way to execute an existing torch.Graph directly. Is there any official or unofficial way for this purpose? |
st181773 | I found a private API torch._C._create_function_from_graph(name, graph) which reconstructs a torch.Function from an existing JIT graph. |
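For example, a sketch of how that private helper might be used (internal API, so the signature may change between releases):
import torch

@torch.jit.script
def f(x):
    return x * 2 + 1

g = f.graph  # the torch._C.Graph behind the scripted function
# apply custom transformations to g here, then rebuild a callable from it
f2 = torch._C._create_function_from_graph("f_transformed", g)
print(f2(torch.ones(3)))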
st181774 | The usual thing to do is to transform the graph. This is what the various passes in the JIT do and you can also register custom passes. For example, the difference between fn.graph and fn.graph_for(*inputs) comes from the passes executed after specialization of the input arguments.
Best regards
Thomas |
st181775 | Hey guys, these lines incur the error in jit mode. Do you have any idea?
matching_scores = torch.matmul(bike_key_out.permute(0, 3, 4, 2, 1), taxi_key_out.permute(0, 3, 4, 1, 2))
bt_t_x = torch.matmul(matching_scores, taxi_x.permute(0, 3, 4, 2, 1)).permute(0, 4, 3, 1, 2)
Traceback (most recent call last):
File "multi_expt.py", line 329, in <module>
main()
File "multi_expt.py", line 298, in main
(bike_loss + taxi_loss).backward()
File "/home/jindeng/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/tensor.py", line 118, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/jindeng/anaconda3/envs/myenv/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: No grad accumulator for a saved leaf! |
st181776 | I don’t think those are the leaves the error talks about. Could you try to reduce your code to an reproducing example?
Best regards
Thomas |
st181777 | Hi,
It might be related to https://github.com/pytorch/pytorch/issues/19769 71
We would need a small code sample to reproduce this to be sure. |
st181778 | I think I found where the problem is. I unbind a tensor into a list of subtensors and iteratively retrieve each subtensor for further operations. If I substitute that with direct indexing into the original tensor, the error is gone. |
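A minimal sketch of the two variants, with t standing in for a hypothetical leaf tensor:
import torch

t = torch.randn(4, 3, requires_grad=True)

# variant that triggered the error: materialize a list of sub-tensors first
# subs = torch.unbind(t, dim=0)
# out = sum((s * 2).sum() for s in subs)

# variant that works: index the original tensor directly
out = torch.zeros(())
for i in range(t.size(0)):
    out = out + (t[i] * 2).sum()
out.backward()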
st181779 | Hi all,
I have some basic questions on CustomFuseGraph and its relation to other projects. I see that pytorch/tvm 7 and pytorch/glow 7 are using different custom fuse passes and not using the CustomFuseGraph. Could someone explain why they do not use the graph_fuser provided by PyTorch? Sorry if this is a naive question.
Thanks |
st181780 | Traditionally, the graph fuser in PyTorch has been operating on a per-Block basis. This is inherited (at least it used to a few weeks ago) by the CustomFuseGraph.
PyTorch TVM (and probably also glow) are interested in fusing across blocks, e.g. to also fuse control flow nodes. This is why PyTorch TVM has switched away from CustomFuseGraph 12. Personally, I would expect that custom fusion mechanism will be folded back into PyTorch when the picture of what users need there becomes clearer (just like CustomFuseGraph was introduced when one thought that one could also make good use of that, I think I recall that originally we thought that the CustomFuseGraph might already work for PyTorch TVM).
Best regards
Thomas |
st181781 | @tom: Awesome. Thanks for taking the time and responding. I think that answers my question to a great extent. So, is the plan to upstream these changes to the CustomFuseGraph? Also, the torch/tvm’s custom fusion seems to ignore the control flow ops for now seen in fusion_pass.cpp 7.
I was also experimenting a bit and see that fusing blocks of control flow ops is not currently handled in subgraph_utils, is this understanding right ?
Lets take this graph for example
graph(%x.1 : Float(*)):
%18 : Float(1) = prim::Constant[value={1}]()
%16 : int[] = prim::Constant[value=[1]]()
%1 : None = prim::Constant()
%2 : int = prim::Constant[value=1]() # script_module.py:64:22
%3 : int = prim::Constant[value=2]() # script_module.py:68:20
%4 : int = prim::Constant[value=3]() # script_module.py:71:18
%ret.1 : Double(*) = aten::zeros(%16, %1, %1, %1, %1) # script_module.py:64:10
%7 : Bool(*) = aten::eq(%x.1, %2) # script_module.py:65:7
%8 : bool = aten::Bool(%7) # script_module.py:65:7
%ret : Tensor(*) = prim::If(%8) # script_module.py:65:4
block0():
%12 : Tensor = aten::add_(%ret.1, %18, %2) # script_module.py:67:8
%ret.4 : Double(*) = aten::mul(%ret.1, %3) # script_module.py:68:14
%ret.7 : Double(*) = aten::add(%ret.4, %18, %2) # script_module.py:69:14
-> (%ret.7)
block1():
%ret.9 : Float(*) = aten::add(%x.1, %4, %2) # script_module.py:71:14
-> (%ret.9)
return (%ret)
Here, the first node aten::add_() inside block0() of prim::If has one of its inputs %ret.1 which is coming from outside the prim::If. But when cloning this node, during mergeNodeIntoSubgraph its not able to find the metadata of this input node. I think its because when cloning, the value_map only has the inputs in block scope and not the graph scope. I am not very familiar with the PT graph manipulation to understand the reason why only block’s inputs are added to value_map in ir.cpp 1 and only the prim::If’s inputs are added to the value map in subgraph_utils . The error seems to come because the value_map(i) in ir.cpp returns a NULL as the ret.1 is not in the value_map.
Is there an example in code where fusion of control flow ops is handled? Also, please correct me if my understanding is wrong. |
st181782 | So I don’t know how to answer most of your questions and I haven’t looked in a while relative to the rate of change. Apparently, something is not quite there yet for the control flow in torch TVM.
For the fusion in the fusion pass, blocks is not currently handled, and I am ignorant of whether the obstacle torch TVM hit was with the graph rearrangement itself or something else.
to the best of my knowledge we don’t currently fuse inplace ops, so I would expect adventure when you try that. Maybe that is part of the problem you are seeing.
I don’t know any examples beyond the obvious users. In theory, I would expect the optimizations removing ifs with constant true/false result to give some idea of what it would take to move ops with blocks into a subgraph.
I’ll be very interested to hear from your progress.
Best regards
Thomas |
st181783 | @bwasti: Thanks for your response. Could you let me know if my understanding above is correct? I just want to understand if fusion of control flow ops into the custom op node is even allowed by PT. |
st181784 | I am looking into PyTorch JIT fusion now. I try to set a breakpoint in fused_kernel.cpp at FusedKernelCPU::FusedKernelCPU(…) when I run a jit-trace script. Can anyone tell me when and how this function gets invoked? Any reply will be appreciated. Thanks |
st181785 | Solved by tom in post #2
Last I looked, CPU fusion was disabled by default, so you need to enable it with
torch._C._jit_override_can_fuse_on_cpu(True).
Printing myfn.graph_for(inp) will show if you have fusion for your inputs.
Note that as optimized graphs are cached, you need to re-define the traced/scripted function in… |
st181786 | Last I looked, CPU fusion was disabled by default, so you need to enable it with
torch._C._jit_override_can_fuse_on_cpu(True).
Printing myfn.graph_for(inp) will show if you have fusion for your inputs.
Note that as optimized graphs are cached, you need to re-define the traced/scripted function in order to clear the cache between after enabling CPU fusion if you previously ran it with CPU fusion disabled.
Note that CPU fusion is disabled by default due to performance & flakiness issues. In turn AVX & co are disabled in the CPU fuser because they sometimes cause problems (see the commentary in fused_kernel.cpp).
Best regards
Thomas |
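Putting that together, a minimal sketch (which fusion nodes appear depends on the PyTorch version and inputs):
import torch

torch._C._jit_override_can_fuse_on_cpu(True)  # enable CPU fusion before defining/compiling

@torch.jit.script
def scale_sigmoid(x, y):
    return (x * y).sigmoid() * 2.0

inp = (torch.randn(16, 16), torch.randn(16, 16))
scale_sigmoid(*inp)                    # run once so the graph is specialized and optimized
print(scale_sigmoid.graph_for(*inp))   # look for prim::FusionGroup nodes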
st181787 | Hi, I built my CUDA extension module following this link. Everything works well when I only use 1 GPU, and its utilization is high (>90%). However, when I integrated it into my neural network and trained with 2 GPUs, the utilization of each GPU is pretty low (≈50%).
Any ideas? Thanks!
My environments:
Pytorch:1.1
CUDA:10.1
OS:Ubuntu 18
GPU: RTX2080ti |
st181788 | Solved by jhd in post #5
Checking line by line, I finally found that declaring variables and allocating CUDA memory in CUDA extension of Pyotrch will greatly reduce GPU efficiency. By removing the following statements,
float *dist_Dev;
gpuErrchk(cudaMalloc((void**)&dist_Dev, myParameter.obj_num * myParameter.cluster_num *… |
st181789 | Maybe you can analyze the running time of each part, such as data loading, model forwarding, and output processing. This may not be caused by the forward pass. |
st181790 | Hi, @Mendel123 thanks for your reply.
After analyzing the running time of each part, the forward pass takes the largest share:
                          Forward    Backward    Data
2 GPUs (batch size 32)    1.1 sec    0.27 sec    0.001
1 GPU (batch size 16)     0.4 sec    0.27 sec    0.001 |
st181791 | I used APEX mixed-precision training; could this be related to apex? Besides, I didn’t implement a half() operation in my module, so I just convert the inputs to float() before invoking it and convert the result back to half() afterwards. |
st181792 | Checking line by line, I finally found that declaring variables and allocating CUDA memory inside a PyTorch CUDA extension will greatly reduce GPU efficiency. By removing the following statements,
float *dist_Dev;
gpuErrchk(cudaMalloc((void**)&dist_Dev, myParameter.obj_num * myParameter.cluster_num * sizeof(float)));
int *obj_num_Dev;
gpuErrchk(cudaMalloc((void**)&obj_num_Dev, myParameter.cluster_num * sizeof(int)));
int *num_per_classt;
gpuErrchk(cudaMalloc((void**)&num_per_classt, myParameter.t * myParameter.cluster_num * sizeof(int)));
the utilization of each GPU is high. Each of these buffers is now created in PyTorch and then passed to the CUDA module.
However, this still doesn’t explain why a single GPU works. I also compiled the C++/CUDA code into a .so file and invoked it via ctypes; the result shows that declaring variables the way shown above is fine there.
So I guess there might be a problem with the way the CUDA extension works. |
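A sketch of that pattern, allocating the work buffers with PyTorch on the Python side and handing them to a (hypothetical) extension entry point instead of calling cudaMalloc inside it:
import torch
# import my_cuda_ext  # hypothetical compiled extension

obj_num, cluster_num, t = 1024, 16, 8
dist = torch.empty(obj_num, cluster_num, dtype=torch.float32, device='cuda')
obj_count = torch.empty(cluster_num, dtype=torch.int32, device='cuda')
num_per_class_t = torch.empty(t, cluster_num, dtype=torch.int32, device='cuda')
# my_cuda_ext.forward(inputs, dist, obj_count, num_per_class_t)  # buffers passed in, not allocated in C++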
st181793 | Hello everyone. I attempt to use torch.jit.script on torch.nn.Transformer, but it doesn’t work.
Has anyone done any related work?
I built PyTorch from source and the torch version is 1.4.0a0+2e7dd54.
I’d appreciate it if anybody can help me! Or if there is a workable implementation, please let me know! Thanks in advance!
Here is the code:
import torch
import torch.nn as nn

torch.manual_seed(2)
transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
trans_model = torch.jit.script(transformer_model)
trans_model.save('test.pt')
and here is the log:
Traceback (most recent call last):
File “transformer_demo.py”, line 13, in
trans_model = torch.jit.script(transformer_model)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/init.py”, line 1239, in script
return torch.jit.torch.jit._recursive.recursive_script(obj)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 508, in recursive_script
return create_script_module(nn_module, infer_methods_to_compile(nn_module))
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 305, in create_script_module
concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 243, in get_or_create_concrete_type
raw_concrete_type = infer_raw_concrete_type(nn_module)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 91, in infer_raw_concrete_type
sub_concrete_type = concrete_type_store.get_or_create_concrete_type(item)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 243, in get_or_create_concrete_type
raw_concrete_type = infer_raw_concrete_type(nn_module)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 91, in infer_raw_concrete_type
sub_concrete_type = concrete_type_store.get_or_create_concrete_type(item)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 240, in get_or_create_concrete_type
scripted = create_constant_iterable_module(nn_module)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 539, in create_constant_iterable_module
modules[key] = recursive_script(submodule)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 508, in recursive_script
return create_script_module(nn_module, infer_methods_to_compile(nn_module))
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 308, in create_script_module
return create_script_module_impl(nn_module, concrete_type, cpp_module, stubs)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 358, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/init.py”, line 1612, in _construct
init_fn(script_module)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 340, in init_fn
scripted = recursive_script(orig_value)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 508, in recursive_script
return create_script_module(nn_module, infer_methods_to_compile(nn_module))
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 308, in create_script_module
return create_script_module_impl(nn_module, concrete_type, cpp_module, stubs)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 362, in create_script_module_impl
create_methods_from_stubs(concrete_type, stubs)
File “/home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/jit/_recursive.py”, line 268, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
RuntimeError:
Module ‘MultiheadAttention’ has no attribute ‘q_proj_weight’ :
at /home/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/activation.py:771:30
key_padding_mask=key_padding_mask, need_weights=need_weights,
attn_mask=attn_mask, use_separate_proj_weight=True,
q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,
~~~~~~~~~~~~~~~~~~ <— HERE
v_proj_weight=self.v_proj_weight)
else: |
st181794 | This was brought up here 12 but the fix here 22 never landed due to backwards-compatibility issues, I can try to get it through soon. |
st181795 | OK, thanks! I tried the statement self.register_parameter('q_proj_weight', None), but it doesn’t seem to work. |
st181796 | It turns out there are some other fixes needed, see these two PRs for details (#28555 8 and #28561 10). If you need it now you can check out #28561 and build from source 3, otherwise it should be in the nightly package in a few days. |
st181797 | The error comes from the torchvision itself:
attribute 'downsample' of type 'NoneType' is not usable in a script method (did you forget to add it __constants__?):
at /nfs/engine/rajagopalar/anaconda3/envs/torchExp/lib/python3.6/site-packages/torchvision/models/resnet.py:109:12
out = self.relu(out)
Original post is here --> original post 9
Thanks in advance! |
st181798 | This was a bug in torchvision, it’s fixed with this PR 24.
We’re working on pushing a new release soon that will include this fix (among many others!). To get the fix locally today, you can install torchvision master from source 9. |
st181799 | @driazati Thanks for your prompt reply
After adding __constant__ = ['downsample'] I got the error below:
RuntimeError: attempting to re-assign constant 'downsample' in WeakScriptModuleProxy
Thanks |