st181800
Are you on the latest version of PyTorch (v1.3.0)? You can check with `print(torch.__version__)`. WeakScriptModuleProxy was removed prior to the 1.3 release, so you may need to update to get the right fixes.
st181801
Can someone help me understand when and how to use the `_extra_files` parameter in `torch.jit.save` and `torch.jit.load`? The code example in the documentation (with `extra_files['foo.txt'] = 'bar'`) didn't help me understand what its practical use would be.
st181802
It lets you wrap up any extra data you want into the output of `torch.jit.save` and have it decoded for you by `torch.jit.load`. Maybe you want to ship some documentation or versioning metadata with your saved model to help with deployment; `_extra_files` lets you do that without having to ship files separately alongside your `.pt` file. The `.pt` file from `torch.jit.save` is a zip file, so any extra files just get added to the zip archive. To dig into the docs example a little more, for this code:

```python
class M(nn.Module):
    def forward(self):
        pass

m = torch.jit.script(M())
extra_files = torch._C.ExtraFilesMap()
extra_files['foo.txt'] = 'bar'
torch.jit.save(m, 'scriptmodule.pt', _extra_files=extra_files)
```

Then in a shell:

```
$ unzip scriptmodule.pt
Archive:  scriptmodule.pt
 extracting: scriptmodule/version
 extracting: scriptmodule/extra/foo.txt
 extracting: scriptmodule/data.pkl
 extracting: scriptmodule/code/__torch__.py
 extracting: scriptmodule/code/__torch__.py.debug_pkl
 extracting: scriptmodule/constants.pkl
$ cat scriptmodule/extra/foo.txt
bar
```
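To read the extra files back at load time, you can pass a map whose keys name the files you want and let the loader fill in the values. A minimal sketch, assuming the `scriptmodule.pt` produced above:

```python
import torch

# Keys name the extra files we want back; values are filled in by the loader.
files = {'foo.txt': ''}
loaded = torch.jit.load('scriptmodule.pt', _extra_files=files)
print(files['foo.txt'])  # -> 'bar' (str or bytes depending on the PyTorch version)
```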
st181803
There is an example for training a torch::nn::Module https://pytorch.org/cppdocs/frontend.html#end-to-end-example 10 but when I try to use a torch::jit::script::Module, I cannot call .parameters() like in the example. Is there a way to train a torch::jit::script::Module in C++?
st181804
Solved by driazati in post #2.
st181805
Weirdly unlike in Python, jit::script::Module doesn't have the same API as nn::Module. You should be able to access a script::Module's parameters via something like:

```cpp
for (auto param_slot : my_module.get_parameters()) {
  auto my_tensor = param_slot.value().toTensor();
}
```
st181806
The nn::Module::parameters() returns a thing called a slot_list_impl<>. Do you have advice on how I can make one of these from the tensors returned by the script::Module::get_parameters()?
st181807
slot_list_impl<NameValue> is an iterator over one of a script::Module's internal lists of Parameters/Attributes. Can you elaborate on your use case for constructing one yourself? You should be able to modify a script::Module by using script::Module::register_parameter to change any parameter value.
st181808
I was trying to adapt the training loop to work for torch::jit::script::Module so I needed the same interface that the optimizer expects. However, I realized that it accepts std::vector<at::Tensor>, so I actually don’t have to create slot_list_impl after all. Maybe you can take a look at what I was trying to do here jit::script::Module parameters are not updating when training 12 I think for the code to actually work, I need to use register_parameter or set_parameter to push the updated tensor back into the script::Module after the optimizer has acted on it. Does that sound right? Or could there be a way that the optimizer can update the tensor values in place?
st181809
Hi, I was wondering how to generate the PyTorch IR for a TorchScript module with line numbers, like the one shown in the example code at github.com pytorch/pytorch/blob/master/docs/source/jit.rst#interpreting-graphs. Currently, foo.graph does not get you the IR with line numbers, so I was wondering if it was possible to get the IR with the line numbers included. Thank you!
st181810
Are you on the latest version of PyTorch (v1.3.0)? You can check with `print(torch.__version__)`. This feature was recently added, so if you're on an older version (e.g. v1.0) it might not show up. For a simple example like:

```python
@torch.jit.script
def x(e):
    return e + 10

print(x.graph)
```

You should get something like:

```
graph(%e.1 : Tensor):
  %3 : int = prim::Constant[value=1]()
  %2 : int = prim::Constant[value=10]() # ../test.py:33:15
  %4 : Tensor = aten::add(%e.1, %2, %3) # ../test.py:33:11
  return (%4)
```
st181811
Solved by driazati in post #2.
st181812
script::Modules don’t track their device since the device only affects the Tensor parameters stored in the module. The only way to check the device would be to check one of the Tensor parameters on the module and view its device. We definitely need to have a more accessible documentation page, but you can see the definition of script::Module here 33 for a complete list of its methods.
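For reference, the same idea applied from the Python side: a minimal sketch (the file name is just a placeholder, and it assumes the module has at least one parameter):

```python
import torch

loaded = torch.jit.load('model.pt')  # hypothetical saved ScriptModule
# ScriptModules don't store a device themselves, so look at a parameter instead
device = next(loaded.parameters()).device
print(device)
```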
st181813
Hi, is there a formal TorchScript grammar spec available somewhere? Or is it all hardcoded in the JIT compiler? Having one would help with implementing some decent TorchScript tooling and adding support in popular editors like VS Code or Emacs, and would greatly improve the experience. I don't know exactly how others are doing this, but my current development flow is a little bit frustrating: write the model in Python, run the JIT, and plumb all the places which are not supported by the JIT right now. And there are a lot of little details which I'm learning by actually converting my code that aren't covered anywhere else (a recent discovery, for example: the JIT does not support yield, which makes perfect sense if you think about it, especially in the context of optimization, but it's not documented). I would rather write my models in the TorchScript subset from the beginning. Artur
st181814
Hey, thanks for the feedback! We don’t have a specification for what parts of Python we cover and don’t. The closest we have is the language reference 21. I think you’re right that it would be really useful to at least have a comprehensive inventory of what keywords, builtins, etc. are supported—I’ll take a look at writing something up and try to have it in the docs by the next release.
st181815
This is linked on the language reference page but maybe hidden: we list all the Tensor and torch.nn.functional methods we support here, but that list is missing a few things (including many Python globals, nn.Modules, and some other TorchScript builtins).
st181816
Hi everyone, I have an error; the information is as follows:

```
THCudaCheck FAIL file=/pytorch/torch/csrc/generic/serialization.cpp line=137 error=78 : a PTX JIT compilation failed
Traceback (most recent call last):
  File "test_video.py", line 221, in <module>
    main()
  File "test_video.py", line 110, in main
    net.load_state_dict(torch.load(trained_model))
  File "/home/sean/anaconda3/envs/pt40/lib/python3.6/site-packages/torch/serialization.py", line 303, in load
    return _load(f, map_location, pickle_module)
  File "/home/sean/anaconda3/envs/pt40/lib/python3.6/site-packages/torch/serialization.py", line 476, in _load
    deserialized_objects[key]._set_from_file(f, offset, f_is_real_file)
RuntimeError: cuda runtime error (78) : a PTX JIT compilation failed at /pytorch/torch/csrc/generic/serialization.cpp:137
```

Environment information: PyTorch 0.4.0, CUDA 8.0, Nvidia Titan X, Nvidia Titan Xp. Could you help me solve it? Thank you in advance.
st181817
```python
def save_model(self):
    net = self.my_net.module
    dummy_input = torch.randn(1, 3, 112, 112).cpu()
    net.eval()
    net.cpu()  # switch to cpu to save the model to avoid some issues on PyTorch 1.1.0
    script_module = torch.jit.trace(net, dummy_input)
    script_module.save('my_model.pt')
    net.cuda()  # switch back to cuda to continue training on GPU
    net.train()
```

The memory usage keeps increasing after each call to `save_model`; in other words, memory is being leaked. Can anyone help me understand this issue? Thank you.
st181818
Memory on the cpu or gpu side? How do you measure this? Could you provide a small script to reproduce this?
st181819
When nodes are merged (in the graph fusion pass, for example), how are cycles detected/avoided? Is this taken care of by moveAfterTopologicallyValid in alias analysis? What algorithm is used here? Any pointers would be highly appreciated.
st181820
Solved by Michael_Suo in post #4.
st181821
What do you mean by cycles? The graphs are acyclic, and we loop until there are no more nodes to add to a fusion group… Best regards, Thomas
st181822
Let's consider a graph like the following (copied from here). If we combine the red nodes 1, 3, and 4 above, we form a loop, right? Considering 1, 3, and 4 can be grouped, what prevents the fusion from happening? If my question doesn't make sense, please feel free to correct me.
st181823
Yes, moveAfterTopologicallyValid takes care of that. It does dependence analysis to make sure that this merging is disallowed.
st181824
Michael_Suo: takes care of that. It does dependence analysis to make sure that this merging is disallowed. Cool. Thanks for the response .
st181825
I was curious to know if anyone knows how I could serialize a JIT graph into something that we can visualize. I am currently using graph->dump() and node->dump() to debug. I wanted to know how we could visualize the graph.
st181826
You can print out the graph as valid Python code with my_module.code. We used to have a helper 16 to turn a Graph into a visualization via GraphViz, but it wasn’t maintained and was deleted.
st181827
Thanks. I used .graph and .code to look at the graph, but wanted to get something more visual :). I hadn't seen the helper. I also have an interesting case: I want to use this from the C++ side. Is there a similar dump function that can be invoked from the C++ side, or do we need to write one?
st181828
We don’t have anything like that for C++ and probably won’t (it’s outside of our scope), but you should have all the tools / APIs for Graph and Node to make something for your own purposes.
st181829
Hi torch experts, I am trying to add a forward hook to my model. But I got the error message indicating the hook won’t work with jit module. Is there a particular reason why hook won’t work with jit modules? How can I work around this while still keep my module as a script module? Thanks in advance!
st181830
register_forward_hooks (and register_backward_hooks) currently aren't supported in TorchScript. If you'd like to see them added, please file a feature request on GitHub. In your particular case it sounds like you're inheriting from ScriptModule as the way to access the TorchScript compiler. An API change in v1.2.0 lets you compile nn.Modules without making them inherit from ScriptModule directly, see these docs for details:

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(5, 5, 2)

    def forward(self, x):
        return self.conv(x) + 10

def my_hook(self, *args):
    print("Hello from my_hook")

m = M()
m.conv.register_forward_hook(my_hook)

# `my_hook` will be called
m(torch.randn(5, 5, 2, 2))

a_scripted_module = torch.jit.script(m)

# `my_hook` will NOT be called, forward hooks are lost
# when an `nn.Module` is compiled
a_scripted_module(torch.randn(5, 5, 2, 2))
```
st181831
TorchScript -> ONNX conversion of this simple module fails (pastebin). Am I doing something wrong? If one doesn't jit-compile the model, everything works.

```python
from tempfile import TemporaryFile

import torch
import torch.onnx
import torch.jit
from torch import nn, Tensor

print(f"PyTorch version is {torch.__version__}")

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.module = nn.Linear(in_features=8, out_features=4)
        self.module2 = nn.Linear(in_features=4, out_features=2)

    def forward(self, x: Tensor) -> Tensor:
        preout = self.module(x)
        out = self.module2(preout)
        return out

model = Model()
model = torch.jit.script(model)
dummy_input = torch.randn(3, 8)
dummy_output = model(dummy_input)

with TemporaryFile() as temp:
    torch.onnx.export(model=model, args=dummy_input,
                      example_outputs=dummy_output,
                      f=temp, verbose=True)
```
st181832
When the ONNX exporter sees an nn.Module, it uses the TorchScript tracer to build a graph, then converts that graph to an ONNX graph. The TorchScript compiler (torch.jit.script) should be functionally equivalent, so it sounds like this is a bug. Could you file an issue on GitHub so we can track this? Thanks!
st181833
Sure, I’ll file an issue on GitHub! Edit: https://github.com/pytorch/pytorch/issues/27569 468
st181834
I'd like to modify the graph by inserting new inputs (hopefully by using addInput on the graph). Is there a way to do this from Python after getting a torch._C.Graph object? I couldn't seem to find a constructor/create function when doing dir(torch._C.Node).
st181835
This is purposefully not exposed, the only way to get a graph is by compiling code with torch.jit.script or tracing with torch.jit.trace. Potentially you could extract the code you are compiling initially with the Python inspect module and edit it that way, so it would be changed in the way you want before TorchScript sees it. You could also drop down to C++ (and potentially use PyBind to make it callable from Python) and write a custom pass that operates on a Graph object, see this tutorial 15 for details.
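To make the inspect idea concrete, here is a small, hedged sketch. The source edit is purely hypothetical and just shows where you would change the code before TorchScript ever sees it; the edited source is written to a real file because the compiler needs to be able to retrieve it:

```python
import importlib.util
import inspect
import os
import tempfile
import textwrap

import torch

def my_fn(x):
    return x + 1

# Pull out the original Python source before compilation
src = textwrap.dedent(inspect.getsource(my_fn))

# Hypothetical edit: change the constant before scripting
src = src.replace("x + 1", "x + 2")

# Write the edited source to a real file so the compiler can find it
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(src)
    path = f.name

spec = importlib.util.spec_from_file_location("edited_mod", path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

scripted = torch.jit.script(mod.my_fn)
print(scripted(torch.ones(2)))  # tensor([3., 3.])
os.remove(path)
```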
st181836
Hi, I would like to use PackedSequence directly in a custom module for an NLP task. Specifically, given a padded batch, I want to convert it to a packed sequence, perform some operations on the data, and convert back to a padded sequence. An example recipe is below (modified from the example at https://pytorch.org/docs/stable/jit.html). My recipe works in Python, but when using jit.script it fails with:

```
ValueError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults
```

I wanted to learn why PackedSequence fails with TorchScript when used here but works fine when called using the wrapper methods in torch.nn.utils.rnn, and how I can fix it. I will be grateful for any help in understanding this behavior. Thank you!

```python
import torch
from torch.nn.utils.rnn import PackedSequence
import torch.nn._VF as torch_varfuncs
from torch._jit_internal import Optional


class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        # This parameter will be copied to the new ScriptModule
        self.weight = torch.nn.Parameter(torch.rand(N, M))
        # When this submodule is used, it will be compiled
        self.linear = torch.nn.Linear(N, M)

    def _pad(self, data, batch_first, batch_sizes, pad_value, sorted_indices, unsorted_indices):
        packed_seq = torch.nn.utils.rnn.PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
        return torch.nn.utils.rnn.pad_packed_sequence(packed_seq, batch_first, pad_value)

    def forward(self, input, data_lengths):
        batch_first = True
        packed_input = torch.nn.utils.rnn.pack_padded_sequence(
            input, batch_first=batch_first, lengths=data_lengths, enforce_sorted=False)
        output = self.weight.mv(packed_input.data)
        # This calls the `forward` method of the `nn.Linear` module, which will
        # cause the `self.linear` submodule to be compiled to a `ScriptModule` here
        output = self.linear(output)
        return self._pad(output, batch_first, packed_input.batch_sizes, -1.0,
                         packed_input.sorted_indices, packed_input.unsorted_indices)


class MyModuleVF(MyModule):
    def _pad(self, data, batch_first1: bool, batch_sizes, pad_value: float,
             sorted_indices: Optional[torch.Tensor], unsorted_indices: Optional[torch.Tensor]):
        max_length = batch_sizes.size(0)
        padded_output, lengths = torch_varfuncs._pad_packed_sequence(data, batch_sizes, batch_first1, -1.0, max_length)
        if sorted_indices is not None:
            # had to invert permute specifically as pytorch method was giving errors in jit
            # (arange is returning float type and not long, as expected)
            output = torch.empty_like(sorted_indices)
            output.scatter_(0, sorted_indices,
                            torch.arange(0, sorted_indices.numel(), device=sorted_indices.device).long())
            batch_dim = 0 if batch_first1 else 1
            return padded_output.index_select(batch_dim, output), lengths[output]
        return padded_output, lengths


test_input = torch.tensor([[1., 2., 3., 4.], [5., 6., -1.0, -1.0], [8, 9, 10, -1.0]], dtype=torch.float)
data_lengths = torch.tensor([4, 2, 3])
size_ = (test_input > 0).sum()

# works
mm = MyModule(20, size_.item())
result = mm(test_input, data_lengths)

# works
mmvf = MyModuleVF(20, size_.item())
result_vf = mmvf(test_input, data_lengths)

# works
mmvf_s = torch.jit.script(MyModuleVF(20, size_.item()))
result_vf_s = mmvf_s(test_input, data_lengths)

# does not work
mm_s = torch.jit.script(MyModule(20, size_.item()))
result_s = mm(test_input, data_lengths)
```
st181837
Thanks for the repro! There is a bug somewhere here, would you mind filing an issue on GitHub? For some reason it works for me on master if you use PackedSequence directly instead of the qualified version, so that could be a workaround until we get this fixed:

```python
def _pad(self, data, batch_first: bool, batch_sizes, pad_value: float,
         sorted_indices: Optional[torch.Tensor], unsorted_indices: Optional[torch.Tensor]):
    packed_seq = PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
    return torch.nn.utils.rnn.pad_packed_sequence(packed_seq, batch_first, pad_value)
```
st181838
Hi, I’m using profiling tool VTune Amplifier. What I’m interested in is parallel programming, both in thread level and instruction levels. The number of cores in my server is 16, and it supports AVX instructions. (not support AVX2, AVX512) lscpu gives: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Thread(s) per core: 1 Core(s) per socket: 8 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 62 Model name: Intel® Xeon® CPU E5-2650 v2 @ 2.60GHz Stepping: 4 CPU MHz: 1200.433 CPU max MHz: 3400.0000 CPU min MHz: 1200.0000 BogoMIPS: 5201.92 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0,2,4,6,8,10,12,14 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d I’m profiling resnet18 training code below. I don’t copy the code of printing loss and accuracy. import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision import torchvision.transforms as transforms import torchvision.models as models transform_train = transforms.Compose([ transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ]) #transform_test = transforms.Compose([ # transforms.ToTensor(), # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), #]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train) trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=0) #testset = torchvision.datasets.CIFAR10(root='./data', train=False, # download=True, transform=transform_test) #testloader = torch.utils.data.DataLoader(testset, batch_size=100, # shuffle=False, num_workers=2) # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # define network net = models.resnet18(pretrained=False) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4) for epoch in range(15): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # calculate loss running_loss += loss.item() In my profiling result, I found that AVX dynamic codes (which are hotspots in my code) are mostly executed by 16 threads. (Total 48~49 threads are running, but 16 of them are terminated before training, and the other 16 of them are executing other codes) I have some interesting results. As I increase the number of training loops, some of CPU doesn’t work. I attached result images below with google drive link. 
Files numbered 1~4 are for epoch 5, 15, 25, and 50, respectively. VTune Results The CPU Utilization metrics are 58.3%, 62.1%, 53%, and 49.4%, respectively. I think I have to mention some note. For epoch 50, I’ve profiled it twice because of the extremely low metric at the first time. It was 31.1%. The result image of this is in the link above, with the file name numbered 5. Is there anyone who could give me some insight about these results?
st181839
Hi Friends, I use traced_model._modules[‘conv1’] to access conv module. But how can I find ‘stride’ info in it? Thanks, 8086
st181840
Thanks ptrblck, that’s a good guide. But I did try still fail. I want to able to access these fields after load traced file. My test environment is at Windows pytorch 1.1.0 import torch import torchvision m = torchvision.models.resnet18() t_m = torch.jit.trace(m, torch.rand(1, 3, 224, 224)) m._modules['conv1'].stride #(2, 2) t_m._modules['conv1'] #TracedModule[Conv2d]() t_m._modules['conv1'].stride --------------------------------------------------------------------------- # AttributeError Traceback (most recent call last) # <ipython-input-26-22862c1bf06c> in <module>() # ----> 1 t_m._modules['conv1'].stride # c:\program files\python35\lib\site-packages\torch\jit\__init__.py in __getattr__(self, attr) # 1230 if self._c._has_attribute(attr): # 1231 return self._c._get_attribute(attr) # -> 1232 return Module.__getattr__(self, attr) # 1233 # 1234 def __setattr__(self, attr, value): # c:\program files\python35\lib\site-packages\torch\nn\modules\module.py in __getattr__(self, name) # 537 return modules[name] # 538 raise AttributeError("'{}' object has no attribute '{}'".format( # --> 539 type(self).__name__, name)) # 540 # 541 def __setattr__(self, name, value): # AttributeError: 'TracedModule' object has no attribute 'stride' torch.jit.save(t_m, 'traced_resnet18.pt') reload_t_m = torch.jit.load('traced_resnet18.pt') reload_t_m._modules['conv1'] #ScriptModule() reload_t_m._modules['conv1'].stride --------------------------------------------------------------------------- # AttributeError Traceback (most recent call last) # <ipython-input-31-eae0d5d2cc24> in <module>() # ----> 1 reload_t_m._modules['conv1'].stride # c:\program files\python35\lib\site-packages\torch\jit\__init__.py in __getattr__(self, attr) # 1230 if self._c._has_attribute(attr): # 1231 return self._c._get_attribute(attr) # -> 1232 return Module.__getattr__(self, attr) # 1233 # 1234 def __setattr__(self, attr, value): # c:\program files\python35\lib\site-packages\torch\nn\modules\module.py in __getattr__(self, name) # 537 return modules[name] # 538 raise AttributeError("'{}' object has no attribute '{}'".format( # --> 539 type(self).__name__, name)) # 540 # 541 def __setattr__(self, name, value): # AttributeError: 'ScriptModule' object has no attribute 'stride'
st181841
Thanks for the code. The attributes do indeed seem to be hidden and I'm not sure if this is a bug or a feature.

> joe8086: And does ScriptModule have an API to query a module's input list?

I'm not sure I understand the question completely. Could you give an example?
st181842
I'm doing a simple add operation where the graph looks like:

```
graph(%self : ClassType, %1 : Float(1, 3, 224, 224)):
  %2 : Long() = prim::Constant[value={1}]()
  %3 : int = prim::Constant[value=1]()
  %4 : Float(1, 3, 224, 224) = aten::add(%1, %2, %3)
  return (%4)
```

which is correct, but I am seeing that after saving and loading the trace, it changes to:

```
graph(%self : ClassType, %argument_1.1 : Tensor):
  %3 : Tensor = prim::Constant[value={1}]()
  %4 : int = prim::Constant[value=1]()
  %5 : Tensor = aten::add(%argument_1.1, %3, %4)
  return (%5)
```

Is there a reason it doesn't preserve types and names? It seems like a statically allocated tensor is being converted to a dynamic one without any size/shape present? I am using torch.jit.save on the trace object and torch.jit.load. I am used to operating on IntType, etc., but am not sure how to use TensorType.
st181843
Hi,

```python
jit_net = torch.jit.load(saved_path)  # load the pre-trained network defined as a ScriptModule
nn_net = TheSameNet()                 # this is the same network as jit_net but defined as an nn.Module
nn_net.load_state_dict(jit_net.state_dict())
```

I have a pre-trained ScriptModule and now I want to change some forward-pass function of it, therefore I define an exactly equivalent nn.Module class and want to load the weights into this new network. I do this with the above code. However, when I successfully load the pre-trained weights from the ScriptModule model into the nn.Module model, the "nn_net" outputs different results than the "jit_net" for the same inputs. I would like to know if there is a proper way to transfer weights between a JIT ScriptModule and an nn.Module, whether I am missing something, whether it is potentially a bug (the inconsistent output), or whether it is just not recommended to do so (transferring weights between ScriptModule and nn.Module).
st181844
I also ran into this kind of inconsistency before, so I gave up on using the ScriptModule and instead built a parser for the ScriptModule.graph IR, then used the IR labels to access the state_dict. But the IR is not a publicly released, stable interface, so it's still not a good way to handle this issue (the IR may change between versions). I think this is a PyTorch bug that needs to be fixed.
st181845
This is not expected, no, but could happen if there is a source of non-determinism in the model. If you could file a Github issue with a repro, that would be very helpful. Ideally a simple model that we can run to produce an inconsistency.
st181846
Hi, I'm trying to export to ONNX a model which receives a list of word indices for an embedding layer in my model. The problem is that I need a dummy input for the export and my input size will vary. How can I solve this problem? Thanks.
st181847
int main(int argc, const char* argv[]) { if (argc != 2) { std::cerr << “usage: example-app \n”; return -1; } cv::VideoCapture stream(0); cv::namedWindow(“Gesture Detect”, cv::WINDOW_AUTOSIZE); std::shared_ptrtorch::jit::script::Module module = torch::jit::load(argv[1]); module->to(at::kCUDA); … it error fellow : /Applications/CLion.app/Contents/bin/cmake/mac/bin/cmake --build /Users/yuchao/CLionProjects/untitled/cmake-build-debug --target untitled – -j 2 Scanning dependencies of target untitled [ 50%] Building CXX object CMakeFiles/untitled.dir/main.cpp.o /Users/yuchao/CLionProjects/untitled/main.cpp:53:49: error: no viable conversion from ‘script::Module’ to ‘std::shared_ptrtorch::jit::script::Module’ std::shared_ptrtorch::jit::script::Module module = torch::jit::load(argv[1]); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:3753:23: note: candidate constructor not viable: no known conversion from ‘script::Module’ to ‘std::nullptr_t’ (aka ‘nullptr_t’) for 1st argument _LIBCPP_CONSTEXPR shared_ptr(nullptr_t) _NOEXCEPT; ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:3767:5: note: candidate constructor not viable: no known conversion from ‘script::Module’ to ‘const std::__1::shared_ptrtorch::jit::script::Module &’ for 1st argument shared_ptr(const shared_ptr& __r) _NOEXCEPT; ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:3775:5: note: candidate constructor not viable: no known conversion from ‘script::Module’ to ‘std::__1::shared_ptrtorch::jit::script::Module &&’ for 1st argument shared_ptr(shared_ptr&& __r) _NOEXCEPT; ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:3770:9: note: candidate template ignored: could not match ‘shared_ptr’ against ‘torch::jit::script::Module’ shared_ptr(const shared_ptr<_Yp>& __r, ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:3776:52: note: candidate template ignored: could not match ‘shared_ptr’ against ‘torch::jit::script::Module’ template _LIBCPP_INLINE_VISIBILITY shared_ptr(shared_ptr<_Yp>&& __r, ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:3785:9: note: candidate template ignored: could not match ‘auto_ptr’ against ‘torch::jit::script::Module’ shared_ptr(auto_ptr<_Yp>&& __r, ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:3795:9: note: candidate template ignored: could not match ‘unique_ptr<type-parameter-0-0, type-parameter-0-1>’ against ‘torch::jit::script::Module’ shared_ptr(unique_ptr<_Yp, _Dp>&&, ^ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/memory:3804:9: note: candidate template ignored: could not match ‘unique_ptr<type-parameter-0-0, type-parameter-0-1>’ against ‘torch::jit::script::Module’ shared_ptr(unique_ptr<_Yp, _Dp>&&, ^ 1 error generated. make[3]: *** [CMakeFiles/untitled.dir/main.cpp.o] Error 1 make[2]: *** [CMakeFiles/untitled.dir/all] Error 2 make[1]: *** [CMakeFiles/untitled.dir/rule] Error 2 make: *** [untitled] Error 2
st181848
IIRC torch::jit::load now returns a Module rather than a shared_ptr in the C++ frontend. Starting from the PyTorch 1.2 release, script::Module is a reference type; please see the release notes here: https://github.com/pytorch/pytorch/releases
st181849
My question is as in the title. From what I read, there is some speed-up benefit when converting a model to a static graph, so I suppose training a traced or scripted model is faster. Do you have any idea about this?
st181850
The JIT can bring speed benefits for some patterns (especially if your code has lots of unfused pointwise operations). See this post 24 as an example.
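As a concrete illustration of the unfused-pointwise case, here is a small, hedged sketch; the function is just an arbitrary chain of pointwise ops, and whether you actually see a speedup depends on your hardware and PyTorch build:

```python
import torch

def gelu_eager(x):
    # several pointwise ops that eager mode runs as separate kernels
    return 0.5 * x * (1.0 + torch.tanh(0.79788456 * (x + 0.044715 * x * x * x)))

gelu_jit = torch.jit.script(gelu_eager)  # the fuser may combine the pointwise ops

x = torch.randn(1024, 1024, device='cuda' if torch.cuda.is_available() else 'cpu')
for _ in range(10):  # warm-up so the profiling/fusion passes have a chance to run
    gelu_jit(x)
```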
st181851
I would like to call a different function when tracing. I have registered it through torch::jit::RegisterOperators() and had to change it a little for that, so I would like to call it only if it is tracing. Is there a way to find out if the module is being traced? I currently rely on the fact that tensor.shape[0] is a tensor while tracing, but an int otherwise. That is not the best way. Something like torch.jit.is_running would be ideal.
st181852
Solved by wanchaol in post #2.
st181853
Hi, thanks for posting the question. We do have a method to know whether we're in the tracing state or not: torch._C._get_tracing_state(). It will return true if you are tracing and false otherwise. But note that this is an internal API and it might break in the future (although that's unlikely).
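For what it's worth, a minimal sketch of branching on this (the two branch functions are hypothetical placeholders, and the internal API may change):

```python
import torch

def op_for_tracing(x):   # hypothetical tracing-friendly implementation
    return x * 2

def op_for_eager(x):     # hypothetical eager implementation
    return x.mul(2)

def my_op(x):
    # internal API: truthy while torch.jit.trace is recording, falsy otherwise
    if torch._C._get_tracing_state():
        return op_for_tracing(x)
    return op_for_eager(x)
```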
st181854
Thanks! that would be very useful, a public facing API would be most very welcome. It will be better than relying on if isinstance(bbs.shape[0], torch.Tensor).
st181855
Sorry for some basic questions, but the interpreter in the JIT seems to run all the operators in a sequence. Can it be assumed that JIT-based execution of computation graphs is sequential unless we use the "torch.script._fork" and "future.wait()" primitives? Or am I missing something basic?
st181856
Solved by Michael_Suo in post #2 Yes, that is correct
st181857
Hi all, I am curious to understand the reasoning behind https://github.com/pytorch/pytorch/blob/v1.2.0/torch/csrc/jit/passes/python_print.cpp#L788. Why would JIT module export not support multiple values? I also see that the Python-C++ interface only returns the last element of the stack vector (stack.back()) when we invoke a ScriptModule from Python, but why this limitation? Why not return all elements in the stack vector? I am surely missing some technical details and would love to understand this further. Thanks,
st181858
We follow the same conventions as Python here: if a function has multiple return values, they all get wrapped up into a tuple, so it's actually only one return value. You can have as many return values as you want; they will just be wrapped into a tuple that you will have to unpack.
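A small sketch of what that looks like from the caller's perspective (the function itself is just an example):

```python
import torch

@torch.jit.script
def min_max(x):
    # two return values are packed into a single tuple
    return x.min(), x.max()

lo, hi = min_max(torch.arange(5.))  # unpack the tuple on the Python side
print(lo.item(), hi.item())         # 0.0 4.0
```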
st181859
Thanks. I think I understand now. I will dig into the code to look at where this creation of tuples happens on the C++ side
st181860
We have an open issue to document it better and make it a nicer experience: https://github.com/pytorch/pytorch/issues/17165 23
st181861
In graph_executor.cpp, if the graph to be optimized needs a Gradient, runNondiffOptimization(gradient.f) will be called. runNondiffOptimization() runs some optimizations including BatchMM and FuseGraph. I wonder why these optimizations are non-differentiable? Take FuseGraph as an example: it fuses consecutive point-wise ops. If it can fuse f(g(x)), why can't it fuse df(g(x)) * dg(x)? Or is it simply because the current implementation of FuseGraph does not support this?
st181862
Find below a minimum reproducible example that crashes on both PyTorch 1.1 and PyTorch 1.2 with CUDA (it works with CPU).

```python
import torch
from torch import nn

device = torch.device('cuda')  # crashes with cuda, works with cpu

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(2, 16)
        self.linear2 = nn.Linear(2, 16)

    def forward(self, x, y):
        x = self.linear1(x)
        y = self.linear2(y)
        return torch.cat([x, y], dim=-1)  # if we replace -1 with 1 it works fine

model = Model().to(device)
data = [torch.randn(1, 2).to(device), torch.randn(1, 2).to(device)]
traced = torch.jit.trace(model, data)
print(traced)
```

Surprisingly, the above works with the CPU backend but not with the CUDA backend. It also works with torch.cat(..., dim=1) but crashes with a negative dimension referring to the same axis, torch.cat(..., dim=-1). Find the jit.trace error below (not very explanatory):

```
torch.jit.TracingCheckError: Tracing failed sanity checks!
Encountered an exception while running the trace with test inputs.
Exception: vector::_M_range_check: __n (which is 18446744073709551615) >= this->size() (which is 2)
The above operation failed in interpreter, with the following stack trace:
```
st181863
Solved by driazati in post #2.
st181864
This is a bug, you can track it in the corresponding GitHub issue 18, any updates / fixes that go in will get posted there. As a workaround you can wrap the negative index around manually with something like dim=len(x.shape) + (-1)
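A hedged sketch of that workaround applied to the model from the repro above (just wrapping the negative dim by hand before torch.cat; move the module and inputs to CUDA as in the original post if you have a GPU):

```python
import torch
from torch import nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(2, 16)
        self.linear2 = nn.Linear(2, 16)

    def forward(self, x, y):
        x = self.linear1(x)
        y = self.linear2(y)
        dim = len(x.shape) + (-1)          # wrap dim=-1 into a positive index by hand
        return torch.cat([x, y], dim=dim)

traced = torch.jit.trace(Model(), (torch.randn(1, 2), torch.randn(1, 2)))
```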
st181865
Yup sorry! Realised that later and filled an issue with it. x.ndim - 1 should also work.
st181866
I want to change Extremenet 1 to onnx, which programmed a dynamic model, I searched 1 https://pytorch.org/docs/stable/onnx.html#tracing-vs-scripting 2 2 https://github.com/onnx/tutorials 4 It seems not told how to change a dynamic pytorch model to onnx, where is the example of change dynamic model to onnx? below is core dynamic code patch: class exkp(nn.Module): def __init__( self, n, nstack, dims, modules, out_dim, pre=None, cnv_dim=256, make_tl_layer=None, make_br_layer=None, make_cnv_layer=make_cnv_layer, make_heat_layer=make_kp_layer, make_tag_layer=make_kp_layer, make_regr_layer=make_kp_layer, make_up_layer=make_layer, make_low_layer=make_layer, make_hg_layer=make_layer, make_hg_layer_revr=make_layer_revr, make_pool_layer=make_pool_layer, make_unpool_layer=make_unpool_layer, make_merge_layer=make_merge_layer, make_inter_layer=make_inter_layer, kp_layer=residual ): super(exkp, self).__init__() self.nstack = nstack self._decode = _exct_decode curr_dim = dims[0] self.pre = nn.Sequential( convolution(7, 3, 128, stride=2), residual(3, 128, 256, stride=2) ) if pre is None else pre self.kps = nn.ModuleList([ kp_module( n, dims, modules, layer=kp_layer, make_up_layer=make_up_layer, make_low_layer=make_low_layer, make_hg_layer=make_hg_layer, make_hg_layer_revr=make_hg_layer_revr, make_pool_layer=make_pool_layer, make_unpool_layer=make_unpool_layer, make_merge_layer=make_merge_layer ) for _ in range(nstack) ]) self.cnvs = nn.ModuleList([ make_cnv_layer(curr_dim, cnv_dim) for _ in range(nstack) ]) ## keypoint heatmaps self.t_heats = nn.ModuleList([ make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack) ]) self.l_heats = nn.ModuleList([ make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack) ]) self.b_heats = nn.ModuleList([ make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack) ]) self.r_heats = nn.ModuleList([ make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack) ]) self.ct_heats = nn.ModuleList([ make_heat_layer(cnv_dim, curr_dim, out_dim) for _ in range(nstack) ]) for t_heat, l_heat, b_heat, r_heat, ct_heat in \ zip(self.t_heats, self.l_heats, self.b_heats, \ self.r_heats, self.ct_heats): t_heat[-1].bias.data.fill_(-2.19) l_heat[-1].bias.data.fill_(-2.19) b_heat[-1].bias.data.fill_(-2.19) r_heat[-1].bias.data.fill_(-2.19) ct_heat[-1].bias.data.fill_(-2.19) self.inters = nn.ModuleList([ make_inter_layer(curr_dim) for _ in range(nstack - 1) ]) self.inters_ = nn.ModuleList([ nn.Sequential( nn.Conv2d(curr_dim, curr_dim, (1, 1), bias=False), nn.BatchNorm2d(curr_dim) ) for _ in range(nstack - 1) ]) self.cnvs_ = nn.ModuleList([ nn.Sequential( nn.Conv2d(cnv_dim, curr_dim, (1, 1), bias=False), nn.BatchNorm2d(curr_dim) ) for _ in range(nstack - 1) ]) self.t_regrs = nn.ModuleList([ make_regr_layer(cnv_dim, curr_dim, 2) for _ in range(nstack) ]) self.l_regrs = nn.ModuleList([ make_regr_layer(cnv_dim, curr_dim, 2) for _ in range(nstack) ]) self.b_regrs = nn.ModuleList([ make_regr_layer(cnv_dim, curr_dim, 2) for _ in range(nstack) ]) self.r_regrs = nn.ModuleList([ make_regr_layer(cnv_dim, curr_dim, 2) for _ in range(nstack) ]) self.relu = nn.ReLU(inplace=True) def _train(self, *xs): image = xs[0] t_inds = xs[1] l_inds = xs[2] b_inds = xs[3] r_inds = xs[4] inter = self.pre(image) outs = [] layers = zip( self.kps, self.cnvs, self.t_heats, self.l_heats, self.b_heats, self.r_heats, self.ct_heats, self.t_regrs, self.l_regrs, self.b_regrs, self.r_regrs, ) for ind, layer in enumerate(layers): kp_, cnv_ = layer[0:2] t_heat_, l_heat_, b_heat_, r_heat_ = layer[2:6] 
ct_heat_ = layer[6] t_regr_, l_regr_, b_regr_, r_regr_ = layer[7:11] kp = kp_(inter) cnv = cnv_(kp) t_heat, l_heat = t_heat_(cnv), l_heat_(cnv) b_heat, r_heat = b_heat_(cnv), r_heat_(cnv) ct_heat = ct_heat_(cnv) t_regr, l_regr = t_regr_(cnv), l_regr_(cnv) b_regr, r_regr = b_regr_(cnv), r_regr_(cnv) t_regr = _tranpose_and_gather_feat(t_regr, t_inds) l_regr = _tranpose_and_gather_feat(l_regr, l_inds) b_regr = _tranpose_and_gather_feat(b_regr, b_inds) r_regr = _tranpose_and_gather_feat(r_regr, r_inds) outs += [t_heat, l_heat, b_heat, r_heat, ct_heat, \ t_regr, l_regr, b_regr, r_regr] if ind < self.nstack - 1: inter = self.inters_[ind](inter) + self.cnvs_[ind](cnv) inter = self.relu(inter) inter = self.inters[ind](inter) # print("+++++++++++++++outs shape:", outs[0].shape, outs[1].shape, outs[2].shape,outs[3].shape,outs[4].shape,outs[5].shape,outs[6].shape,outs[7].shape,outs[8].shape,) # print("+++++++++++++++outs shape:", outs[9].shape, outs[10].shape, outs[11].shape,outs[12].shape,outs[13].shape,outs[14].shape,outs[15].shape,outs[16].shape,outs[17].shape,) return outs # @torch.jit.script def _test(self, *xs, **kwargs): image = xs[0] inter = self.pre(image) outs = [] layers = zip( self.kps, self.cnvs, self.t_heats, self.l_heats, self.b_heats, self.r_heats, self.ct_heats, self.t_regrs, self.l_regrs, self.b_regrs, self.r_regrs, ) for ind, layer in enumerate(layers): kp_, cnv_ = layer[0:2] t_heat_, l_heat_, b_heat_, r_heat_ = layer[2:6] ct_heat_ = layer[6] t_regr_, l_regr_, b_regr_, r_regr_ = layer[7:11] kp = kp_(inter) cnv = cnv_(kp) if ind == self.nstack - 1: t_heat, l_heat = t_heat_(cnv), l_heat_(cnv) b_heat, r_heat = b_heat_(cnv), r_heat_(cnv) ct_heat = ct_heat_(cnv) t_regr, l_regr = t_regr_(cnv), l_regr_(cnv) b_regr, r_regr = b_regr_(cnv), r_regr_(cnv) outs += [t_heat, l_heat, b_heat, r_heat, ct_heat, t_regr, l_regr, b_regr, r_regr] if ind < self.nstack - 1: inter = self.inters_[ind](inter) + self.cnvs_[ind](cnv) inter = self.relu(inter) inter = self.inters[ind](inter) # if kwargs['debug']: # _debug(image, t_heat, l_heat, b_heat, r_heat, ct_heat) # del kwargs['debug'] # print("output shape: ", self._decode(*outs[-9:], **kwargs).shape) return self._decode(*outs[-9:], **kwargs) # return outs[-9:] def forward(self, *xs, **kwargs): if len(xs) > 1: return self._train(*xs, **kwargs) return self._test(*xs, **kwargs)
st181867
Is there any chance to have the torch.jit.trace instruction work on a maskrcnn_resnet50_fpn model, since it doesn't seem to work yet? See https://github.com/pytorch/vision/issues/1002. Thanks
st181868
Given a torch nn.Module with a pre-forward hook, e.g.

```python
import torch
import torch.nn as nn

class NeoEmbeddings(nn.Embedding):
    def __init__(self, num_embeddings:int, embedding_dim:int, padding_idx=-1):
        super().__init__(num_embeddings, embedding_dim, padding_idx)
        self.register_forward_pre_hook(self.neo_genesis)

    @staticmethod
    def neo_genesis(self, input, higgs_bosson=0):
        if higgs_bosson:
            input = input + higgs_bosson
        return input
```

it's possible to let an input tensor go through some manipulation before going to the actual forward() function, e.g.

```python
>>> x = NeoEmbeddings(10, 5, 1)
>>> x.forward(torch.tensor([0,2,5,8]))
tensor([[-1.6449,  0.5832, -0.0165, -1.3329,  0.6878],
        [-0.3262,  0.5844,  0.6917,  0.1268,  2.1363],
        [ 1.0772,  0.1748, -0.7131,  0.7405,  1.5733],
        [ 0.7651,  0.4619,  0.4388, -0.2752, -0.3018]], grad_fn=<EmbeddingBackward>)

>>> print(x._forward_pre_hooks)
OrderedDict([(25, <function NeoEmbeddings.neo_genesis at 0x1208d10d0>)])
```

How could we pass the arguments (*args or **kwargs) that the pre-forward hook needs but that aren't accepted by the default forward() function? Without modifying/overriding the forward() function, this is not possible:

```python
>>> x = NeoEmbeddings(10, 5, 1)
>>> x.forward(torch.tensor([0,2,5,8]), higgs_bosson=2)

----------------------------------------------------------------------
TypeError                            Traceback (most recent call last)
<ipython-input-102-8705a40a3cc2> in <module>
      1 x = NeoEmbeddings(10, 5, 1)
----> 2 x.forward(torch.tensor([0,2,5,8]), higgs_bosson=2)

TypeError: forward() got an unexpected keyword argument 'higgs_bosson'
```
st181869
Also on https://stackoverflow.com/questions/57703808/how-do-i-pass-a-keyword-argument-to-the-forward-used-by-a-pre-forward-hook 207
st181870
Hi, Why is the forward pre-hook necessary here? Why not include it at the beginning of the forward? Or if you want to inherit the forward from the parent class, create a new forward, do the preprocessing and then call the parent forward with super().forward(args).
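A minimal sketch of the second suggestion (do the preprocessing in an overridden forward and then call the parent's forward); this just re-expresses the example from the question without a hook:

```python
import torch
import torch.nn as nn

class NeoEmbeddings(nn.Embedding):
    def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx=1):
        super().__init__(num_embeddings, embedding_dim, padding_idx)

    def forward(self, input, higgs_bosson=0):
        if higgs_bosson:
            input = input + higgs_bosson     # preprocessing previously done in the hook
        return super().forward(input)        # reuse nn.Embedding's forward

x = NeoEmbeddings(10, 5, 1)
out = x(torch.tensor([0, 2, 5, 8]), higgs_bosson=1)  # extra kwarg now flows into forward()
```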
st181871
I have a torch script function that accepts a Tuple[Tensor]:

```python
@torch.jit.script
def smt_pred(confs, child, child_sizes, threshold, obj, height, width, a):
    # type: (Tuple[Tensor], Tuple[Tensor], Tuple[Tensor], Tensor, Tensor, Tensor, Tensor, Tensor) -> Tuple[Tensor, Tensor]
```

When I call the code in Python 2 I get the error:

```
RuntimeError: smt_pred() Expected a value of type 'Tuple[Tensor]' for argument 'confs' but instead found type 'tuple'.
```

tuple is the same as Tuple, and I could not find how to annotate confs to be a tuple of tensors. Is it possible in Python 2? Changing Tuple to List works in Python 2, so this is only about Tuple.
st181872
Solved by driazati in post #3.
st181873
Added an issue here 5 because List works (and I just have to pass list(confs) as a workaround)
st181874
Copied from the issue: I'm guessing that you're passing in a tuple that doesn't look like (torch.ones(2, 2),). For a Tuple the type needs to be fixed and completely specified, and the elements can be different types. In TorchScript, a, b, and c below are all different types:

```python
# type is `Tuple[int, int, int]`
a = (1, 2, 3)

# type is `Tuple[str, int, int]`
b = ('hi', 2, 3)

# type is `Tuple[int]`
c = (2,)
```

For lists, the element types must all be the same, which is why a list can be any length but you only need to specify List[Tensor] or List[int]. You can read more about it here.
st181875
Hello, I am using torch.jit.script to export my models but I encountered a problem with ResNet. This is my model definition:

```python
class Model(nn.Module):
    def __init__(self, num_cats):
        super(Model, self).__init__()
        self.model = torchvision.models.resnet18(pretrained=True)
        self.model.fc = nn.Linear(self.model.fc.in_features, num_cats)
        self.model = torch.jit.trace(self.model, torch.rand(1, 3, 224, 224))

    def forward(self, x):
        x = self.model(x)
        return x
```

Training works fine and validation accuracy goes as expected. Finally I export my model like this:

```python
scripted = torch.jit.script(model)
torch.jit.save(scripted, 'scripted_model.pth')
```

The thing is that when I load my model

```python
model = torch.jit.load('scripted_model.pth')
```

and use it to perform inference on the entire validation set with the same batch size as the one used for training, I get the same predictions. But if I try to perform inference on a single image (or on the validation set with a small batch size), I get totally wrong predictions. I've tried the same example with a custom CNN and I get good results. Also, I tried exporting the ResNet model with torch.jit.trace and torch.jit.trace_module (instead of torch.jit.script) and also got good results. I would appreciate any help on the issue, since I find the functionality of torch.jit.script very powerful for production.
st181876
Thanks for the minimal repro! This sounds like a bug, would you mind filing an issue on GitHub 6 and adding some info about your environment so we can track it better? From a first look I’m not getting any difference between eager mode, the original scripted model, and the loaded scripted model (script here 4). Since self.model here is a traced graph the compilation should also be very simple since all it would really compile is the return self.model(x) statement, so I’m not sure why the results would be different.
st181877
Hi, you are right that there is no difference between the models; the thing is that they are all wrong. Consider the following models:

```python
class Model1(nn.Module):
    def __init__(self, num_cats):
        super(Model1, self).__init__()
        self.model = torchvision.models.resnet18(pretrained=True)
        self.model.fc = nn.Linear(self.model.fc.in_features, num_cats)
        self.model = torch.jit.trace(self.model, torch.rand(1, 3, 224, 224))

    def forward(self, x):
        x = self.model(x)
        return x

class Model2(nn.Module):
    def __init__(self, num_cats):
        super(Model2, self).__init__()
        self.model = torchvision.models.resnet18(pretrained=True)
        self.model.fc = nn.Linear(self.model.fc.in_features, num_cats)

    def forward(self, x):
        x = self.model(x)
        return x
```

I can train both models with the same dataset and hyperparameters and achieve similar expected results (good results). I can then export them as follows:

```python
scripted1 = torch.jit.script(model1)
torch.jit.save(scripted1, 'scripted_model1.pth')

scripted2 = torch.jit.trace(model2)
torch.jit.save(scripted2, 'scripted_model2.pth')
```

And load them:

```python
loaded1 = torch.jit.load('scripted_model1.pth')
loaded2 = torch.jit.load('scripted_model2.pth')
```

In all cases the three versions of each model give the same results; the difference is that the first version only works fine when performing inference with a large batch size (and fails on single-image inference). The second model, however, gives good predictions in all cases. PS: I tried the same experiment as the one proposed here but with a custom CNN in place of resnet18, and I observe the same problem. Hence it seems to be something related to the interaction between torch.jit.script and torch.jit.trace.
st181878
I am trying to script LearnedPositionalEmbedding from Fairseq:

```python
import torch.nn as nn
import torch
import torch.jit
from fairseq import utils

class LearnedPositionalEmbedding(nn.Embedding):

    def __init__(self, num_embeddings, embedding_dim, padding_idx, left_pad):
        super().__init__(num_embeddings, embedding_dim, padding_idx)
        self.left_pad = left_pad
        # self.register_buffer('padding_idx_', padding_idx)

    def forward(self, input_, incremental_state=None):
        """Input is expected to be of size [bsz x seqlen]."""
        if incremental_state is not None:
            positions = input_.data.new(1, 1).fill_(self.padding_idx_ + input_.size(1))
        else:
            positions = utils.make_positions(input_.data, self.padding_idx_, self.left_pad)
        return super().forward(positions)

    def max_positions(self):
        """Maximum number of supported positions."""
        return self.num_embeddings - self.padding_idx_ - 1

padding_idx = torch.tensor([[1]], dtype=torch.long)
model = LearnedPositionalEmbedding(4, 5, padding_idx, False)
model_scripted = torch.jit.script(model)
```

And I get the following error:

```
TypeError: 'Tensor' object for attribute 'padding_idx' is not a valid constant.
Valid constants are:
  1. a nn.ModuleList
  2. a value of type {bool, float, int, str, NoneType, function, device, layout, dtype}
  3. a list or tuple of (2)
```

Even issue #16284 is not of help.
st181879
In the nn.Embedding provided by PyTorch, padding_idx is an int that doesn't change; for that reason we have added it to __constants__ in nn.Embedding (code here). Since LearnedPositionalEmbedding does not override __constants__, it gets the one from nn.Embedding. If you want padding_idx to be a Tensor (which is not a supported constant type), you'll have to provide your own __constants__ that removes padding_idx but keeps the other stuff around, something like:

```python
from fairseq import utils

class LearnedPositionalEmbedding(nn.Embedding):
    __constants__ = ['num_embeddings', 'embedding_dim', 'max_norm',
                     'norm_type', 'scale_grad_by_freq', 'sparse']

    def __init__(self, num_embeddings, embedding_dim, padding_idx, left_pad):
        super().__init__(num_embeddings, embedding_dim, padding_idx)
        self.left_pad = left_pad
```
st181880
I get the below error when trying to trace the following code. frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f3e4b04d441 in /bigdisk2/sunil/pytorchnightly/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f3e4b04cd7a in /bigdisk2/sunil/pytorchnightly/lib/python3.6/site-packages/torch/lib/libc10.so) The code I am trying is this: import torch import torch.jit import copy from torch.nn import functional as F from torch.nn import Module from torch.nn import MultiheadAttention from torch.nn import ModuleList from torch.nn.init import xavier_uniform_ from torch.nn import Dropout from torch.nn import Linear from torch.nn import LayerNorm class TransformerEncoderLayer(Module): r"""TransformerEncoderLayer is made up of self-attn and feedforward network. This standard encoder layer is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application. Args: d_model: the number of expected features in the input (required). nhead: the number of heads in the multiheadattention models (required). dim_feedforward: the dimension of the feedforward network model (default=2048). dropout: the dropout value (default=0.1). Examples:: >>> encoder_layer = nn.TransformerEncoderLayer(d_model, nhead) """ def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1): super(TransformerEncoderLayer, self).__init__() self.self_attn = MultiheadAttention(d_model, nhead, dropout=dropout) # Implementation of Feedforward model self.linear1 = Linear(d_model, dim_feedforward) self.dropout = Dropout(dropout) self.linear2 = Linear(dim_feedforward, d_model) self.norm1 = LayerNorm(d_model) self.norm2 = LayerNorm(d_model) self.dropout1 = Dropout(dropout) self.dropout2 = Dropout(dropout) def forward(self, src, src_mask=None, src_key_padding_mask=None): r"""Pass the input through the endocder layer. Args: src: the sequnce to the encoder layer (required). src_mask: the mask for the src sequence (optional). src_key_padding_mask: the mask for the src keys per batch (optional). Shape: see the docs in Transformer class. """ # src2 = self.self_attn(src, src, src, attn_mask=src_mask, # key_padding_mask=src_key_padding_mask)[0] # src = src + self.dropout1(src2) src = self.norm1(src) # print(src) # src2 = self.linear2(self.dropout(F.relu(self.linear1(src)))) # src = src + self.dropout2(src2) src = self.norm2(src) # print(src) return src layerencoder = TransformerEncoderLayer(512,1).to('cuda') input = torch.rand(1,1,512).to('cuda') print(layerencoder(input).shape) layerencoder.eval() traced_model = torch.jit.trace(layerencoder,input) If I remove the second layer normalization, I am able to trace. I am using PyTorch nightly build.
st181881
Solved by Michael_Suo in post #2 Thanks for the report! Can you file an issue on github and we can track it from there? cc @wanchaol
st181882
Thanks for the report! Can you file an issue on github and we can track it from there? cc @wanchaol
st181883
@Michael_Suo I updated my cuda_driver to 10 and things are working fine now for me. But I am getting issues with dropout layers. I will create an issue with that on weekend.
st181884
With PyTorch 1.2, I have noticed that any alias specification when registering an operator causes a crash (tried on Win64). So I had to change "some_op(Tensor(a) t) -> Tensor(a)" to "some_op(Tensor t) -> Tensor" to make it work again. Is there an alternative?
st181885
Solved by leowalkling in post #4.
st181886
Do you have a minimal code example reproducing the crash? And what is the error message you’re seeing exactly?
st181887
The error message is simply:

```
OSError: [WinError 1114] A dynamic link library (DLL) initialization routine failed
```

I have created a repo for it: https://github.com/leowalkling/pytorch_register_op_minimal

I'm using scikit-build to easily generate a solution file, but the compilation flags are taken from torch.utils.cpp_extension.
st181888
I found out that in PyTorch 1.2, the default behavior is not to allow alias specifications, leading to a segfault. This can be changed, but alias specifications are not allowed for third-party extensions, as stated in the error message:

```
RuntimeError: node->kind().is_prim() || node->kind().is_aten() INTERNAL ASSERT FAILED at ..\torch\csrc\jit\passes\alias_analysis.cpp:392, please report a bug to PyTorch. The current code base should only have AliasAnalysisKind::FROM_SCHEMA for aten:: and prim:: ops but we found it for tp::broadcast_tensors. We want to open this up though. (analyzeImpl at ..\torch\csrc\jit\passes\alias_analysis.cpp:392)
(no backtrace available)
```

To get this error message, I passed an Options struct as a third (optional) argument to RegisterOperators.op(), i.e.:

```cpp
torch::RegisterOperators()
    .op(
        "some_op(Tensor(a) t) -> Tensor(a)",
        &some_op,
        torch::RegisterOperators::options().aliasAnalysis(c10::AliasAnalysisKind::FROM_SCHEMA))
```
st181889
I created a model that uses two ResNets (pre-trained) as feature extractors, i.e. a pre-trained ResNet as a submodule. Training and testing work well in PyTorch, but if I use a simple torch.jit.trace while converting, the following error occurs:

```
could not export python function call Scatter. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
```

If I want to use a submodule, it looks like I need to use trace and script together, as shown in the code below from the official PyTorch homepage:

```python
import torch
import torchvision

class MyScriptModule(torch.nn.Module):
    def __init__(self):
        super(MyScriptModule, self).__init__()
        self.means = torch.nn.Parameter(torch.tensor([103.939, 116.779, 123.68])
                                        .resize_(1, 3, 1, 1))
        self.resnet = torch.jit.trace(torchvision.models.resnet18(),
                                      torch.rand(1, 3, 224, 224))

    def forward(self, input):
        return self.resnet(input - self.means)

my_script_module = torch.jit.script(MyScriptModule())
```

But this also causes an error. Please advise if this approach is correct or if there is a way to make a simple trace work. Please…
st181890
The code snippet from the website you posted works fine for me (on PyTorch 1.2, are you up to date?), can you post a full code example that results in the could not export Python function.. error?
st181891
When I want to script a model, a warning message appears: "RuntimeWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters()." But if I don't script the model and just run it, the warning message doesn't appear. I wonder whether this is a bug, and could it be solved?
st181892
I have the same problem as this GitHub issue when running the model in C++ on the GPU. If I don't run it on the GPU, it works well.
st181893
My PyTorch version is 1.1, CUDA is 9.0, and cuDNN is 7.3.0. Can anyone help me?
st181894
In fastrnn's custom LSTM, we can conveniently define a JIT LayerNormLSTM. I am wondering how we could also make it support PackedSequence as input data for tensors of varied lengths?
st181895
Solved by eellison in post #2 This should be supported:
st181896
This should be supported: see the pytorch/pytorch GitHub issue "TorchScript support for torch.nn.utils.rnn.pack_padded_sequence() and torch.nn.utils.rnn.pad_packed_sequence()", which was closed on 2019-08-07.
st181897
Hi! I'm trying to jit.script-compile a model which uses the t[t != t] = ... trick to fill the NaNs of a tensor with a default value. However, TorchScript does not seem to appreciate this kind of indexing. Does anyone know a solution/workaround? Thank you! Here's a small example plus error message:

```python
@torch.jit.script
def f():
    t = torch.tensor(0.)
    t[t!=t] = 7
```

```
RuntimeError:
Arguments for call are not valid.
The following operator variants are available:

  aten::index_put_(Tensor(a) self, Tensor?[] indices, Tensor values, bool accumulate=False) -> (Tensor(a)):
  Expected a value of type 'Tensor' for argument 'values' but instead found type 'int'.

  aten::index_put_(Tensor(a) self, Tensor[] indices, Tensor values, bool accumulate=False) -> (Tensor(a)):
  Expected a value of type 'List[Tensor]' for argument 'indices' but instead found type 'List[Optional[Tensor]]'.

The original call is:
at <ipython-input-21-45d08c8a21f7>:4:5
    @torch.jit.script
    def f():
        t = torch.tensor(0.)
        t[t!=t] = 7
        ~~~~~~~~~~~ <--- HERE
```
st181898
Solved by ptrblck in post #2.
st181899
I'm not sure if it's the indexing or rather the rhs of the assignment, since the error says: "Expected a value of type 'Tensor' for argument 'values' but instead found type 'int'." This seems to work:

```python
@torch.jit.script
def f():
    t = torch.tensor([0.])
    t[t!=t] = torch.tensor(7.)
    return t

f()
```