st180600
I am trying to build PyTorch from source using TBB instead of OpenMP but I am getting the following errors every time:

/usr/bin/ld: <PATH>/pytorch/build/lib/libtorch_cpu.so: undefined reference to `omp_get_max_threads'
/usr/bin/ld: <PATH>/pytorch/build/lib/libtorch_cpu.so: undefined reference to `omp_get_num_threads'
/usr/bin/ld: <PATH>/pytorch/build/lib/libtorch_cpu.so: undefined reference to `GOMP_barrier'
/usr/bin/ld: <PATH>/pytorch/build/lib/libtorch_cpu.so: undefined reference to `GOMP_parallel'
/usr/bin/ld: <PATH>/pytorch/build/lib/libtorch_cpu.so: undefined reference to `omp_in_parallel'
/usr/bin/ld: <PATH>/pytorch/build/lib/libtorch_cpu.so: undefined reference to `omp_get_thread_num'
collect2: error: ld returned 1 exit status
make[2]: *** [caffe2/CMakeFiles/kernel_function_legacy_test.dir/build.make:109: bin/kernel_function_legacy_test] Error 1

You can find the full log file HERE:

To Reproduce
Steps to reproduce the behavior:
git clone https://github.com/pytorch/pytorch
cd pytorch
ATEN_THREADING=TBB BUILD_BINARY=1 USE_EIGEN_THREADPOOL=1 USE_CUDA=0 PARALLEL_BACKEND=NATIVE_TBB USE_OPENMP=0 USE_TBB=1 MKL_THREADING=TBB BLAS=MKL USE_MKLDNN=1 MKLDNN_THREADING=TBB BUILD_BINARY=1 python setup.py build --cmake 2>&1 | tee ~/output.txt

PyTorch Version (e.g., 1.0): 1.10.0a0+git1798ff0
OS (e.g., Linux): Manjaro
How you installed PyTorch (conda, pip, source): pip
Python version: 3.9.5
CUDA/cuDNN version: NO
GPU models and configuration: NO
st180601
Solved by Vlad_Dumitru in post #3 Yes I did, so far as I understood it was because of some flags for whatever reason. If I am running the following command everything is working properly. BUILD_BINARY=1 USE_EIGEN_THREADPOOL=1 USE_CUDA=0 PARALLEL_BACKEND=NATIVE_TBB USE_OPENMP=0 USE_TBB=1 MKL_THREADING=TBB BLAS=MKL USE_MKLDNN=1 MKLDN…
st180602
Yes I did, so far as I understood it was because of some flags for whatever reason. If I am running the following command everything is working properly. BUILD_BINARY=1 USE_EIGEN_THREADPOOL=1 USE_CUDA=0 PARALLEL_BACKEND=NATIVE_TBB USE_OPENMP=0 USE_TBB=1 MKL_THREADING=TBB BLAS=MKL USE_MKLDNN=1 MKLDNN_THREADING=TBB BUILD_BINARY=1 python setup.py build --cmake 2>&1 | tee ~/output.txt
st180603
Hi I’m trying to debug an unrelated issue and I was trying to construct a very basic test case. However, I’m getting a different error when I run my basic test case that makes me question if I understand the basics of jit compilation. I defined the following two dummy classes

class Dummy1():
    def __init__(self):
        self.__name__ = "dummy"
    def forward(self, x: Tensor) -> Tuple[Tensor, Optional[Tensor]]:
        return (x, x)

class Dummy2(Dummy1):
    def __init__(self):
        super().__init__()
    def forward(self, x: Tensor) -> Tuple[Tensor, Optional[Tensor]]:
        return super.forward(x)

And I get the following error when trying to torch.jit.script an object of Dummy2

>>> import torch
>>> torch.__version__
'1.8.1'
>>> import multihead_attention_window
>>> dummy = multihead_attention_window.Dummy2()
>>> torch.jit.script(dummy)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/psridhar/.pyenv/versions/py3/lib/python3.8/site-packages/torch/jit/_script.py", line 986, in script
    ast = get_jit_def(obj, obj.__name__)
  File "/Users/psridhar/.pyenv/versions/py3/lib/python3.8/site-packages/torch/jit/frontend.py", line 240, in get_jit_def
    sourcelines, file_lineno, filename = get_source_lines_and_file(fn, torch._C.ErrorReport.call_stack())
  File "/Users/psridhar/.pyenv/versions/py3/lib/python3.8/site-packages/torch/_utils_internal.py", line 54, in get_source_lines_and_file
    filename = inspect.getsourcefile(obj)
  File "/Users/psridhar/.pyenv/versions/3.8.1/lib/python3.8/inspect.py", line 696, in getsourcefile
    filename = getfile(object)
  File "/Users/psridhar/.pyenv/versions/3.8.1/lib/python3.8/inspect.py", line 676, in getfile
    raise TypeError('module, class, method, function, traceback, frame, or '
TypeError: module, class, method, function, traceback, frame, or code object was expected, got Dummy2

I googled around and noticed this might be because of a torch version issue? But I’m on torch 1.8.1. What am I missing here?
st180604
The error is raised since you are using Python classes instead of nn.Modules for the scripting. After deriving from nn.Module this error would be gone. However, TorchScript doesn’t support inheritance as described here 23, so you might run into other issues with this approach.
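A minimal sketch of that suggestion, built from the snippet above: both dummies derive from nn.Module, and since TorchScript does not support inheritance, Dummy2 composes Dummy1 instead of subclassing it.

import torch
from torch import Tensor, nn
from typing import Optional, Tuple

class Dummy1(nn.Module):
    def forward(self, x: Tensor) -> Tuple[Tensor, Optional[Tensor]]:
        return (x, x)

class Dummy2(nn.Module):
    def __init__(self):
        super().__init__()
        self.inner = Dummy1()  # compose instead of inherit

    def forward(self, x: Tensor) -> Tuple[Tensor, Optional[Tensor]]:
        return self.inner(x)

scripted = torch.jit.script(Dummy2())
print(scripted(torch.zeros(2, 3)))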
st180605
Ah, thanks for parsing out that error message for me! And thanks for that inheritance post. That was exactly what I was trying to debug.
st180606
Hello everyone, I’m trying to have weight caching, but on one of my models it fails. For caching I’m simply running this snippet:

fname = 'cached_weights_s3fd.pt'
test_imgs = Detector.gen_fake_img(batch=1)
traced = torch.jit.trace(self.detector, test_imgs, check_trace=False)
script = traced.save(fname)
self.detector = torch.jit.load(fname)

and the full stacktrace is as follows:

Traceback (most recent call last):
  File "d:\Codes\Pytorch_Retinaface\detection_core\main_detector.py", line 2074, in <module>
    run_test()
  File "d:\Codes\Pytorch_Retinaface\detection_core\main_detector.py", line 2065, in run_test
    run_capture_sfd()
  File "d:\Codes\Pytorch_Retinaface\detection_core\main_detector.py", line 2002, in run_capture_sfd
    after_detection_fn=None)
  File "Pytorch\detection_core\main_detector.py", line 1523, in __init__
    self._init()
  File "Pytorch\detection_core\main_detector.py", line 1536, in _init
    traced = torch.jit.trace(self.detector, test_imgs, check_trace=False)
  File "C:\Users\User\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 911, in trace
    name = _qualified_name(func)
  File "C:\Users\User\Anaconda3\lib\site-packages\torch\_jit_internal.py", line 683, in _qualified_name
    name = obj.__name__
AttributeError: 'FaceAlignment' object has no attribute '__name__'

What’s wrong? As far as I know all classes have a __name__ attribute, including the class mentioned in the error (the name is ‘FaceAlignment’, obviously), so I’m not sure why I’m getting this. Any help is greatly appreciated.
st180607
Upon further investigation, it turns out that in \torch\_jit_internal.py, line 684, the object causing the error is of type <class 'function'>, and the object itself is <function LSTM.forward at 0x00000214C495D048>. The irony is, I do not have any LSTM cell in my model! Why am I seeing this?
st180608
Thanks for doing some investigation! Are you able to share the binary for cached_weights_s3fd.pt so we can try to debug this on our end? I think what you’re seeing here is some initialization the jit does when you first call torch.jit.script or torch.jit.trace. We have to grab the qualified names for a few modules that have overloaded forward methods (only nn.LSTM and nn.GRU), which is what you’re seeing. You could try setting a breakpoint and skipping past these calls to get to the time it’s actually invoked with your class.
st180609
Shisho_Sama:
fname = 'cached_weights_s3fd.pt'
test_imgs = Detector.gen_fake_img(batch=1)
traced = torch.jit.trace(self.detector, test_imgs, check_trace=False)
script = traced.save(fname)
self.detector = torch.jit.load(fname)

The cached weights are not generated, it fails before that. You can replicate the issue by cloning this repo: https://github.com/1adrianb/face-alignment 5 and running this snippet:

import face_alignment
import torch
import torchvision as tv

def gen_fake_img(samples=1000, batch=10, image_size=(3, 224, 224), num_classes=2):
    fake_dt = tv.datasets.FakeData(samples, image_size, num_classes, tv.transforms.ToTensor())
    dt_ldr = torch.utils.data.DataLoader(fake_dt, batch, pin_memory=True)
    sample_imgs, _ = next(iter(dt_ldr))
    return sample_imgs

detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu')
fname = 'cached_weights_s3fd.pt'
test_imgs = gen_fake_img(batch=1)
traced = torch.jit.trace(detector, test_imgs, check_trace=False)
script = traced.save(fname)
detector = torch.jit.load(fname)
fa.get_landmarks('test/assets/aflw-test.jpg')
st180610
Hi @Shisho_Sama, I am also having the same error. I used the same github repo. What changes did you make in the code to get it up and running? I used the below code.

import torch
import torchvision
import face_alignment

# Optionally set detector and some additional detector parameters
face_detector = 'sfd'
face_detector_kwargs = {
    "filter_threshold": 0.8
}

model = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, device='cpu', flip_input=True, face_detector=face_detector)
example = io.imread('/test/assets/aflw-test.jpg')
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("traced_facealignment_model.pt")
st180611
I implemented my own version of CaffeNet in PyTorch and I am trying to save the model in an onnx file. But I have the error described in the title when I run this code:

import torch.nn as nn
import torch

class Flatten(nn.Module):
    def forward(self, x):
        return x.view(x.size(0), -1)

class CaffeNet(nn.Module):
    def __init__(self):
        super(CaffeNet, self).__init__()
        self.layers = []
        self.layers.append(nn.Conv2d(in_channels=3, out_channels=3, kernel_size=11))
        self.layers.append(nn.ZeroPad2d(padding=1))
        self.layers.append(nn.Conv2d(3, 96, 55))
        self.layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        self.layers.append(nn.ZeroPad2d(padding=1))
        self.layers.append(nn.Conv2d(96, 192, 27))
        self.layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        self.layers.append(nn.ZeroPad2d(padding=1))
        self.layers.append(nn.Conv2d(192, 288, 13))
        self.layers.append(nn.ZeroPad2d(padding=1))
        self.layers.append(nn.Conv2d(288, 288, 13))
        self.layers.append(nn.ZeroPad2d(padding=1))
        self.layers.append(nn.Conv2d(288, 256, 11))  # 12-th layer, normally kernel_size=13
        self.layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        self.layers.append(nn.ZeroPad2d(padding=1))
        self.layers.append(nn.Conv2d(256, 256, 3))  # 15-th layer, normally kernel_size=13
        self.layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        self.layers.append(nn.ZeroPad2d(padding=1))
        self.layers.append(Flatten())
        self.layers.append(nn.Linear(2304, 2304))  # 19-th layer, normally 4096 neurons
        self.layers.append(nn.ReLU())
        self.layers.append(nn.Dropout())
        self.layers.append(nn.Linear(2304, 2304))
        self.layers.append(nn.ReLU())
        self.layers.append(nn.Dropout())
        self.layers.append(nn.Linear(2304, 1000))
        self.layers.append(nn.Softmax())

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

net = CaffeNet()
input = torch.randn(20, 3, 227, 227, requires_grad=False)
input_names = ["input"]
output_names = ["output"]
torch.onnx.export(net, input, "caffenet.onnx", verbose=True, input_names=input_names, output_names=output_names)

I am using python2.7. The last line of the stack trace is: “RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient”
st180612
The modules you’re adding have some Parameters, so they need to be submodules of CaffeNet. It should work if you use a torch.nn.Sequential instead of a regular Python list for self.layers.
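A minimal sketch of that fix on the snippet above, keeping the same layer sequence but registering it as a submodule via nn.Sequential (only the first few layers are shown; the rest follow the same pattern):

import torch
import torch.nn as nn

class CaffeNet(nn.Module):
    def __init__(self):
        super(CaffeNet, self).__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 3, kernel_size=11),
            nn.ZeroPad2d(padding=1),
            nn.Conv2d(3, 96, 55),
            nn.MaxPool2d(kernel_size=2, stride=2),
            # ... remaining layers as in the original list ...
        )

    def forward(self, x):
        return self.layers(x)

net = CaffeNet()
x = torch.randn(20, 3, 227, 227)
torch.onnx.export(net, x, "caffenet.onnx", input_names=["input"], output_names=["output"])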
st180613
Ok, thank you. I’m gonna try this on Monday when I’m back at work. I hope it will work.
st180614
Hi, I have the same problem as you. How did you solve it? Looking forward to your reply, thanks.
st180615
Hi, the sub-layers of your module should be its submodules. So:
1/ I made my CaffeNet module derive from nn.Sequential
2/ Then I defined an add method which adds a layer to the submodule dictionary (self._modules) of an nn.Sequential:
def add(self, layer):
    self._modules[str(self.index)] = layer
    self.index = self.index + 1
3/ Then I call this method with each layer of CaffeNet
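A small self-contained sketch of the approach described above (deriving from nn.Sequential and registering each layer through an add helper); the two layers in the body are just a stand-in for the full CaffeNet stack, and note that nn.Module's built-in add_module would do essentially the same registration:

import torch
import torch.nn as nn

class CaffeNet(nn.Sequential):
    def __init__(self):
        super(CaffeNet, self).__init__()
        self.index = 0
        self.add(nn.Conv2d(3, 3, kernel_size=11))
        self.add(nn.ZeroPad2d(padding=1))
        # ... add the remaining layers the same way ...

    def add(self, layer):
        # storing into self._modules registers the layer as a submodule
        self._modules[str(self.index)] = layer
        self.index = self.index + 1

net = CaffeNet()
out = net(torch.randn(1, 3, 227, 227))
print(out.shape)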
st180616
Thank you for your reply, you are a good man, but I don’t know the specific operation, can you provide the code to solve it?
st180617
Unfortunately, I seem to have the same problem (and replied in another similar question 36). My model has a VGG (as encoder) and an FCN (as decoder), so it’s not analogous. Any ideas?
st180618
Hi - torch.jit.trace is throwing a runtime error on a forward method that is returning outputs from every layer in a sequence. Any suggestions on how to make this work? Thanks! Code in question: def forward(self, x): outs = [] for idx, layer in enumerate(self.layers): x = layer(x) outs.append(x) return tuple(outs) self.layers: ModuleList( (0): ConvBNAct( (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): Sequential( (0): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False) (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (2): Sequential( (0): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False) (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False) (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (3): Sequential( (0): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=144, bias=False) (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False) (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (2): InvertedResidual( (conv): Sequential( 
(0): ConvBNAct( (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False) (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (4): Sequential( (0): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False) (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False) (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (2): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False) (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (3): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False) (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (5): Sequential( (0): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False) (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): 
ReLU6(inplace=True) ) (2): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False) (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (2): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False) (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (6): Sequential( (0): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=576, bias=False) (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (2): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (7): Sequential( (0): InvertedResidual( (conv): Sequential( (0): ConvBNAct( (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) 
(2): ReLU6(inplace=True) ) (1): ConvBNAct( (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False) (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) (2): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False) (3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (8): ConvBNAct( (0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU6(inplace=True) ) ) Error: RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
st180619
BUMP! Somewhat related: I am trying to save my model as shown below:

with torch.no_grad():
    input_shape = [1, 3, 512, 256]
    input_data = torch.randn(input_shape)
    input_data = input_data.to(DEVICE)  # GPU
    out_res = model(input_data)
    scripted_model = torch.jit.trace(model, input_data)
    scripted_model.eval()
    # Save your model. The following code saves it with the .pth file extension
    scripted_model.save('model.pth')

I keep getting this error which makes no sense to me (since I am adding the no_grad()):

RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
Tensor: (1,1,.,.) = 0.01 * ...

Any hints would be very helpful. Thanks.
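For context, this error usually points at a tensor that the tracer has to bake in as a constant (for example a tensor kept in a plain Python list or as a plain attribute) while it still requires grad; torch.no_grad() does not change that. A hedged sketch of the usual workarounds, with a made-up attribute self.scale standing in for the offending tensor:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # a plain tensor attribute gets inserted into the trace as a constant;
        # if it requires grad, tracing raises the error above
        self.scale = torch.randn(1, requires_grad=True)

    def forward(self, x):
        return x * self.scale

model = Net()
x = torch.randn(1, 3)

# Option 1: register it properly so it is traced as a parameter, not a constant.
model.scale = nn.Parameter(model.scale)

# Option 2 (if it really should be a constant): detach it first.
# model.scale = model.scale.detach()

traced = torch.jit.trace(model, x)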
st180620
I used torch.jit.save() to save a model on a PyTorch 1.7 machine, but I cannot use torch.jit.load() to load the model on my local machine (PyTorch 1.6). Error message:

RuntimeError: aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor): Expected at most 12 arguments but found 13 positional arguments.
Serialized File "code/torch/torch/nn/modules/conv.py", line 11
    argument_1: Tensor) -> Tensor:
    _0 = self.bias
    input = torch._convolution(argument_1, self.weight, _0, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
            ~~~~~~~~~~~~~~~~~~ <--- HERE
    return input
class ConvTranspose2d(Module):
st180621
PyTorch is backwards compatible (unless deprecation warnings indicate otherwise), which means you should be able to load older models (e.g. stored in 1.6) in newer PyTorch versions (e.g. 1.7). However, it’s not forward compatible, so you would need to update your PyTorch version.
st180622
When turning the model into a scripted model, this error occurs: has no attribute xxxxx. Small snippet:

class Base(nn.Module):
    def __init__(self):
        super(Base, self).__init__()
        self.__module_name: List[str] = []

    @property
    def module_name(self):
        return self.__module_name

model = Base()
ScriptModel = torch.jit.script(model, (input,))

.................................................
[Error]:
Module 'Base' has no attribute '__module_name' :
  File "/xxx/xxx/xxx/Base.py"
    @property
    def module_name(self):
        return self.__module_name
               ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

Still can’t figure out why. Any solution will be great. Thanks!
st180623
Solved by RebirthT in post #2: Solved by the change below: self.__module_name → self._module_name
class Base(nn.Module):
    def __init__(self):
        super(Base, self).__init__()
        self._module_name: List[str] = []

    @property
    def module_name(self):
        return self._module_name
st180624
Solved by the change below: self.__module_name → self._module_name

class Base(nn.Module):
    def __init__(self):
        super(Base, self).__init__()
        self._module_name: List[str] = []

    @property
    def module_name(self):
        return self._module_name
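The likely reason the double-underscore version fails is Python name mangling: inside a class, an attribute like self.__module_name is stored on the instance as _Base__module_name, which appears to be why TorchScript cannot find an attribute literally named __module_name when it compiles the property. A tiny illustration of the mangling itself:

class Base:
    def __init__(self):
        self.__module_name = []

b = Base()
print(vars(b))  # {'_Base__module_name': []}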
st180625
Hi there, I’m a newbie to creating torch scripts from models. I have found this 1 helpful page that shows how to export a PyTorch model for ‘BertModel’ for inputs at the token level using JIT & TRACE. However, I want to export the SBert 4 model, which is also PyTorch based, but the inputs that SBert gets are sentences. Below is a sample code I have put together; my problem is that I do not know how I should create the example_inputs here. Shall I convert sentences directly to a tensor, or tokenize them first? I appreciate your input on this. Thanks

import torch
from sentence_transformers import SentenceTransformer

sbert_model = SentenceTransformer('paraphrase-mpnet-base-v2')
sbert_model.eval()

# Creating the trace
traced_model = torch.jit.trace(sbert_model, example_inputs = [])
torch.jit.save(traced_model, "traced_bsert.pt")
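There is no reply in this thread, but one common approach (sketched here, not verified against the SentenceTransformer internals) is to tokenize first and trace the underlying Hugging Face encoder on token tensors; the checkpoint name below is assumed to be the HF model backing paraphrase-mpnet-base-v2:

import torch
from transformers import AutoTokenizer, AutoModel

name = "sentence-transformers/paraphrase-mpnet-base-v2"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torchscript=True)
model.eval()

enc = tokenizer(["An example sentence"], padding=True, return_tensors="pt")
traced = torch.jit.trace(model, (enc["input_ids"], enc["attention_mask"]))
torch.jit.save(traced, "traced_sbert_encoder.pt")
# Pooling (e.g. mean pooling over token embeddings) would still need to be applied
# on top of the traced encoder outputs to reproduce SBert sentence embeddings.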
st180626
Say I have something like:

import torch
from torch import Tensor
from torch import nn
import torch.nn.functional as F

class MyModule(nn.Module):
    def __init__(self, scale_factor: float):
        super().__init__()
        self.scale_factor = scale_factor

    def forward(self, x: Tensor) -> Tensor:
        x = F.interpolate(x, scale_factor=self.scale_factor)
        return x

model = MyModule(5)
model(torch.zeros(16, 1, 20, 20))
torch.jit.script(model)

This raises an error: Expected a value of type ‘Optional[float]’ for argument ‘scale_factor’ but instead found type ‘int’. Actually, for some reason this toy model wasn’t able to reproduce the error I get on my real model, which is: Module ‘MyModule’ has no attribute ‘final_scale_factor’ (This attribute exists on the Python module, but we failed to convert Python type: ‘numpy.int64’ to a TorchScript type. Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type int64… Its type was inferred; try adding a type annotation for the attribute.): In both cases it seems to be an issue of annotation. How would I annotate an instance property? I tried annotating in the __init__ method like self.scale_factor = torch.jit.annotate(float, scale_factor), and that doesn’t help either.

EDIT - I was able to sort my issue with self.scale_factor = torch.tensor(scale_factor), then using Tensor.item() on the other end. But I’d still like to know the answer!
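There is no reply in the thread, but a hedged sketch of what typically resolves this kind of attribute-typing problem: convert the value to a plain Python float (or int) before storing it, so scripting infers the right type; a class-level annotation (or typing.Final) can additionally pin the type down if inference is not enough.

import torch
from torch import Tensor, nn
import torch.nn.functional as F

class MyModule(nn.Module):
    def __init__(self, scale_factor: float):
        super().__init__()
        # float() forces a plain Python float, so the attribute is inferred as float;
        # it also converts numpy scalars such as numpy.int64
        self.scale_factor = float(scale_factor)

    def forward(self, x: Tensor) -> Tensor:
        return F.interpolate(x, scale_factor=self.scale_factor)

model = MyModule(5)
scripted = torch.jit.script(model)
print(scripted(torch.zeros(16, 1, 20, 20)).shape)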
st180627
i get this error when i use torch.jit.script then torch.jit.save : frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7f37df3cb627 in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: + 0x2f9d95d (0x7f37e258e95d in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch.so) frame #2: + 0x2f9dc3b (0x7f37e258ec3b in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch.so) frame #3: + 0x2f9d1ee (0x7f37e258e1ee in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch.so) frame #4: + 0x2f9dc5b (0x7f37e258ec5b in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch.so) frame #5: + 0x32bcdd9 (0x7f37e28addd9 in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch.so) frame #6: + 0x32c0bdd (0x7f37e28b1bdd in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch.so) frame #7: torch::jit::ExportModule(torch::jit::script::Module const&, std::string const&, std::unordered_map<std::string, std::string, std::hashstd::string, std::equal_tostd::string, std::allocator<std::pair<std::string const, std::string> > > const&, bool) + 0x19b (0x7f37e28aba5b in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch.so) frame #8: + 0x743dd2 (0x7f37e581cdd2 in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #9: + 0x26b876 (0x7f37e5344876 in /home/kencorp/miniconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #29: __libc_start_main + 0xf0 (0x7f37f0fb1830 in /lib/x86_64-linux-gnu/libc.so.6)
st180628
Support for Device was added somewhat recently; are you on the latest version of PyTorch? If so, can you open an issue on GitHub with a repro?
st180629
I found a similar issue in 1.4.0, and upgrading to 1.8.1 fixes it. The release notes for 1.8.0 reference PR #46441, which adds hashing for many other types including Tuple, bool, device by implementing generic hashing on IValue. I believe this is the improvement that resolved the root issue.
st180630
Hi everyone! I’m currently learning how to use TorchScript (and PyTorch in general). I have done a benchmark (CPU & GPU) on different models: gru (native PyTorch implementation), lstm (native PyTorch implementation), a custom model implementation (that we will call custom_model) and jit.script(custom_model). In inference I have seen that my custom model was very, very slow (it’s a sort of custom GRU, so it’s “normal” that it is very slow because of the loop over time steps), and with the jit.script implementation my very slow model became very good! It reached the cuDNN implementation. But I was wondering something: in training it was better than my custom implementation but very slow in comparison to the gru/lstm cuDNN implementation. So my question is the following: what is it in TorchScript that is not doing “good” in optimizing the backward pass? I have read a lot about this subject and the “answer” that I have found is that broadcasting is complicating the job of AD, but I haven’t fully understood why. I see improvement, so there is optimization that is doing good, but not enough to lead to the same perf as the cuDNN implementation. My second question is: could numba or other frameworks help to reach the performance of the cuDNN implementation? In my case it’s the for loop over time steps (seq_len) that is generating these slower results. custom_model = ligru; jit.script(custom_model) = torch.jit.script(ligru). Results are the following: [training time plot: timetraining, 766×525] Thank you a lot! Take care of you guys. The resources that I have used (and others, but I’m limited with links): pytorch.org PyTorch, http://lernapparat.de/fast-lstm-pytorch/ 3
st180631
Inference results: [inference time plot: timeInference, 766×525] (I double post because I can’t post the other graphic in the main topic because I’m “new” on this forum.)
st180632
I want to find some experiments that clearly show the benefit of torch.jit.script. It clearly improves the speed of inference, but seems not enough. I simply build up experiment like this. import torch import torch.nn as nn import torch.jit as jit from torch import Tensor from typing import Tuple class QueryKeyValueUnoptimized(nn.Module): def __init__(self): super().__init__() self.query = nn.Linear(512, 256) self.key = nn.Linear(512, 256) self.value = nn.Linear(512, 256) def forward(self, x: Tensor) -> Tuple[Tensor, Tensor, Tensor]: return self.query(x), self.key(x), self.value(x) class QueryKeyValueOptimized(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(512, 768) def forward(self, x: Tensor) -> Tuple[Tensor, Tensor, Tensor]: x = self.linear(x) return x[:, :256], x[:, 256:512], x[:, 512:] net_unoptimized_nojit_cpu = QueryKeyValueUnoptimized() net_unoptimized_jit_cpu = jit.script(net_unoptimized_nojit_cpu) net_optimized_nojit_cpu = QueryKeyValueOptimized() net_optimized_jit_cpu = jit.script(net_optimized_nojit_cpu) device = torch.device("cpu") x = torch.zeros(128, 512, device=device) %timeit net_unoptimized_nojit_cpu(x) %timeit net_unoptimized_jit_cpu(x) %timeit net_optimized_nojit_cpu(x) %timeit net_optimized_jit_cpu(x) device = torch.device("cuda") net_unoptimized_nojit_cuda = net_unoptimized_nojit_cpu.to(device) net_unoptimized_jit_cuda = net_unoptimized_jit_cpu.to(device) net_optimized_nojit_cuda = net_optimized_nojit_cpu.to(device) net_optimized_jit_cuda = net_optimized_jit_cpu.to(device) x = x.to(device) %timeit net_unoptimized_nojit_cuda(x) %timeit net_unoptimized_jit_cuda(x) %timeit net_optimized_nojit_cuda(x) %timeit net_optimized_jit_cuda(x) net_unoptimized_cuda_jit = jit.script(QueryKeyValueUnoptimized().to(device)) net_optimized_cuda_jit = jit.script(QueryKeyValueOptimized().to(device)) %timeit net_unoptimized_cuda_jit(x) %timeit net_optimized_cuda_jit(x) And the result is as followed. Manual optimization JIT optimization Device Inference time (microsec) X X CPU 623 X O CPU 588 O X CPU 290 O O CPU 277 X X GPU 191 X O GPU 170 O X GPU 94.3 O O GPU 97.5 I have three questions. Is there any other official or well-known performance benchmark result for JIT scripting? The result indicates that we have to optimize code manually although we perform jit operation to Module. Is it right? Is the fusing technique, which I manually do in this experiment, not supported in PyTorch? Thanks,
st180633
Find below my model, which includes conditional statements in forward block class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear( 1, 3 ) self.fc2 = nn.Linear( 3, 10 ) self.fc3 = nn.Linear( 10, 2 ) def forward(self,x): if x.shape[0] ==1 : x = self.fc2( self.fc1(x) ) return x else: x = self.fc3( self.fc2( self.fc1(x) ) ) return x To handle the conditional flow statements I have converted the model to torchscript model = Net() script_model = torch.jit.script( model ) Pass through dummy data data = torch.randn((1,1)) outputs = script_model(data) ONNX model export onnx_model_path = "saved_models/model.onnx" torch.onnx.export(script_model, data, onnx_model_path, opset_version=11, input_names=["input"] , example_outputs= outputs, dynamic_axes={ "input":{0:"batch_size"} }, output_names=["output"]) ONNX checker throws no error, However when I do inference I am having a crash with below error onnx_session(onnx_model_path) Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from saved_models/model.onnx failed:Node (If_5) Op (If) [TypeInferenceError] Graph attribute inferencing failed: Node (If_10) Op (If) [TypeInferenceError] Graph attribute inferencing failed: This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (25) of operator (If) in node (If_14) is invalid. The JIT converted model is working fine, Is there any limitation of using tensor.size or shape functions in conditional statements when converting to ONNX ? Please help me figure out :)) Thanks
st180634
Hi there, for torch.onnx you want to pass the torch model directly, not the jit model. this works fine for me: import torch import torch.nn as nn import onnx class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear( 1, 3 ) self.fc2 = nn.Linear( 3, 10 ) self.fc3 = nn.Linear( 10, 2 ) def forward(self,x): if x.shape[0] ==1 : x = self.fc2( self.fc1(x) ) return x else: x = self.fc3( self.fc2( self.fc1(x) ) ) return x model = Net() data = torch.randn((1,1)) outputs = model(data) onnx_model_path = "model.onnx" torch.onnx.export(model, data, onnx_model_path, opset_version=11, input_names=["input"] , example_outputs= outputs, dynamic_axes={ "input":{0:"batch_size"} }, output_names=["output"])
st180635
Hi, I implemented my own custom LSTMCell based on [https://github.com/pytorch/pytorch/blob/master/benchmarks/fastrnns/custom_lstms.py 9], but during back-propagation I get nan values (after two or three iterations). To be more specific, my net consists of CNN (AlexNet) + CustomRNN + Log_Softmax and is trained with CTC loss. As far as my custom LSTM is concerned, it is an implementation of Differential RNN https://arxiv.org/abs/1504.06678 1. Below are some snippets of my code. LSTMCell: [screenshot of the LSTMCell code, 1059×819] Model forward:

def forward(self, x):
    LSTMState = namedtuple('LSTMState', ['hx', 'cx', 'dc'])
    batch_size, timesteps, C, H, W = x.size()
    c_in = x.view(batch_size * timesteps, C, H, W)
    c_out = self.cnn(c_in)
    c_out = c_out.view(-1, batch_size, 4096)
    h1 = torch.zeros(batch_size, self.hidden_size).cuda(0)
    h2 = torch.zeros(batch_size, self.hidden_size).cuda(0)
    h3 = torch.zeros(batch_size, self.hidden_size).cuda(0)
    states = [[LSTMState(h1, h2, h3) for _ in range(2)] for _ in range(self.num_layers)]
    r_out, out_state = self.rnn(c_out, states)
    custom_state = double_flatten_states(out_state)
    r_out2 = self.last_linear(r_out)
    return (r_out2)

Thanks in advance
st180636
I guess you should also include some of your training code to help troubleshoot. The code you’ve provided here looks ok. Given that it happens after a few epochs I guess the gradient is either vanishing or exploding. Either one could be caused by a learning rate issue. For exploding gradients you could try gradient clipping.
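A small self-contained sketch of the gradient-clipping suggestion: the clip call goes between backward() and the optimizer step (the tiny linear model here is only a stand-in for the CNN+RNN model above, and max_norm=1.0 is an arbitrary starting value):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(4, 10)).sum()
optimizer.zero_grad()
loss.backward()
# clip before the optimizer step to bound the gradient norm
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()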
st180637
Hi Jack and thanks for your reply.First of all what i meant was that i got nan after two or three batches, not epochs(reducing learning rate wouldn’t work).What i modified to the original code is that i added one state (the derivative of cell state, dc) and the computation of gates of course, as you can see above, and consequently i can’t understand why it should return nan (i can run the original code with my model and training code without a problem) Training code: criterion = nn.CTCLoss(blank=0, reduction='mean') # with autograd.detect_anomaly(): for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) input_len = torch.tensor([output.size(0)], dtype=torch.int) target_len = torch.tensor([target.size(1)], dtype=torch.int) log_probs = nn.functional.log_softmax(output, dim=2) loss = criterion(log_probs, target, input_len, target_len) train_loss += loss.item() loss.backward() optimizer.step() LSTM full code: import torch import torch.nn as nn from torch.nn import Parameter import torch.jit as jit import warnings from collections import namedtuple from typing import List, Tuple from torch import Tensor import numbers def script_lstm(input_size, hidden_size, num_layers, bias=True, batch_first=False, dropout=False, bidirectional=True): assert bias assert not batch_first stack_type = StackedLSTM2 layer_type = BidirLSTMLayer dirs = 2 return stack_type(num_layers, layer_type, first_layer_args=[LSTMCell, input_size, hidden_size], other_layer_args=[LSTMCell, hidden_size * dirs, hidden_size]) def reverse(lst): # type: (List[Tensor]) -> List[Tensor] return lst[::-1] class LSTMCell(jit.ScriptModule): def __init__(self, input_size, hidden_size, order=1): # __constants__ = ['order'] super(LSTMCell, self).__init__() self.order = order self.input_size = input_size self.hidden_size = hidden_size self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size)) self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size)) self.bias_ih = Parameter(torch.randn(4 * hidden_size)) self.bias_hh = Parameter(torch.randn(4 * hidden_size)) ###weight-bias for st-1, eq.6,7 for N=0### self.weight_ch_prev = Parameter(torch.randn(2 * hidden_size, hidden_size)) self.bias_ch_prev = Parameter(torch.randn(2 * hidden_size)) ###weight-bias for d(st-1), eq.6,7 for N=1### self.weight_ch_dc_prev = Parameter(torch.randn(2 * self.order * hidden_size,hidden_size)) self.bias_ch_dc_prev = Parameter(torch.randn(2 * self.order * hidden_size)) ###weight-bias for st, eq.8 for N=0### self.weight_ch_cur = Parameter(torch.randn(hidden_size, hidden_size)) self.bias_ch_cur = Parameter(torch.randn(hidden_size)) ###weight-bias for d(st-1), eq.8 for N=1 self.weight_ch_dc_cur = Parameter(torch.randn(self.order * hidden_size,hidden_size)) self.bias_ch_dc_cur = Parameter(torch.randn(self.order * hidden_size)) @jit.script_method def forward(self, input, state): # type: (Tensor, Tuple[Tensor, Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor, Tensor]] hx, cx, dc = state gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih + torch.mm(hx, self.weight_hh.t()) + self.bias_hh) ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1) gates_2 = (torch.mm(dc, self.weight_ch_dc_prev.t()) + self.bias_ch_dc_prev + torch.mm(cx, self.weight_ch_prev.t()) + self.bias_ch_prev) ingate_2, forgetgate_2 = gates_2.chunk(2, 1) ingate = ingate + ingate_2 forgetgate = forgetgate + forgetgate_2 ingate = torch.sigmoid(ingate) forgetgate = 
torch.sigmoid(forgetgate) cellgate = torch.tanh(cellgate) cy = (forgetgate * cx) + (ingate * cellgate) outgate = outgate + (torch.mm(cy-cx, self.weight_ch_dc_cur.t()) + self.bias_ch_dc_cur + torch.mm(cy, self.weight_ch_cur.t()) + self.bias_ch_cur ) outgate = torch.sigmoid(outgate) hy = outgate * torch.tanh(cy) d_c = cy - cx return hy, (hy, cy, d_c) class LSTMLayer(jit.ScriptModule): def __init__(self, cell, *cell_args): super(LSTMLayer, self).__init__() self.cell = cell(*cell_args) @jit.script_method def forward(self, input, state): # type: (Tensor, Tuple[Tensor, Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor, Tensor]] inputs = input.unbind(0) outputs = torch.jit.annotate(List[Tensor], []) for i in range(len(inputs)): out, state = self.cell(inputs[i], state) outputs += [out] return torch.stack(outputs), state class ReverseLSTMLayer(jit.ScriptModule): def __init__(self, cell, *cell_args): super(ReverseLSTMLayer, self).__init__() self.cell = cell(*cell_args) @jit.script_method def forward(self, input, state): # type: (Tensor, Tuple[Tensor, Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor, Tensor]] inputs = reverse(input.unbind(0)) outputs = jit.annotate(List[Tensor], []) for i in range(len(inputs)): out, state = self.cell(inputs[i], state) outputs += [out] return torch.stack(reverse(outputs)), state class BidirLSTMLayer(jit.ScriptModule): __constants__ = ['directions'] def __init__(self, cell, *cell_args): super(BidirLSTMLayer, self).__init__() self.directions = nn.ModuleList([ LSTMLayer(cell, *cell_args), ReverseLSTMLayer(cell, *cell_args), ]) @jit.script_method def forward(self, input, states): # type: (Tensor, List[Tuple[Tensor, Tensor, Tensor]]) -> Tuple[Tensor, List[Tuple[Tensor, Tensor, Tensor]]] # List[LSTMState]: [forward LSTMState, backward LSTMState] outputs = jit.annotate(List[Tensor], []) output_states = jit.annotate(List[Tuple[Tensor, Tensor, Tensor]], []) # XXX: enumerate https://github.com/pytorch/pytorch/issues/14471 i = 0 for direction in self.directions: state = states[i] out, out_state = direction(input, state) outputs += [out] output_states += [out_state] i += 1 return torch.cat(outputs, -1), output_states def init_stacked_lstm(num_layers, layer, first_layer_args, other_layer_args): layers = [layer(*first_layer_args)] + [layer(*other_layer_args) for _ in range(num_layers - 1)] return nn.ModuleList(layers) class StackedLSTM2(jit.ScriptModule): __constants__ = ['layers'] def __init__(self, num_layers, layer, first_layer_args, other_layer_args): super(StackedLSTM2, self).__init__() self.layers = init_stacked_lstm(num_layers, layer, first_layer_args, other_layer_args) @jit.script_method def forward(self, input, states): # type: (Tensor, List[List[Tuple[Tensor, Tensor, Tensor]]]) -> Tuple[Tensor, List[List[Tuple[Tensor, Tensor, Tensor]]]] # List[List[LSTMState]]: The outer list is for layers, # inner list is for directions. output_states = jit.annotate(List[List[Tuple[Tensor, Tensor, Tensor]]], []) output = input # XXX: enumerate https://github.com/pytorch/pytorch/issues/14471 i = 0 for rnn_layer in self.layers: state = states[i] output, out_state = rnn_layer(output, state) output_states += [out_state] i += 1 return output, output_states Thanks again.
st180638
When I use “with autograd.detect_anomaly()” I get the following error messages:

RuntimeError: Function 'Sigmoidbackward' returned nan values in its 0th output
RuntimeError: Function 'DivBackward0' returned nan values in its 0th output
RuntimeError: Function 'CudnnConvolutionBackward' returned nan values in its 0th output
RuntimeError: Function torch::jit::(anonymous namespace)::DifferentiableGraphBackward returned nan values in its 2th output

Can anyone tell me if it is caused by overflow/division by zero, etc.?

Edit: I narrowed it down, the problem is in the following lines, but I still can’t resolve it:

ingate = ingate + ingate_2
forgetgate = forgetgate + forgetgate_2
outgate = outgate + (torch.mm(cy-cx, self.weight_ch_dc_cur.t()) + self.bias_ch_dc_cur + torch.mm(cy, self.weight_ch_cur.t()) + self.bias_ch_cur)
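One thing worth ruling out here (a hedged suggestion, not a confirmed diagnosis of these NaNs): the cell's parameters are initialized with torch.randn, i.e. standard deviation 1, whereas nn.LSTM uses a much smaller uniform range; with the extra gate terms summed together, unit-variance weights can easily saturate the gates or overflow. A small sketch of rescaling the init in the style of nn.LSTM:

import math
import torch

def reset_parameters(cell):
    # mirror nn.LSTM's default init: U(-1/sqrt(hidden_size), 1/sqrt(hidden_size))
    stdv = 1.0 / math.sqrt(cell.hidden_size)
    for weight in cell.parameters():
        torch.nn.init.uniform_(weight, -stdv, stdv)

# reset_parameters(my_cell)  # assuming my_cell is an instance of the custom LSTMCell above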
st180639
Hi Theocharis, I am having a similar issue. I am trying to run the existing LSTM code from fastrnns provided on GitHub, but when I am back-propagating I get a similar error. Did you get a chance to resolve this error? The same input runs fine for the native LSTM.
st180640
Hi, have you resolved this issue? I am facing the same problem. I have posted my issue [here] (Reproducing a code using Residual LSTM but getting Nan values in gradients).
st180641
Hi. I wrote this question 1 tagged as quantization, but the problem seems to be in how to save the model using torch.jit.save while containing external modules. Is this possible? I really appreciate any help you can provide.
st180642
There are two parts: The official answer is that what you can do is to provide a custom operator in C++ (like eg torchvision does for eg nms) and then use that through torch.ops.mymodule.opname. This is compatible with the JIT. Including saving and loading. The JIT has a Python fallback (if you tag a function @torch.jit.ignore and call that from your JITed function. This will let you trace a model, but you won’t be able to save it. You could register a “stub” op and reflect that back to Python. Or write a little surgery helper to replace the Python fallbacks with that stub op before saving and change it back after loading. Best regards Thomas
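A tiny sketch of the second option (the Python fallback): the ignored method runs in plain Python when the scripted module executes, but saving such a module fails, as noted above. The external call inside it is just a stand-in:

import torch
import torch.nn as nn

class WithFallback(nn.Module):
    @torch.jit.ignore
    def external_op(self, x: torch.Tensor) -> torch.Tensor:
        # stand-in for a call into a non-TorchScript library
        return x * 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.external_op(x) + 1

m = torch.jit.script(WithFallback())
print(m(torch.ones(3)))   # works: falls back to Python for external_op
# m.save("m.pt")          # would fail, since external_op has no TorchScript body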
st180643
I’m trying to compile PF-AFN 4 model to jit. I got the following error when I compile the model with cupy.cuda.compile_with_cache to jit. NVRTCError: NVRTC_ERROR_COMPILATION (6) During handling of the above exception, another exception occurred: CompileException Traceback (most recent call last) cupy/util.pyx in cupy.util.memoize.decorator.ret() /usr/local/lib/python3.7/dist-packages/cupy/cuda/compiler.py in compile(self, options) 440 except nvrtc.NVRTCError: 441 log = nvrtc.getProgramLog(self.ptr) --> 442 raise CompileException(log, self.src, self.name, options, 'nvrtc') 443 444 CompileException: /tmp/tmpan1ut480/3b7c153ce98d06488f1cbac8793f6dff_2.cubin.cu(16): error: identifier "tensor" is undefined 1 error detected in the compilation of "/tmp/tmpan1ut480/3b7c153ce98d06488f1cbac8793f6dff_2.cubin.cu". To Reproduce This is a colab to reproduce the error. colab.research.google.com Google Colaboratory 4 This is a minimum code. @cupy.util.memoize(for_each_device=True) def cupy_launch(strFunction, strKernel): return cupy.cuda.compile_with_cache(strKernel).get_function(strFunction) kernel_Correlation_rearrange = " .... " import torch import torch.nn as nn class Net(nn.Module): def __init__(self): super(Net, self).__init__() def forward(self, x_warp_after, x_cond): cupy_launch('kernel_Correlation_rearrange', cupy_kernel('kernel_Correlation_rearrange', { 'intStride': 1, 'input': x_warp_after, 'output': x_cond }))( ) return x_warp_after, x_cond net = Net().cuda() input1 = torch.randn([1, 256, 8, 6]).cuda() input2 = torch.randn([1, 256, 8, 6]).cuda() trace_model = torch.jit.trace(net, [input1, input2]) Expected behavior I think the above error occurs when I use cupy.cuda.compile_with_cache. Environment PyTorch Version (e.g., 1.0): 1.8.1+cu101 OS (e.g., Linux): Ubuntu 18.04.5 LTS (x86_64) How you installed PyTorch (conda, pip, source): pip Build command you used (if compiling from source): no Python version: 3.7 (64-bit runtime) CUDA/cuDNN version: 11.0.221 GPU models and configuration: GPU 0: Tesla T4 Any other relevant information:
st180644
Based on the error message it seems that cupy is unable to compile the PyTorch methods and I’m unsure if this would even be supported. Do you have any resources claiming that this should work and some examples demonstrating it?
st180645
@ptrblck Thank you for your reply! You can check this example from the following colab. colab.research.google.com Google Colaboratory 14
st180646
Thanks for the code! It shows the error, which might be helpful for debugging, but my previous question was regarding the expectations that this would be supported. Do you have any working example, demos, blog posts etc., which explain how cupy can be used, as I’m unfamiliar with it?
st180647
Different APIs result in different groups, such as FusionGroup and CudaFusionGroup. Why? And what is the difference?
st180648
These are actually different fuser generations. The “classic” 1st-gen fuser only did pointwise ops and created FusionGroup nodes. The newer fuser developed by a team at NVIDIA creates CUDAFusionGroup. To round off the trio, there is TensorExprGroup nodes created by the TensorExpr/NNC fuser developed by a team at FB. The latter two also support some reductions. A while ago, I wrote a blog on the various fusers 6. Best regards Thomas
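If it helps to see which fuser is active on a given setup, one way (sketched here; graph_for and the profiling-executor behaviour can differ between PyTorch versions, and GPU fusion needs a CUDA device) is to script a small pointwise function, run it a few times so the profiling executor can specialize it, and then look for FusionGroup / CudaFusionGroup / TensorExprGroup nodes in the optimized graph:

import torch

@torch.jit.script
def f(x, y):
    return torch.sigmoid(x) * torch.tanh(y) + x

x = torch.randn(1024, device="cuda")
y = torch.randn(1024, device="cuda")
for _ in range(5):           # warm-up runs for the profiling executor
    f(x, y)
print(f.graph_for(x, y))     # inspect for *FusionGroup / TensorExprGroup nodes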
st180649
Thanks for your reply. I read your blog first. It seems that the mechanism is not easy to figure out.
st180650
The JIT optimization steps probably are among the most sophisticated bits in PyTorch (along with the dispatcher,…). For a deep dive on one of the fusers, I can also enthusiastically recommend Christian Sarofeen’s talk 6 (I think you need to register to see it). Best regards Thomas
st180651
Excellent! I will look into it and will email you when I encounter any questions. Many thanks.
st180652
[PyTorch 1.6] When I torch.jit.trace or torch.jit.script alexnet, the graph displays “prim::DifferentiableGraph”. However, sometimes it only fuses some small networks into a “FusionGroup”. When do prim::DifferentiableGraph and FusionGroup happen? If I want to use CUDA fusion (nvrtc), how do I disable prim::DifferentiableGraph and turn on “FusionGroup”? Thanks.
st180653
In some neural networks, the conv layer is derived from torch.nn.Conv2d, which will cause an exception when scripting the module. The derived Conv2d looks like: class Conv2d(torch.nn.Conv2d): …
st180654
Could you include a more detailed repro? Ideally, it should be a self-contained script that we can basically copy + paste and run.
st180655
Hi, I’m trying to understand how the compiler works and have been reading the documentation. In the section on the with statement 2, I see that prim::Exit returns a Tensor in TorchScript graphs. Why does prim::Exit have a return value at all? I could be wrong, but it seems impossible for any other code to actually do anything with the return value. CC @SplitInfinity who I think was involved in implementing the with statement.
st180656
Solved by SplitInfinity in post #4 My earlier statement was not completely accurate. __exit__ is allowed to return anything that can be interpreted as a boolean, which includes certain kinds of Tensors.
st180657
See this section of the Python language reference for more information 1. prim::Exit corresponds to __exit__ on the class being used as a context manager and is allowed to return a boolean value.
st180658
My earlier statement was not completely accurate. __exit__ is allowed to return anything that can be interpreted as a boolean, which includes certain kinds of Tensors.
st180659
I retrained a model based on DeepSpeech2 to transcribe audio to text by following this blog post 2. Then I followed the PyTorch mobile documentation 1 because I would like to use the model on a mobile Android/iOS device. To be able to run the model I need to load and transform the 16000 Hz wav audio into a Mel spectrogram. I am doing this by using the transforms.MelSpectrogram function. However, right now I have this function inside the Dataset module. Is it correct to add this transformation inside the Model module before forward? Will adding this transformation inside the model create problems when I call the torch.jit.script and torch.jit.save functions?
st180660
I think you can add the transformation inside the model as long as scripting the model doesn’t complain about unsupported methods. A quick check of transforms.MelSpectrogram shows that PyTorch methods seem to be used, so I guess it would work. If I’m not mistaken, @tom also used a similar workflow by adding transformations into the model to export it to his mobile in a blog post 5.
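A small sketch of that layout, assuming the torchaudio version in use can script the transform (the discussion above suggests it should): the MelSpectrogram lives inside the module, so it is carried along when scripting and saving. The tiny conv stack is only a placeholder for the retrained DeepSpeech2-style network.

import torch
import torch.nn as nn
import torchaudio

class SpeechModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000)
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU())  # placeholder for the real model

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        spec = self.mel(waveform)            # (batch, n_mels, time)
        return self.net(spec.unsqueeze(1))   # add a channel dim for the conv stack

model = SpeechModel()
scripted = torch.jit.script(model)
scripted.save("speech_model.pt")
print(scripted(torch.randn(1, 16000)).shape)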
st180661
One thing that is a limitation here is the coverage of stft/fft in the backend. stft maps to the internal _fft_r2c 2 and that in turn is implemented only through MKL on the CPU 5. I have an upcoming tutorial (in a week or two) for keyword recognition on ARM (but Raspberry rather than mobile) where I use the M5 model from the tutorial to avoid the need for a spectrogram. It also has a few wrinkles. For Linux on ARM you can always get numpy’s fft (like librosa does) but that might not be as straightforward on mobile. Best regards Thomas
st180662
I am trying to measure the time spent in model inference (forward() call on torch::jit::script::Module). torch::NoGradGuard no_grad; model_outputs = torch_model.forward(input_tensors); However, the timing is coming out to be really small. I found the reason is because of the default Asynchronous mode for GPU operations described here: https://pytorch.org/docs/master/notes/cuda.html#asynchronous-execution 5 The description provides a solution for python using torch.cuda.Event abstraction, but I could not find analogous abstraction in C++. Coming back to the example above, I believe the synchronization occurs when data is read out of the model_outputs. Is this understanding correct? I tried the following solution to capture the correct timestamp on c10::cuda::getCurrentCUDAStream().stream() cudaLaunchHostFunc( c10::cuda::getCurrentCUDAStream().stream(), TimestampCaptureCallback, reinterpret_cast<void*>(&compute_end_ns)); This gives me seemingly accurate results but is this solution generic and work for all kind of models? There is a risk in using internals which are not properly documented.
st180663
Solved by ptrblck in post #3 I’m not sure about the TimestampCaptureCallback approach, but in any case you could add manual synchronizations via: auto stream = at::cuda::getCurrentCUDAStream(); // or at::cuda::getDefaultCUDAStream()) AT_CUDA_CHECK(cudaStreamSynchronize(stream)); to make sure your timings are correct. Alternat…
st180664
I’m not sure about the TimestampCaptureCallback approach, but in any case you could add manual synchronizations via:

auto stream = at::cuda::getCurrentCUDAStream();  // or at::cuda::getDefaultCUDAStream()
AT_CUDA_CHECK(cudaStreamSynchronize(stream));

to make sure your timings are correct. Alternatively you should also be able to use cudaDeviceSynchronize directly:

cudaError_t err = cudaDeviceSynchronize();
bool isEQ = err == cudaSuccess;
ASSERT_TRUE(isEQ);
st180665
The first approach does sound good, but won't it have an adverse effect on performance? I thought the cudaLaunchHostFunc method would be less intrusive. What are your reservations about this approach? The second alternative will not work for us because we might have multiple models deployed and served simultaneously on the GPU, including models from TensorRT, TensorFlow, ONNX and PyTorch. Synchronizing the whole device will affect all these models negatively. See Triton Inference Server 1 for more reference. If there are no other ways, then our best bet would be to go with the first method.
st180666
I’m not familiar with your use case and don’t know what exactly you would like to profile, but based on the docs 2: Host functions without a mandated order (such as in independent streams) execute in undefined order and may be serialized. you would need to check, if it fits your use case. tanmay2592: but won’t it have an adverse affect on the performance? If you are synchronizing on a custom or the default stream used in PyTorch it will impact the performance, but again I might not understand your use case and how you would like to time kernel executions without synchronizations/events.
st180667
I'm not familiar with your use case

I understand. I will try to simplify my use case. I am trying to time the duration from when the model execution starts until the outputs are first ready to be read, with minimal impact on performance. I am not interested in profiling individual kernel executions. With our discussion so far, this is a possible solution:
1. Record the start timestamp.
2. model_outputs = torch_model.forward(input_tensors)
3. auto stream = at::cuda::getCurrentCUDAStream(); // or at::cuda::getDefaultCUDAStream()
   AT_CUDA_CHECK(cudaStreamSynchronize(stream));
4. Record the end timestamp.
5. Read the output tensors from `model_outputs`.
The alternative I propose is to attach a single host function that just captures the end of the execution on the default stream. The only assumption is that execution on all the streams used by PyTorch is over when the default stream finishes; the description of the default stream as the one doing most of the work lends some credibility to that assumption, so the warning should not apply here. Something like below:
1. Record the start timestamp.
2. model_outputs = torch_model.forward(input_tensors)
3. Record the end timestamp in a callback on the default stream.
4. Read the output tensors from `model_outputs`.
Description of the default stream: The default stream is where most computation occurs when you aren't explicitly using streams. Link 1
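For what it's worth, a sketch of the first (synchronize-based) variant could look like this; the helper name TimeForwardMicros is made up here, and the header paths may differ slightly between versions:

#include <chrono>
#include <cuda_runtime_api.h>
#include <torch/script.h>
#include <ATen/cuda/CUDAContext.h>
#include <ATen/cuda/Exceptions.h>

// Times forward() plus everything it queued on PyTorch's current CUDA stream,
// without synchronizing the whole device.
int64_t TimeForwardMicros(torch::jit::script::Module& model,
                          std::vector<torch::jit::IValue>& inputs) {
  torch::NoGradGuard no_grad;
  auto start = std::chrono::steady_clock::now();

  auto outputs = model.forward(inputs);

  // wait only for the stream the work was queued on
  auto stream = at::cuda::getCurrentCUDAStream();
  AT_CUDA_CHECK(cudaStreamSynchronize(stream));

  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
}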
st180668
Hey all, I wrote up a tutorial on how to integrate a new backend into PyTorch JIT. The process is relatively simple and only requires around 2-3 PyTorch specific API calls. It can be found here 123.
st180669
This is very useful for me! Thanks for sharing! I want to implement a third-party hardware backend compiler; are there any more examples or documents? Thanks in advance!
st180670
Take a look at the torch/csrc/jit/backends/backend.h header. You can see an example of a backend registered using this API in the pytorch/glow repository 10.
st180671
When compiling to TorchScript either with tracing or scripting, I often have problems with operations that depend explicitly on tensor sizes. Sometimes sizes get hardcoded as constants, breaking compatibility with variable batch sizes. Other times PyTorch complains that I'm converting torch integers to Python ints, which will be traced as constants. I think these problems are based on my poor understanding of how torch.Size objects work, and how they are traced. This pull request mentions the issue and says that torch.Size objects basically are tuples of torch ints. The use of PyTorch integers is what allows for correct tracing. But when I check, the data type of each entry of a tensor size is an int. How do torch.Size objects work internally? Or how should I manipulate them in order to get correct TorchScript compilation? Additional note: I know that tracing isn't guaranteed to compile data-dependent operations correctly, but:
- often tracing is more convenient than scripting
- manipulations of tensor sizes should be recorded correctly by tracing
I've added an MWE to showcase the problem I'm facing. Define the following toy class:

class Foo(nn.Module):
    """Toy class that plays with tensor shape to showcase tracing issue.
    """
    def __init__(self):
        nn.Module.__init__(self)

    def forward(self, x):
        new_shape = (x.shape[0], 2*x.shape[1])  # incriminated instruction
        x2 = torch.empty(size=new_shape)
        x2[:, ::2] = x
        x2[:, 1::2] = x + 1
        return x2

and run the test code:

x = torch.randn((3, 5))  # create example input
foo = Foo()
traced_foo = torch.jit.trace(foo, x)  # trace
print(traced_foo(x).shape)  # obviously this works
print(traced_foo(x[:, :4]).shape)  # but fails with a different shape!

So, here the problem is that the tensor sizes are hardcoded as constants, instead of being traced. How can I overcome this issue?
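For what it's worth, one workaround (a sketch, assuming scripting is acceptable for this part) is to script the shape-dependent module instead of tracing it, since scripting keeps the size arithmetic symbolic:

import torch
from torch import nn

class Foo(nn.Module):
    def forward(self, x):
        # build the target shape as a list of ints; under scripting these stay symbolic
        new_shape = [x.shape[0], 2 * x.shape[1]]
        x2 = torch.empty(new_shape, dtype=x.dtype, device=x.device)
        x2[:, ::2] = x
        x2[:, 1::2] = x + 1
        return x2

scripted_foo = torch.jit.script(Foo())
print(scripted_foo(torch.randn(3, 5)).shape)   # torch.Size([3, 10])
print(scripted_foo(torch.randn(3, 4)).shape)   # torch.Size([3, 8]), no hardcoded sizes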
st180672
I’m trying to export a PyTorch model to TorchScript via scripting and I am stuck. I’ve created a toy class to showcase the issue: import torch from torch import nn class SadModule(nn.Module): """Takes a (*, 2) input and runs it thorugh a linear layer. Can optionally use a skip connection. The usage of the skip connection or not is an architectural choice. """ def __init__(self, use_skip: bool): nn.Module.__init__(self) self.use_skip = use_skip self.layer = nn.Linear(2, 2) def forward(self, x): if self.use_skip: x_input = x x = self.layer(x) if self.use_skip: x = x + x_input return x It basically consists of only a linear layer and an optional skip connection. If I try to script the model using mod1 = SadModule(False) scripted_mod1 = torch.jit.script(mod) I get the following error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-10-a7ebc7af32c7> in <module> ----> 1 scripted_mod1 = torch.jit.script(mod) ~/Software/miniconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/jit/_script.py in script(obj, optimize, _frames_up, _rcb) 895 896 if isinstance(obj, torch.nn.Module): --> 897 return torch.jit._recursive.create_script_module( 898 obj, torch.jit._recursive.infer_methods_to_compile 899 ) ~/Software/miniconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/jit/_recursive.py in create_script_module(nn_module, stubs_fn, share_types) 350 check_module_initialized(nn_module) 351 concrete_type = get_module_concrete_type(nn_module, share_types) --> 352 return create_script_module_impl(nn_module, concrete_type, stubs_fn) 353 354 def create_script_module_impl(nn_module, concrete_type, stubs_fn): ~/Software/miniconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/jit/_recursive.py in create_script_module_impl(nn_module, concrete_type, stubs_fn) 408 # Compile methods if necessary 409 if concrete_type not in concrete_type_store.methods_compiled: --> 410 create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs) 411 torch._C._run_emit_module_hook(cpp_module) 412 concrete_type_store.methods_compiled.add(concrete_type) ~/Software/miniconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/jit/_recursive.py in create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs) 302 property_rcbs = [p.resolution_callback for p in property_stubs] 303 --> 304 concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults) 305 306 RuntimeError: x_input is not defined in the false branch: File "<ipython-input-7-d08ed7ff42ec>", line 12 def forward(self, x): if self.use_skip: ~~~~~~~~~~~~~~~~~ x_input = x ~~~~~~~~~~~ <--- HERE x = self.layer(x) if self.use_skip: and was used here: File "<ipython-input-7-d08ed7ff42ec>", line 16 x = self.layer(x) if self.use_skip: x = x + x_input ~~~~~~~ <--- HERE return x So, basically TorchScript isn’t able to recognise that for mod1 the True branch of either if statement won’t ever be used. 
Moreover, if we create an instance that actually uses the skip connection, mod2 = SadModule(True) scripted_mod2 = torch.jit.script(mod2) we will get another error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-21-b5ca61d8aa73> in <module> ----> 1 scripted_mod2 = torch.jit.script(mod2) ~/Software/miniconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/jit/_script.py in script(obj, optimize, _frames_up, _rcb) 895 896 if isinstance(obj, torch.nn.Module): --> 897 return torch.jit._recursive.create_script_module( 898 obj, torch.jit._recursive.infer_methods_to_compile 899 ) ~/Software/miniconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/jit/_recursive.py in create_script_module(nn_module, stubs_fn, share_types) 350 check_module_initialized(nn_module) 351 concrete_type = get_module_concrete_type(nn_module, share_types) --> 352 return create_script_module_impl(nn_module, concrete_type, stubs_fn) 353 354 def create_script_module_impl(nn_module, concrete_type, stubs_fn): ~/Software/miniconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/jit/_recursive.py in create_script_module_impl(nn_module, concrete_type, stubs_fn) 408 # Compile methods if necessary 409 if concrete_type not in concrete_type_store.methods_compiled: --> 410 create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs) 411 torch._C._run_emit_module_hook(cpp_module) 412 concrete_type_store.methods_compiled.add(concrete_type) ~/Software/miniconda3/envs/pytorch3d/lib/python3.8/site-packages/torch/jit/_recursive.py in create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs) 302 property_rcbs = [p.resolution_callback for p in property_stubs] 303 --> 304 concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults) 305 306 RuntimeError: x_input is not defined in the false branch: File "<ipython-input-18-ac8b9713c789>", line 17 def forward(self, x): if self.use_skip: ~~~~~~~~~~~~~~~~~ x_input = x ~~~~~~~~~~~ <--- HERE x = self.layer(x) if self.use_skip: and was used here: File "<ipython-input-18-ac8b9713c789>", line 21 x = self.layer(x) if self.use_skip: x = x + x_input ~~~~~~~ <--- HERE return x So in this case TorchScript doesn’t understand that both ifs will always be true and that in fact x_input is well defined. To avoid the issue, I could split the class into two subclasses, as in: class SadModuleNoSkip(nn.Module): """Takes a (*, 2) input and runs it thorugh a linear layer. Can optionally use a skip connection. The usage of the skip connection or not is an architectural choice. """ def __init__(self): nn.Module.__init__(self) self.layer = nn.Linear(2, 2) def forward(self, x): x = self.layer(x) return x class SadModuleSkip(nn.Module): """Takes a (*, 2) input and runs it thorugh a linear layer. Can optionally use a skip connection. The usage of the skip connection or not is an architectural choice. """ def __init__(self): nn.Module.__init__(self) self.layer = nn.Linear(2, 2) def forward(self, x): x_input = x x = self.layer(x) x = x + x_input return x However, I am working on a huge code base and I would have to repeat the process for many classes, which is time consuming and could introduce bugs. Moreover, often the modules I’m working on are huge convolutional nets and the ifs just control the presence of an additional batch normalization. 
It seems to me undesirable to have two classes that are identical in 99% of the blocks, save for a single batch norm layer. Is there a way in which I can help TorchScript with its handling of branches?
st180673
I've opened an issue on GitHub 5, and the maintainers were able to help me solve the problem. Marking constant attributes with Final is the way to go, but it's a feature that's not yet available in the JIT compiler of the main releases (as of May 7, 2021).
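For reference, the Final-based version looks roughly like this (a sketch; it requires a PyTorch version where Final attributes are supported by the scripting compiler, per the note above):

from typing import Final

import torch
from torch import nn

class SadModule(nn.Module):
    use_skip: Final[bool]   # tells the compiler this attribute is a constant

    def __init__(self, use_skip: bool):
        super().__init__()
        self.use_skip = use_skip
        self.layer = nn.Linear(2, 2)

    def forward(self, x):
        if self.use_skip:        # constant-folded away when use_skip is False
            x_input = x
        x = self.layer(x)
        if self.use_skip:
            x = x + x_input
        return x

scripted = torch.jit.script(SadModule(False))   # the dead branches are pruned at compile time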
st180674
Consider some algorithm that looks like this:

phi1 = torch.tensor(torch.empty(size=(p, p, m, m)), dtype=torch.float, requires_grad=True)
phi2 = torch.tensor(torch.empty(size=(p, p, m, m)), dtype=torch.float, requires_grad=True)
for s in range(p):
    for k in range(s):
        phi1[s, k] = phi1[s-1, k] - phi1[s, s] @ phi2[s-1, s-k]
        phi2[s, k] = phi2[s-1, k] - phi2[s, s] @ phi1[s-1, s-k]
… [bunch of other functions]
lposterior = function of phi1 and phi2 which yields a scalar
lposterior.backward()

When I ran this I got a bunch of errors, e.g. in-place errors etc. (presumably because I am slicing phi1 and phi2 and this is not allowed). This algorithm works fine without the 'lposterior.backward()' line. What can I do to make autodiff work here?
st180675
I think (just hypothesising) this is due to the torch.empty tensors. You need to initialise the tensors in order for you to have grads for the .backward() method. You could initialise the phis with random values and then check to see if the algorithm works and indeed is capable of autograd.
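Building on that suggestion, here is a sketch that uses random initial values and also avoids the in-place slice assignments (the final lposterior here is just a dummy scalar standing in for the real quantity):

import torch

p, m = 4, 3
phi1_init = torch.randn(p, p, m, m, requires_grad=True)
phi2_init = torch.randn(p, p, m, m, requires_grad=True)

# keep intermediate results in plain Python lists instead of assigning into
# slices of a leaf tensor that requires grad
phi1 = [[phi1_init[s, k] for k in range(p)] for s in range(p)]
phi2 = [[phi2_init[s, k] for k in range(p)] for s in range(p)]

for s in range(p):
    for k in range(s):
        phi1[s][k] = phi1[s - 1][k] - phi1[s][s] @ phi2[s - 1][s - k]
        phi2[s][k] = phi2[s - 1][k] - phi2[s][s] @ phi1[s - 1][s - k]

# dummy scalar standing in for the real log-posterior
lposterior = sum(t.sum() for row in phi1 + phi2 for t in row)
lposterior.backward()
print(phi1_init.grad.shape)   # gradients flow back to the initial tensors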
st180676
I have compiled an SSD-based object detection model in PyTorch with torch.jit.script(model). I benchmarked the scripted and the original models on Tesla K80 GPU (AWS p2 instance). Looks like the scripted model is slower than the original model. Averaged over 100 images: Original model: 0.1787 seconds per image Scripted model: 0.1928 seconds per image I also benchmarked a ResNet50 model, got similar slow-down. Original ResNet50: 0.0281 Scripted ResNet50: 0.0303 I was expecting some speed-up, and disappointed by the slow-down. Is this normal, or could I have missed something?
st180677
Could you share the code you’ve used to profile the code? Note that CUDA operations are executed asynchronously, so you would have to synchronize the timer via torch.cuda.synchronize() before starting and stopping the timers.
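For example, a synchronized timing loop could look roughly like this (the model and input are placeholders):

import time
import torch
import torchvision

model = torch.jit.script(torchvision.models.resnet50().cuda().eval())
image = torch.randn(1, 3, 224, 224, device='cuda')

with torch.no_grad():
    for _ in range(10):            # warm-up (also lets the JIT optimize)
        model(image)
    torch.cuda.synchronize()       # make sure all queued work is done
    t1 = time.time()
    for _ in range(100):
        model(image)
    torch.cuda.synchronize()       # wait for the last forward pass to finish
    t2 = time.time()

print('avg: {:.4f}s'.format((t2 - t1) / 100))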
st180678
I am aware of the synchronization. I measure by averaging over 100 forward passes (no difference with/without synchronization). If I use with torch.jit.optimized_execution(False): I get similar time as the original model. model = torchvision.models.resnet50(pretrained=False).cuda() model.eval() model = torch.jit.script(model) # Disable all gradient computations torch.set_grad_enabled(False) # Load & transform the image # warm-up embedding = model(image) t1 = time.time() N = 100 for i in range(N): embedding = model(image) t2 = time.time() t = t2 - t1 print('Time: {:.4f}, avg: {:.4f}'.format(t, t / N)) I wonder if TorchScript is sensitive to GPU architecture (faster on the recent GPUs, but slower on old GPUs?). I have not done the same comparison on the new GPUs yet (in any case, I am interested in the old GPUs as they will be used in prod).
st180679
Not yet. I am guessing it might be related to two things: (1) Model’s network architecture, (2) Type of GPU. I will update here if/when I find a definitive answer.
st180680
I compared the performance (speed) of Torchvision's original SqueezeNet model with torch.jit.script(model), which I expected to speed things up because I thought TorchScript was asynchronous, but the original model was faster. What's the reason? What did I miss?
scripted model: 0.2353s
original model: 0.1644s
st180681
Solved by googlebot in post #4 as I said, do something like: net = jit.script(models.squeezenet1_0(pretrained=True).cuda().eval()) net(x); net(x) and only then measure time (e.g. %timeit net(x))
st180682
It is not asynchronous (beyond CUDA kernel launches, which is not related to jit), just a python-less execution mode with optimizations.
One thing I've seen is that some jitted operations incorrectly enable requires_grad, but the simpler explanation is that you're not measuring it right - time the THIRD call of the compiled model (actually, from your screenshot it seems you're compiling twice [i.e. two model objects], which is also incorrect). The reason is that the profiling mode executor creates optimized bytecode on the second call.
st180683
Thanks for replying. The time was measured separately for torch.jit.script and the original model, like this:

torch.jit.script(models.squeezenet1_0(pretrained=True).cuda().eval())
torch.jit.script(models.squeezenet1_0(pretrained=True).cuda().eval())
#models.squeezenet1_0(pretrained=True).cuda().eval()
#models.squeezenet1_0(pretrained=True).cuda().eval()

and then like this:

#torch.jit.script(models.squeezenet1_0(pretrained=True).cuda().eval())
#torch.jit.script(models.squeezenet1_0(pretrained=True).cuda().eval())
models.squeezenet1_0(pretrained=True).cuda().eval()
models.squeezenet1_0(pretrained=True).cuda().eval()

Why exactly is TorchScript slower than PyTorch?
st180684
as I said, do something like: net = jit.script(models.squeezenet1_0(pretrained=True).cuda().eval()) net(x); net(x) and only then measure time (e.g. %timeit net(x))
st180685
As you say, Torchscript is faster. Is this the result of different ways of optimizing it? I want a detailed explanation.
st180686
In a nutshell:
- "compilation" analyzes whole functions, with knowledge about variable types - some optimizations are done at this level (e.g. dead code elimination)
- the python bytecode interpreter is not used to execute generated code - a more specialized executor for statically typed code supposedly works faster
- fusion optimizations further compile specialized cuda kernels, so e.g. a.mul(b).add(c) is computed in one go
- some patterns have specialized optimizations, e.g. conv+batchnorm
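As a small illustration of the fusion point (a sketch; it assumes a CUDA device and uses graph_for only to peek at the optimized graph):

import torch

@torch.jit.script
def fused(a, b, c):
    # pointwise chain; after warm-up the fuser can emit a single kernel for it
    return a.mul(b).add(c).relu()

a, b, c = (torch.randn(1024, 1024, device='cuda') for _ in range(3))
fused(a, b, c); fused(a, b, c)        # the profiling executor optimizes after the first calls
print(fused.graph_for(a, b, c))       # the optimized graph should show a fusion group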
st180687
What is the difference between "RecursiveScriptModule" and the original model? Does it work differently internally? I want a detailed explanation.
st180688
Hi there, I would like to know whether TorchScript can be considered a PyPy-like JIT for PyTorch, so that we can use TorchScript instead of PyPy. Is it possible to simultaneously use PyPy to speed up non-PyTorch code snippets while TorchScript is used for the PyTorch modules? Best, Ahmed
st180689
I believe PyTorch's C++ extension module makes it impossible to switch to PyPy. There is some remote similarity between PyPy and TorchScript, yes.
st180690
PyTorch code usually has to use multi-processing instead of multi-threading because of the GIL, but can TorchScript get a multi-threading benefit because it is JIT-compiled? Is it correct that JIT-compiled code is not bound by the GIL? And is it correct that the Python interpreter is used when running Python code, while the JIT interpreter is used when running TorchScript?
st180691
Hi, I am trying to modify the "torch/csrc/autograd/saved_variable.cpp" file; I call some Python functions in the file. I made the following modification:

#include <torch/csrc/python_headers.h>
#include <pybind11/pybind11.h>
#include <torch/csrc/utils/object_ptr.h>
#include <torch/csrc/autograd/python_variable.h>
#include <torch/csrc/Exceptions.h>
// original include

SavedVariable::SavedVariable(const Variable& variable, bool is_output, bool is_inplace_view) {
  auto mymodule = THPObjectPtr(PyImport_ImportModule("mymodule"));
  // prepare inputs
  // call corresponding python functions
  // get c++ object from python results
}

Which CMake file should I change to make it compile? If I do not change anything, it throws the error "/home/ec2-user/pytorch/build/lib/libtorch_cpu.so: undefined reference to `PyInstanceMethod_Type'". I think I need to link the Python target and include the Python lib somewhere, but where should I put it? Thanks a lot!
st180692
Suppose I have a model like this:

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(10, 10)
        self.part2 = nn.Linear(10, 10)

    def forward(self, x):
        x1 = self.part1(x)
        x2 = self.part2(x)
        return x1 + x2

In eager mode, I'm sure it's not executed in parallel, but when I torch.jit.script the model, does TorchScript automatically turn the two branches into async calls? If so, does the optimization depend on the device being CPU or CUDA?
st180693
I did a simple experiment and found out torch.jit.fork made code slower on both cpu and cuda import torch import torch.nn as nn class WithoutFork(nn.Module): def __init__(self): super(WithoutFork, self).__init__() self.part1 = nn.Linear(10, 1000) self.part2 = nn.Linear(10, 1000) self.part3 = nn.Linear(10, 1000) def forward(self, x): x1 = self.part1(x) x2 = self.part2(x) x3 = self.part3(x) return x1 + x2 + x3 class WithFork(nn.Module): def __init__(self): super(WithFork, self).__init__() self.part1 = nn.Linear(10, 1000) self.part2 = nn.Linear(10, 1000) self.part3 = nn.Linear(10, 1000) def forward(self, x): f1 = torch.jit.fork(self.part1, x) f2 = torch.jit.fork(self.part2, x) f3 = torch.jit.fork(self.part3, x) fut = [f1, f2, f3] xs = [torch.jit.wait(f) for f in fut] return torch.stack(xs, 0).sum(0) without_fork = WithoutFork().to('cuda') with_fork = WithFork().to('cuda') s_without_fork = torch.jit.script(without_fork) s_with_fork = torch.jit.script(with_fork) x = torch.randn(100, 10).cuda() %%timeit _ = without_fork(x) >> The slowest run took 20.22 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 5: 113 µs per loop %%timeit _ = with_fork(x) >> The slowest run took 17.37 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 5: 174 µs per loop %%timeit _ = s_without_fork(x) >> The slowest run took 524.84 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 5: 88 µs per loop %%timeit _ = s_with_fork(x) >> The slowest run took 25.88 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 5: 238 µs per loop # CPU without_fork = WithoutFork() with_fork = WithFork() s_without_fork = torch.jit.script(without_fork) s_with_fork = torch.jit.script(with_fork) x = torch.randn(100, 10) %%timeit _ = without_fork(x) >> The slowest run took 6.67 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 5: 528 µs per loop %%timeit _ = with_fork(x) >> The slowest run took 8.37 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 5: 785 µs per loop %%timeit _ = s_without_fork(x) >> 1000 loops, best of 5: 516 µs per loop %%timeit _ = s_with_fork(x) >> 1000 loops, best of 5: 829 µs per loop So torchscript is indeed doing optimizations answering my own question torchscript-optimization?
st180694
Check the number of inter-op parallelism threads you are using. It could also be the case that your ops take so little time that the threading overhead is not worth the asynchronous processing.
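For the first point, the relevant knobs would be something like:

import torch

print(torch.get_num_interop_threads())   # threads used for torch.jit.fork tasks
print(torch.get_num_threads())           # intra-op threads

# must be called once, before any inter-op parallel work has started
torch.set_num_interop_threads(4)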
st180695
Is it possible to do computation simultaneously on the host CPU device and the GPU device?
st180696
You mean, training a model on the CPU and the GPU simultaneously? I can’t think of any reason you would want to do that because the CPU is so much slower that anything it gets would be computed faster by being queued for the GPU.
st180697
Simple Conv2d function cannot be scripted and reports a RuntimeError. Here is my simple Conv2d module, which I want to script using torch.jit.script:

import torch

class Conv2dCell(torch.nn.Module):
    def __init__(self):
        super(Conv2dCell, self).__init__()

    def forward(self, x):
        conv = torch.nn.Conv2d(1, 3, 3, stride=1)
        output = conv(x)
        return output

m = Conv2dCell()
scripted_m = torch.jit.script(m)

Running this piece of code will give the following error message:

Traceback (most recent call last):
  File "conv2d.py", line 13, in <module>
    scripted_m = torch.jit.script(m)
  File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/__init__.py", line 1261, in script
    return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
  File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/_recursive.py", line 305, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)
  File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/_recursive.py", line 361, in create_script_module_impl
    create_methods_from_stubs(concrete_type, stubs)
  File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/_recursive.py", line 279, in create_methods_from_stubs
    concrete_type._create_methods(defs, rcbs, defaults)
  File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/__init__.py", line 1108, in _compile_and_register_class
    _jit_script_class_compile(qualified_name, ast, rcb)
RuntimeError:
Arguments for call are not valid. The following variants are available:
  _pair(float[2] x) -> (float[]):
  Expected a value of type 'List[float]' for argument 'x' but instead found type 'Tensor'.
  _pair(int[2] x) -> (int[]):
  Expected a value of type 'List[int]' for argument 'x' but instead found type 'Tensor'.
The original call is:
  File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 336
                 padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros'):
        kernel_size = _pair(kernel_size)
                      ~~~~~ <--- HERE
        stride = _pair(stride)
        padding = _pair(padding)
'Conv2d.__init__' is being compiled since it was called from 'Conv2d'
  File "conv2d.py", line 8
    def forward(self, x):
        conv = torch.nn.Conv2d(1, 3, 3, stride=1)
               ~~~~~~~~~~~~~~~ <--- HERE
        output = conv(x)
        return output
'Conv2d' is being compiled since it was called from 'Conv2dCell.forward'
  File "conv2d.py", line 8
    def forward(self, x):
        conv = torch.nn.Conv2d(1, 3, 3, stride=1)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        output = conv(x)
        return output

I am using PyTorch 1.5.1 and Python 3.6.13; could someone help me identify the problem?
st180698
You are recreating a conv layer in the forward, which is most likely wrong. (The usual way would be to initialize it in the __init__ method and use it in the forward) Note that this would initialize a new layer in each forward pass and thus it won’t be trained. If that’s intended, could you explain your use case a bit more?
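For reference, the usual pattern is roughly:

import torch

class Conv2dCell(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # created once, so the parameters are registered and can be trained
        self.conv = torch.nn.Conv2d(1, 3, 3, stride=1)

    def forward(self, x):
        return self.conv(x)

scripted_m = torch.jit.script(Conv2dCell())        # scripts without the _pair error
print(scripted_m(torch.randn(1, 1, 8, 8)).shape)   # torch.Size([1, 3, 6, 6])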
st180699
Hi ptrblck, No special use case here. I just want to know if this (not being able to define a Conv2d layer in the forward function) is a restriction in PyTorch? Is there formal documentation for this? I also notice that we can actually use some simple ops in the forward function, like add or relu; what is the difference between simple ops and more complicated ops like Conv2d?