st182000 | Thanks. But I got an error.
my code is like this:
class BidirectionalLSTM(torch.jit.ScriptModule):
    __constants__ = ['rnn']

    # Inputs hidden units Out
    def __init__(self, nIn, nHidden, nOut):
        super(BidirectionalLSTM, self).__init__()
        self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)
        self.embedding = nn.Linear(nHidden * 2, nOut)

    @torch.jit.script_method
    def forward(self, input):
        recurrent, _ = self.rnn(input)
        T, b, h = recurrent.size()
        t_rec = recurrent.view(T * b, h)
        output = self.embedding(t_rec)  # [T * b, nOut]
        output = output.view(T, b, -1)
        return output

xx = BidirectionalLSTM(256, 512, 512)
xx.save("xxh.pt")
got an error
could not export python function call <python_value>. Remove calls to python functions before export.:
@torch.jit.script_method
def forward(self, input):
recurrent, _ = self.rnn(input)
~~~~~~~~ <--- HERE
I added one line of code:
__constants__ = [ 'rnn']
and got this:
TypeError: 'LSTM' object for attribute 'rnn' is not a valid constant.
Valid constants are:
1. a nn.ModuleList
2. a value of type {bool, float, int, str, NoneType, function, device, layout, dtype}
3. a list or tuple of (2) |
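For reference, a tracing-based workaround (not from the thread) that avoids scripting the LSTM call altogether: keep the module as a plain nn.Module and trace it with an example input. This is only a sketch; tracing bakes in the shapes seen at trace time, so the view over T * b will not generalize to other sequence lengths or batch sizes.
import torch
import torch.nn as nn

class BidirectionalLSTMTraced(nn.Module):
    def __init__(self, nIn, nHidden, nOut):
        super(BidirectionalLSTMTraced, self).__init__()
        self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)
        self.embedding = nn.Linear(nHidden * 2, nOut)

    def forward(self, input):
        recurrent, _ = self.rnn(input)
        T, b, h = recurrent.size()
        t_rec = recurrent.view(T * b, h)
        output = self.embedding(t_rec)  # [T * b, nOut]
        return output.view(T, b, -1)

xx = BidirectionalLSTMTraced(256, 512, 512)
# example input of shape (T, b, nIn); the trace is specialized to this shape
traced = torch.jit.trace(xx, torch.randn(10, 4, 256))
traced.save("xxh.pt")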
st182001 | Hello,
I am trying to deploy a pytorch model with dropout layers using torchscript into the c++ api. However, I run into an issue where the parameters which can be loaded using the regular python script and the torchscript model are different. When setting up the model by inheriting from torch.jit.ScriptModule (as is necessary to export the annotated model) I observe that for every dropout layer, a parameter named *.training which is empty is created and causes a problem when loading weights from a file stream, since the model weights exported from the model inherited from nn.Module don’t have a parameter associated with the dropout layers (see the example outputs below).
I realize that it is uncommon to preserve dropout layers outside of training, however in this case, the model is supposed to retain the dropout layers to exhibit stochastic behavior.
Below are two versions of the code I run for the model: one uses plain PyTorch, and in the other I try to create a TorchScript module. Along with the code I have attached the output comparison between the state dictionary of the torchscript and the pytorch modules.
Below is the output from printing the first few parameters in the model state dictionary when the model is created using torch.jit.ScriptModule (notice the empty tensor parameter for layer 2, “fc.2.training”)
Model's state_dict: fc.0.bias torch.Size([1280])
fc.0.weight torch.Size([1280, 74])
fc.1.weight torch.Size([1])
fc.2.training torch.Size([])
fc.3.bias torch.Size([896])
fc.3.weight torch.Size([896, 1280])
Below is the output from printing the first few parameters in the model state dictionary when the model is created using nn.Module (here notice there is no parameter associated with layer 2)
Model's state_dict: fc.0.weight torch.Size([1280, 74])
fc.0.bias torch.Size([1280])
fc.1.weight torch.Size([1])
fc.3.weight torch.Size([896, 1280])
fc.3.bias torch.Size([896])
Below is a photo of the 2 python programs used to produce the results
[image: Dropout_code.PNG — screenshot of the two Python programs]
If anyone can help me figure out how to export my trained model so that I can load it into a C++ program with the dropout preserved, that would be much appreciated. |
st182002 | The training parameter was a hack that is not needed anymore, this PR fixes it so it won’t show up in the state dict anymore. Could you post your model as code so we can run it and repro your issue to make sure it’s fixed? Thanks! |
st182003 | (post withdrawn by author, will be automatically deleted in 24 hours unless flagged) |
st182004 | import torch
import torch.nn as nn
import pickle
from torch.autograd import Variable
import numpy as np
import pypcd
class Encoder_End2End_Annotated(torch.jit.ScriptModule):
__constants__ = ['encoder']
def __init__(self):
super(Encoder_End2End_Annotated, self).__init__()
self.encoder = nn.Sequential(nn.Linear(16053, 256), nn.PReLU(), #adds dropouts are not expected
nn.Linear(256, 256), nn.PReLU(),
nn.Linear(256, 60))
@torch.jit.script_method
def forward(self, x):
x = self.encoder(x)
return x
class MLP_NN_Annotated(torch.jit.ScriptModule):
__constants__ = ['fc']
def __init__(self):
super(MLP_NN_Annotated, self).__init__()
self.fc = nn.Sequential(
nn.Linear(74, 1280), nn.PReLU(), nn.Dropout(),
nn.Linear(1280, 896), nn.PReLU(), nn.Dropout(),
nn.Linear(896, 512), nn.PReLU(), nn.Dropout(),
nn.Linear(512, 384), nn.PReLU(), nn.Dropout(),
nn.Linear(384, 256), nn.PReLU(), nn.Dropout(),
nn.Linear(256, 128), nn.PReLU(), nn.Dropout(),
nn.Linear(128, 64), nn.PReLU(), nn.Dropout(),
nn.Linear(64, 32), nn.PReLU(),
nn.Linear(32, 7))
@torch.jit.script_method
def forward(self, x):
out = self.fc(x)
return out
# Creates the script
encoder = Encoder_End2End_Annotated()
MLP = MLP_NN_Annotated()
#modified to load weights
device = torch.device('cpu')
cae_filename = 'cae_encoder_140.pkl'
mlp_filename = 'mlp_PReLU_ae_dd140.pkl'
encoder.load_state_dict(torch.load(cae_filename, map_location=device))
# Print model's state_dict
print("Model's state_dict:")
for param_tensor in MLP.state_dict():
print(param_tensor, "\t", MLP.state_dict()[param_tensor].size())
#print(MLP.state_dict())
MLP.load_state_dict(torch.load(mlp_filename, map_location=device))
encoder.save("encoder_annotated.pt")
encoder.save("mlp_annotated.pt") |
st182005 | import torch.nn as nn
import torch.jit as jit
class TestModule(jit.ScriptModule):
def __init__(self):
super().__init__()
self.linear = nn.Linear(16, 16)
m = TestModule()
print(m.linear.in_features)
The code above throws AttributeError
>>> m.linear.in_features
Traceback (most recent call last):
File "/home/qbx2/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 1197, in __getattr__
return ScriptModule.__getattr__(self, attr)
File "/home/qbx2/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 1102, in __getattr__
return Module.__getattr__(self, attr)
File "/home/qbx2/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 535, in __getattr__
type(self).__name__, name))
AttributeError: 'WeakScriptModuleProxy' object has no attribute 'in_features'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/qbx2/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 1200, in __getattr__
return getattr(self.__dict__["_original"](), attr)
AttributeError: 'NoneType' object has no attribute 'in_features'
Looking at the code (https://github.com/pytorch/pytorch/blob/master/torch/jit/__init__.py#L1226), I think WeakScriptModuleProxy should have copied in_features and other fields to self here.
I’m just posting this issue here to check if any workaround for this already exists.
Thank you. |
st182006 | When nn modules are used in a ScriptModule, they get wrapped in an internal class that only copies the nn module’s buffers, submodules, and parameters. So as a workaround you could do something like self.linear.weight.shape[1] to get the in_features.
Really we should be copying everything though, could you file an issue on GitHub 2 with the same example code you posted here? |
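For example, a minimal sketch of that workaround (reading the shape of the copied weight instead of the missing Python attribute):
import torch.nn as nn
import torch.jit as jit

class TestModule(jit.ScriptModule):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

m = TestModule()
# nn.Linear stores weight as [out_features, in_features]; parameters are
# copied into the proxy, so the shape is still reachable.
in_features = m.linear.weight.shape[1]
print(in_features)  # 16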
st182007 | I have a pytorch script which accepts Optional[Tuple[int, int]] as the input. The script is exported to “script.pb”.
Then in C++ api, the script.pb is loaded. But how can I create C++ obj of type Optional[Tuple[int, int]]? Is there any documentation? |
st182008 | Looks like there was a bug exposed by this use case; the following will apply once this fix is merged.
Our C++ value type (IValue) doesn’t have any concept of optional/not, it is either some concrete value (e.g. Tuple[int, int]) or None. Only the JIT’s type system knows what things can be optional. You can see a small example of how to construct one here.
The C++ API tests may also be helpful. |
st182009 | @driazati let’s make an issue tracking improved C++ API documentation, especially of IValues, etc… I’m seeing a lot of questions on it, and what we have now is not great. |
st182010 | Just curious, in which case do you need to create a C++ obj of an optional type? I think if you have Optional[Tuple[int, int]], you can always pass concrete objs in. |
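For reference, the Python-side analogue of that point looks like this (a sketch, assuming a recent PyTorch where typing annotations are accepted in script functions):
import torch
from typing import Optional, Tuple

@torch.jit.script
def f(size: Optional[Tuple[int, int]] = None) -> torch.Tensor:
    if size is None:
        return torch.zeros([1, 1])
    else:
        # inside this branch the optional is refined to a concrete tuple
        return torch.zeros([size[0], size[1]])

print(f((2, 3)).shape)  # a concrete tuple is accepted for the optional argument
print(f(None).shape)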
st182011 | There is already an issue open, https://github.com/pytorch/pytorch/issues/17165 |
st182012 | I am writing a graph transform that needs to make modifications on a model based on graph information. My question is, how do I keep references between module names in the model and node names in the graph?
In other words, how do I reference parameters in the parent module when I am in a child module in the model? As far as I can tell, there is no graph information until the model has been compiled.
For example, let’s look at a snippet from Resnet18
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
...
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
...
The node layer2.0.downsample.0 gets its input from the node bn1, but there is no way to tell this until the graph is compiled. In my case, when I traverse the resnet model and I get to layer2.0.downsample.0, I want to know the number of out_channels from the parent node.
Stealing some ideas from the hiddenlayer project, I know I can use the jit compiler to get the graph
trace, out = torch.jit.get_trace_graph(model, args)
torch.onnx._optimize_trace(trace, torch.onnx.OperatorExportTypes.ONNX)
torch_graph = trace.graph()
which can then be traversed in trace order. However, all the model names and such are lost at this point. I could write my transformer by compiling a lookup table by traversing through the graph in trace order, and then applying the changes from the lookup table back to the model. But how would I keep references to the original modules while traversing the graph?
Any help is welcome, thanks! |
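For reference, on the model side alone (this does not by itself recover graph connectivity), the dotted names used above can be mapped back to module objects with named_modules(), e.g.:
import torchvision

model = torchvision.models.resnet18()
# lookup from qualified module names to the modules themselves
modules_by_name = dict(model.named_modules())
conv = modules_by_name["layer2.0.downsample.0"]
print(conv.out_channels)  # 128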
st182013 | You don’t know what your parent is. And that’s a design decision rather than a missing feature. For one thing you can use a single model several times.
If you want to pass runtime information to modules, pass them in as arguments (or use a functional interface in the first place).
Best regards
Thomas |
st182014 | All!
First of all, I’m new to this forum so excuse me if I’m not addressing the right audience.
I’m working on getting BERT forward and backprop traces from PyTorch jit and I ran into a problem: I seem to have encountered a case where the tensor dimensions don’t line up with my expectations. I’ll attach the full graph dump, but it’s unwieldy, so let me summarize:
The forward graph has the following relevant lines:
%x.8 : Float(1, 3, 10) = aten::add(%output.6, %bias, %28)
%self_size.3 : int[] = aten::size(%x.8)
%57 : int = prim::Constant[value=-1](), scope: BERT/SublayerConnection
%114 : int[] = prim::ListConstruct(%57), scope: BERT/SublayerConnection/LayerNorm[norm]
%mean : Float(1, 3, 1) = aten::mean(%x.8, %114, %44), scope: BERT/SublayerConnection/LayerNorm[norm]
%307 : int[] = aten::size(%mean)
The backprop graph has three inputs that are important here, with the following connections:
%76 : int[], <== %114 (input[76] connected to output[40])
%self_size.2 : int[], <== %self_size.3 (input[106] connected to output[70])
%108 : int[], <== %307 (input[108] connected to output[72])
And these values propagate through the backprop graph as follows:
%130 : int = prim::Constant[value=0](), scope: BERT/BERTEmbedding[embedding]/TokenEmbedding[token]
%300 : int = aten::select(%76, %130)
%225 : Tensor, %226 : Tensor = prim::GradOf[name="aten::sub"](%224)
block0():
%227 : Tensor = aten::_grad_sum_to_size(%224, %self_size.2)
%228 : Tensor = aten::neg(%224)
%229 : Tensor = aten::mul(%228, %134), scope: BERT/SublayerConnection/LayerNorm[norm]
%230 : Tensor = aten::_grad_sum_to_size(%229, %108), scope: BERT/SublayerConnection/LayerNorm[norm]
-> (%227, %230)
%301 : Tensor = aten::unsqueeze(%226, %300)
%128 : bool = prim::Constant[value=0](), scope: BERT/BERTEmbedding[embedding]/TokenEmbedding[token]
%302 : Tensor = aten::expand(%301, %self_size.2, %128)
So, if I read this right:
%128 is just a constant ‘false’, saying that the expand is an explicit one.
The second parameter (%self_size.2) is the list [1,3,10]
The first parameter (%301) is the result of an unsqueeze(%226, -1), where %226 is _grad_sum_to_size(%229, %108). %108 in turn is [1,3,1].
Winding it all forward again:
%226 will have the shape of %108, that is [1,3,1]
%301 will have one extra dimension added to it to the end by the unsqueeze, so it becomes [1,3,1,1]
Finally in the last line, it seems we try to expand a Tensor of the shape [1,3,1,1] by an expansion list of [1,3,10].
This doesn’t seem right, in fact it blows up if I try to do this in Python.
So, can you help me figure out where my logic is incorrect? What should I look at, what do I misunderstand about these operations?
The full graphs and their connectivity vectors are here: http://www.modularcircuits.com/download/bert.log 3
Thanks,
Andras |
st182015 | Hi Andras,
Thanks for asking here. I don’t know how exactly the program you have is wrong. But if it blows up in Python already, you can use pdb to debug on your python (non-jit) program, and see what’s wrong in the intermediate steps. Put something in your program __import__('pdb').set_trace() and you can see where it becomes wrong, so that we might have more context. |
st182016 | Thanks for the reply.
What I meant by ‘blowing up’ is that if I manually call ‘expand’ with a tensor of dimensions [1,3,1,1] and an expansion list of [1,3,10], I get a runtime error stating that the expansion list is shorter than the number of dimensions of the input tensor. Which of course makes complete sense, but for the life of me I can’t figure out how that wouldn’t be the case in this particular graph.
My original goal is not to execute the graph, merely to extract type information from it for each of the values. and that’s where of course I run into trouble with this particular subsection of it.
Thanks again,
Andras |
st182017 | Hi,
I have traced and saved a model that gives as output a python namedtuple. How do I go about accessing the fields of this output in the C++ api?
For eg -
from collections import namedtuple
NT = namedtuple('NT', ['output1', 'output2'])
def f(x):
    output_tuple = NT(x * 2, x / 2)
    return output_tuple
I am able to trace and save the above function using torch.jit.trace and would like to index into this output using the keyword arguments for NT in C++.
Thanks |
st182018 | Solved by driazati in post #2 |
st182019 | namedtuple does not work out of the box yet (though it is on our roadmap). In your example it is de-sugared to a regular tuple. You can see what’s going on under the hood with the .code property of the traced code. For example
from collections import namedtuple
MyTuple = namedtuple('MyTuple', ['output1', 'output2'])
def f(x):
    output_tuple = MyTuple(x * 2, x / 2)
    return output_tuple
traced = torch.jit.trace(f, (torch.ones(2, 2)))
print(traced.code)
Outputs
def forward(self,
x: Tensor) -> Tuple[Tensor, Tensor]:
_0 = (torch.mul(x, CONSTANTS.c0), torch.div(x, CONSTANTS.c0))
return _0
In C++ you can access the tuple like this
torch::IValue output = my_module->forward(...);
std::vector<torch::IValue>& tuple_elements = output.toTuple()->elements(); |
st182020 | I have trained a deep learning model using unet architecture in order to segment the nuclei in python and pytorch. I would like to load this pretrained model and make prediction in C++. For this reason, I obtained trace file(with pt extension). Then, I have run this code:
int main(int argc, const char* argv[]) {
Mat image;
image = imread("C:/Users/Sercan/PycharmProjects/samplepyTorch/test_2.png", CV_LOAD_IMAGE_COLOR);
std::shared_ptr<torch::jit::script::Module> module = torch::jit::load("C:/Users/Sercan/PycharmProjects/samplepyTorch/epistroma_unet_best_model_trace.pt");
module->to(torch::kCUDA);
std::vector<int64_t> sizes = { 1, 3, image.rows, image.cols };
torch::TensorOptions options(torch::ScalarType::Byte);
torch::Tensor tensor_image = torch::from_blob(image.data, torch::IntList(sizes), options);
tensor_image = tensor_image.toType(torch::kFloat);
auto result = module->forward({ tensor_image.to(at::kCUDA) }).toTensor();
result = result.squeeze().cpu();
result = at::sigmoid(result);
cv::Mat img_out(image.rows, image.cols, CV_32F, result.data<float>());
cv::imwrite("img_out.png", img_out);
}
Image outputs ( First image: test image, Second image: Python prediction result, Third image: C++ prediction result):
[image: concatenated.png]
As you see, C++ prediction output is not similar to python prediction output. Could you offer a solution to fix this problem? |
st182021 | This seems like a bug (maybe related to #18617), could you file a report on GitHub? Thanks! |
st182022 | Hi,
I created an issue at:
github.com/pytorch/pytorch.github.io
Issue: “pytorch blog the-road-to-1_0 has example code that failed to work”, opened by liqunfu on 2019-04-06
However, I am thinking maybe here is the right place to ask for help. Thus I copied my ask here:
This is regarding sample at:
https://pytorch.org/blog/the-road-to-1_0/:
from torch.jit import script
@script
def rnn_loop(x):
hidden = None
for x_t in x.split(1):
x, hidden = model(x, hidden)
return x
I cannot make it to work. Here is my code:
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np
def test_ScriptModelRNN():
    class SimpleRNNCell(nn.Module):
        def __init__(self, input_size, hidden_size):
            super(SimpleRNNCell, self).__init__()
            self.linear_h = nn.Linear(input_size, hidden_size)

        def forward(self, inp, h_0):
            h = self.linear_h(inp)
            return h + h_0, h

    with torch.no_grad():
        sequence_len, input_size, hidden_size = 4, 3, 2
        model = SimpleRNNCell(input_size, hidden_size)
        hidden = torch.zeros(1, hidden_size)
        # # test cell
        # cell_input = torch.randn(input_size)
        # cell_output, hidden = model(cell_input, hidden)
        # import pdb; pdb.set_trace()
        # #
        @torch.jit.script
        def rnn_loop(x):
            hidden = None
            for x_t in x.split(1):
                x, hidden = model(x_t, hidden)
            return x

        input = torch.randn(sequence_len, input_size)
        output = rnn_loop(input)
I am getting:
Exception has occurred: RuntimeError
for operator (Tensor 0, Tensor 1) -> (Tensor, Tensor):
expected a value of type Tensor for argument '1' but found Tensor?
@torch.jit.script
def rnn_loop(x):
hidden = None
for x_t in x.split(1):
x, hidden = model(x_t, hidden)
~~~~~~ <--- HERE
return x
:
@torch.jit.script
def rnn_loop(x):
hidden = None
for x_t in x.split(1):
x, hidden = model(x_t, hidden)
~~~~~ <--- HERE
return x
File "/home/liqun/pytorch/torch/jit/ **init** .py", line 751, in script
_jit_script_compile(mod, ast, _rcb, get_default_args(obj))
File "/home/liqun/Untitled Folder/test_onnx_export.py", line 218, in test_ScriptModelRNN
@torch.jit.script
File "/home/liqun/Untitled Folder/test_onnx_export.py", line 282, in
test_ScriptModelRNN()
File "/home/liqun/.conda/envs/py36/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/liqun/.conda/envs/py36/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/liqun/.conda/envs/py36/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname) |
st182023 | Solved by richard in post #2 |
st182024 | Thanks for the question, @liqun.
The problem looks like you’re passing None as the hidden:
def rnn_loop(x):
    hidden = None
    for x_t in x.split(1):
        x, hidden = model(x_t, hidden)
However, you’re not handling None hidden values in your model.
Please see the following file https://github.com/pytorch/benchmark/blob/master/rnns/fastrnns/custom_lstms.py for some samples of custom RNNs written with TorchScript. |
st182025 | Thanks Richard!
The link is very helpful. I will take a close look.
BTW, if I assign a tensor value instead of None to the hidden, like this:
hidden = torch.zeros(1, hidden_size)
I am getting:
python value of type ‘int’ cannot be used as a value:
@torch.jit.script
def rnn_loop(x):
hidden = torch.zeros(1, hidden_size)
~~~~~~~~~~~ <— HERE
for x_t in x.split(1):
x, hidden = model(x_t, hidden)
return x
File “/home/liqun/pytorch/torch/jit/__init__.py”, line 751, in script
_jit_script_compile(mod, ast, _rcb, get_default_args(obj))
Actually this was the error I have been getting. The None assignment was just to workaround this error to see how far I can go.
And if I assign a tensor value to the hidden outside of the loop, I am getting:
python value of type ‘Tensor’ cannot be used as a value:
@torch.jit.script
def rnn_loop(x):
#hidden = torch.zeros(1, hidden_size)
for x_t in x.split(1):
x, hidden = model(x_t, hidden)
~~~~~~ <— HERE
Thanks again,
Liqun |
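For reference, a minimal sketch (not from the thread) that sidesteps both errors by passing the initial hidden state and the weights in as tensor arguments, instead of capturing Python values like hidden_size in the closure:
import torch

@torch.jit.script
def rnn_loop(x, h0, w_ih, w_hh):
    # type: (Tensor, Tensor, Tensor, Tensor) -> Tensor
    hidden = h0
    for x_t in x.split(1):
        hidden = torch.tanh(torch.mm(x_t, w_ih) + torch.mm(hidden, w_hh))
    return hidden

seq_len, input_size, hidden_size = 4, 3, 2
x = torch.randn(seq_len, input_size)
h0 = torch.zeros(1, hidden_size)
w_ih = torch.randn(input_size, hidden_size)
w_hh = torch.randn(hidden_size, hidden_size)
print(rnn_loop(x, h0, w_ih, w_hh).shape)  # torch.Size([1, 2])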
st182026 | Hi,
I am trying to save images at validation time after almost every epoch to see how my network is learning but I am getting this error.
Here is my code to save images.
def test(epoch, test_loss_list):
model.eval()
for batch_idx, (subject) in enumerate(validation_loader):
image = subject['image_data']
mask = subject['gt_data']
if params['cuda']:
image, mask = image.cuda(), mask.cuda() #Loading images into the GPU and ignore the affine.
with torch.no_grad():
output = model(image)
# loss = criterion(output, mask)
loss = dice_loss(output, mask)
test_loss_list.append(loss.data.item())
# Saving the image and its mask in its own folder
save_image(subject['image_name'], image.cpu().detach().numpy(), np.array(list(subject['affine'])), epoch, params['epoch_dir'])
save_image(subject['image_name'], mask.cpu().detach().numpy(), np.array(list(subject['affine'])), epoch, params['epoch_dir'], mask = True)
if batch_idx % int(params['log_interval']) == 0:
print('Test Epoch: {} [{}/{} ({:.0f}%)]\tAverage DICE Loss: {:.6f}'.format(
epoch, batch_idx * len(image), len(validation_loader.dataset),
100. * batch_idx / len(validation_loader), loss.data.item()))
for param_group in optimizer.param_groups:
print("Learning rate: ", param_group['lr'])
sys.stdout.flush()
def save_image(image_name, image, affine, epoch, folder, mask = False):
"""
parameters:
image_name : takes in a string of the image name
image : expecting a numpy array
affine : expecting a numpy affine
epoch : the epoch count whichever you're running from
folder : the epoch where stuff is gonna be stored in
"""
c = nib.Nifti1Image(image, affine)
os.mkdir(os.path.join(folder, epoch, image_name))
if mask == True:
c = np.array(c, dtype = np.int8)
nib.save(c, os.path.join(folder, epoch, image_name, image_name + '_mask.nii.gz'))
else:
nib.save(c, os.path.join(folder, epoch, image_name, image_name + '.nii.gz'))
return
This is the error generated
save_image(subject['image_name'], image.cpu().detach().numpy(), np.array(list(subject['affine'])), epoch, params['epoch_dir'])
ValueError: only one element tensors can be converted to Python scalars
Can anyone tell me where I may be going wrong? |
st182027 | The stack trace
runfile('/somestuff/pytorch_projects/trainer.py', wdir='/somestuff/pytorch_projects')
File "/home/someone/anaconda3/envs/pytorch/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 786, in runfile
execfile(filename, namespace)
File "/home/someone/anaconda3/envs/pytorch/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/somestuff/pytorch_projects/trainer.py", line 318, in <module>
main()
File "/somestuff/pytorch_projects/trainer.py", line 298, in main
test(i, test_loss_list)
File "/somestuff/pytorch_projects/trainer.py", line 252, in test
save_image(subject['image_name'], image.cpu().detach().numpy(), np.array(list(subject['affine'])), epoch, params['epoch_dir'])
ValueError: only one element tensors can be converted to Python scalars |
st182028 | Thanks for the information!
That doesn’t really clear things up, so could you please print the shapes and values of each argument you are passing?
print(subject['image_name'])
print(image.cpu().detach().numpy().shape)
... |
st182029 | subject['image_name'] #this is a string, at least that's what my dataloader is returning
image #this is a tensor object of shape [1, 128, 128, 128]
I am new to Pytorch so it is hard to figure this out on how to detach and save it.
Here is my dataloader object
class SomethingDataset(Dataset):
def __init__(self, csv_file, root_dir):
self.df = pd.read_csv(csv_file, header = None)
self.root = root_dir
def __len__(self):
return len(self.df)
def __getitem__(self, subject_id):
# folder_path = self.df[subject_id, 0]
image_name = self.df.iloc[subject_id, 0]
image_path = os.path.join(self.df.iloc[subject_id, 1])
gt_path = os.path.join(self.df.iloc[subject_id, 2])
image = nib.load(image_path)
gt = nib.load(gt_path)
image_data = np.reshape(image.get_fdata().astype(np.float32), (1, 128, 128, 128))
gt_data = np.reshape(gt.get_data().astype(np.float32), (1, 128, 128, 128))
affine = image.affine
sample = {'image_name':image_name, 'image_data':image_data, 'gt_data':gt_data, 'affine':affine}
return sample |
st182030 | What do the other arguments print out ( np.array(list(subject['affine'])), epoch, params['epoch_dir'])?
Although your image dimensions look strange, I assume nib.Nifti1Image should handle it.
Could you try to debug your code line by line? E.g. I’m not sure if nib.Nifti1Image takes images in this shape or without the batch dimension, so you could try to call image = image.squeeze() before casting it. |
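For example, a minimal sketch of that suggestion (with stand-ins for the thread’s image and affine):
import torch
import numpy as np
import nibabel as nib

image = torch.rand(1, 128, 128, 128)   # stand-in for the batched tensor
affine = np.eye(4)                     # stand-in for the image affine

# drop the leading batch/channel dimension before handing the array to nibabel
img = image.squeeze().cpu().detach().numpy()   # [1, 128, 128, 128] -> [128, 128, 128]
nifti = nib.Nifti1Image(img.astype(np.float32), affine)
nib.save(nifti, "debug_volume.nii.gz")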
st182031 | The Batch Dimension is such a big problem while saving the images. Thank you for the hint. I will try to run this and update soon. |
st182032 | Is it a good time to use JIT at a production level?
Or is TensorFlow Serving better? |
st182033 | Hi.
is there a difference between
x = F.leaky_relu(self.in_2(self.conv1(x)), inplace=True)
and
x = self.conv1(x)
x = self.in_2(x)
x = F.leaky_relu(x, inplace=True)
Does writing in a single line mean multiple feature maps won’t be created?
What are the pros and cons? |
st182034 | Both code snippets will create the same output.
It’s basically a question of your coding style.
In the second example, you could add some debugging print statement slightly easier, e.g. in case you would like to see the shape of the intermediate activation. |
st182035 | Thanks, I thought that the first one would use less memory as compared to the second one. It’s not the case, right? |
st182036 | That shouldn’t be the case, since x is reused and thus will be overwritten while Autograd will take care of creating the same computation graph. |
st182037 | If I make a new module to use in a NN, should I always be adding the @weak_module annotation, or in general, when would I want to use this for my own modules?
Specifically in my case, I was considering subclassing torch.nn.conv2d so that I could manipulate the weights before sending it through the normal conv2d module, so should I add @weak_module to my new subclass?
The original conv2d source 2 has it, so wasn’t sure if my modules also need it. |
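For reference, a minimal sketch of that kind of subclass as a plain nn.Module, with a purely hypothetical weight manipulation and no JIT annotations:
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClampedConv2d(nn.Conv2d):
    # hypothetical example: clamp the weights before running the usual convolution
    def forward(self, x):
        w = self.weight.clamp(-0.1, 0.1)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

conv = ClampedConv2d(3, 8, kernel_size=3, padding=1)
out = conv(torch.rand(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 8, 32, 32])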
st182038 | Solved by driazati in post #4 |
st182039 | You shouldn’t need to worry about the @weak_module annotation, that is internal to JIT to make the nn library compatible and low overhead with JIT, it is not a public API so it means use it at your own risk.
In practice, you don’t need to care about that annotation in most cases. If you want to extend a nn class and use it in TorchScript, I encourage you to make a ScriptModule as indicated by the JIT documentation (https://pytorch.org/docs/stable/jit.html 12) |
st182040 | But would using @weak_module and @weak_script make my custom nn changes faster, or no? |
st182041 | @weak_module and @weak_script only delay compilation until the module they’re attached to is used, so there would be no performance change with it. |
st182042 | Hi,
I found a strange behavior (maybe it’s normal, idk) during a JIT conversion of one of my model.
When I use the jit capabilities to export my model with torch.jit.trace(model, torch.randn(1, 2, 10, 10, 10)), if I have a torch.nn.functional.interpolate(x, scale_factor=2, model="trilinear", align_corners=True) inside the forward pass, the jit model seems to be working with an input of size (1, 2, 10, 10, 10) strictly.
Here is a small script to reproduce the behavior:
import torch
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv = torch.nn.Conv3d(5, 1, 3, padding=1, bias=False)
def forward(self, x):
new_x = self.conv(x)
up_x = torch.nn.functional.interpolate(
new_x, scale_factor=2, mode="trilinear", align_corners=True)
return up_x
inp_5 = torch.randn(1, 5, 5, 5, 5)
inp_10 = torch.randn(1, 5, 10, 10, 10)
inp_15 = torch.randn(1, 5, 15, 15, 15)
model = Model()
model.eval()
trace = torch.jit.trace(model, inp_10)
trace.save("trace.pth")
result_model_5 = model(inp_5)
result_model_10 = model(inp_10)
result_model_15 = model(inp_15)
t_model = torch.jit.load("trace.pth")
result_t_model_5 = t_model(inp_5)
result_t_model_10 = t_model(inp_10)
result_t_model_15 = t_model(inp_15)
print("Shape 5, {} ||| {}".format(result_model_5.shape, result_t_model_5.shape))
print("Shape 10, {} ||| {}".format(result_model_10.shape, result_t_model_10.shape))
print("Shape 15, {} ||| {}".format(result_model_15.shape, result_t_model_15.shape))
torch.allclose(result_model_5, result_t_model_5)
torch.allclose(result_model_10, result_t_model_10)
torch.allclose(result_model_15, result_t_model_15)
Outputs:
Shape 5, torch.Size([1, 1, 10, 10, 10]) ||| torch.Size([1, 1, 20, 20, 20])
Shape 10, torch.Size([1, 1, 20, 20, 20]) ||| torch.Size([1, 1, 20, 20, 20])
Shape 15, torch.Size([1, 1, 30, 30, 30]) ||| torch.Size([1, 1, 20, 20, 20])
Traceback (most recent call last):
File "main.py", line 37, in <module>
torch.allclose(result_model_5, result_t_model_5)
RuntimeError: The size of tensor a (10) must match the size of tensor b (20) at non-singleton dimension 4
Is it normal behavior? |
st182043 | Solved by Michael_Suo in post #4 |
st182044 | Seems Odd,
import torch
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv = torch.nn.Conv3d(5, 1, 3, padding=1, bias=False)
def forward(self, x):
new_x = self.conv(x)
up_x = torch.nn.functional.interpolate(
new_x, scale_factor=2, mode="trilinear", align_corners=True)
return up_x
inp_5 = torch.randn(1, 5, 5, 5, 5)
inp_10 = torch.randn(1, 5, 10, 10, 10)
inp_15 = torch.randn(1, 5, 15, 15, 15)
model = Model()
model.eval()
trace = torch.jit.trace(model, inp_10)
trace.save("trace.pth")
result_model_5 = model(inp_5)
result_model_10 = model(inp_10)
result_model_15 = model(inp_15)
print("Shape 5, {} ||| {}".format(result_model_5.shape, result_model_5.shape))
print("Shape 10, {} ||| {}".format(result_model_10.shape, result_model_10.shape))
print("Shape 15, {} ||| {}".format(result_model_15.shape, result_model_15.shape))
t_model = torch.jit.load("trace.pth")
result_t_model_5 = t_model(inp_5)
result_t_model_10 = t_model(inp_10)
result_t_model_15 = t_model(inp_15)
print("Shape 5, {} ||| {}".format(result_model_5.shape, result_t_model_5.shape))
print("Shape 10, {} ||| {}".format(result_model_10.shape, result_t_model_10.shape))
print("Shape 15, {} ||| {}".format(result_model_15.shape, result_t_model_15.shape))
torch.allclose(result_model_5, result_t_model_5)
torch.allclose(result_model_10, result_t_model_10)
torch.allclose(result_model_15, result_t_model_15
Execute this and revert, please. |
st182045 | I don’t see any difference with your proposal:
Shape 5, torch.Size([1, 1, 10, 10, 10]) ||| torch.Size([1, 1, 10, 10, 10])
Shape 10, torch.Size([1, 1, 20, 20, 20]) ||| torch.Size([1, 1, 20, 20, 20])
Shape 15, torch.Size([1, 1, 30, 30, 30]) ||| torch.Size([1, 1, 30, 30, 30])
Shape 5, torch.Size([1, 1, 10, 10, 10]) ||| torch.Size([1, 1, 20, 20, 20])
Shape 10, torch.Size([1, 1, 20, 20, 20]) ||| torch.Size([1, 1, 20, 20, 20])
Shape 15, torch.Size([1, 1, 30, 30, 30]) ||| torch.Size([1, 1, 20, 20, 20])
Traceback (most recent call last):
File "test_ans.py", line 39, in <module>
torch.allclose(result_model_5, result_t_model_5)
RuntimeError: The size of tensor a (10) must match the size of tensor b (20) at non-singleton dimension 4
There is still an issue(?) with the model generated with the JIT. When you say revert you mean retest? |
st182046 | Tracing doesn’t understand dynamic control flow, so sometimes it will “constant-ify” shapes in your model. Try turning your model in to a ScriptModule and using TorchScript; it should fix this problem. |
st182047 | Thanks for your time!
I made it works with something like this:
class Interpolate(torch.jit.ScriptModule):
    __constants__ = ["scale_factor", "mode", "align_corners"]

    def __init__(self, scale_factor=2.0, mode="nearest", align_corners=None):
        super(Interpolate, self).__init__()
        self.scale_factor = scale_factor
        self.mode = mode
        self.align_corners = align_corners

    @torch.jit.script_method
    def forward(self, X):
        return nn.functional.interpolate(X, scale_factor=self.scale_factor,
                                         mode=self.mode, align_corners=self.align_corners)
I finally get the desired output:
Shape 5, torch.Size([1, 1, 10, 10, 10]) ||| torch.Size([1, 1, 10, 10, 10])
Shape 10, torch.Size([1, 1, 20, 20, 20]) ||| torch.Size([1, 1, 20, 20, 20])
Shape 15, torch.Size([1, 1, 30, 30, 30]) ||| torch.Size([1, 1, 30, 30, 30])
True
True
True |
st182048 | I am trying to learn more about the JIT compiler and was implementing the examples from the documentation, particularly this one (from https://pytorch.org/docs/stable/jit.html#torch.jit.ScriptModule 2):
import torch
@torch.jit.script
def foo(x, y):
    if x.max() > y.max():
        r = x
    else:
        r = y
    return r
The execution of the code resulted in this error:
[image: screenshot of the error traceback]
Do you guys have any ideas on what is wrong? |
st182049 | We made a change to make the bool casting more strict, try: if bool(x.max() > y.max()): instead. We need to update the docs to this effect, I’ll file a GH issue. |
st182050 | github.com/pytorch/pytorch
Issue: “[jit] be more permissive with bool casting”, opened by suo on 2019-03-23 |
st182051 | Hello,
What’s the best way to write TorchScript code which does this:
class S(torch.jit.ScriptModule):
    def __init__(self):
        self.tensor_constant = torch.ones(2)

    @torch.jit.script_method
    def forward(self):
        return self.tensor_constant + 2
S()
It fails with
attribute 'tensor_constant' of type 'Tensor' is not usable in a script method (Tensors must be added to a module as a buffer or parameter):
In other words, in TorchScript how can I use a tensor populated using a different module?
Thanks,
Omkar |
st182052 | There are 2 things wrong with the code
You created a new class ‘S’ which subclasses torch.jit.ScriptModule. After defining the class S, you have to call its super-constructor, i.e. run the __init__() function of the class you are subclassing from (which in your case is the __init__() of torch.jit.ScriptModule). This is done with super().__init__().
When creating Torch Scripts you have to use registered buffers (as the JIT compiles the module for you, so it must have info about everything).
class S(torch.jit.ScriptModule):
    def __init__(self):
        super().__init__()
        self.register_buffer('tensor_constant', torch.ones(2, dtype=torch.float))

    @torch.jit.script_method
    def forward(self):
        return self.tensor_constant + 2

S()
Ask for clarifications. |
st182053 | I have a model whose predict method takes an argument of type Dict[str, Tensor].
I have been able to successfully serialize and save the model as a ScriptModule.
Now I want to test the model from c++ front-end, but I am not able to figure out how to create a test-input using the c++ front-end having the required type of Dict[str, Tensor].
Another related question: How can I save an input of this type from python so that I will be able to later load it the form c++ frontend in order to test the ScriptModule with this input? |
st182054 | I put up an end-to-end example that should help. The dict construction in C++ can be found here. Please follow up if anything is still unclear.
For your other question, we don’t support that yet but it shouldn’t take too long to add, you can track it here: https://github.com/pytorch/pytorch/issues/18286 |
st182055 | I am trying to trace nvidia’s tacotron 2 model and interface with it via the C++ frontend.
Running the traced function via the Python frontend works just fine, and reports results as expected.
Through the C++ frontend however, it complains of a list of weights being fed into tacotron’s decoder LSTM not being of equal size (as each weight parameter is of different size/type).
[image: screenshot of the C++ error]
The code used to trace and export out the model is as follows (nvidia’s implementation is linked here: https://github.com/NVIDIA/tacotron2 4):
import numpy as np
import torch
from hparams import create_hparams
from text import text_to_sequence
from train import load_model
hparams = create_hparams()
hparams.sampling_rate = 22050
tacotron = load_model(hparams)
tacotron.load_state_dict(torch.load("tacotron2_statedict.pt", map_location='cpu')['state_dict'])
tacotron.eval()
print(tacotron)
text = "This is some random text."
sequence = np.array(text_to_sequence(text, ['english_cleaners']))[None, :]
sequence = torch.autograd.Variable(torch.from_numpy(sequence)).long()
traced_tacotron = torch.jit.trace(tacotron.inference, sequence, optimize=False, check_trace=False)
traced_tacotron.save("tacotronzzz.pt")
print(tacotron.inference(sequence))
Here is the C++ frontend code:
#include <iostream>
#include <torch/script.h>
#include <torch/torch.h>
using namespace std;
int main() {
shared_ptr<torch::jit::script::Module> tacotron = torch::jit::load("tacotronzzz.pt");
assert(tacotron != nullptr);
return 0;
}
If it helps, I can also provide a download to tacotronzzz.pt.
Any help is much appreciated; if this is actually a bug, I’m happy to dig into the internals of libtorch and see if this could be fixed in any way. |
st182056 | Just to add some reference links regarding how Lists in the IR are interpreted:
github.com
pytorch/pytorch/blob/ac00e85e36c236f141d0621bfbbbbe8c9ffeefd1/torch/csrc/jit/script/compiler.cpp#L2224
// if the list is non-empty use type_of(list[0])
// otherwise assume it is List[Tensor]
TypePtr elem_type = TensorType::get();
if (type_hint && type_hint->kind() == TypeKind::ListType) {
elem_type = type_hint->expect<ListType>()->getElementType();
} else if (!values.empty()) {
elem_type = values.at(0)->type();
}
for (auto v : values) {
if (!v->type()->isSubtypeOf(elem_type)) {
throw ErrorReport(tree)
<< "Lists must contain only a single type, expected: "
<< *elem_type << " but found " << *v->type() << " instead";
}
}
Value* result =
graph->insertNode(graph->createList(elem_type, values))->output();
return result;
} break;
case TK_TUPLE_LITERAL: {
auto ll = TupleLiteral(tree);
github.com
pytorch/pytorch/blob/ac00e85e36c236f141d0621bfbbbbe8c9ffeefd1/torch/csrc/jit/ir.cpp#L1237
n->i_(attr::end, end);
std::vector<TypePtr> output_types;
for (auto i = beg; i < end; ++i) {
output_types.push_back(tuple_type->elements().at(i));
}
auto tt = TupleType::create(std::move(output_types));
n->output()->setType(tt);
return n;
}
Node* Graph::createList(const TypePtr& elem_type, at::ArrayRef<Value*> values) {
auto n = create(prim::ListConstruct, values);
for (const auto& v : values) {
AT_ASSERT(v->type()->isSubtypeOf(elem_type));
}
n->output()->setType(ListType::create(elem_type));
return n;
}
Node* Graph::createListUnpack(Value* v, size_t size) {
ListTypePtr list_type = v->type()->expect<ListType>();
TypePtr elem_type = list_type->getElementType(); |
st182057 | This in general seems to be a problem with tracing LSTM.
Here is the class traced:
class Encoder(nn.Module):
"""Encoder module:
- Three 1-d convolution banks
- Bidirectional LSTM
"""
def __init__(self, hparams):
super(Encoder, self).__init__()
convolutions = []
for _ in range(hparams.encoder_n_convolutions):
conv_layer = nn.Sequential(
ConvNorm(hparams.encoder_embedding_dim,
hparams.encoder_embedding_dim,
kernel_size=hparams.encoder_kernel_size, stride=1,
padding=int((hparams.encoder_kernel_size - 1) / 2),
dilation=1, w_init_gain='relu'),
nn.BatchNorm1d(hparams.encoder_embedding_dim))
convolutions.append(conv_layer)
self.convolutions = nn.ModuleList(convolutions)
self.lstm = nn.LSTM(hparams.encoder_embedding_dim,
int(hparams.encoder_embedding_dim / 2), 1,
batch_first=True, bidirectional=True)
def forward(self, x, input_lengths):
for conv in self.convolutions:
x = F.dropout(F.relu(conv(x)), 0.5, self.training)
x = x.transpose(1, 2)
# pytorch tensor are not reversible, hence the conversion
input_lengths = input_lengths.cpu().numpy()
x = nn.utils.rnn.pack_padded_sequence(
x, input_lengths, batch_first=True)
self.lstm.flatten_parameters()
outputs, _ = self.lstm(x)
outputs, _ = nn.utils.rnn.pad_packed_sequence(
outputs, batch_first=True)
return outputs
def inference(self, x):
for conv in self.convolutions:
x = F.dropout(F.relu(conv(x)), 0.5, self.training)
x = x.transpose(1, 2)
self.lstm.flatten_parameters()
outputs, _ = self.lstm(x)
return outputs
This yields the IR which makes use of lists whose elements are of multiple types (each element is a Float tensor, though of different shape/size).
%hx.1 : Float(2, 1, 256) = aten::zeros(%105, %106, %107, %108), scope: LSTM
%146 : Tensor[] = prim::ListConstruct(%hx.1, %hx.1), scope: LSTM
%147 : Float(1024, 512) = prim::Constant[value=<Tensor>](), scope: LSTM
%148 : Float(1024, 256) = prim::Constant[value=<Tensor>](), scope: LSTM
%149 : Float(1024) = prim::Constant[value=<Tensor>](), scope: LSTM
%150 : Float(1024) = prim::Constant[value=<Tensor>](), scope: LSTM
%151 : Float(1024, 512) = prim::Constant[value=<Tensor>](), scope: LSTM
%152 : Float(1024, 256) = prim::Constant[value=<Tensor>](), scope: LSTM
%153 : Float(1024) = prim::Constant[value=<Tensor>](), scope: LSTM
%154 : Float(1024) = prim::Constant[value=<Tensor>](), scope: LSTM
%155 : Tensor[] = prim::ListConstruct(%147, %148, %149, %150, %151, %152, %153, %154), scope: LSTM
%156 : bool = prim::Constant[value=1](), scope: LSTM
%157 : int = prim::Constant[value=1](), scope: LSTM
%158 : float = prim::Constant[value=0](), scope: LSTM
%159 : bool = prim::Constant[value=0](), scope: LSTM
%160 : bool = prim::Constant[value=1](), scope: LSTM
%161 : bool = prim::Constant[value=1](), scope: LSTM
%memory : Float(1!, 41, 512), %163 : Float(2, 1, 256), %164 : Float(2, 1, 256) = aten::lstm(%input.14, %146, %155, %156, %157, %158, %159, %160, %161), scope: LSTM |
st182058 | whoops, missed this somehow, sorry for the late reply. We have made a change in master that allows lists to hold tensors of different shapes/sizes. If you try a nightly build it should work for you |
st182059 | I wish to do forward and backward execution in ScriptModule and then retrieve input and output tensors for each node in execution graph. Is this possible?
I’ve code like below:
trace, out = torch.jit.get_trace_graph(model, args)
out.backward(gradient=torch.ones(out.size()))
torch.onnx._optimize_trace(trace, torch.onnx.OperatorExportTypes.ONNX)
torch_graph = trace.graph()
for node in torch_graph.nodes():
# How to get input and output tensor for this node?
# Additionally is it possible to get parameters (weight/bias) for this node? |
st182060 | Could you explain a bit about what you’re trying to accomplish? Generally we don’t recommend that people use the python IR bindings as the API is not stable and you can get into some C+±y trouble. Thanks! |
st182061 | hi, i can’t find source defination of Tensor.bmm which i think is a matrix operation method? can kindly tell which file has it even written in c++ I am still interesting. thanks. |
st182062 | Look in https://github.com/pytorch/pytorch/tree/master/aten/src/ATen/native 186 for the relevant implementations |
st182063 | Hi !
I am currently trying to trace a custom RNN model to be able to execute it faster (I think it would make sense given how it uses for loops, but feel free to correct me if I’m wrong), for which a minimal example would be as follows :
import torch
from torch.nn import Module, Parameter
from torch.autograd import Variable
class minimal_ex(Module):
def __init__(self, input_size=1, recurrent_size=1):
super(minimal_ex, self).__init__()
n_h, n_i = recurrent_size, input_size
self.input_size = input_size
self.recurrent_size = recurrent_size
self.activation = lambda x: x
self.mask_rec = Parameter(torch.ones(n_h, n_h), requires_grad=False)
self.mask_in = Parameter(torch.ones(n_h, n_i), requires_grad=False)
self.neuron_signs = Parameter((torch.ones(n_h)).float(), requires_grad=False)
self.w_i = Parameter(torch.ones(recurrent_size, input_size))
self.w_h = Parameter(torch.zeros(recurrent_size, recurrent_size))
self.bias = Parameter(torch.zeros(recurrent_size))
self.cuda()
def forward(self, inp):
h_0 = Variable(torch.zeros(1, self.recurrent_size)).cuda()
batch_size, seq_len, dim = inp.shape
w_i = (self.w_i * self.mask_in).transpose(0,1)
w_h = (self.w_h * self.mask_rec).transpose(0,1)
w_h_with_signs = torch.mm(torch.diag(self.neuron_signs), w_h)
h = FloatTensor(batch_size, seq_len, self.recurrent_size)
h[:, -1, :] = h_0
for t in torch.arange(seq_len):
h[:, t, :] = self.activation(torch.mm(inp[:, t, :].clone(), w_i) +
torch.mm(h[:, t-1, :].clone(), w_h_with_signs) - self.bias)
return h
bs = 64
seq_len = 500
in_size = 2
inp = torch.FloatTensor(bs, seq_len, in_size).uniform_().cuda()
model = minimal_ex(input_size=in_size, recurrent_size=128)
traced_forward = torch.jit.trace(model, inp)
When I do so, the output contains several warnings :
TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can’t record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
h = FloatTensor(batch_size, seq_len, self.recurrent_size)
RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won’t change the number of iterations executed (and might lead to errors or silently give incorrect results).
‘incorrect results).’, category=RuntimeWarning)
TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can’t record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
torch.mm(h[:, t-1, :].clone(), w_h_with_signs) - self.bias)
TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the repeated trace. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 26, 0] (0.48441100120544434 vs. 3.6893488147419103e+19) and 4095999 other locations (100.00%)
_check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)
I was expecting some warnings (in particular, if I change the seq_len I would not expect it to work, but this is fine), but I don’t understand why the output of the traced model blows up while the Python one does not.
Is there something wrong with the way I use trace?
Thank you in advance for your help |
st182064 | Hey, are you sure that repro is exactly what you are running? When I try to run it on our master branch, I get a few runtime errors. (some are trivial, like an unqualified call to FloatTensor).
Just want to make sure we are running exactly the same code so I can look into your issue |
st182065 | Sorry about that, I had removed a local import that I thought was irrelevant but defined FloatTensor = torch.cuda.FloatTensor
I edited the code to directly use torch.FloatTensor, and surprisingly it makes a difference and the model does not blow up anymore (but still gives wrong results) :
New code :
import torch
from torch.nn import Module, Parameter
from torch.autograd import Variable
class minimal_ex(Module):
def __init__(self, input_size=1, recurrent_size=1):
super(minimal_ex, self).__init__()
n_h, n_i = recurrent_size, input_size
self.input_size = input_size
self.recurrent_size = recurrent_size
self.activation = lambda x: x
self.mask_rec = Parameter(torch.ones(n_h, n_h), requires_grad=False)
self.mask_in = Parameter(torch.ones(n_h, n_i), requires_grad=False)
self.neuron_signs = Parameter((torch.ones(n_h)).float(), requires_grad=False)
self.w_i = Parameter(torch.ones(recurrent_size, input_size))
self.w_h = Parameter(torch.zeros(recurrent_size, recurrent_size))
self.bias = Parameter(torch.zeros(recurrent_size))
self.cuda()
def forward(self, inp):
h_0 = Variable(torch.zeros(1, self.recurrent_size)).cuda()
batch_size, seq_len, dim = inp.shape
w_i = (self.w_i * self.mask_in).transpose(0,1)
w_h = (self.w_h * self.mask_rec).transpose(0,1)
w_h_with_signs = torch.mm(torch.diag(self.neuron_signs), w_h)
h = torch.FloatTensor(batch_size, seq_len, self.recurrent_size).cuda()
h[:, -1, :] = h_0
for t in torch.arange(seq_len):
h[:, t, :] = self.activation(torch.mm(inp[:, t, :].clone(), w_i) +
torch.mm(h[:, t-1, :].clone(), w_h_with_signs) - self.bias)
return h
if __name__ == '__main__':
print(torch.__version__)
print(torch.version.cuda)
bs = 64
seq_len = 500
in_size = 2
inp = torch.FloatTensor(bs, seq_len, in_size).uniform_().cuda()
model = minimal_ex(input_size=in_size, recurrent_size=128)
traced_forward = torch.jit.trace(model, inp)
New output :
1.0.0
9.0.176
rep.py:37: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can’t record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
h = torch.FloatTensor(batch_size, seq_len, self.recurrent_size).cuda()
/users/fanthomme/miniconda3/envs/env_these/lib/python3.6/site-packages/torch/tensor.py:427: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won’t change the number of iterations executed (and might lead to errors or silently give incorrect results).
‘incorrect results).’, category=RuntimeWarning)
rep.py:42: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can’t record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
torch.mm(h[:, t-1, :].clone(), w_h_with_signs) - self.bias)
/users/fanthomme/miniconda3/envs/env_these/lib/python3.6/site-packages/torch/jit/init.py:642: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[34, 459, 0] (0.0 vs. 1.991980791091919) and 4095999 other locations (100.00%)
_check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)
I might also add that running that script several times always gives 100% discrepancy, but the exact position and value slightly differ. And the model is running on a Tesla K40c
Please do not hesitate if you have any other question, and thanks a lot |
st182066 | Hi Everyone,
I have a problem regarding JIT of Pytorch.
github.com/pytorch/pytorch — PR “Implement 'to' on ScriptModules” by vfdev-5, Dec 18 2018
This link says that to is implemented in PyTorch and ready for use, but when I try to use it, it doesn’t work. I have even uninstalled and reinstalled the whole thing 3 times and made sure that I have the latest version of PyTorch, which is 1.0.1, as shown in the picture below.
The thing is, the changes mentioned in the above GitHub link (in the files test/test_jit.py and torch/jit/__init__.py) don’t appear in the PyTorch I install with the command conda install pytorch torchvision cudatoolkit=9.0 -c pytorch.
I also tried to install from source but it gives weird errors and doesn’t install.
Any idea what’s happening here? |
st182067 | Not sure, if it’ll help, but could you try to install the nightly builds and check it again? |
st182068 | Hi
I am trying to visualize intermediate layer outputs generate by one input image during inference of a PyTorch model. Preferably I would like to do this from a traced graph, for example one from the torchvision modelzoo.
I.e. if I have a model file created like this
import torch
import torchvision
org_model = torchvision.models.resnet18(pretrained=True)
traced_net = torch.jit.trace(org_model, torch.rand(1, 3, 224, 224))
torch.jit.save(traced_net, "resnet.pth")
Then I want to be able to load that model and output the activations of for example “layer1”
traced_model_loaded = torch.jit.load("resnet.pth")
input_ = torch.rand(1, 3, 224, 224)
layer1_act = traced_model_loaded.layer1(input_)
Is this possible? If not, can I in some way modify the original PyTorch model so that an arbitrary number of layer activations becomes accessible? Using forward hooks does not seem to be supported.
Thanks! |
st182069 | I think you can always output the activations using the org_model, or debug in python as you want, and if you think it’s good, then do the tracing and serialization.
If you want to see the intermediates in the traced model, you can still modify the original model and add print stmts etc to debug it. |
st182070 | Yes, it would however require me to have the original model. What I wanted to do was to create a generic way to visualize the layer activations of an arbitrary layer (or channel) at inference. However, I have realized that it is not possible: you need the original model to be able to do this. The easiest way of doing this, when having the model, was to register forward hooks, which then output the resulting activations during a forward pass. So I got the functionality I wanted, but not with the traced graph. |
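For reference, a minimal sketch of the forward-hook approach on the original (non-traced) model, capturing the layer1 activations of resnet18:
import torch
import torchvision

model = torchvision.models.resnet18().eval()
activations = {}

def hook(module, inputs, output):
    activations['layer1'] = output.detach()

handle = model.layer1.register_forward_hook(hook)
with torch.no_grad():
    model(torch.rand(1, 3, 224, 224))
handle.remove()
print(activations['layer1'].shape)  # torch.Size([1, 64, 56, 56])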
st182071 | My code is
import torch
import torch.nn.functional as F
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
class ScriptNet(torch.jit.ScriptModule):
    def __init__(self, n_features, out_size):
        super().__init__()
        self.fc1 = torch.jit.trace(torch.nn.Linear(n_features, 50), torch.rand(1, n_features))
        self.fc2 = torch.jit.trace(torch.nn.Linear(50, out_size), torch.rand(1, 50))

    @torch.jit.script_method
    def forward(self, x):
        x = F.dropout(F.relu(self.fc1(x)), 0.5)
        return F.softmax(self.fc2(x))
net = ScriptNet(10,2).to(device)
I got this error:
RuntimeError: to is not supported on TracedModules
I know I can save the ScriptModule and use torch.jit.load(modulefile, map_location=‘cuda’) to load the module into GPU, but I wonder if I can directly move the module to GPU without saving and loading. |
st182072 | Solved by justusschock in post #3
You can’t do this with .to but net.cuda() should work. |
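A minimal sketch of that workaround with a small stand-in module (a traced submodule inside a ScriptModule, as in the question):
import torch
import torch.nn as nn

class TinyScriptNet(torch.jit.ScriptModule):
    def __init__(self):
        super().__init__()
        self.fc = torch.jit.trace(nn.Linear(10, 2), torch.rand(1, 10))

    @torch.jit.script_method
    def forward(self, x):
        return self.fc(x)

net = TinyScriptNet()
if torch.cuda.is_available():
    net.cuda()  # moves the parameters of the traced submodule onto the GPU
    print(net(torch.rand(1, 10, device='cuda')).shape)  # torch.Size([1, 2])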
st182073 | What version of PyTorch are you using? This should be available in 1.0 or the nightlies |
st182074 | net.cuda() works. Thank you. Do you think Pytorch should add .to() to ScriptModules? |
st182075 | To answer your last question: @tom told me that this is enabled on GitHub master and the nightlies. If you want that feature, you should probably switch to one of these.
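For completeness, a small sketch of both options; the .to() call is an assumption that only holds on builds where 'to' is implemented for ScriptModules, as discussed above:
net = ScriptNet(10, 2)
if torch.cuda.is_available():
    net = net.cuda()         # works on the 1.0.x stable releases
    # net = net.to('cuda')   # only on builds where 'to' is supported for ScriptModules |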
st182076 | I have version 1.0.1 and it is still giving this error:
Traceback (most recent call last):
File "<ipython-input-2-0cb739d0d1fb>", line 1, in <module>
runfile('/home/hiwi/Desktop/HIWI_Data/Combustion_NN_Model/train.py', wdir='/home/hiwi/Desktop/HIWI_Data/Combustion_NN_Model')
File "/home/hiwi/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 668, in runfile
execfile(filename, namespace)
File "/home/hiwi/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/home/hiwi/Desktop/HIWI_Data/Combustion_NN_Model/train.py", line 70, in <module>
main(config, args.resume)
File "/home/hiwi/Desktop/HIWI_Data/Combustion_NN_Model/train.py", line 42, in main
train_logger=train_logger)
File "/home/hiwi/Desktop/HIWI_Data/Combustion_NN_Model/trainer/trainer.py", line 16, in __init__
super(Trainer, self).__init__(model, loss, metrics, optimizer, resume, config, train_logger)
File "/home/hiwi/Desktop/HIWI_Data/Combustion_NN_Model/base/base_trainer.py", line 21, in __init__
self.model = model.to(self.device)
File "/home/hiwi/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py", line 1280, in fail
raise RuntimeError(name + " is not supported on TracedModules")
RuntimeError: to is not supported on TracedModules
this is how it is being implemented:
class BaseTrainer:
    """
    Base class for all trainers
    """
    def __init__(self, model, loss, metrics, optimizer, resume, config, train_logger=None):
        self.config = config
        self.logger = logging.getLogger(self.__class__.__name__)
        # setup GPU device if available, move model into configured device
        self.device, device_ids = self._prepare_device(config['n_gpu'])
        self.model = model.to(self.device)
        if len(device_ids) > 1:
            self.model = torch.nn.DataParallel(model, device_ids=device_ids)
        self.loss = loss
        self.metrics = metrics
        self.optimizer = optimizer
        self.epochs = config['trainer']['epochs']
        self.save_freq = config['trainer']['save_freq']
        self.verbosity = config['trainer']['verbosity']
        self.train_logger = train_logger
        # configuration to monitor model performance and save best
        self.monitor = config['trainer']['monitor']
        self.monitor_mode = config['trainer']['monitor_mode']
        assert self.monitor_mode in ['min', 'max', 'off']
        self.monitor_best = math.inf if self.monitor_mode == 'min' else -math.inf
        self.start_epoch = 1
        # setup directory for checkpoint saving
        start_time = datetime.datetime.now().strftime('%m%d_%H%M%S')
        self.checkpoint_dir = os.path.join(config['trainer']['save_dir'], config['name'], start_time)
        # setup visualization writer instance
        writer_dir = os.path.join(config['visualization']['log_dir'], config['name'], start_time)
        self.writer = WriterTensorboardX(writer_dir, self.logger, config['visualization']['tensorboardX'])
        # Save configuration file into checkpoint directory:
        ensure_dir(self.checkpoint_dir)
        config_save_path = os.path.join(self.checkpoint_dir, 'config.json')
        with open(config_save_path, 'w') as handle:
            json.dump(config, handle, indent=4, sort_keys=False)
        if resume:
            self._resume_checkpoint(resume)
and I am using the traced modules like this:
Model File
import torch
import torch.nn as nn
import torch.nn.functional as F
from base import BaseModel
import json
import argparse
class CombustionModel(BaseModel):
    def __init__(self, num_features=7):
        super(CombustionModel, self).__init__()
        sizes = self.get_botleneck_size() # sizes for bottlenecks
        self.Fc1 = nn.Linear(in_features = 2, out_features = 500, bias=True)
        self.Fc2 = nn.Linear(in_features = 500, out_features = 500, bias=True)
        self.Fc3_bottleneck = nn.Linear(in_features = 500, out_features = sizes[0], bias=True)
        self.Fc4 = nn.Linear(in_features = sizes[0], out_features = 500, bias=True)
        self.Fc5_bottleneck = nn.Linear(in_features = 500, out_features = sizes[1], bias=True)
        self.Fc6 = nn.Linear(in_features = sizes[1], out_features = 500, bias=True)
        self.Fc7_bottleneck = nn.Linear(in_features = 500, out_features = sizes[2], bias=True)
        self.Fc8 = nn.Linear(in_features = sizes[2], out_features = 500, bias=True)
        self.Fc9_bottleneck = nn.Linear(in_features = 500, out_features = sizes[3], bias=True)
        self.Fc10 = nn.Linear(in_features = sizes[3], out_features = 500, bias=True)
        self.Fc11_bottleneck = nn.Linear(in_features = 500, out_features = sizes[4], bias=True)
        self.Fc12 = nn.Linear(in_features = sizes[4], out_features = num_features, bias=True)

    def get_botleneck_size(self):
        parser = argparse.ArgumentParser(description='BottleNeck')
        parser.add_argument('-c', '--config', default='config.json', type=str,
                            help='config file path (default: None)')
        args = parser.parse_args()
        config = json.load(open(args.config))
        bottleneck_size = config['arch']['bottleneck_size']
        if type(bottleneck_size) is list:
            if len(bottleneck_size) == 5: # comparing it to 5 because we have 5 bottlenecks in the model
                pass
            else:
                raise Exception("bottleneck's list length in config.json file is not equal to number of bottnecks in model's structure")
            return bottleneck_size
        elif type(bottleneck_size) is int:
            list_tmp = []
            for i in range(5):
                list_tmp.append(bottleneck_size)
            bottleneck_size = list_tmp
            del(list_tmp)
            return bottleneck_size

    @torch.jit.script_method
    def forward(self, x):
        '''
        This function computes the network computations based on input x
        built in the constructor of the CombustionModel
        '''
        '''First Layer'''
        x = self.Fc1(x)
        x = F.relu(x)
        '''First ResNet Block'''
        res_calc = self.Fc2(x)
        res_calc = F.relu(res_calc)
        res_calc = self.Fc3_bottleneck(res_calc)
        x = F.relu(torch.add(x, res_calc))
        '''Second ResNet Block'''
        res_calc = self.Fc4(x)
        res_calc = F.relu(res_calc)
        res_calc = self.Fc5_bottleneck(res_calc)
        x = F.relu(torch.add(x, res_calc))
        '''Third ResNet Block'''
        res_calc = self.Fc6(x)
        res_calc = F.relu(res_calc)
        res_calc = self.Fc7_bottleneck(res_calc)
        x = F.relu(torch.add(x, res_calc))
        '''Fourth ResNet Block'''
        res_calc = self.Fc8(x)
        res_calc = F.relu(res_calc)
        res_calc = self.Fc9_bottleneck(res_calc)
        x = F.relu(torch.add(x, res_calc))
        '''Fifth ResNet Block'''
        res_calc = self.Fc10(x)
        res_calc = F.relu(res_calc)
        res_calc = self.Fc11_bottleneck(res_calc)
        x = F.relu(torch.add(x, res_calc))
        '''Regression layer'''
        return self.Fc12(x)
This is the base model file:
import logging
import torch
import numpy as np
class BaseModel(torch.jit.ScriptModule):
    """
    Base class for all models
    """
    def __init__(self):
        super(BaseModel, self).__init__()
        self.logger = logging.getLogger(self.__class__.__name__)

    def forward(self, *input):
        """
        Forward pass logic
        :return: Model output
        """
        raise NotImplementedError

    def summary(self):
        """
        Model summary
        """
        model_parameters = filter(lambda p: p.requires_grad, self.parameters())
        params = sum([np.prod(p.size()) for p in model_parameters])
        self.logger.info('Trainable parameters: {}'.format(params))
        self.logger.info(self) |
st182077 | Are you sure you're using 1.0.1? Try putting a print(torch.__version__) in your model somewhere. This change should have fixed the issue and is available in the latest release. |
st182078 | The version is the same as you mentioned.
Now I am trying to incorporate what you suggested |
st182079 | I uninstalled PyTorch and reinstalled it, but the changes you suggested do not appear in the installation directory.
I also tried to clone it from source like this:
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
and then the changes appeared in pytorch/test/test_jit.py and torch/jit/__init__.py, but when I tried to install it by running this command:
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install
I got this kind of error:
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:210147: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cudnn/AffineGridGenerator.cpp.o' failed
make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cudnn/AffineGridGenerator.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:6469: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/all' failed
make[1]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
File "setup.py", line 710, in <module>
build_deps()
File "setup.py", line 282, in build_deps
build_dir='build')
File "/home/hiwi/pytorch/tools/build_pytorch_libs.py", line 255, in build_caffe2
check_call(['make', '-j', str(max_jobs), 'install'], cwd=build_dir, env=my_env)
File "/home/hiwi/anaconda3/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['make', '-j', '4', 'install']' returned non-zero exit status 2.
Any idea what’s happening here? |
st182080 | Find below a self-contained example:
import torch
from torch import nn
T = 10
F = 20
device = torch.device('cuda')
print('Generating data')
data = (torch.rand(1, T, F) * 0.1).to(device)
print('Loading model')
model = nn.LSTM(F, F, num_layers=1, batch_first=True, bidirectional=True, dropout=0)
model = model.eval().to(device)
print('Tracing model')
tmodel = torch.jit.trace(model, (data,))
tmodel.save('/tmp/test.pt')
print('Productionazing model')
pmodel = torch.jit.load('/tmp/test.pt', map_location=device)
print('Forwarding data')
with torch.no_grad():
    o1 = model(data)[0]
    o2 = tmodel(data)[0]
    o3 = pmodel(data)[0]
assert (o1 == o2).all() # WORKS
assert (o2 == o3).all() # FAILS
The above example shows 3 different versions of the same model:
model: raw LSTM
tmodel: traced LSTM
pmodel: dumped and loaded tmodel
model and tmodel produce the same output. In contrast, pmodel outputs wrong values and NaNs.
Reproducible in both versions: (a) 1.0.0 and (b) 1.0.1
Am I doing something wrong? |
st182081 | Solved by imaluengo in post #3
Issue no longer reproducible with torch-nightly-1.0.0.dev20190304. |
st182082 | Just updated the code by replacing .cuda() with .to(device). The bug is still reproducible when using device = torch.device('cuda'), but the code seems to work with device = torch.device('cpu'). |
st182083 | It means you cannot share parameters (like weights) between modules and trace the forward() successfully. Consider decomposing your model into pieces that don’t share parameters. |
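To illustrate what such sharing looks like, here is a minimal, hypothetical sketch (the module and layer names are made up for this example); two submodules reuse the same weight tensor, which is the situation the advice above asks you to avoid when tracing:
import torch
import torch.nn as nn

class TiedModel(nn.Module):
    def __init__(self):
        super(TiedModel, self).__init__()
        self.fc_a = nn.Linear(8, 8)
        self.fc_b = nn.Linear(8, 8)
        self.fc_b.weight = self.fc_a.weight  # fc_a and fc_b now share one parameter

    def forward(self, x):
        return self.fc_b(self.fc_a(x))

# "Decomposing" here means restructuring the model so that each piece you trace
# owns its own parameters instead of borrowing them from another module. |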
st182084 | Thanks for your reply. But I still don't understand how parameters end up shared between modules, and how I can decompose the model. |
st182085 | Hi!
I’m trying to load a PyTorch model in C++, using JIT. The model is defined as follows:
class JitModel(torch.jit.ScriptModule):
    def __init__(self):
        super(JitModel, self).__init__()
        self.n_layers = 5
        self.n_features = 14
        self.fc1 = torch.nn.Linear(14, 14)
        self.fc2 = torch.nn.Linear(14, 14)
        self.fc3 = torch.nn.Linear(14, 14)
        self.fc4 = torch.nn.Linear(14, 14)
        self.fc5 = torch.nn.Linear(14, 14)
        self.out = torch.nn.Linear(14, 1)
        self.normalise = torch.nn.BatchNorm1d(14)

    @torch.jit.script_method
    def forward(self, x):
        _x = x
        # _x = self.normalise(_x)
        _x = F.relu(self.fc1(_x))
        _x = F.relu(self.fc2(_x))
        _x = F.relu(self.fc3(_x))
        _x = F.relu(self.fc4(_x))
        _x = F.relu(self.fc5(_x))
        _x = torch.sigmoid(self.out(_x))
        return _x
where
F = torch.nn.functional
This model is used for training in a Python3 script and then saved using the JIT save() function.
When loaded with
torch.jit.load("model.pt") (in Python)
the model is loaded correctly.
When loaded with
torch::jit::load("model.pt") (in C++)
this happens:
terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
what():
Return value was annotated as having type Tuple[] but is actually of type Optional[Tuple[]]:
op_version_set = 0
def _check_input_dim(self,
input: Tensor) -> Tuple[]:
_0 = torch.ne(torch.dim(input), 2)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... <--- HERE
if _0:
_1 = torch.ne(torch.dim(input), 3)
else:
_1 = _0
if _1:
ops.prim.RaiseException("Exception")
else:
pass
def forward(self,
Aborted (core dumped)
If I comment the line:
self.normalise = torch.nn.BatchNorm1d(14)
in the definition, the C++ script loads the model correctly.
I really have no idea what's wrong with this implementation. |
st182086 | This looks like a bug on our side. I've filed a GitHub issue; feel free to follow along there. |
st182087 | Hi @Anthair,
Thanks for raising this issue. I tried to reproduce it, but it works fine on my side (even with self.normalise uncommented); the C++ frontend loads the model correctly. Can you verify whether this is still the case with our latest nightly? If it is, can you share your environment? It might be an environment-only issue. |
st182088 | Hi @wanchaol,
thank you for the help.
Using the latest (20190222) nightly, the model is loaded in C++ (or at least, it doesn’t crash when calling the load function). However, it still crashes with the stable 1.0.0.
I’ll cleanup the environment and try again with the stable.
Thanks again for helping with the issue. |
st182089 | @Anthair 1.0.1 is our latest stable version, as it contains a bunch of bug fixes over 1.0.0; please feel free to try out 1.0.1 and see whether the error still shows up. If you want to try out our latest features, you can stick to our nightlies or build master on your own. |
st182090 | @wanchaol, thanks for the link. I downloaded 1.0.1 and I can confirm that torch::jit::load does not crash. However, with either the 20190222 nightly or the stable 1.0.1, running the model for inference results in a crash:
Model Loaded
0x12321b0
terminate called after throwing an instance of 'torch::jit::JITException'
what():
Exception:
operation failed in interpreter:
op_version_set = 0
def forward(self,
x: Tensor) -> Tensor:
_0 = torch.ne(torch.dim(x), 2)
if _0:
_1 = torch.ne(torch.dim(x), 3)
else:
_1 = _0
if _1:
ops.prim.RaiseException("Exception")
~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
else:
pass
_2 = bool(self.normalise.training)
if _2:
_3 = True
else:
_3 = _2
if _3:
_4 = torch.add_(self.normalise.num_batches_tracked, 1, 1)
Aborted (core dumped)
The loading and inference code is:
#include <torch/script.h> // One-stop header.
#include <ATen/ATen.h>
#include <iostream>
#include <memory>
int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }
  auto module = torch::jit::load(argv[1]);
  assert(module != nullptr);
  std::cout << "Model Loaded\n";
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::rand({14,}));
  std::cout << module << std::endl;
  auto output = module->forward(inputs);
}
I ran it on two different, totally independent setups, with the same results.
Here is the training file, for testing:
https://drive.google.com/file/d/1jEgTXMRFkUa1pynOK_rP0Lir39nzhtt8/view?usp=sharing |
st182091 | @Anthair thanks a lot for the follow up, I will try to reproduce it and get back to you with more details |
st182092 | @Anthair OK, I looked into your code. When you use BatchNorm1d, your input must be 2-d or 3-d (see the BatchNorm1d documentation); changing the input to something like torch::rand({2, 14}) works well.
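A quick Python-side illustration of the same shape requirement (the C++ call follows the same rule); this snippet is just a sketch, not from the original thread:
import torch

bn = torch.nn.BatchNorm1d(14)
out = bn(torch.rand(2, 14))   # (batch, features): works
# bn(torch.rand(14))          # a bare 1-d tensor of 14 features raises a dimension error |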
st182093 | @wanchaol, ok, it was a stupid error on my side. Thanks for the help; the inference now works fine, so I'd say we can close.
PS: I really would like to check the weights and the general state of the model. Is there something similar to load_state_dict() in C++? That would make everything easier. |
st182094 | Yeah, for the load_state_dict thing, I think it should be on our roadmap for the C++ frontend. Feel free to create an issue to get it on track.
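In the meantime, one way to at least inspect the weights is on the Python side, by loading the same serialized file and walking its parameters; a minimal sketch, assuming the module was saved as model.pt and that the loaded ScriptModule exposes named_parameters():
import torch

loaded = torch.jit.load("model.pt")
for name, param in loaded.named_parameters():
    print(name, tuple(param.shape)) |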
st182095 | I have a model that has a reshape operation inside it (essentially to do something like group normalisation, but different). I reshape such that the channel dimension becomes two channels, sum over one of them, divide by it and then reshape it back.
This works fine while training and testing, but when I jit.trace the model I get a malformed model, where 'self' gets overwritten (see the 'self=...' line). As seen here in part of the code.py:
x_70 = torch.add_(x_69, input_65, alpha=1)
_288 = ops.prim.NumToTensor(torch.size(x_70, 0))
_289 = int(_288)
_290 = int(_288)
self = ops.prim.NumToTensor(torch.size(x_70, 1))
_291 = int(self)
_292 = ops.prim.NumToTensor(torch.size(x_70, 2))
_293 = int(_292)
_294 = int(_292)
_295 = ops.prim.NumToTensor(torch.size(x_70, 3))
_296 = int(_295)
_297 = int(_295)
_298 = ops.prim.NumToTensor(torch.size(x_70, 4))
_299 = int(_298)
_300 = int(_298)
_301 = [_290, int(torch.div(self, CONSTANTS.c0)), 4, _294, _297, _300]
x_71 = torch.reshape(x_70, _301)
When I replace 'self' with 'self_19' it's all right, and I can load the model.
However, I also have issues exporting to ONNX, which complains about the reshape operation.
And I then have trouble running the model in the C++ API: the model does not work on GPU on Linux (but works on CPU on Linux, and on both GPU and CPU on Windows).
I have a feeling all these problems are related. Is there something known about the reshape operation that causes this? |
st182096 | Thanks for the report! Seems like it may be a problem with our serialization code. Could you provide a small module/script that reproduces the problem so that we can investigate? |
st182097 | Ok to reproduce it I have a reshaping operation. It is essential that ‘view’ gets a shape that is calculated partly from another shape, as that seems to cause the trouble. I added a linear layer after that to make sure ‘self’ is used again and it fails:
#!/usr/bin/ipython3
import torch
class Example(torch.nn.Module):
    def __init__(self):
        super(Example, self).__init__()

    def forward(self, x):
        s = x.shape
        b = x.view(x.shape[0], x.shape[1]//2, 2)
        accum = b.sum(1, keepdim=True)
        b = b * accum
        return b.view(*s)

class ExampleNested(torch.nn.Module):
    def __init__(self):
        super(ExampleNested, self).__init__()
        self.ex = Example()
        self.lin = torch.nn.Linear(4, 4)

    def forward(self, x):
        x = self.ex(x)
        x = self.lin(x)
        return x

a = torch.randn(4, 4)
example = ExampleNested()
traced = torch.jit.trace(example, a)
traced.save("trace.tmp")
This gives me:
op_version_set = 0
def forward(self,
    x: Tensor) -> Tensor:
    _0 = ops.prim.NumToTensor(torch.size(x, 0))
    _1 = int(_0)
    _2 = ops.prim.NumToTensor(torch.size(x, 1))
    _3 = int(_2)
    _4 = ops.prim.NumToTensor(torch.size(x, 0))
    _5 = int(_4)
    self = ops.prim.NumToTensor(torch.size(x, 1))
    _6 = [_5, int(torch.div(self, CONSTANTS.c0)), 2]
    b_1 = torch.view(x, _6)
    accum = torch.sum(b_1, [1], True)
    b = torch.mul(b_1, accum)
    input = torch.view(b, [_1, _3])
    _7 = torch.addmm(self.lin.bias, input, torch.t(self.lin.weight), beta=1, alpha=1)
    return _7 |
st182098 | By the way, as a follow-up: I only see this problem happening with the last operation. It doesn't matter how many such layers I connect together; only the last operation gets the wrong 'self' name, without the number suffix at the end. |
st182099 | I see from the issue that the bugfix is in 1.0.1! I’ll try it now, since it seems the pip package is updated. |